Privacy & Data

Let's be specific about what stays on your machine, what leaves, and where it goes. No vague assurances. Actual data flows.

What stays local (never leaves your machine)

| Data | Where it lives | Who can see it |
| --- | --- | --- |
| Workspace files (SOUL.md, USER.md, IDENTITY.md, LOOKS.md) | ~/.vellum/workspace/ | Only you |
| Saved memories (facts, preferences, decisions) | Local memory store | Only you |
| Configuration (config.json, settings) | ~/.vellum/workspace/ | Only you |
| Custom skills | ~/.vellum/workspace/skills/ | Only you |
| Credentials (API keys, OAuth tokens) | Secure credential vault on your machine | Only you (and the tools/domains they're scoped to) |

This data is not synced to any cloud. No telemetry. No analytics. No “anonymized usage data.” It sits on your hard drive and nowhere else.

What leaves your machine

Here's where we have to be honest about the trade-offs.

Your prompts and context go to the AI model provider.

Every time you send a message, your assistant assembles a context bundle:

  • Your message
  • The current conversation history
  • Relevant workspace files (SOUL.md, USER.md, etc.)
  • Relevant memories
  • Skill instructions (if a skill is loaded)

This entire bundle is sent to the AI model provider (currently Anthropic) to generate a response. That's how your assistant “thinks.” It can't think locally because the AI model runs in the cloud.
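The bundle described above can be sketched as a plain data structure. This is an illustration only — the field names and shapes here are hypothetical, not Vellum's actual schema:

```python
# Illustrative sketch of the per-message context bundle -- hypothetical
# field names, not Vellum's actual schema.
def build_context_bundle(message, history, workspace_files, memories, skill=None):
    """Assemble everything that is sent to the model provider for one turn."""
    bundle = {
        "message": message,                  # your new message
        "history": list(history),            # current conversation history
        "workspace": dict(workspace_files),  # relevant files, e.g. SOUL.md, USER.md
        "memories": [m for m in memories if m.get("relevant")],  # relevant memories only
    }
    if skill is not None:
        bundle["skill_instructions"] = skill  # included only when a skill is loaded
    return bundle

bundle = build_context_bundle(
    message="What's on my calendar today?",
    history=[{"role": "user", "content": "hi"}],
    workspace_files={"USER.md": "# About me\n..."},
    memories=[
        {"text": "prefers morning meetings", "relevant": True},
        {"text": "project details", "relevant": False},
    ],
)
```

The point of the sketch: everything in that one structure leaves your machine for the duration of the model call; everything outside it stays local.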

🫣 What this means practically: If you tell your assistant “I'm working on a secret project called Nightfall,” that information may be:
  1. Saved to your local memory/workspace (stays on your machine)
  2. Included in future AI model calls when relevant (leaves your machine temporarily)

The AI model provider processes it to generate a response, but does not (per their terms) use it to train models or share it with third parties. Still, it does leave your machine. We want you to know that.

API calls to connected services.

When your assistant checks your calendar, sends an email, or orders food, it makes API calls to those services (Google, AgentMail, DoorDash, etc.). The data in those calls is whatever's needed for the action: calendar event details, email content, order items.

These are the same API calls any app would make when talking to these services. Nothing unusual, but worth knowing.
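As a concrete sketch, a “create calendar event” call carries only the fields the action needs. The payload shape below is hypothetical — loosely modeled on common calendar APIs, not an exact wire format:

```python
# Hypothetical payload for a "create calendar event" API call -- loosely
# modeled on common calendar APIs, not any provider's exact wire format.
def calendar_event_payload(title, start_iso, end_iso, attendees=()):
    """Only the data needed for the action is included -- no workspace
    files, no memories, no conversation history."""
    return {
        "summary": title,
        "start": {"dateTime": start_iso},
        "end": {"dateTime": end_iso},
        "attendees": [{"email": a} for a in attendees],
    }

payload = calendar_event_payload(
    "Dentist", "2025-06-01T09:00:00-07:00", "2025-06-01T09:30:00-07:00",
    attendees=("you@example.com",),
)
```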

What we don't do

  • No telemetry. We don't track how you use the product.
  • No analytics. We don't measure click-through rates on your permission prompts.
  • No data sharing. Your data is not sold, shared, or aggregated with other users.
  • No model training. Your conversations are not used to train or fine-tune AI models (from our side; check the model provider's terms for their policies).
  • No cloud backup. Your workspace is not synced anywhere. If you delete it, it's gone.

The model provider question

“But what does Anthropic do with my data?”

Fair question. Here's what we know:

  • Anthropic's API terms state that data sent through the API is not used for model training
  • Prompts and responses are processed to generate outputs and are then subject to Anthropic's data retention policies
  • We recommend reading Anthropic's privacy policy directly for the most current and detailed information

Read Anthropic's Privacy Policy for full details on how they handle data.

We chose Anthropic because their approach to AI safety and data handling aligns with our principles. But we also believe you should verify this yourself, not just take our word for it.

Your options for sensitive information

If you have information you don't want leaving your machine at all:

  1. Don't tell your assistant. If it's not in the conversation, it's not sent to the model.
  2. Ask it to forget. “Forget what I told you about [topic].” This removes it from memory so it won't be included in future context.
  3. Edit your workspace files. Remove anything from USER.md or SOUL.md that you don't want in the context window.
  4. Wait for local models. Once on-device models are capable enough to run your assistant, nothing would need to leave your machine at all.
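Option 3 can even be scripted. A minimal sketch, assuming plain-text workspace files (this is an illustrative helper, not a built-in Vellum command — editing the file by hand works just as well):

```python
# Drop every line that mentions a topic from a workspace file's text.
# Illustrative helper, not a built-in Vellum command.
def redact_topic(text, topic):
    return "\n".join(
        line for line in text.splitlines()
        if topic.lower() not in line.lower()
    )

before = "# About me\nWorking on project Nightfall\nPrefers coffee to tea"
after = redact_topic(before, "nightfall")
# "Nightfall" no longer appears in `after`; the other lines are untouched
```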

We'd rather give you informed choices than make promises about things outside our control.