The Permissions Model

Every AI product says “you're in control.” Here's how we actually enforce it.

The core rule

Every sensitive action requires your explicit approval, every time.

Not “once per session.” Not “once per category.” Every individual action that touches your machine outside the sandbox shows a permission prompt.

How the prompt works

When your assistant needs to do something sensitive, you see a message in chat that includes:

  1. What it wants to do in plain language (“I need to look through your Downloads folder”)
  2. Why it needs to do it (“to find the invoice PDF you mentioned”)
  3. Whether it's read-only or will change something (“This is read-only, nothing gets moved or deleted”)
  4. Two buttons: Allow / Don't Allow

That's the entire flow. No complex settings. No permission manager dashboard. No “advanced” vs. “basic” mode. Just: here's what I need, here's why, do you approve?
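As a sketch, the four parts of a prompt can be modeled as a small structure. This is purely illustrative; the class name, field names, and `render` method are assumptions for this example, not the product's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PermissionPrompt:
    """One per-action prompt: what, why, a read-only flag, and two buttons."""
    what: str        # plain-language description of the action
    why: str         # plain-language reason the assistant needs it
    read_only: bool  # True when nothing on the machine is changed

    def render(self) -> str:
        mode = ("This is read-only, nothing gets moved or deleted."
                if self.read_only else "This will make changes to your machine.")
        return f"{self.what} {self.why}. {mode} [Allow] [Don't Allow]"

prompt = PermissionPrompt(
    what="I need to look through your Downloads folder",
    why="to find the invoice PDF you mentioned",
    read_only=True,
)
print(prompt.render())
```

Note that the mutability line is derived from a flag rather than free text, so a prompt can never omit it.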

What's in the sandbox (no permission needed)

These actions run in an isolated environment and can't affect your system:

  • Searching the web
  • Reading your assistant's own workspace files
  • Running code in the sandbox
  • Building apps and UI surfaces
  • Saving memories
  • Making calculations

Your assistant does these freely. They're safe by design.

What requires permission (host actions)

These actions touch your actual machine:

  • Reading a file on your machine: “I need to read your Downloads folder to find the file you mentioned. This is read-only.”
  • Running a shell command: “I need to install the project dependencies, which will download some packages.”
  • Writing or editing a file: “I need to save this script to your Desktop.”
  • Accessing a system database: “I need to access your Contacts to look up Sarah's email.”

The prompt always explains the action in plain language, not technical jargon. You should never see a raw command like ls -lt ~/Downloads in a permission prompt. If you do, that's a bug.
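One way to picture the split between the two lists above is as a classifier: sandboxed actions run freely, host actions always trigger a prompt, and anything unrecognized fails closed. The action names here are illustrative assumptions, not the product's internal identifiers.

```python
# Illustrative action names; the product's internal names are unknown.
SANDBOXED = {
    "search_web", "read_workspace_file", "run_sandbox_code",
    "build_app_ui", "save_memory", "calculate",
}
HOST = {
    "read_host_file", "run_shell_command",
    "write_host_file", "read_system_database",
}

def needs_permission(action: str) -> bool:
    """Host actions prompt every single time; sandboxed actions never do."""
    if action in SANDBOXED:
        return False
    if action in HOST:
        return True
    # Fail closed: an unknown action is treated like a host action.
    return True

print(needs_permission("search_web"))        # → False
print(needs_permission("run_shell_command")) # → True
```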

What happens when you say "Allow"

The action runs. Your assistant does exactly what it described, shows you the result, and moves on. One action, one approval.

What happens when you say "Don't Allow"

  1. The action is blocked. Nothing happens.
  2. Your assistant acknowledges the denial: “No problem! I wasn't able to access that.”
  3. It does not retry automatically. No second prompt. No “are you sure?”
  4. It asks if you'd like to try a different approach or try again later.
  5. It only retries if you explicitly say yes.

Saying no is always safe and always respected. Your assistant is designed to handle denial gracefully, not to guilt-trip you into clicking Allow.
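The five-step denial flow above amounts to a small piece of control logic. Here is a hedged sketch in which `run_action` and `offer_alternative` stand in for whatever the assistant would actually do; none of these names come from the product.

```python
def handle_decision(allowed: bool, run_action, offer_alternative):
    """One action, one approval. On Allow, run it once. On Don't Allow,
    block it, acknowledge, never auto-retry, and offer another approach."""
    if allowed:
        return run_action()          # exactly the action that was described
    # Denied: the action is blocked, and there is no automatic second prompt.
    acknowledgement = "No problem! I wasn't able to access that."
    follow_up = offer_alternative()  # retry only if the user explicitly says yes
    return f"{acknowledgement} {follow_up}"

result = handle_decision(
    allowed=False,
    run_action=lambda: "read ~/Downloads",
    offer_alternative=lambda: "Want me to try a different approach?",
)
print(result)
```

The key property is that the denied branch contains no loop and no path back to `run_action`: a retry can only begin as a brand-new, user-initiated request.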

macOS system permissions

Some capabilities require macOS-level permissions beyond the per-action prompts:

  • Full Disk Access: reading files anywhere on your machine. Grant it in System Settings → Privacy & Security → Full Disk Access.
  • Screen Recording: seeing your screen content. Grant it in System Settings → Privacy & Security → Screen Recording.
  • Accessibility: controlling mouse and keyboard. Grant it in System Settings → Privacy & Security → Accessibility.

Your assistant guides you to the right settings panel when these are needed. You only grant these once through macOS.

💡 Important distinction: macOS permissions are the “can it access this at all” layer. The Allow / Don't Allow prompts are the “should it access this right now” layer. Both must pass. Full Disk Access means your assistant can read your Documents folder, but each individual read still gets its own Allow / Don't Allow prompt.
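The two layers compose with a plain logical AND. A minimal sketch, with a made-up function name:

```python
def may_read_file(full_disk_access: bool, clicked_allow: bool) -> bool:
    # Layer 1 (macOS, granted once):  can the app access this at all?
    # Layer 2 (per-action prompt):    should it access this right now?
    return full_disk_access and clicked_allow

print(may_read_file(True, True))   # → True: both layers passed
print(may_read_file(True, False))  # → False: macOS allows it, but you didn't
print(may_read_file(False, True))  # → False: clicking Allow can't bypass macOS
```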

Trust gating

Your assistant doesn't ask for everything at once. Permissions are introduced gradually:

First conversation:

  • Chat only. No file access. No system permissions.
  • Just getting to know each other.

Early use:

  • Web search (no permission needed)
  • Weather, simple questions (no permission needed)
  • First file access requests appear only when you ask for something that needs them

Ongoing use:

  • More capabilities unlock as you use more features
  • Screen recording, accessibility, computer control only offered when relevant
  • Each new permission is explained in context

The pattern:

  • Low-risk actions → taken with minimal friction
  • Medium-risk actions → explained and approved per-action
  • High-risk actions → explained, approved, and only suggested when the payoff is clear

This is the graduated trust model. Your assistant starts cautious and earns more access over time.
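As a closing sketch, the graduated trust model can be read as a stage-to-capability mapping where earlier grants carry forward. The stage names mirror the section above; every capability name is assumed for illustration.

```python
# Capabilities unlock cumulatively as trust is established over time.
CAPABILITIES = {
    "first_conversation": {"chat"},
    "early_use":          {"web_search", "simple_questions", "file_access_on_request"},
    "ongoing_use":        {"screen_recording", "accessibility", "computer_control"},
}
STAGE_ORDER = ["first_conversation", "early_use", "ongoing_use"]

def available(stage: str) -> set[str]:
    """Everything unlocked at this stage, including all earlier stages."""
    unlocked: set[str] = set()
    for s in STAGE_ORDER:
        unlocked |= CAPABILITIES[s]
        if s == stage:
            return unlocked
    raise ValueError(f"unknown stage: {stage}")

print(sorted(available("first_conversation")))  # → ['chat']
```

Even at the final stage, every host action in the unlocked set still goes through its own Allow / Don't Allow prompt; the stages control what can be offered, not what runs unprompted.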