v0.5.2
Conversation forking, LLM context inspector, disk views, expanded inference providers, and scroll fixes.
- Conversation forking — branch off from any message to explore alternative directions without losing your original conversation history
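  The idea behind forking can be sketched as a message tree: each message keeps a pointer to its parent, and a fork is simply a new child attached to an earlier message, so both branches share history up to the fork point. This is a minimal illustrative sketch, not the app's actual data model; all names (`Message`, `Conversation`, `thread`) are hypothetical.

  ```python
  from dataclasses import dataclass
  from typing import Optional


  @dataclass
  class Message:
      id: int
      parent_id: Optional[int]  # None marks the root of the conversation
      text: str


  class Conversation:
      """Messages form a tree; a 'fork' is just a new child of any message."""

      def __init__(self):
          self.messages: dict[int, Message] = {}
          self.next_id = 1

      def add(self, text: str, parent_id: Optional[int] = None) -> int:
          m = Message(self.next_id, parent_id, text)
          self.messages[m.id] = m
          self.next_id += 1
          return m.id

      def thread(self, leaf_id: int) -> list[str]:
          """Walk parent pointers from a leaf back to the root."""
          out, mid = [], leaf_id
          while mid is not None:
              m = self.messages[mid]
              out.append(m.text)
              mid = m.parent_id
          return list(reversed(out))


  c = Conversation()
  a = c.add("hello")
  b = c.add("hi there", parent_id=a)
  orig = c.add("tell me about X", parent_id=b)
  fork = c.add("actually, tell me about Y", parent_id=b)  # fork from the same message

  # Both threads share the same prefix; the original branch is untouched.
  assert c.thread(orig) == ["hello", "hi there", "tell me about X"]
  assert c.thread(fork) == ["hello", "hi there", "actually, tell me about Y"]
  ```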
- New LLM context inspector — drill into the exact prompts sent to the model and the responses it returned for any assistant message, including full request logs and normalized payload views
- Conversation disk views — conversations are now projected to disk in a structured format, improving reliability, attachment handling, and cross-feature consistency
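  To give a feel for what "projected to disk in a structured format" might look like, here is a purely hypothetical layout; the actual on-disk format is not specified here, and every path name below is illustrative only.

  ```text
  conversations/
    <conversation-id>/
      meta.json          # title, timestamps, model settings (hypothetical)
      messages/
        0001.json        # one file per message, ordered
        0002.json
      attachments/
        image-abc.png    # referenced from message files by relative path
  ```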
- Expanded inference provider system — bring your own API key for OpenAI, Gemini, and other providers, with per-provider model selection and key management directly in settings
- macOS scroll fixes — chat scrolling and bottom-pinning rearchitected around a dedicated coordinator, with a scroll loop guard and diagnostics to eliminate jank and runaway scroll behavior
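  The coordinator-plus-loop-guard pattern can be sketched as follows: one object owns scroll state, pins to the bottom when new content arrives unless the user has scrolled up, and flags its own programmatic scrolls so the scroll callback cannot re-trigger another pin. This is a minimal sketch of the general technique in Python with a fake view, not the app's actual AppKit implementation; all names are hypothetical.

  ```python
  class FakeView:
      """Stand-in for a scroll view: setting the offset fires a scroll callback."""

      def __init__(self):
          self.content_height = 0
          self.offset = 0
          self.on_scroll = None

      def scroll_to(self, y):
          self.offset = y
          if self.on_scroll:
              self.on_scroll(y)


  class ScrollCoordinator:
      """Single owner of scroll state with a loop guard against feedback loops."""

      def __init__(self, view):
          self.view = view
          self.pinned = True          # follow new content by default
          self._programmatic = False  # the loop guard flag
          view.on_scroll = self._handle_scroll

      def content_grew(self, new_height):
          self.view.content_height = new_height
          if self.pinned:
              self._programmatic = True
              try:
                  self.view.scroll_to(new_height)
              finally:
                  self._programmatic = False

      def _handle_scroll(self, offset):
          if self._programmatic:
              return  # guard: ignore scroll events we initiated ourselves
          # User-initiated scroll: unpin when they move away from the bottom.
          self.pinned = offset >= self.view.content_height


  v = FakeView()
  coord = ScrollCoordinator(v)
  coord.content_grew(100)   # pinned: auto-scrolls to the new bottom
  assert v.offset == 100
  v.scroll_to(40)           # user scrolls up: coordinator unpins
  coord.content_grew(200)   # no auto-scroll while unpinned
  assert v.offset == 40
  ```

  Without the `_programmatic` flag, the `scroll_to` inside `content_grew` would invoke `_handle_scroll`, which in a real view hierarchy can cascade into further layout and scroll adjustments; the guard is what breaks that cycle.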