Apr 4, 2025
Product Updates
Vellum Product Update | March 2025
5 min

Our biggest product feature drop ever: 27 updates in a single month (a Vellum record!)


🌸 Spring has sprung, and with it, our biggest feature drop ever: 27 updates in a single month (a Vellum record! 🚢). From Prompt Diffing and real-time monitoring integrations to GA of our Workflows SDK and PDF inputs, March was packed with upgrades to help you build, test, and ship faster than ever.

Let’s dig into what’s new.

🆕 Key New Features

Prompt Comparison / Diffing

This one’s been at the top of many customers’ wishlists — and it’s finally here. You can now view side-by-side diffs between prompt versions, so you never have to guess what changed again.

Access this feature by clicking the “View Diff” button in the top right of a Prompt Sandbox’s Comparison Mode. This opens a modal with a side-by-side comparison of the two Prompt Variants.

Whether you’re reviewing edits, debugging issues, or approving updates before deployment, this highly requested feature gives you full visibility into every change, line by line.

Deployment Release Reviews

Inspired by GitHub PR reviews, this feature allows team members to review, approve, or request changes to Prompt and Workflow Deployments. Perfect for Enterprise teams that require a formal approval process to comply with SOC 2 requirements. Watch Noa break it down:

Native Retry & Try functionality

You can now “wrap” any node with Try or Retry Adornments directly from the side panel — giving you first-class error handling.

Adornments are accessible in the Node side panel after clicking on a Node.
  • Retry will keep invoking the wrapped node until it succeeds (or hits the max attempts).
  • Try will attempt once and continue gracefully even if it fails.

Bonus: these show up cleanly in your monitoring view, just like a single-node Sub-workflow.
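If you’re curious about the semantics, the two adornments behave roughly like this plain-Python sketch (the function names and shapes here are illustrative, not part of the Vellum SDK):

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def retry(fn: Callable[[], T], max_attempts: int = 3) -> T:
    """Keep invoking the wrapped callable until it succeeds or max_attempts is hit."""
    last_error: Optional[Exception] = None
    for _ in range(max_attempts):
        try:
            return fn()
        except Exception as err:
            last_error = err
    raise last_error  # every attempt failed

def try_once(fn: Callable[[], T], default: Optional[T] = None) -> Optional[T]:
    """Attempt once; on failure, continue gracefully with a fallback value."""
    try:
        return fn()
    except Exception:
        return default
```

The key difference: Retry re-raises after exhausting its attempts, while Try always lets execution continue.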

Monitoring View Overhaul

For our VPC and self-hosted customers — this update is for you! With a brand new Grafana-based implementation, the revamped Monitoring View offers faster load times, smoother zooming, and better filters for things like date ranges and Release Tags. It’s everything you need to analyze performance at scale, now wherever you're deployed.

Monitoring views for Node Adornment invocations appear as if the targeted Node were invoked as a single-node Subworkflow.

Webhooks + Datadog Integration

You can now configure Webhooks to receive real-time Vellum event updates — perfect for syncing with external tools like a data warehouse or custom health dashboard.

You can further customize it with your own auth configurations.

You can also emit those same Vellum events in near-real-time to Datadog for deeper observability — useful if your organization already uses Datadog, or if you’d like to leverage Datadog’s monitoring, alerting, and BI capabilities on top of your Vellum data.
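Whatever tool you point a webhook at, the receiver should verify the request before trusting its body. Vellum’s exact payload and signing scheme are covered in its docs; as a generic sketch, HMAC-based verification looks like this (the header format and secret shown are assumptions, not Vellum specifics):

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    against the signature header in constant time."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Constant-time comparison (`hmac.compare_digest`) matters here: a naive `==` leaks timing information an attacker could exploit to forge signatures.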

Workflows SDK General Availability

All newly created Workflows are now SDK-enabled by default! The Vellum Workflows SDK makes it easier to build predictable AI systems and collaborate with non-technical teammates by letting you build your AI Workflows in code or in the UI, with changes synchronized by pushing and pulling between the two. Try our five-minute quickstart.

PDFs as a Prompt Input

You can now pass PDFs directly into Prompts — perfect for extracting structured data from documents and powering downstream workflows. Just drag and drop a PDF into a Chat History variable, and if the model supports it (like Anthropic’s), you’re good to go. It’s like multi-modal inputs… but for documents.

Since PDFs are handled as images under the hood, this pairs perfectly with Vellum’s support for image inputs. Vellum supports images for OpenAI’s vision models like GPT-4 Turbo with Vision — both via API and in the UI. Read more about it here.
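For API users, attaching a PDF typically means base64-encoding it into a document content block alongside your text. Here’s a minimal sketch in the shape of Anthropic’s Messages API (the exact field names you send through Vellum may differ — treat this as illustrative):

```python
import base64

def pdf_message(pdf_bytes: bytes, question: str) -> dict:
    """Build an Anthropic-style user message that attaches a PDF as a
    base64 document content block next to a text question."""
    return {
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "base64",
                    "media_type": "application/pdf",
                    "data": base64.b64encode(pdf_bytes).decode("ascii"),
                },
            },
            {"type": "text", "text": question},
        ],
    }
```

In the Vellum UI, the drag-and-drop flow described above handles this encoding for you.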

Workflow Deployment Executions – Cost Column

You’ll now see a Cost column in the Workflow Deployment Executions view — helping you track compute spend at a glance. This column breaks down the total cost per execution, summing up all Prompt invocations — so you get a clear picture of what’s driving spend across each run.
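The arithmetic behind the column is straightforward: the per-execution total is the sum of that execution’s Prompt invocation costs. As a sketch (the `cost` field name is assumed for illustration):

```python
def execution_cost(prompt_invocations: list[dict]) -> float:
    """Sum per-invocation costs into a single per-execution total (USD).
    Each invocation dict is assumed to carry a 'cost' field; missing
    fields count as zero."""
    return round(sum(inv.get("cost", 0.0) for inv in prompt_invocations), 6)
```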

🔧 Quality of Life Improvements

Global Search

You can now search across all your Prompts, Workflows, Document Indexes, and more with Global Search.

Accessible from the side nav or with Cmd/Ctrl + K

This long-awaited feature lets you quickly find and jump to any resource in your Workspace — no more clicking around to track things down.

New Workflow Deployment APIs

You can now use two new APIs — List Workflow Deployment Executions for a specific Workflow Deployment, and Retrieve Workflow Deployment Execution for any single execution — making it easier to programmatically track and analyze Workflow runs outside of Vellum.
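List endpoints like this are typically paginated, so a common consumption pattern is to loop until the API stops returning a next-page cursor. The `fetch_page` callable and cursor shape below are illustrative, not Vellum’s actual response schema:

```python
from typing import Callable, Iterator, Optional, Tuple

def iter_executions(
    fetch_page: Callable[[Optional[str]], Tuple[list, Optional[str]]],
) -> Iterator[dict]:
    """Yield every execution across all pages. fetch_page takes a cursor
    (None for the first page) and returns (items, next_cursor); a None
    cursor in the response means there are no more pages."""
    cursor: Optional[str] = None
    while True:
        items, cursor = fetch_page(cursor)
        yield from items
        if cursor is None:
            break
```

Wrapping pagination in a generator like this keeps downstream analysis code (filtering, cost roll-ups, exports) independent of page boundaries.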

Automatic Evaluations Setup

Vellum now auto-generates a Test Suite with one Test Case per Scenario the first time you visit the Evaluations tab, so you can start adding Metrics and Ground Truth instantly.

🧠 Model & API Support

  • Gemini 2.5 Pro Model Support
    • Added support for Gemini 2.5 Pro Experimental (03-25 version)
    • Supports 1M input token context window and 64k output tokens via Google’s Gemini API
  • Llama 3.3 70B via Cerebras
    • Added support for Llama 3.3 70B through Cerebras AI
  • Qwen & QwQ Models via Groq
    • Added support for:
      • QwQ 32B
      • Qwen 2.5 Coder 32B
      • Qwen 2.5 32B
    • All via Groq’s preview models
  • Qwen QwQ 32B via Fireworks AI
    • Added support for Qwen QwQ 32B through Fireworks AI
  • PDF Support for Gemini 2.0 Flash Models
    • Drag-and-drop PDF support added for:
      • Gemini 2.0 Flash Experimental
      • Gemini 2.0 Flash Experimental Thinking Mode
      • Gemini 2.0 Flash

That’s a wrap on March! From fresh debugging views to friendlier editors and deeper integrations, this month was all about helping you move faster with more clarity. We’ll be back in April with even more. Until then — happy building! 🚀

Changelog: https://docs.vellum.ai/changelog/2025/2025-03

ABOUT THE AUTHOR
Sharon Toh
Product and Growth Marketing

Sharon brings a background in product marketing and a proven ability to help SaaS companies achieve remarkable growth. With a well-established history of crafting strategies that resonate with audiences, she’s passionate about bridging the gap between cutting-edge AI solutions and the people who need them most. As part of Vellum's GTM team, Sharon focuses on connecting businesses and people with the right solutions that deliver impact, making advanced technology accessible and effective.
