Capture User Feedback for AI Testing
Jan 1, 2025
3 min

We’re excited to introduce a powerful new feature that lets you capture end-user feedback on your AI system and use it as ground truth for your test cases. With it, you can continuously refine your AI’s outputs, improving accuracy and making your system deliver more reliable results over time.

Here’s how it works.

Capture feedback as test cases

Imagine you have a RAG chatbot that answers questions about your product's trust center and privacy policies. You've already set up a test suite to evaluate the chatbot’s performance across several important metrics, such as semantic similarity, which measures how closely the chatbot’s responses align with the correct answers.
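For intuition, here’s a minimal sketch of how a semantic-similarity score can be computed with the open-source sentence-transformers library. The model name, example strings, and scoring function are illustrative assumptions, not Vellum’s internal implementation.

```python
# Minimal sketch of a semantic-similarity metric, using the open-source
# sentence-transformers library. Model choice is illustrative only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_similarity(response: str, ground_truth: str) -> float:
    """Return the cosine similarity between a chatbot response and the expected answer."""
    embeddings = model.encode([response, ground_truth])
    return util.cos_sim(embeddings[0], embeddings[1]).item()

score = semantic_similarity(
    "We retain customer data for 30 days after account deletion.",
    "Customer data is deleted within 30 days of account closure.",
)
print(f"semantic similarity: {score:.2f}")  # close paraphrases score near 1.0
```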

Once your system is live, you can collect end-user feedback and label it as actuals, sourced either directly from users or from internal labeling. These actuals represent what the correct response should have been, based on real-world interactions.
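As a rough illustration, a captured actual might be represented as a record like the one below. The field names are hypothetical, not Vellum’s schema; they just show what a feedback-derived ground truth needs to carry.

```python
# Hypothetical shape of a captured "actual"; field names are illustrative,
# not Vellum's schema.
from dataclasses import dataclass

@dataclass
class Actual:
    input_query: str      # the question the end user asked
    model_output: str     # what the chatbot actually said
    expected_output: str  # the corrected answer from the user or a labeler
    source: str           # "end_user" or "internal_labeling"

actual = Actual(
    input_query="How long do you retain customer data?",
    model_output="We retain data indefinitely.",
    expected_output="Customer data is deleted within 30 days of account closure.",
    source="internal_labeling",
)
```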

How it works

In Vellum, you can flag an incorrect response, mark it as a test case, and add it to your evaluation suite. Here’s a quick demo of how that works:

With the new test case saved, you can go to your evaluation set and run the evaluation again to see how closely the chatbot’s output matches the updated ground truth.
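Conceptually, a re-run scores each test case’s fresh output against its now-updated ground truth. Here’s a self-contained sketch under the same assumptions as the earlier example; the stubbed chatbot call, threshold, and test case are placeholders for your real suite, not Vellum’s evaluation engine.

```python
# Conceptual re-run of an evaluation set against updated ground truth.
# Everything here is illustrative: the test case, the threshold, and the
# stubbed chatbot call stand in for your real suite and deployment.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_similarity(response: str, ground_truth: str) -> float:
    embeddings = model.encode([response, ground_truth])
    return util.cos_sim(embeddings[0], embeddings[1]).item()

def run_chatbot(query: str) -> str:
    # Stand-in for your deployed RAG chatbot; replace with a real call.
    return "We delete customer data within 30 days of account closure."

THRESHOLD = 0.75  # illustrative pass/fail cutoff, not a Vellum default

test_cases = [
    {
        "query": "How long do you retain customer data?",
        "expected": "Customer data is deleted within 30 days of account closure.",
    },
]

for case in test_cases:
    output = run_chatbot(case["query"])
    score = semantic_similarity(output, case["expected"])
    status = "PASS" if score >= THRESHOLD else "FAIL"
    print(f"{status} ({score:.2f}): {case['query']}")
```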

If needed, you can tweak your prompts or workflows and rerun the evaluation until the system’s output aligns closely with the expected response. Each iteration improves accuracy and brings the chatbot closer to the correct answers.

Why This Matters

By incorporating end-user feedback into your testing cycle, you're creating a continuous improvement loop for your AI system.

This allows for faster iteration, more accurate outputs, and an overall improvement in AI system performance. Essentially, you're ensuring that your AI stays aligned with real-world expectations, while making it easier to spot and fix issues quickly.

Vellum is designed to support every stage of your AI development cycle — book a call with one of our AI experts to set up your evaluation.

ABOUT THE AUTHOR
Noa Flaherty
Co-founder & CTO

Noa Flaherty, CTO and co-founder at Vellum (YC W23), helps developers build, deploy, and evaluate LLM-powered apps. His diverse background in mechanical and software engineering, as well as marketing and business operations, gives him the technical know-how and business acumen needed to bring value to nearly any aspect of startup life. Prior to founding Vellum, Noa completed his undergrad at MIT and worked at three tech startups, including roles in MLOps at DataRobot and Product Engineering at Dover.
