Process documents with OCR, test different search strategies, and orchestrate multiple agents to work together to answer questions.
Search for relevant context across multiple Document Indexes in parallel to reduce latency.
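As a minimal sketch of that fan-out, assuming a hypothetical `search_index` helper that wraps whatever search call your Document Index exposes, you could run the same query against several indexes concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical helper: wraps your Document Index search call and
# returns the matching chunks of text for a query.
def search_index(index_name: str, query: str) -> list[str]:
    ...  # call your vector / hybrid search here

def parallel_search(index_names: list[str], query: str) -> dict[str, list[str]]:
    """Run the same query against several Document Indexes at once."""
    with ThreadPoolExecutor(max_workers=len(index_names)) as pool:
        futures = {name: pool.submit(search_index, name, query) for name in index_names}
        return {name: future.result() for name, future in futures.items()}

# Index names here are purely illustrative.
results = parallel_search(["contracts", "support-tickets"], "What is our refund policy?")
```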
The user submits a question.
We feed the context from each Search result to a different Support Agent and have each one answer the user's question as well as possible.
We then let a Supervisor Agent see the user's original question and each Support Agent's response. The Supervisor picks the best response, if any.
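Here is a minimal sketch of that fan-out-and-judge pattern, assuming a hypothetical `llm` helper that sends a prompt to your model of choice and returns its text reply:

```python
# Hypothetical helper: sends a prompt to your LLM of choice and returns its reply.
def llm(prompt: str) -> str:
    ...  # call your model provider here

def answer_with_supervisor(question: str, search_results: list[str]) -> str:
    """Fan the question out to one Support Agent per search result,
    then let a Supervisor Agent pick the best answer (or none)."""
    # Each Support Agent answers using only its own slice of context.
    candidate_answers = [
        llm(f"Context:\n{context}\n\nAnswer the question: {question}")
        for context in search_results
    ]

    # The Supervisor sees the original question plus every candidate answer.
    numbered = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(candidate_answers))
    return llm(
        f"Question: {question}\n\nCandidate answers:\n{numbered}\n\n"
        "Pick the best answer, or say that none of them is good enough."
    )
```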
To pass inputs into an API Node, you can use URL parameters or a JSON body. URL parameters are defined directly in the API Node's URL field, while a JSON body is specified in the Node's body field.
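For intuition, this is the plain-HTTP equivalent of what an API Node sends, sketched with Python's `requests` library and placeholder URLs and field names:

```python
import requests

# URL parameters: appended to the URL as ?user_id=123&include=history
response = requests.get(
    "https://api.example.com/customers",  # placeholder URL
    params={"user_id": "123", "include": "history"},
)

# JSON body: sent as the request payload instead of in the URL
response = requests.post(
    "https://api.example.com/tickets",  # placeholder URL
    json={"subject": "Refund request", "priority": "high"},
)
print(response.status_code)
```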
For more detailed instructions, you can refer to the [Node Types Help Doc](https://docs.vellum.ai/help/node-types).
1/ Add additional data sources
2/ Use advanced document chunking to process complex PDFs with images, charts, spreadsheets, etc.
3/ Add routing → escalate to humans when no good answer is available (sketched below)
4/ Add out-of-the-box metrics to evaluate the quality of your RAG pipeline
5/ Add Tools so your LLMs can perform actions on behalf of your users
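As a minimal sketch of the routing idea in item 3, assuming the Supervisor signals "no good answer" with a known marker phrase; both the marker and the `escalate_to_human` helper are hypothetical:

```python
# Hypothetical helper: hand the question off to a human, e.g. by creating
# a ticket in your helpdesk of choice.
def escalate_to_human(question: str) -> None:
    ...

# Assumed marker phrase the Supervisor uses when no candidate answer is good enough.
NO_ANSWER_MARKER = "none of them is good enough"

def route(question: str, supervisor_reply: str) -> str:
    """Return the Supervisor's answer, or escalate when it found no good one."""
    if NO_ANSWER_MARKER in supervisor_reply.lower():
        escalate_to_human(question)
        return "We've passed your question along to a support specialist."
    return supervisor_reply
```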
AI development doesn’t end once you've defined your system. Learn how Vellum helps you manage the entire AI development lifecycle.