Product Updates
Announcing Native Support for Cerebras Inference in Vellum
Oct 24, 2024
4 min
Anita Kirkovska
Founding GenAI Growth

TL;DR

We're excited to announce that Vellum now has a native integration with Cerebras, the fastest AI inference solution in the world, allowing customers to run Llama 3.1 70B at 2,100 tokens per second with flexible rate limits! This sets an industry record for inference speed, and starting October 23, 2024, Vellum users can benefit from this incredible performance boost to build faster, real-time AI applications.

As a development platform that enables companies around the world to build reliable AI systems with LLMs, we know that striking the right balance between accuracy, speed, and cost is a top priority for many companies today.

But with the rise of more sophisticated AI applications, from traditional routing systems to dynamic, agent-driven workflows, fast response times are essential to handle the intricate logic involved.

Today, we’re excited to announce our native integration with Cerebras, the fastest AI inference solution available, delivering 2,100 tokens per second for the Llama 3.1 70B model using the original 16-bit weights released by Meta. We break down the speed, accuracy, and cost benefits below.

“Our customers are blown away with the results! Time to completion on Cerebras is hands down faster than any other inference provider and I’m excited to see the production applications we’ll power via the Cerebras inference platform.”

- Akash Sharma, CEO of Vellum

How the native integration works

All public models on Cerebras are now available to add to your workspace.

For example, to enable the Llama 70B model hosted on Cerebras in your workspace, you only need to get your API key from your Cerebras profile and add it as a Secret named CEREBRAS on each of the model pages.

Then, in your prompts and workflow nodes, simply select the model you just enabled.

What you get with Cerebras Inference

Cerebras Inference solves the memory bandwidth bottleneck by building the largest chip in the world and storing the entire model on-chip, without sacrificing weight precision. It currently supports only Llama 70B, and with it you get the best model in terms of speed, accuracy, and cost.
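
To see why memory bandwidth is the bottleneck, here is a back-of-the-envelope sketch (our own illustration, not Cerebras's published math). Generating one token requires streaming every model weight through the compute units, so the bandwidth you need is roughly model size times token rate:

```python
# Back-of-the-envelope: weight bandwidth needed for a target generation rate.
# Illustrative numbers only; real deployments batch requests and overlap work.

params = 70e9          # Llama 3.1 70B parameter count
bytes_per_param = 2    # original 16-bit weights
tokens_per_sec = 2100  # reported generation speed

bytes_per_token = params * bytes_per_param    # ~140 GB of weights per token
bandwidth = bytes_per_token * tokens_per_sec  # bytes that must move each second

print(f"{bytes_per_token / 1e9:.0f} GB read per token")    # 140 GB
print(f"{bandwidth / 1e12:.0f} TB/s of weight bandwidth")  # ~294 TB/s

# A single GPU's HBM delivers a few TB/s at most, which is why keeping the
# entire model in on-chip memory changes the picture.
```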

High speed

For Llama 3.1 70B, Cerebras generates instant responses at 2,100 tokens per second, which is 14x faster than any known GPU solution and 60x faster than hyperscale clouds, as measured by a third-party benchmarking organization.

The most interesting part is that Cerebras Inference serves Llama 70B more than 7x faster than GPUs serve Llama 3B; since the 70B model has roughly 23x more parameters, that compounds into an aggregate 184x advantage.

Highest accuracy

Regarding accuracy, Cerebras doesn’t reduce weight precision from 16-bit to 8-bit to overcome the memory bandwidth bottleneck. It uses the original 16-bit weights released by Meta, ensuring the most accurate and reliable model output: evaluations and third-party benchmarks show that 16-bit models can score up to 5% higher than their 8-bit counterparts.

The Llama 3.1 70B model is already climbing the ranks in fields like math, reasoning, and coding, and being able to run it 60x faster unlocks many new use cases.

Check how Llama 3.1 70B compares with other models in our LLM leaderboard.

Lowest cost

Developers can easily access the Cerebras Inference API, which is fully compatible with the OpenAI Chat Completions API, making migration seamless with just a few lines of code.
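
For example, here is a minimal sketch of that migration path, assuming the openai Python package and a CEREBRAS_API_KEY environment variable; the base URL and model id follow Cerebras's published conventions at the time of writing, so verify both against the current Cerebras docs:

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at Cerebras's OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.environ["CEREBRAS_API_KEY"],
    base_url="https://api.cerebras.ai/v1",
)

response = client.chat.completions.create(
    model="llama3.1-70b",  # model id as served by Cerebras
    messages=[{"role": "user", "content": "Why does inference speed matter for agents?"}],
)
print(response.choices[0].message.content)
```

The only changes from a stock OpenAI integration are the base URL and the model name, which is what makes the migration a few-line diff.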

Cerebras Inference offers three pricing tiers: Free, Developer, and Enterprise.

  • Free: API access with generous usage limits (1 million free tokens daily)
  • Developer: An API endpoint at a fraction of the cost of alternatives, with models priced at 10 cents and 60 cents per million tokens
  • Enterprise: Fine-tuned models, custom service-level agreements, and dedicated support

If you want to test the inference speed with Cerebras, get in touch! We provide the tooling and best practices for building and evaluating AI systems that you can trust in production.

ABOUT THE AUTHOR
Anita Kirkovska
Founding GenAI Growth

Anita Kirkovska is currently leading Growth and Content Marketing at Vellum. She is a technical marketer with an engineering background and a sharp acumen for scaling startups. She has helped SaaS startups scale and had a successful exit from an ML company. Anita writes extensively on generative AI to educate business founders on best practices in the field.
