Product Updates
Our thoughts on working with Google's LLM: PaLM
May 10, 2023
Akash Sharma
Co-founder & CEO
Earlier today, Vellum was announced at Google I/O as an integration partner for their PaLM API, and we're thrilled to bring this new model to production use cases through our platform. If you have access to PaLM, you can use our Playground to compare PaLM side-by-side with models like OpenAI's GPT-4, Anthropic's Claude, and even open source models like Dolly from Databricks.

With an ever-increasing number of foundation model providers, it's difficult to choose the best prompt/model combination for your use case. One of the challenges here is measuring model quality, a topic we've written about in a prior blog post here. When choosing a model for your use case, our first recommendation is to find one that clears your quality threshold after extensive unit testing. If multiple models clear that threshold, choose between them based on other criteria like latency, cost, and privacy.
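The selection logic above can be sketched in a few lines. This is an illustrative example, not Vellum's actual implementation; the model names, scores, latencies, and costs are all made up:

```python
# Hypothetical model-selection sketch: keep only models that clear a
# quality threshold from unit testing, then break ties on latency and cost.
# All names and numbers below are illustrative, not real benchmarks.

QUALITY_THRESHOLD = 0.85

candidates = [
    # (model, quality score from unit tests, p50 latency in s, $ per 1K tokens)
    ("model-a", 0.91, 1.2, 0.030),
    ("model-b", 0.88, 0.6, 0.002),
    ("model-c", 0.79, 0.4, 0.001),
]

# Step 1: filter to models that clear the quality bar.
passing = [m for m in candidates if m[1] >= QUALITY_THRESHOLD]

# Step 2: among passing models, prefer lower latency, then lower cost.
best = min(passing, key=lambda m: (m[2], m[3]))
print(best[0])  # -> model-b
```

The key point is the ordering: quality acts as a hard filter first, and only then do secondary criteria like latency and cost come into play.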

In this article we’ll share how we experimented with PaLM and where it did better than other model providers.

I’ve heard of Bard, so what is PaLM?

You can learn a lot more about Google’s AI offerings on their website, but in summary: Bard is the consumer application that Google is creating (similar to ChatGPT), while PaLM is a series of large language models similar to OpenAI’s GPT models or Anthropic’s Claude models.

PaLM also has an embedding model that can be used instead of OpenAI’s Ada or open source models like Instructor.
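Whichever embedding model you pick, downstream usage typically comes down to comparing vectors with cosine similarity. Here's a minimal sketch; the `embed` function is a deterministic stand-in for a real provider API call (PaLM, Ada, Instructor, etc.) so the example runs without network access:

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding API call. Real embeddings are learned;
    # here we fake a tiny unit vector from character codes purely so this
    # sketch is runnable offline.
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Both vectors are already unit-length, so the dot product suffices.
    return sum(x * y for x, y in zip(a, b))

sim = cosine_similarity(embed("my laptop won't turn on"),
                        embed("laptop will not power on"))
print(sim)  # values closer to 1.0 indicate more similar texts
```

With a real embedding model, you'd swap the body of `embed` for the provider's API call and keep the comparison logic unchanged.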

How we used PaLM and what we learned

We’ve been doing side-by-side comparisons between OpenAI, Anthropic, and Google (PaLM), and after sufficient prompt engineering to get good quality, we found PaLM to really shine in how quickly and accurately it gave responses. This is particularly true for chain-of-thought / reasoning-related prompts. Let’s talk through an example.

We're creating an escalation classifier for incoming support messages for a computer repair shop. Usually front-line support representatives escalate messages to their manager if the customer is unhappy or angry. We're having the escalation classifier perform the same task. This is how the prompt is constructed:

  1. Give the LLM the 8 criteria which would result in escalation (e.g., customer is asking to speak to the manager, customer is upset, customer is repeating themselves etc.)
  2. Ask the LLM to take the incoming message and check if it meets any of the criteria
  3. In the final response, return which criteria were met and a true/false for whether the message should be escalated
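The three steps above can be sketched as a simple prompt builder. This is an illustrative reconstruction; the criteria list and wording are examples, not the actual prompt used in the comparison:

```python
# Illustrative sketch of the escalation-classifier prompt described above.
# The criteria below are examples; a real classifier would list all 8.
ESCALATION_CRITERIA = [
    "Customer is asking to speak to the manager",
    "Customer is upset or angry",
    "Customer is repeating themselves",
    # ...remaining criteria for the repair shop would go here
]

def build_prompt(message: str) -> str:
    # Step 1: enumerate the escalation criteria for the LLM.
    criteria = "\n".join(
        f"{i}. {c}" for i, c in enumerate(ESCALATION_CRITERIA, start=1)
    )
    # Steps 2 and 3: ask the LLM to check the message against the criteria
    # and return which were met plus a true/false escalation decision.
    return (
        "You are an escalation classifier for a computer repair shop.\n"
        "A message should be escalated if it meets ANY of these criteria:\n"
        f"{criteria}\n\n"
        f"Customer message: {message}\n\n"
        "List the numbers of any criteria that were met, then answer with "
        "escalate=true or escalate=false."
    )

print(build_prompt("I've asked three times already. Get me your manager!"))
```

The prompt text would then be sent to each model under comparison, so responses can be checked for both the cited criteria and the final true/false decision.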

In Vellum's Playground, you can clearly see that PaLM's responses were more accurate and noticeably faster than those of other model providers. Here’s a video to bring it to life:

Want to compare these models yourself?

Sign up for a 7-day free trial of our platform here and use our Playground for side-by-side model comparison. For any questions or feedback, please reach out at founders@vellum.ai.

ABOUT THE AUTHOR
Akash Sharma
Co-founder & CEO

Akash Sharma, CEO and co-founder of Vellum (YC W23), is enabling developers to easily start, develop, and evaluate LLM-powered apps. By talking to over 1,500 people at varying stages of maturity in using LLMs in production, he has acquired a unique understanding of the landscape, and is actively sharing his learnings with the broader LLM community. Before starting Vellum, Akash completed his undergrad at the University of California, Berkeley, then spent 5 years at McKinsey's Silicon Valley office.
