LLM Parameter Guide
Large Language Models expose a set of parameters that let you control how they generate text. Understanding these controls is essential for building reliable AI-powered applications.
This guide covers the most important parameters across providers like OpenAI, Anthropic, and Google, with practical guidance on when and how to use each one.
What are LLM parameters?
LLM parameters are configuration options you pass alongside your prompt to shape the model's output. They control properties like randomness, length, and format. Tuning them correctly can mean the difference between a chatbot that rambles inconsistently and one that gives concise, reliable answers.
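To make "passed alongside your prompt" concrete, here is a minimal sketch of what a chat-style request body might look like. The field names mirror common provider APIs (OpenAI's Chat Completions uses this shape); exact names and the model identifier vary by provider and are illustrative here, not a definitive spec.

```python
# Illustrative request payload: the prompt travels in "messages",
# and the parameters ride alongside it as sibling fields.
request = {
    "model": "gpt-4o-mini",  # example model name; substitute your provider's
    "messages": [
        {"role": "user", "content": "Summarize this article in two sentences."}
    ],
    "temperature": 0.2,   # low randomness, suited to factual tasks
    "max_tokens": 150,    # hard cap on output length
    "stop": ["\n\n"],     # stop generating at the first blank line
}
```

The same pattern holds across providers: the prompt and the parameters are submitted together in one request, and the parameters apply only to that generation.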
Some parameters, like temperature and top-p, affect the probability distribution the model samples from. Others, like max tokens and stop sequences, set hard limits on the output. And newer controls like structured outputs and function calling constrain the shape of the response to make it machine-readable.
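To show how the first category works, here is a small sketch of temperature applied to a toy distribution. This is not any provider's actual implementation, just the standard softmax-with-temperature formula: logits are divided by the temperature before normalization, so low temperatures sharpen the distribution and high temperatures flatten it.

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature, then softmax into probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # toy next-token scores
cold = apply_temperature(logits, 0.5)    # sharper: top token dominates
hot = apply_temperature(logits, 2.0)     # flatter: more diverse sampling
```

With a low temperature the model almost always picks its top-ranked token, which is why low settings are recommended for deterministic tasks like extraction, while higher settings suit brainstorming and creative writing.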
Learn how to use LLM parameters
Each guide in this collection explains what a parameter does, which providers support it, and the practical trade-offs to consider. Pick any parameter to get started, or jump straight to a specific topic.