pydantic_ai.models
Logic related to making requests to an LLM.
The aim here is to make a common interface for different LLMs, so that the rest of the code can be agnostic to the specific LLM being used.
KnownModelName
module-attribute
KnownModelName = Literal[
"openai:gpt-4o",
"openai:gpt-4o-mini",
"openai:gpt-4-turbo",
"openai:gpt-4",
"openai:o1-preview",
"openai:o1-mini",
"openai:o1",
"openai:gpt-3.5-turbo",
"groq:llama-3.3-70b-versatile",
"groq:llama-3.1-70b-versatile",
"groq:llama3-groq-70b-8192-tool-use-preview",
"groq:llama3-groq-8b-8192-tool-use-preview",
"groq:llama-3.1-70b-specdec",
"groq:llama-3.1-8b-instant",
"groq:llama-3.2-1b-preview",
"groq:llama-3.2-3b-preview",
"groq:llama-3.2-11b-vision-preview",
"groq:llama-3.2-90b-vision-preview",
"groq:llama3-70b-8192",
"groq:llama3-8b-8192",
"groq:mixtral-8x7b-32768",
"groq:gemma2-9b-it",
"groq:gemma-7b-it",
"google-gla:gemini-1.5-flash",
"google-gla:gemini-1.5-pro",
"google-gla:gemini-2.0-flash-exp",
"google-vertex:gemini-1.5-flash",
"google-vertex:gemini-1.5-pro",
"google-vertex:gemini-2.0-flash-exp",
"mistral:mistral-small-latest",
"mistral:mistral-large-latest",
"mistral:codestral-latest",
"mistral:mistral-moderation-latest",
"ollama:codellama",
"ollama:gemma",
"ollama:gemma2",
"ollama:llama3",
"ollama:llama3.1",
"ollama:llama3.2",
"ollama:llama3.2-vision",
"ollama:llama3.3",
"ollama:mistral",
"ollama:mistral-nemo",
"ollama:mixtral",
"ollama:phi3",
"ollama:phi4",
"ollama:qwq",
"ollama:qwen",
"ollama:qwen2",
"ollama:qwen2.5",
"ollama:starcoder2",
"anthropic:claude-3-5-haiku-latest",
"anthropic:claude-3-5-sonnet-latest",
"anthropic:claude-3-opus-latest",
"test",
]
Known model names that can be used with the model parameter of Agent.
KnownModelName is provided as a concise way to specify a model.
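Each known model name pairs a provider prefix with a model identifier, separated by a colon (bare names like "test" have no prefix). A minimal sketch of how such a name can be split; the function name and return shape are illustrative, not pydantic-ai's actual API:

```python
# Sketch: splitting a provider-prefixed model name such as "openai:gpt-4o".
# split_model_name is a hypothetical helper, not part of pydantic-ai.
def split_model_name(name: str) -> tuple[str, str]:
    """Return (provider, model) for a name like "groq:llama-3.1-8b-instant"."""
    if ":" not in name:
        # Bare names such as "test" have no provider prefix.
        return "", name
    provider, _, model = name.partition(":")
    return provider, model

print(split_model_name("openai:gpt-4o"))  # ('openai', 'gpt-4o')
print(split_model_name("test"))           # ('', 'test')
```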
Model
Bases: ABC
Abstract class for a model.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
agent_model
abstractmethod
async
agent_model(
*,
function_tools: list[ToolDefinition],
allow_text_result: bool,
result_tools: list[ToolDefinition]
) -> AgentModel
Create an agent model; this is called for each step of an agent run.
This is async in case slow/async config checks need to be performed that can't be done in __init__.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
function_tools | list[ToolDefinition] | The tools available to the agent. | required |
allow_text_result | bool | Whether a plain text final response/result is permitted. | required |
result_tools | list[ToolDefinition] | Tool definitions for the final result tool(s), if any. | required |
Returns:
Type | Description |
---|---|
AgentModel | An agent model. |
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
AgentModel
Bases: ABC
Model configured for each step of an Agent run.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
request
abstractmethod
async
request(
messages: list[ModelMessage],
model_settings: ModelSettings | None,
) -> tuple[ModelResponse, Usage]
Make a request to the model.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
request_stream
async
request_stream(
messages: list[ModelMessage],
model_settings: ModelSettings | None,
) -> AsyncIterator[StreamedResponse]
Make a request to the model and return a streaming response.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
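The two-stage design described above, a provider-level Model that is specialised into an AgentModel for each run step, can be sketched with plain abc classes. The concrete EchoModel classes and their echoing behaviour are invented for illustration and are not part of pydantic-ai:

```python
import asyncio
from abc import ABC, abstractmethod

class Model(ABC):
    """Provider-agnostic entry point, mirroring the ABC described above."""
    @abstractmethod
    async def agent_model(self, *, allow_text_result: bool) -> "AgentModel": ...

class AgentModel(ABC):
    """Model configured for one step of an agent run."""
    @abstractmethod
    async def request(self, messages: list[str]) -> str: ...

class EchoModel(Model):
    """Toy concrete model; agent_model is async so slow config
    checks could happen here rather than in __init__."""
    async def agent_model(self, *, allow_text_result: bool) -> "AgentModel":
        return EchoAgentModel()

class EchoAgentModel(AgentModel):
    async def request(self, messages: list[str]) -> str:
        # A real implementation would call the provider's API here.
        return messages[-1]

async def main() -> str:
    step_model = await EchoModel().agent_model(allow_text_result=True)
    return await step_model.request(["hello"])

print(asyncio.run(main()))  # hello
```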
StreamedResponse
dataclass
Bases: ABC
Streamed response from an LLM when calling a tool.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
__aiter__
__aiter__() -> AsyncIterator[ModelResponseStreamEvent]
Stream the response as an async iterable of ModelResponseStreamEvent events.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
get
get() -> ModelResponse
Build a ModelResponse from the data received from the stream so far.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
usage
usage() -> Usage
Get the usage of the response so far. This will not be the final usage until the stream is exhausted.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
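The get/usage pair above can be illustrated with a toy stream: usage grows as chunks are consumed, so usage() only reflects the final totals once the stream is exhausted. FakeStreamedResponse and its one-token-per-chunk counting are a simplified stand-in, not pydantic-ai's implementation:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Usage:
    """Simplified stand-in for pydantic-ai's Usage counters."""
    response_tokens: int = 0

@dataclass
class FakeStreamedResponse:
    """Toy stream: get() and usage() reflect only what has arrived so far."""
    chunks: list[str]
    _usage: Usage = field(default_factory=Usage)
    _parts: list[str] = field(default_factory=list)

    async def __aiter__(self):
        for chunk in self.chunks:
            self._parts.append(chunk)
            self._usage.response_tokens += 1  # pretend each chunk is one token
            yield chunk

    def get(self) -> str:
        # Build the response from the data received so far.
        return "".join(self._parts)

    def usage(self) -> Usage:
        # Not final until the stream is exhausted.
        return self._usage

async def main() -> tuple[str, int]:
    resp = FakeStreamedResponse(["Hel", "lo"])
    async for _ in resp:
        pass
    return resp.get(), resp.usage().response_tokens

print(asyncio.run(main()))  # ('Hello', 2)
```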
timestamp
abstractmethod
timestamp() -> datetime
Get the timestamp of the response.
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
ALLOW_MODEL_REQUESTS
module-attribute
ALLOW_MODEL_REQUESTS = True
Whether to allow requests to models.
This global setting allows you to disable requests to most models, e.g. to make sure you don't accidentally make costly requests to a model during tests.
The testing models TestModel and FunctionModel are not affected by this setting.
check_allow_model_requests
check_allow_model_requests() -> None
Check if model requests are allowed.
If you're defining your own models that have costs or latency associated with their use, you should call this in Model.agent_model.
Raises:
Type | Description |
---|---|
RuntimeError | If model requests are not allowed. |
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py
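The guard described above amounts to checking the flag and raising when it is off. This sketch uses a local module-level flag to show the shape; the real implementation lives in pydantic_ai_slim/pydantic_ai/models/__init__.py:

```python
# Sketch of the check_allow_model_requests guard; the flag here is a local
# stand-in for pydantic-ai's module-level ALLOW_MODEL_REQUESTS.
ALLOW_MODEL_REQUESTS = True

def check_allow_model_requests() -> None:
    """Raise RuntimeError when model requests are globally disabled."""
    if not ALLOW_MODEL_REQUESTS:
        raise RuntimeError("Model requests are not allowed")

check_allow_model_requests()  # no error while the flag is True
```

A custom Model implementation would call this at the start of its agent_model method, before doing any work that could incur cost.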
override_allow_model_requests
Context manager to temporarily override ALLOW_MODEL_REQUESTS.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
allow_model_requests | bool | Whether to allow model requests within the context. | required |
Source code in pydantic_ai_slim/pydantic_ai/models/__init__.py