One major problem with LangChain is that even with structured output, LLMs might not adhere to the declared types and can still hallucinate. Apart from LangChain, there are several newer agentic frameworks for building agentic applications; PydanticAI is one such Python agent framework, designed to make it less painful to build production-grade applications with Generative AI.
FastAPI revolutionized web development by offering an innovative and ergonomic design built on the foundation of Pydantic, and PydanticAI aims to bring that same feeling to GenAI app development.
Why use PydanticAI
Built by the Pydantic Team
Built by the team behind Pydantic (the validation layer of the OpenAI SDK, the Anthropic SDK, LangChain, LlamaIndex, AutoGPT, Transformers, CrewAI, Instructor and many more).
Model-agnostic
Supports OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral, and there is a simple interface to implement support for other models.
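For example, the same agent definition can target a different provider just by swapping the model identifier string. A minimal sketch, reusing the model names shown elsewhere in this article:
from pydantic_ai import Agent
# The agent logic stays the same; only the model string changes.
openai_agent = Agent('openai:gpt-4o', system_prompt='Be concise, reply with one sentence.')
gemini_agent = Agent('gemini-1.5-flash', system_prompt='Be concise, reply with one sentence.')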
Pydantic Logfire Integration
Seamlessly integrates with Pydantic Logfire for real-time debugging, performance monitoring, and behavior tracking of your LLM-powered applications.
Type-safe
Designed to make type checking as powerful and informative as possible for you.
Python-centric Design
Leverages Python's familiar control flow and agent composition to build your AI-driven projects, making it easy to apply standard Python best practices you'd use in any other (non-AI) project.
Structured Responses
Harnesses the power of Pydantic to validate and structure model outputs, ensuring responses are consistent across runs.
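As a minimal sketch (the CityLocation model here is illustrative), passing a Pydantic model as result_type makes the agent validate the LLM's reply against that schema:
from pydantic import BaseModel
from pydantic_ai import Agent

class CityLocation(BaseModel):
    city: str
    country: str

# The agent coerces and validates the model's reply into CityLocation.
agent = Agent('gemini-1.5-flash', result_type=CityLocation)
result = agent.run_sync('Where were the 2012 Summer Olympics held?')
print(result.data)  # a validated CityLocation instance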
Dependency Injection System
Offers an optional dependency injection system to provide data and services to your agent's system prompts, tools and result validators. This is useful for testing and eval-driven iterative development.
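For instance, a result validator can reject an output that fails a business rule and ask the model to retry. This is a minimal sketch, assuming the result_validator decorator and the ModelRetry exception from pydantic_ai:
from pydantic_ai import Agent, ModelRetry

agent = Agent('openai:gpt-4o', result_type=int, system_prompt='Reply with a single number.')

@agent.result_validator
async def must_be_non_negative(value: int) -> int:
    # If the structured result breaks a rule, raise ModelRetry so the model tries again.
    if value < 0:
        raise ModelRetry('The number must be non-negative, please try again.')
    return value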
Streamed Responses
Provides the ability to stream LLM outputs continuously, with immediate validation, ensuring rapid and accurate results.
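A minimal sketch of streaming, assuming agent.run_stream is used as an async context manager:
import asyncio
from pydantic_ai import Agent

agent = Agent('openai:gpt-4o', system_prompt='Be concise, reply with one sentence.')

async def stream_demo():
    # run_stream streams the response as it is produced, validating along the way.
    async with agent.run_stream('Where does "hello world" come from?') as response:
        async for text in response.stream_text():
            print(text)

asyncio.run(stream_demo())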
Graph Support
Pydantic Graph provides a powerful way to define graphs using type hints. This is useful in complex applications where standard control flow can degrade into spaghetti code.
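As a rough sketch (node and graph names here are illustrative, and the exact pydantic_graph API may differ between versions), a graph is built from dataclass nodes whose run methods return either the next node or End:
from __future__ import annotations
from dataclasses import dataclass
from pydantic_graph import BaseNode, End, Graph, GraphRunContext

@dataclass
class DivisibleBy5(BaseNode[None, None, int]):
    foo: int

    async def run(self, ctx: GraphRunContext) -> Increment | End[int]:
        # Stop when we reach a multiple of 5, otherwise hand off to Increment.
        if self.foo % 5 == 0:
            return End(self.foo)
        return Increment(self.foo)

@dataclass
class Increment(BaseNode):
    foo: int

    async def run(self, ctx: GraphRunContext) -> DivisibleBy5:
        return DivisibleBy5(self.foo + 1)

fives_graph = Graph(nodes=[DivisibleBy5, Increment])
fives_graph.run_sync(DivisibleBy5(4))  # alternates Increment -> DivisibleBy5 until End(5)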
Agentic Example
from pydantic_ai import Agent
agent = Agent(
    'gemini-1.5-flash',
    system_prompt='Be concise, reply with one sentence.',
)
result = agent.run_sync('Where does "hello world" come from?')
print(result.data)
"""
The first known use of "hello, world" was in a 1974 textbook about the C programming language.
"""
Agentic Example for Bank Support
from dataclasses import dataclass
from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from bank_database import DatabaseConn
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn

class SupportResult(BaseModel):
    support_advice: str = Field(description='Advice returned to the customer')
    block_card: bool = Field(description="Whether to block the customer's card")
    risk: int = Field(description='Risk level of query', ge=0, le=10)

support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    result_type=SupportResult,
    system_prompt=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)

@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"

@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
    """Returns the customer's current account balance."""
    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )

...

async def main():
    deps = SupportDependencies(customer_id=123, db=DatabaseConn())

    result = await support_agent.run('What is my balance?', deps=deps)
    print(result.data)
    """
    support_advice='Hello John, your current account balance, including pending transactions, is $123.45.' block_card=False risk=1
    """

    result = await support_agent.run('I just lost my card!', deps=deps)
    print(result.data)
    """
    support_advice="I'm sorry to hear that, John. We are temporarily blocking your card to prevent unauthorized transactions." block_card=True risk=8
    """
Another Example with GPT-4o
from pydantic_ai import Agent, RunContext
roulette_agent = Agent(
    'openai:gpt-4o',
    deps_type=int,
    result_type=bool,
    system_prompt=(
        'Use the `roulette_wheel` function to see if the '
        'customer has won based on the number they provide.'
    ),
)

@roulette_agent.tool
async def roulette_wheel(ctx: RunContext[int], square: int) -> str:
    """Check if the square is a winner."""
    return 'winner' if square == ctx.deps else 'loser'
# Run the agent
success_number = 18
result = roulette_agent.run_sync('Put my money on square eighteen', deps=success_number)
print(result.data)
#> True
result = roulette_agent.run_sync('I bet five is the winner', deps=success_number)
print(result.data)
#> False
Understanding the code
SupportDependencies (dataclass)
- Purpose: Defines the dependencies the agent needs, such as the customer ID and a database connection (db).
- Use: Passed to the agent so it can access customer-specific data during queries.
SupportResult
- Purpose: Specifies the structured format for the agent's output.
- Use: Passed as result_type so every response is validated into the support_advice, block_card, and risk fields.
With this, PydanticAI changes the way we think about LLM tool use and gives us the confidence to build more robust agentic workflows.