Prerequisites
Before you start this tutorial, ensure you have the following:
- An Anthropic API key
1. Install dependencies
If you haven’t already, install LangGraph and LangChain. LangChain is installed so the agent can call the model.
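A typical install command might look like this (a sketch; pin versions as needed for your project):

```shell
pip install -U langgraph "langchain[anthropic]"
```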
2. Create an agent
To create an agent, use `create_react_agent`:
- Define a tool for the agent to use. Tools can be defined as vanilla Python functions. For more advanced tool usage and customization, check the tools page.
- Provide a language model for the agent to use. To learn more about configuring language models for agents, check the models page.
- Provide a list of tools for the model to use.
- Provide a system prompt (instructions) to the language model used by the agent.
3. Configure an LLM
To configure an LLM with specific parameters, such as temperature, use `init_chat_model`.

4. Add a custom prompt
Prompts instruct the LLM how to behave. Add one of the following types of prompts:
- Static: A string is interpreted as a system message.
- Dynamic: A list of messages generated at runtime, based on input or configuration.
For a static prompt, define a fixed prompt string or list of messages. For a dynamic prompt, define a function that returns a list of messages based on the agent's runtime state or configuration.
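The two styles might be sketched as follows; the dynamic prompt is a plain function that receives the agent state and the run config (the `user_name` key is an illustrative configurable value):

```python
# Static prompt: a fixed string used as the system message.
static_prompt = "Never answer questions about politics."


# Dynamic prompt: a function that builds the message list at runtime
# from the agent state and the run configuration.
def dynamic_prompt(state: dict, config: dict) -> list:
    user_name = config["configurable"].get("user_name", "there")
    system_msg = f"You are a helpful assistant. Address the user as {user_name}."
    return [{"role": "system", "content": system_msg}] + state["messages"]


# Either value can be passed to create_react_agent via its `prompt` parameter,
# e.g.: agent = create_react_agent(model=..., tools=[...], prompt=dynamic_prompt)
```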
5. Add memory
To allow multi-turn conversations with an agent, you need to enable persistence by providing a checkpointer when creating an agent. At runtime, you need to provide a config containing `thread_id`, a unique identifier for the conversation (session):
- `checkpointer` allows the agent to store its state at every step in the tool calling loop. This enables short-term memory and human-in-the-loop capabilities.
- Pass configuration with `thread_id` to be able to resume the same conversation on future agent invocations.

This tutorial uses an in-memory checkpointer (`InMemorySaver`).
Note that in the above example, when the agent is invoked a second time with the same `thread_id`, the original message history from the first conversation is automatically included, together with the new user input.
For more information, see Memory.
6. Configure structured output
To produce structured responses conforming to a schema, use the `response_format` parameter. The schema can be defined with a Pydantic model or a TypedDict. The result will be accessible via the `structured_response` field.
- When `response_format` is provided, a separate step is added at the end of the agent loop: the agent message history is passed to an LLM with structured output to generate a structured response. To provide a system prompt to this LLM, use a tuple `(prompt, schema)`, e.g., `response_format=(prompt, WeatherResponse)`.
LLM post-processing
Structured output requires an additional call to the LLM to format the response according to the schema.