Initialize a model
Use `init_chat_model` to initialize models:
- OpenAI
- Anthropic
- Azure
- Google Gemini
- AWS Bedrock
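For example, a minimal sketch using the OpenAI and Anthropic variants (assuming the `langchain-openai` and `langchain-anthropic` packages are installed and API keys are set; model names are illustrative):

```python
from langchain.chat_models import init_chat_model

# The "provider:model" string tells init_chat_model which integration to use.
gpt = init_chat_model("openai:gpt-4o", temperature=0)
claude = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)

response = gpt.invoke("Why do parrots talk?")
```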
Instantiate a model directly
If a model provider is not available via `init_chat_model`, you can instantiate the provider’s model class directly. The model must implement the `BaseChatModel` interface and support tool calling:
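For example, a sketch that instantiates `ChatAnthropic` directly (assuming `langchain-anthropic` is installed; parameter values are illustrative):

```python
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(
    model="claude-3-5-sonnet-latest",
    temperature=0,
    max_tokens=2048,
)
```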
Tool calling support
If you are building an agent or workflow that requires the model to call external tools, ensure that the underlying
language model supports tool calling. Compatible models can be found in the LangChain integrations directory.
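As a rough check, a tool-calling-capable model lets you bind tools via `bind_tools`; the `multiply` tool below is a hypothetical example:

```python
from langchain.chat_models import init_chat_model

def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

model = init_chat_model("openai:gpt-4o")
# Raises NotImplementedError for models that do not support tool calling.
model_with_tools = model.bind_tools([multiply])
```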
Use in an agent
When using `create_react_agent`, you can specify the model by its name string, which is shorthand for initializing the model using `init_chat_model`. This allows you to use the model without needing to import or instantiate it directly:
- model name
- model instance
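A sketch of both forms (the `get_weather` tool is a hypothetical placeholder):

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Return the weather for a given city."""
    return f"It's always sunny in {city}!"

# Model name string -- shorthand for init_chat_model(...)
agent = create_react_agent("anthropic:claude-3-5-sonnet-latest", tools=[get_weather])

# Equivalent, passing a model instance directly
model = init_chat_model("anthropic:claude-3-5-sonnet-latest", temperature=0)
agent = create_react_agent(model, tools=[get_weather])
```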
Dynamic model selection
Pass a callable function to `create_react_agent` to dynamically select the model at runtime. This is useful when you want to choose a model based on user input, configuration settings, or other runtime conditions.
The selector function must return a chat model. If you’re using tools, you must bind the tools to the model within the selector function, as in the sketch below.
New in LangGraph v0.6
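A minimal sketch, assuming the v0.6-style selector signature of `(state, runtime)` and reusing a hypothetical `get_weather` tool; the routing rule is illustrative:

```python
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.runtime import Runtime

def get_weather(city: str) -> str:
    """Return the weather for a given city."""
    return f"It's always sunny in {city}!"

tools = [get_weather]

def select_model(state: AgentState, runtime: Runtime):
    # Illustrative rule: switch to a larger model once the conversation grows.
    name = (
        "anthropic:claude-3-5-sonnet-latest"
        if len(state["messages"]) > 10
        else "openai:gpt-4o-mini"
    )
    # When using tools, bind them to the model inside the selector.
    return init_chat_model(name).bind_tools(tools)

agent = create_react_agent(select_model, tools=tools)
```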
Advanced model configuration
Disable streaming
To disable streaming of individual LLM tokens, set `disable_streaming=True` when initializing the model:
- init_chat_model
- ChatModel
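Sketches of both variants (model names are illustrative):

```python
from langchain.chat_models import init_chat_model

model = init_chat_model("anthropic:claude-3-5-sonnet-latest", disable_streaming=True)
```

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o", disable_streaming=True)
```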
For more information, see the API reference for `disable_streaming`.
Add model fallbacks
You can add a fallback to a different model or a different LLM provider using `model.with_fallbacks([...])`:
- init_chat_model
- ChatModel
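A sketch (the model choices are illustrative):

```python
from langchain.chat_models import init_chat_model

primary = init_chat_model("anthropic:claude-3-5-sonnet-latest")
fallback = init_chat_model("openai:gpt-4o")

# Tries `primary` first; falls back to `fallback` if the call errors.
model = primary.with_fallbacks([fallback])
```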
Use the built-in rate limiter
LangChain includes a built-in in-memory rate limiter. This rate limiter is thread-safe and can be shared by multiple threads in the same process.
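A sketch using `InMemoryRateLimiter` (the parameter values are illustrative):

```python
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_anthropic import ChatAnthropic

rate_limiter = InMemoryRateLimiter(
    requests_per_second=0.1,    # at most one request every 10 seconds
    check_every_n_seconds=0.1,  # how often to poll for an available token
    max_bucket_size=10,         # maximum burst size
)

model = ChatAnthropic(model="claude-3-5-sonnet-latest", rate_limiter=rate_limiter)
```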
Bring your own model
If your desired LLM isn’t officially supported by LangChain, consider these options:
- Implement a custom LangChain chat model: create a model conforming to the LangChain chat model interface. This enables full compatibility with LangGraph’s agents and workflows but requires understanding of the LangChain framework.
- Direct invocation with custom streaming: use your model directly and add custom streaming logic with `StreamWriter`. Refer to the custom streaming documentation for guidance. This approach suits custom workflows where prebuilt agent integration is not necessary; a rough sketch follows below.
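A rough sketch of the second option, where `my_custom_llm` stands in for a hypothetical client for an unsupported model:

```python
from langgraph.config import get_stream_writer
from langgraph.graph import START, MessagesState, StateGraph

def call_model(state: MessagesState):
    writer = get_stream_writer()
    chunks = []
    for token in my_custom_llm.stream(state["messages"]):  # hypothetical client API
        writer(token)  # surfaced to consumers streaming with stream_mode="custom"
        chunks.append(token)
    return {"messages": [("ai", "".join(chunks))]}

graph = (
    StateGraph(MessagesState)
    .add_node("call_model", call_model)
    .add_edge(START, "call_model")
    .compile()
)

for chunk in graph.stream({"messages": [("user", "hi")]}, stream_mode="custom"):
    print(chunk)
```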