LangChain MCP Integration provides a bridge between the Model Context Protocol (MCP) and LangChain, allowing developers to use MCP tools within their LangChain applications. It lets AI models interact with external tools and services through the standardized MCP interface while leveraging LangChain's orchestration capabilities. The library offers a simple API to initialize MCP tools and convert them into LangChain-compatible tools that can be used in chains, agents, and other LangChain constructs, making it easy to extend AI applications with file system access, web browsing, database queries, and other capabilities provided by MCP servers.
Install the package using pip:
pip install langchain-mcp
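The examples below launch the MCP filesystem server on demand with npx, so you will also need Node.js installed, and an OpenAI API key available in the OPENAI_API_KEY environment variable for the ChatOpenAI model.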
To use LangChain MCP Integration, you need to:
1. Connect to an MCP server and open an MCP ClientSession (the examples below launch the filesystem server over stdio).
2. Create an MCPToolkit with that session and call initialize() on it.
3. Call get_tools() to obtain LangChain-compatible tools and pass them to your chains or agents.
Here's a basic example:
import pathlib
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp import MCPToolkit
# Set up an MCP server (in this example, using the filesystem server)
server_params = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", str(pathlib.Path(__file__).parent)],
)
async def run_with_mcp(prompt):
    # Create an MCP client session
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the MCPToolkit
            toolkit = MCPToolkit(session=session)
            await toolkit.initialize()

            # Get the LangChain tools
            tools = toolkit.get_tools()

            # Use the tools with your LangChain setup
            # For example, with a chat model and agent
            from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
            from langchain_openai import ChatOpenAI
            from langchain.agents import AgentExecutor, create_openai_functions_agent

            model = ChatOpenAI()
            prompt_template = ChatPromptTemplate.from_messages([
                ("system", "You are a helpful assistant with access to tools."),
                ("human", "{input}"),
                # The agent needs a scratchpad placeholder for its intermediate tool calls
                MessagesPlaceholder("agent_scratchpad"),
            ])
            agent = create_openai_functions_agent(model, tools, prompt_template)
            agent_executor = AgentExecutor(agent=agent, tools=tools)

            response = await agent_executor.ainvoke({"input": prompt})
            return response["output"]
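You can then call this coroutine from a normal script. A minimal usage sketch; the prompt here is just an illustration, and any request the filesystem server's tools can answer will work:

import asyncio

result = asyncio.run(run_with_mcp("List the files in the current directory"))
print(result)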
Here's a complete example that uses the MCP filesystem server to read and summarize a file:
import pathlib
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp import MCPToolkit
from langchain_openai import ChatOpenAI
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
async def run(tools, prompt):
    model = ChatOpenAI(model="gpt-3.5-turbo")
    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant with access to tools."),
        ("human", "{input}"),
        # Required placeholder for the agent's intermediate tool calls
        MessagesPlaceholder("agent_scratchpad"),
    ])
    agent = create_openai_functions_agent(model, tools, prompt_template)
    agent_executor = AgentExecutor(agent=agent, tools=tools)
    response = await agent_executor.ainvoke({"input": prompt})
    return response["output"]
async def main():
    # Set up the MCP filesystem server
    server_params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", str(pathlib.Path(__file__).parent)],
    )

    # Create prompt
    prompt = "Read and summarize the file ./README.md"

    # Run with MCP
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            toolkit = MCPToolkit(session=session)
            await toolkit.initialize()
            response = await run(toolkit.get_tools(), prompt)
            print(response)

if __name__ == "__main__":
    asyncio.run(main())
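Because toolkit.get_tools() returns standard LangChain tools, you can also invoke a single MCP tool directly instead of going through an agent. A minimal sketch, assuming the filesystem server exposes a read_file tool that takes a path argument; tool names and schemas are defined by the MCP server, so inspect the result of toolkit.get_tools() to see what is actually available:

async def read_readme(session):
    toolkit = MCPToolkit(session=session)
    await toolkit.initialize()
    tools = toolkit.get_tools()
    # "read_file" is an assumption about this server's tool names; check the tools list to confirm
    read_file = next(t for t in tools if t.name == "read_file")
    return await read_file.ainvoke({"path": "./README.md"})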