- `OpenAIAugmentedLLM`: interface for OpenAI models with enhanced capabilities like structured output generation
- `AnthropicAugmentedLLM`: interface for Anthropic Claude models with enhanced capabilities
- `GoogleAugmentedLLM`: interface for Google Gemini models with enhanced capabilities
- `Agent`: core agent implementation that can execute tasks using an AugmentedLLM
- `Workflow`: defines a sequence of steps to be executed by agents
- `Step`: individual step in a workflow with its own agent and execution logic
- `Orchestrator`: coordinates multiple agents to solve complex tasks
- `MCPServer`: HTTP server implementation compatible with the Model Context Protocol
MCP Agent is a Python framework for building effective AI agents using the Model Context Protocol (MCP). It provides a set of composable patterns and tools that simplify the creation of complex AI agents with capabilities like structured output generation, workflow orchestration, and integration with various model providers. The framework enables developers to create agents that can reason, plan, and execute tasks with improved reliability and transparency.
You can install MCP Agent using pip:
```bash
pip install mcp-agent
```
MCP Agent allows you to build agents using different patterns:
AugmentedLLM is a core component that provides a unified interface to different LLM providers with enhanced capabilities:
```python
from mcp_agent import OpenAIAugmentedLLM

# Initialize an AugmentedLLM
llm = OpenAIAugmentedLLM(model="gpt-4o")

# Generate text
response = llm.generate("Explain quantum computing in simple terms")

# Generate structured output
structured_response = llm.generate_structured(
    "List the top 3 programming languages",
    schema={
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "popularity": {"type": "string"},
            },
        },
    },
)
```
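Assuming `generate_structured` returns a parsed Python object matching the schema (it may instead return raw JSON text; check the return type in your version), the result can be used like any list of dicts. The sample values below are illustrative stand-ins for a model response:

```python
# Stand-in for what generate_structured might return
# (illustrative values; actual content depends on the model).
structured_response = [
    {"name": "Python", "popularity": "very high"},
    {"name": "JavaScript", "popularity": "very high"},
    {"name": "Rust", "popularity": "growing"},
]

# Because the schema constrains the shape, each item is a dict
# with known keys, so downstream code can index into it directly.
names = [language["name"] for language in structured_response]
print(names)
```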
MCP Agent provides workflow patterns for creating more complex agents:
```python
from mcp_agent import Agent, Workflow, Step, OpenAIAugmentedLLM

# Define steps in your workflow
research_step = Step(
    name="research",
    description="Research information about a topic",
    agent=OpenAIAugmentedLLM(model="gpt-4o"),
)

summarize_step = Step(
    name="summarize",
    description="Summarize the research findings",
    agent=OpenAIAugmentedLLM(model="gpt-4o"),
)

# Create a workflow
research_workflow = Workflow(
    name="research_workflow",
    description="Research and summarize a topic",
    steps=[research_step, summarize_step],
)

# Execute the workflow
result = research_workflow.execute("Quantum computing")
```
For more complex agent behaviors, you can use the Orchestrator:
```python
from mcp_agent import Orchestrator, Agent, OpenAIAugmentedLLM

# Create an orchestrator with multiple agents
orchestrator = Orchestrator(
    agents={
        "researcher": Agent(llm=OpenAIAugmentedLLM(model="gpt-4o")),
        "writer": Agent(llm=OpenAIAugmentedLLM(model="gpt-4o")),
    }
)

# Execute a task with the orchestrator
result = orchestrator.execute(
    "Research quantum computing and write a blog post about it",
    agent_selection_strategy="auto",
)
```
MCP Agent supports multiple model providers:
You can select the appropriate AugmentedLLM implementation based on your preferred provider:
```python
from mcp_agent import OpenAIAugmentedLLM, AnthropicAugmentedLLM, GoogleAugmentedLLM

openai_llm = OpenAIAugmentedLLM(model="gpt-4o")
anthropic_llm = AnthropicAugmentedLLM(model="claude-3-opus-20240229")
google_llm = GoogleAugmentedLLM(model="gemini-1.5-pro")
```
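Since all three classes share the AugmentedLLM interface, it can be convenient to keep provider selection in one place. The `DEFAULTS` mapping and `resolve_provider` helper below are illustrative, not part of mcp_agent; they simply mirror the class names and models shown above:

```python
# Illustrative helper (not part of mcp_agent): map a provider name
# to its AugmentedLLM class name and a default model, mirroring the
# examples above.
DEFAULTS = {
    "openai": ("OpenAIAugmentedLLM", "gpt-4o"),
    "anthropic": ("AnthropicAugmentedLLM", "claude-3-opus-20240229"),
    "google": ("GoogleAugmentedLLM", "gemini-1.5-pro"),
}

def resolve_provider(name):
    """Return the (class name, default model) pair for a provider."""
    try:
        return DEFAULTS[name.lower()]
    except KeyError:
        raise ValueError(f"unknown provider: {name!r}") from None

print(resolve_provider("anthropic"))
```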
MCP Agent supports streaming responses for real-time interaction:
```python
for chunk in llm.generate_stream("Explain quantum computing step by step"):
    print(chunk, end="", flush=True)
```
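A common pattern is to accumulate the streamed chunks while printing so the full text is available afterwards. The `fake_stream` generator below is a stand-in for `llm.generate_stream(...)`, so the sketch runs without a provider:

```python
def fake_stream():
    # Stand-in for llm.generate_stream(...); yields text fragments
    # the way a streaming API delivers them.
    for part in ["Quantum ", "computing ", "uses ", "qubits."]:
        yield part

chunks = []
for chunk in fake_stream():
    print(chunk, end="", flush=True)
    chunks.append(chunk)  # keep each piece for later use

# Reassemble the complete response once the stream ends
full_text = "".join(chunks)
```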
Generate JSON or other structured outputs with schema validation:
```python
result = llm.generate_structured(
    "Extract entities from this text: John met Sarah at Starbucks on Tuesday.",
    schema={
        "type": "object",
        "properties": {
            "people": {"type": "array", "items": {"type": "string"}},
            "places": {"type": "array", "items": {"type": "string"}},
            "time": {"type": "string"},
        },
    },
)
```
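Even with schema validation on the provider side, it is worth defensively checking structured output before passing it downstream. A minimal sketch, assuming the call returns a plain dict; `sample_result` and `check_entities` are illustrative, not part of mcp_agent:

```python
# Illustrative stand-in for the dict generate_structured might return.
sample_result = {
    "people": ["John", "Sarah"],
    "places": ["Starbucks"],
    "time": "Tuesday",
}

def check_entities(result):
    """Verify the result has the keys and types the schema promises."""
    for key in ("people", "places"):
        if not isinstance(result.get(key), list):
            raise TypeError(f"{key!r} should be a list of strings")
    if not isinstance(result.get("time"), str):
        raise TypeError("'time' should be a string")
    return result

checked = check_entities(sample_result)
```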
MCP Agent can be used to create MCP-compatible HTTP servers:
```python
from mcp_agent.server import MCPServer
from mcp_agent import OpenAIAugmentedLLM

# Create a server with an AugmentedLLM
server = MCPServer(llm=OpenAIAugmentedLLM(model="gpt-4o"))

# Start the server
server.start(host="0.0.0.0", port=8000)
```
For more detailed examples, check out the examples directory in the GitHub repository, which includes: