Agent MCP Server

Developer Tools · Python
Build effective agents using Model Context Protocol and simple workflow patterns
Available Tools

OpenAIAugmentedLLM

Interface for OpenAI models with enhanced capabilities like structured output generation

Parameters: model, api_key, organization

AnthropicAugmentedLLM

Interface for Anthropic Claude models with enhanced capabilities

Parameters: model, api_key

GoogleAugmentedLLM

Interface for Google Gemini models with enhanced capabilities

Parameters: model, api_key

Agent

Core agent implementation that can execute tasks using an AugmentedLLM

Parameters: llm, name, description

Workflow

Defines a sequence of steps to be executed by agents

Parameters: name, description, steps

Step

Individual step in a workflow with its own agent and execution logic

Parameters: name, description, agent

Orchestrator

Coordinates multiple agents to solve complex tasks

Parameters: agents, agent_selection_strategy

MCPServer

HTTP server implementation compatible with Model Context Protocol

Parameters: llm, host, port

MCP Agent is a Python framework for building effective AI agents using the Model Context Protocol (MCP). It provides a set of composable patterns and tools that simplify the creation of complex AI agents with capabilities like structured output generation, workflow orchestration, and integration with various model providers. The framework enables developers to create agents that can reason, plan, and execute tasks with improved reliability and transparency.

Introduction

MCP Agent is a powerful framework for building effective AI agents using the Model Context Protocol. It provides a set of composable patterns and tools that make it easier to create reliable, transparent, and effective AI agents.

Installation

You can install MCP Agent using pip:

pip install mcp-agent
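
Provider credentials are supplied through the api_key parameters listed in the tools above. A minimal sketch of explicit configuration, assuming the keys are stored in environment variables (the variable names here are a common convention, not part of the framework itself):

import os

from mcp_agent import AnthropicAugmentedLLM, OpenAIAugmentedLLM

# Pass API keys explicitly via the api_key parameter each AugmentedLLM accepts
openai_llm = OpenAIAugmentedLLM(
    model="gpt-4o",
    api_key=os.environ["OPENAI_API_KEY"]
)
anthropic_llm = AnthropicAugmentedLLM(
    model="claude-3-opus-20240229",
    api_key=os.environ["ANTHROPIC_API_KEY"]
)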

Basic Usage

MCP Agent allows you to build agents using different patterns:

1. Using AugmentedLLM

AugmentedLLM is a core component that provides a unified interface to different LLM providers with enhanced capabilities:

from mcp_agent import OpenAIAugmentedLLM

# Initialize an AugmentedLLM
llm = OpenAIAugmentedLLM(model="gpt-4o")

# Generate text
response = llm.generate("Explain quantum computing in simple terms")

# Generate structured output
structured_response = llm.generate_structured(
    "List the top 3 programming languages",
    schema={
        "type": "array",
        "items": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "popularity": {"type": "string"}
            }
        }
    }
)
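
Assuming generate_structured returns the parsed result as native Python objects matching the schema (an assumption; the return type is not spelled out above), the output can be consumed directly:

# With the array schema above, the response behaves like a list of dicts
for language in structured_response:
    print(language["name"], "-", language["popularity"])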

2. Building Agents with Workflows

MCP Agent provides workflow patterns for creating more complex agents:

from mcp_agent import Agent, Workflow, Step

# Define steps in your workflow
research_step = Step(
    name="research",
    description="Research information about a topic",
    agent=OpenAIAugmentedLLM(model="gpt-4o")
)

summarize_step = Step(
    name="summarize",
    description="Summarize the research findings",
    agent=OpenAIAugmentedLLM(model="gpt-4o")
)

# Create a workflow
research_workflow = Workflow(
    name="research_workflow",
    description="Research and summarize a topic",
    steps=[research_step, summarize_step]
)

# Execute the workflow
result = research_workflow.execute("Quantum computing")

3. Using the Orchestrator

For more complex agent behaviors, you can use the Orchestrator:

from mcp_agent import Orchestrator, Agent

# Create an orchestrator with multiple agents
orchestrator = Orchestrator(
    agents={
        "researcher": Agent(llm=OpenAIAugmentedLLM(model="gpt-4o")),
        "writer": Agent(llm=OpenAIAugmentedLLM(model="gpt-4o"))
    }
)

# Execute a task with the orchestrator
result = orchestrator.execute(
    "Research quantum computing and write a blog post about it",
    agent_selection_strategy="auto"
)
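
The Agent tool listed above also accepts name and description parameters. Giving each agent a short description is a reasonable way to inform the automatic selection strategy, though exactly how the strategy uses it is an assumption here:

from mcp_agent import Agent, Orchestrator, OpenAIAugmentedLLM

# Named, described agents for the orchestrator to choose between
researcher = Agent(
    llm=OpenAIAugmentedLLM(model="gpt-4o"),
    name="researcher",
    description="Finds and verifies background information on a topic"
)
writer = Agent(
    llm=OpenAIAugmentedLLM(model="gpt-4o"),
    name="writer",
    description="Turns research notes into a readable blog post"
)

orchestrator = Orchestrator(agents={"researcher": researcher, "writer": writer})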

Model Providers

MCP Agent supports multiple model providers:

  • OpenAI (GPT models)
  • Anthropic (Claude models)
  • Google (Gemini models)
  • Custom model providers

You can select the appropriate AugmentedLLM implementation based on your preferred provider:

from mcp_agent import OpenAIAugmentedLLM, AnthropicAugmentedLLM, GoogleAugmentedLLM

openai_llm = OpenAIAugmentedLLM(model="gpt-4o")
anthropic_llm = AnthropicAugmentedLLM(model="claude-3-opus-20240229")
google_llm = GoogleAugmentedLLM(model="gemini-1.5-pro")
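
Because every AugmentedLLM exposes the same interface, providers can be mixed within a single workflow. A minimal sketch reusing the Step pattern from the workflow example above (passing a bare AugmentedLLM as a step's agent follows that example, not separate documentation):

from mcp_agent import Step, Workflow

# Mix providers: research with Gemini, summarize with Claude
research_step = Step(
    name="research",
    description="Research information about a topic",
    agent=google_llm
)
summarize_step = Step(
    name="summarize",
    description="Summarize the research findings",
    agent=anthropic_llm
)

mixed_workflow = Workflow(
    name="mixed_provider_workflow",
    description="Research with Gemini, summarize with Claude",
    steps=[research_step, summarize_step]
)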

Advanced Features

Streaming Responses

MCP Agent supports streaming responses for real-time interaction:

# Stream tokens as they are generated instead of waiting for the full response
for chunk in llm.generate_stream("Explain quantum computing step by step"):
    print(chunk, end="", flush=True)

Structured Output Generation

Generate JSON or other structured outputs with schema validation:

result = llm.generate_structured(
    "Extract entities from this text: John met Sarah at Starbucks on Tuesday.",
    schema={
        "type": "object",
        "properties": {
            "people": {"type": "array", "items": {"type": "string"}},
            "places": {"type": "array", "items": {"type": "string"}},
            "time": {"type": "string"}
        }
    }
)

HTTP Server Integration

MCP Agent can be used to create MCP-compatible HTTP servers:

from mcp_agent.server import MCPServer
from mcp_agent import OpenAIAugmentedLLM

# Create a server with an AugmentedLLM
server = MCPServer(llm=OpenAIAugmentedLLM(model="gpt-4o"))

# Start the server
server.start(host="0.0.0.0", port=8000)

Examples

For more detailed examples, check out the examples directory in the GitHub repository, which includes:

  • Basic agent implementations
  • Workflow patterns
  • Model provider integrations
  • HTTP server implementations
  • Use cases for different domains

Additional Resources

Related MCPs

Apple Shortcuts
Developer Tools · JavaScript

Control Apple Shortcuts automations from AI assistants

Clojars Dependency Lookup
Developer Tools · JavaScript

Fetch dependency information from Clojars, the Clojure community's artifact repository

Simple Timeserver
Developer Tools · Python

Provides Claude with current time and timezone information

About Model Context Protocol

Model Context Protocol (MCP) allows AI models to access external tools and services, extending their capabilities beyond their training data.
