
Logfire Telemetry Analysis MCP Server

Monitoring · Python
Access and analyze OpenTelemetry traces and metrics from Logfire
Available Tools

find_exceptions

Get exception counts from traces grouped by file

Parameters: age

find_exceptions_in_file

Get detailed trace information about exceptions in a specific file

Parameters: filepath, age

arbitrary_query

Run custom SQL queries on your OpenTelemetry traces and metrics

Parameters: query, age

get_logfire_records_schema

Get the OpenTelemetry schema to help with custom queries

The Logfire Telemetry Analysis MCP enables LLMs to retrieve and analyze application telemetry data stored in Logfire. It provides tools to examine distributed traces, find exceptions, and execute custom SQL queries against your OpenTelemetry data. With this MCP, AI assistants can help you troubleshoot application issues by identifying error patterns, analyzing trace information, and extracting insights from your telemetry data. It bridges the gap between your monitoring infrastructure and AI assistance for more effective debugging and system analysis.

Overview

The Logfire Telemetry Analysis MCP provides access to your application's OpenTelemetry traces and metrics stored in Logfire. This allows AI assistants to help you analyze performance issues, troubleshoot errors, and gain insights from your telemetry data.

Prerequisites

Before using this MCP, you'll need:

  1. A Logfire account with telemetry data
  2. A Logfire read token for API access
  3. The uv package manager installed on your system

Installation

Install uv

First, ensure you have uv installed on your system. If you don't have it yet, follow the uv installation instructions.

If you already have uv but need to update it, run:

uv self update

Get a Logfire Read Token

To access your telemetry data, you'll need a Logfire read token:

  1. Go to your Logfire project settings
  2. Navigate to the "Read Tokens" section
  3. Create a new read token at: https://logfire.pydantic.dev/-/redirect/latest-project/settings/read-tokens

Note that read tokens are project-specific, so create one for the project you want to analyze.

Configuration

For Cursor

Create a .cursor/mcp.json file in your project root with the following content:

{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp", "--read-token=YOUR-TOKEN"]
    }
  }
}

Replace YOUR-TOKEN with your actual Logfire read token.
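If you prefer to script the setup, the same configuration can be written out with a few lines of Python. This is just a convenience sketch of the file shown above; `YOUR-TOKEN` remains a placeholder for your real read token.

```python
import json
from pathlib import Path

# Write the Cursor MCP configuration shown above to .cursor/mcp.json.
# "YOUR-TOKEN" is a placeholder -- substitute your actual Logfire read token.
config = {
    "mcpServers": {
        "logfire": {
            "command": "uvx",
            "args": ["logfire-mcp", "--read-token=YOUR-TOKEN"],
        }
    }
}

path = Path(".cursor/mcp.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(config, indent=2))
```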

For Claude Desktop

Add this configuration to your Claude Desktop configuration file (claude_desktop_config.json):

{
  "command": ["uvx"],
  "args": ["logfire-mcp"],
  "type": "stdio",
  "env": {
    "LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
  }
}

Replace YOUR_TOKEN with your actual Logfire read token.

For Cline

Add to your Cline settings in cline_mcp_settings.json:

{
  "mcpServers": {
    "logfire": {
      "command": "uvx",
      "args": ["logfire-mcp"],
      "env": {
        "LOGFIRE_READ_TOKEN": "YOUR_TOKEN"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Replace YOUR_TOKEN with your actual Logfire read token.

Manual Execution

If you need to run the MCP server manually (not required for most client integrations), use:

LOGFIRE_READ_TOKEN=YOUR_READ_TOKEN uvx logfire-mcp

Or with the token as a command-line argument:

uvx logfire-mcp --read-token=YOUR_READ_TOKEN
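The same launch can be driven from Python, for example when embedding the server in a larger tool. This sketch simply mirrors the shell one-liner above using the standard library; `YOUR_READ_TOKEN` is a placeholder.

```python
import os
import subprocess

# Mirror of the shell command above: launch the MCP server with the
# read token supplied via the environment.
env = {**os.environ, "LOGFIRE_READ_TOKEN": "YOUR_READ_TOKEN"}
cmd = ["uvx", "logfire-mcp"]
# subprocess.run(cmd, env=env)  # uncomment to actually start the server
```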

Customization

Custom Base URL

By default, the server connects to https://logfire-api.pydantic.dev. To use a different Logfire instance:

Using the command line:

uvx logfire-mcp --base-url=https://your-logfire-instance.com

Using an environment variable:

LOGFIRE_BASE_URL=https://your-logfire-instance.com uvx logfire-mcp

Example Usage

Here are some questions you can ask your AI assistant with this MCP:

  1. "What exceptions occurred in traces from the last hour across all services?"
  2. "Show me the recent errors in the file 'app/api.py' with their trace context"
  3. "How many errors were there in the last 24 hours per service?"
  4. "What are the most common exception types in my traces, grouped by service name?"
  5. "Get me the OpenTelemetry schema for traces and metrics"
  6. "Find all errors from yesterday and show their trace contexts"

The AI will use the appropriate tools to query your Logfire data and provide insights based on the results.
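As a concrete illustration, a question like "How many errors were there in the last 24 hours per service?" might translate into SQL passed to the arbitrary_query tool. The table and column names below (records, is_exception, start_timestamp, service_name) are assumptions about the Logfire schema; an assistant would call get_logfire_records_schema first to confirm the actual names.

```python
# Hypothetical SQL an assistant might pass to arbitrary_query.
# Table and column names are assumptions -- verify them with
# get_logfire_records_schema before running against real data.
query = """
SELECT service_name, count(*) AS error_count
FROM records
WHERE is_exception
  AND start_timestamp >= now() - interval '24 hours'
GROUP BY service_name
ORDER BY error_count DESC
"""
```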

Related MCPs

Axiom Query
Monitoring · Go

Query your Axiom data using APL (Axiom Processing Language)

Prometheus Metrics
Monitoring · Python

Query and analyze Prometheus metrics through standardized interfaces

Sentry Issue Analyzer
Monitoring · Python

Retrieve and analyze issues from Sentry.io

About Model Context Protocol

Model Context Protocol (MCP) allows AI models to access external tools and services, extending their capabilities beyond their training data.
