Qdrant Vector Search MCP Server

Knowledge & Memory · Python
Semantic memory layer for LLMs using Qdrant vector database
Available Tools

qdrant-store

Store information in the Qdrant vector database with optional metadata

Parameters: information, metadata, collection_name

qdrant-find

Retrieve relevant information from the Qdrant database using semantic search

Parameters: query, collection_name

Qdrant Vector Search provides a semantic memory layer for Large Language Models through the Model Context Protocol. It enables LLMs to store, retrieve, and search information using Qdrant's vector database. This MCP server implementation allows AI applications to maintain persistent memory across conversations, find relevant context by semantic similarity, and enhance responses with stored knowledge. Because retrieval is driven by Qdrant's vector search engine, information is matched by meaning rather than by exact keywords.

Introduction

Qdrant Vector Search is an official Model Context Protocol (MCP) server implementation that integrates with Qdrant, a high-performance vector database. It provides a semantic memory layer that allows Large Language Models to store information and retrieve it later based on semantic similarity.

Installation

You can install and run the Qdrant MCP server with pip or Docker:

Using pip

pip install mcp-server-qdrant

After installation, you can run the server with:

python -m mcp_server_qdrant

Using Docker

docker run -p 8000:8000 \
  -e QDRANT_URL=https://your-qdrant-instance.com \
  -e QDRANT_API_KEY=your_api_key \
  -e COLLECTION_NAME=your_collection \
  qdrant/mcp-server-qdrant

Configuration

The server is configured using environment variables:

| Variable | Description | Default |
|----------|-------------|---------|
| QDRANT_URL | URL of your Qdrant server | None |
| QDRANT_API_KEY | API key for Qdrant server authentication | None |
| COLLECTION_NAME | Default collection name to use | None |
| QDRANT_LOCAL_PATH | Path to local Qdrant database (alternative to QDRANT_URL) | None |
| EMBEDDING_PROVIDER | Embedding provider to use | fastembed |
| EMBEDDING_MODEL | Name of the embedding model | sentence-transformers/all-MiniLM-L6-v2 |
| TOOL_STORE_DESCRIPTION | Custom description for the store tool | See default in settings.py |
| TOOL_FIND_DESCRIPTION | Custom description for the find tool | See default in settings.py |

Note: You cannot provide both QDRANT_URL and QDRANT_LOCAL_PATH simultaneously.
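
For example, the two modes look like this when launching from a shell (the URL, key, and paths are placeholders, as in the Docker example above):

# Remote Qdrant instance
QDRANT_URL=https://your-qdrant-instance.com \
QDRANT_API_KEY=your_api_key \
COLLECTION_NAME=your_collection \
python -m mcp_server_qdrant

# Local on-disk database instead (leave QDRANT_URL unset)
QDRANT_LOCAL_PATH=./qdrant_data \
COLLECTION_NAME=your_collection \
python -m mcp_server_qdrant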

FastMCP Environment Variables

Since this server is built on FastMCP, it also supports all FastMCP environment variables, including:

| Variable | Description | Default |
|----------|-------------|---------|
| FASTMCP_DEBUG | Enable debug mode | false |
| FASTMCP_LOG_LEVEL | Set logging level | INFO |
| FASTMCP_HOST | Host address to bind to | 0.0.0.0 |
| FASTMCP_PORT | Port to run the server on | 8000 |
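
For instance, to raise the log level and move the server off the default port (values are illustrative; the variables come from the table above):

FASTMCP_LOG_LEVEL=DEBUG FASTMCP_PORT=9000 python -m mcp_server_qdrant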

Usage

Once the server is running, LLMs can interact with it through the MCP protocol. The server provides two main tools, demonstrated in the client sketch after this list:

  1. qdrant-store: Stores information in the Qdrant database with optional metadata
  2. qdrant-find: Retrieves relevant information based on a semantic query
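
The following is a minimal client-side sketch using the official mcp Python SDK. The tool names and parameter names are the ones documented above; the stored text, metadata, collection name, and local path are illustrative placeholders.

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a stdio subprocess, configured via environment variables
    params = StdioServerParameters(
        command="python",
        args=["-m", "mcp_server_qdrant"],
        env={**os.environ, "QDRANT_LOCAL_PATH": "./qdrant_data", "COLLECTION_NAME": "memories"},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # qdrant-store: persist a fact with optional metadata
            await session.call_tool(
                "qdrant-store",
                arguments={
                    "information": "The staging database listens on port 5433.",
                    "metadata": {"topic": "infrastructure"},
                },
            )
            # qdrant-find: retrieve by meaning rather than exact keywords
            result = await session.call_tool(
                "qdrant-find",
                arguments={"query": "Which port does the staging DB use?"},
            )
            print(result.content)

asyncio.run(main())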

Example Usage with Claude

When using with Claude or other LLMs, you can instruct the model to use these tools to maintain memory across conversations:

You can store important information using the qdrant-store tool and retrieve it later with qdrant-find.
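
With Claude Desktop specifically, the server is typically registered in claude_desktop_config.json. The sketch below assumes the standard mcpServers layout; the credentials and collection name are placeholders:

{
  "mcpServers": {
    "qdrant": {
      "command": "python",
      "args": ["-m", "mcp_server_qdrant"],
      "env": {
        "QDRANT_URL": "https://your-qdrant-instance.com",
        "QDRANT_API_KEY": "your_api_key",
        "COLLECTION_NAME": "your_collection"
      }
    }
  }
}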

Multiple Collections

If you need to work with multiple collections, you can specify the collection name in each request. This allows you to organize different types of information in separate collections.
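
Continuing the Python client sketch above, a collection_name argument routes each call to a specific collection (the collection name here is hypothetical):

# Keep project notes separate from other memories
await session.call_tool(
    "qdrant-store",
    arguments={
        "information": "Release 2.3 ships on Friday.",
        "collection_name": "project_notes",
    },
)
# Search only within that collection
result = await session.call_tool(
    "qdrant-find",
    arguments={"query": "upcoming release date", "collection_name": "project_notes"},
)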

Troubleshooting

If you encounter issues:

  1. Check that your Qdrant server is accessible and the API key is correct (a quick connectivity check is sketched after this list)
  2. Verify that the environment variables are properly set
  3. Look at the server logs for detailed error messages
  4. Ensure the embedding model is available and working correctly
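
For step 1, a quick connectivity check with the qdrant-client Python library (using the same credentials you pass to the server; the URL and key are placeholders) can rule out network and authentication problems:

from qdrant_client import QdrantClient

# Connect with the same URL and API key the MCP server uses
client = QdrantClient(url="https://your-qdrant-instance.com", api_key="your_api_key")

# Raises an exception if the server is unreachable or the key is rejected
print(client.get_collections())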

For more detailed information, visit the GitHub repository.

Related MCPs

Knowledge Graph Memory
Knowledge & Memory · TypeScript

A persistent memory system using a local knowledge graph

MemoryMesh
Knowledge & Memory · TypeScript

A knowledge graph server for structured memory persistence in AI models

Cognee
Knowledge & Memory · Python

Knowledge management and retrieval system with code graph capabilities

About Model Context Protocol

Model Context Protocol (MCP) allows AI models to access external tools and services, extending their capabilities beyond their training data.
