create_entities: Create multiple new entities in the knowledge graph with names, types, and observations.
create_relations: Create multiple new relations between entities in the knowledge graph.
add_observations: Add new observations to existing entities in the knowledge graph.
delete_entities: Remove entities and their relations from the knowledge graph.
delete_observations: Remove specific observations from entities in the knowledge graph.
delete_relations: Remove specific relations from the knowledge graph.
read_graph: Read the entire knowledge graph with all entities and relations.
search_nodes: Search for nodes in the knowledge graph based on a query string.
open_nodes: Retrieve specific nodes by name from the knowledge graph.
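Like all MCP tools, these are invoked through JSON-RPC tools/call requests. As a sketch, a create_entities call could look like the following (the "Alice" entity is a hypothetical example; the argument fields assume the server's entity schema of name, entityType, and observations):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_entities",
    "arguments": {
      "entities": [
        {
          "name": "Alice",
          "entityType": "person",
          "observations": ["Prefers morning meetings"]
        }
      ]
    }
  }
}
```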
Knowledge Graph Memory provides a robust implementation of persistent memory for AI assistants using a local knowledge graph. It enables Claude and other AI models to remember information about users across conversations, creating a more personalized and contextually aware experience. The server stores information as entities, relations, and observations in a structured knowledge graph. This approach allows for sophisticated information retrieval and relationship tracking, making it ideal for applications where maintaining user context over time is essential.
The memory system is built around three key components:
Entities: Primary nodes in the knowledge graph with unique names, entity types (e.g., "person", "organization"), and associated observations.
Relations: Directed connections between entities that describe how they interact or relate to each other (e.g., "works_at", "lives_in").
Observations: Discrete pieces of information about entities stored as strings. These should be atomic (one fact per observation) and can be added or removed independently.
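The three components above can be sketched as data shapes. This is an illustrative sketch, not the server's source: the field names (name, entityType, observations, from, to, relationType) mirror the shapes the server's tools accept, while "Alice" and "Acme Corp" are hypothetical examples.

```typescript
// Entities: uniquely named nodes carrying a type and atomic observations.
interface Entity {
  name: string;           // unique identifier within the graph
  entityType: string;     // e.g. "person", "organization"
  observations: string[]; // one fact per string, added/removed independently
}

// Relations: directed edges between entities, named in active voice.
interface Relation {
  from: string;         // source entity name
  to: string;           // target entity name
  relationType: string; // e.g. "works_at", "lives_in"
}

const alice: Entity = {
  name: "Alice",
  entityType: "person",
  observations: ["Speaks fluent Spanish", "Prefers morning meetings"],
};

const employment: Relation = {
  from: "Alice",
  to: "Acme Corp",
  relationType: "works_at",
};
```

Because observations are atomic, outdated facts can be deleted one at a time without touching the rest of the entity.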
You can install the Knowledge Graph Memory Server using either NPX or Docker.
For NPX, add this configuration to your client settings:
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}
For Docker, add this configuration to your client settings:
{
  "mcpServers": {
    "memory": {
      "command": "docker",
      "args": ["run", "-i", "-v", "claude-memory:/app/dist", "--rm", "mcp/memory"]
    }
  }
}
You can customize the memory storage location using environment variables:
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "env": {
        "MEMORY_FILE_PATH": "/path/to/custom/memory.json"
      }
    }
  }
}
MEMORY_FILE_PATH: Path to the memory storage JSON file (default: memory.json in the server directory).

For VS Code users, you can install the server by adding the configuration to your User Settings (JSON) file. Press Ctrl + Shift + P and type Preferences: Open Settings (JSON), then add one of the configurations shown above under the "mcp" key.

Alternatively, you can add it to a file called .vscode/mcp.json in your workspace (without the "mcp" key) to share the configuration with others.
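As a sketch, a workspace-level .vscode/mcp.json could look like the following (this assumes VS Code's "servers" wrapper object; check your VS Code version's MCP documentation for the exact schema):

```json
{
  "servers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}
```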
Once installed, the Knowledge Graph Memory Server provides tools for creating, reading, updating, and deleting information in the knowledge graph. The AI model can use these tools to maintain context about users, their preferences, and other important information across conversations.
To effectively use the memory system, you'll need to customize your system prompt to instruct the AI on when and how to create and retrieve memories. The specific prompt will depend on your use case, but should guide the model on the frequency and types of memories to create.
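As one illustrative starting point (a sketch to adapt, not canonical wording), such a prompt might instruct the model along these lines:

```
Follow these steps for each interaction:
1. At the start of the conversation, search the knowledge graph for
   information relevant to the user.
2. While conversing, watch for new facts worth remembering, such as the
   user's identity, preferences, goals, and relationships.
3. When new facts appear, store them: create entities for recurring
   people, organizations, and events; link them with relations; and
   record each fact as a single atomic observation.
```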