- Retrieve information about users in the ZenML server
- Retrieve information about stacks in the ZenML server
- Retrieve information about pipelines in the ZenML server
- Retrieve information about pipeline runs in the ZenML server
- Retrieve information about pipeline steps in the ZenML server
- Retrieve information about services in the ZenML server
- Retrieve information about stack components in the ZenML server
- Retrieve information about flavors in the ZenML server
- Retrieve information about pipeline run templates in the ZenML server
- Retrieve information about schedules in the ZenML server
- Retrieve metadata about data artifacts in the ZenML server
- Retrieve information about service connectors in the ZenML server
- Retrieve code for a specific pipeline step
- Retrieve logs for a specific pipeline step (if run on a cloud-based stack)
- Retrieve the most recent pipeline runs from the ZenML server
- Trigger a new pipeline run using an existing run template
The ZenML Integration provides a seamless connection between AI assistants and your ZenML MLOps platform through the Model Context Protocol (MCP). It gives AI interfaces like Claude Desktop or Cursor access to information about your ML pipelines, runs, artifacts, stacks, and services, enabling you to monitor, analyze, and even trigger new pipeline runs without leaving your AI assistant.
Before setting up the ZenML Integration, you'll need:

- The `uv` package manager installed locally (install via their installer script, or via `brew` on macOS)

First, clone the ZenML MCP repository to your local machine:

```bash
git clone https://github.com/zenml-io/mcp-zenml.git
```
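The MCP configuration below needs absolute paths to both the `uv` executable and the server script. A small Python sketch (assuming `uv` is on your `PATH` and the repository was cloned into the current directory) can compute them for you:

```python
import os
import shutil

# Assumption: uv is on PATH and mcp-zenml was cloned into the current directory.
uv_path = shutil.which("uv") or "/path/to/uv"  # fall back to the placeholder
server_path = os.path.abspath("mcp-zenml/zenml_server.py")

print("command:", uv_path)
print("args:   ", ["run", server_path])
```

Copy the printed values into the `command` and `args` fields of the configuration below.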
Create an MCP configuration file with the following structure:

```json
{
  "mcpServers": {
    "zenml": {
      "command": "/path/to/uv",
      "args": ["run", "/full/path/to/zenml_server.py"],
      "env": {
        "LOGLEVEL": "INFO",
        "NO_COLOR": "1",
        "PYTHONUNBUFFERED": "1",
        "PYTHONIOENCODING": "UTF-8",
        "ZENML_STORE_URL": "https://your-zenml-server-url.com",
        "ZENML_STORE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Replace the following placeholders:

- `/path/to/uv`: the full path to your `uv` executable
- `/full/path/to/zenml_server.py`: the full path to the `zenml_server.py` file in the cloned repository
- `https://your-zenml-server-url.com`: your ZenML server URL (e.g., `https://d534d987a-zenml.cloudinfra.zenml.io`)
- `your-api-key-here`: your ZenML API key (consider using a service account for this purpose)

For a better experience with tool outputs, go to Settings → Profile and add the following to your preferences:
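Before restarting your assistant, it can help to sanity-check the configuration structure. A minimal Python sketch, using illustrative placeholder values rather than real paths or keys:

```python
import os

# Illustrative config mirroring the structure above; values are placeholders.
config = {
    "mcpServers": {
        "zenml": {
            "command": "/usr/local/bin/uv",
            "args": ["run", "/home/me/mcp-zenml/zenml_server.py"],
            "env": {
                "ZENML_STORE_URL": "https://your-zenml-server-url.com",
                "ZENML_STORE_API_KEY": "your-api-key-here",
            },
        }
    }
}

server = config["mcpServers"]["zenml"]
# The MCP client spawns the command directly, so both paths must be absolute.
assert os.path.isabs(server["command"]), "uv path must be absolute"
assert os.path.isabs(server["args"][1]), "server script path must be absolute"
assert server["env"]["ZENML_STORE_URL"].startswith("https://")
print("config structure OK")
```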
When using zenml tools which return JSON strings and you're asked a question, you might want to consider using markdown tables to summarize the results or make them easier to view!
For Cursor, create a `.cursor` folder in your project root and add an `mcp.json` file with your MCP configuration.

Once configured, you can interact with your ZenML server through your AI assistant. You can ask questions about your pipelines, runs, steps, artifacts, stacks, and services.
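The Cursor setup can be sketched in Python, assuming you run it from your project root (the `config` dict here is a trimmed placeholder; substitute your full configuration from above):

```python
import json
import pathlib

# Placeholder config; replace with your full MCP configuration.
config = {
    "mcpServers": {
        "zenml": {
            "command": "/path/to/uv",
            "args": ["run", "/full/path/to/zenml_server.py"],
            "env": {"ZENML_STORE_URL": "https://your-zenml-server-url.com"},
        }
    }
}

# Cursor reads project-level MCP config from .cursor/mcp.json.
cursor_dir = pathlib.Path(".cursor")
cursor_dir.mkdir(exist_ok=True)
(cursor_dir / "mcp.json").write_text(json.dumps(config, indent=2))
print("wrote", cursor_dir / "mcp.json")
```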
You can also trigger new pipeline runs if you have run templates configured.
If something isn't working, check that the paths to `uv` and `zenml_server.py` are correct and absolute.

This integration is in beta/experimental release. Join the ZenML Slack community to share your experience and help improve the integration.