list_shortcuts — Lists all available Siri shortcuts on the system
open_shortcut — Opens a shortcut in the Shortcuts app
run_shortcut — Runs a shortcut with optional input
The Siri Shortcuts MCP server provides a bridge between large language models and Apple's Shortcuts automation platform on macOS. It allows LLMs to discover, execute, and interact with any Shortcuts you've created or installed, effectively extending AI capabilities to control various aspects of your Mac through automation.
To install the Siri Shortcuts MCP server, you'll need to add it to your Claude configuration. You can install it using npx:
{
  "mcpServers": {
    "siri-shortcuts": {
      "command": "npx",
      "args": ["mcp-server-siri-shortcuts"]
    }
  }
}
Alternatively, you can clone the repository and run it locally:
git clone https://github.com/dvcrn/mcp-server-siri-shortcuts.git
cd mcp-server-siri-shortcuts
npm install
npm start
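If you run the server from a local clone instead of via npx, the Claude configuration can point at the cloned project directly. The sketch below is an assumption: the path is a placeholder for wherever you cloned the repository, and the entry file name may differ — check the repository's package.json for the actual start script.

```json
{
  "mcpServers": {
    "siri-shortcuts": {
      "command": "node",
      "args": ["/path/to/mcp-server-siri-shortcuts/dist/index.js"]
    }
  }
}
```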
Once installed, the MCP server provides several ways to interact with Siri Shortcuts:
Discover available shortcuts: Use the list_shortcuts tool to see all shortcuts available on your system.
Open shortcuts in the editor: Use the open_shortcut tool to open a specific shortcut in the Shortcuts app for viewing or editing.
Execute shortcuts: Use either the generic run_shortcut tool or the dynamically generated shortcut-specific tools to execute shortcuts with optional input parameters.
The server automatically generates dedicated tools for each shortcut on your system, making it easy for the LLM to directly reference specific shortcuts.
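At the protocol level, an MCP client invokes these tools with a standard JSON-RPC tools/call request. The sketch below builds such a request for run_shortcut; the argument keys ("name", "input") are illustrative assumptions about this server's input schema, not confirmed from its source.

```python
import json


def build_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as used by MCP clients."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)


# Hypothetical invocation: the argument keys are illustrative,
# not this server's documented schema.
payload = build_tool_call("run_shortcut", {"name": "Morning Routine", "input": "today"})
print(payload)
```

A dynamically generated shortcut-specific tool would be invoked the same way, just with the generated tool name in place of run_shortcut.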
The MCP server uses the macOS shortcuts CLI command under the hood to interact with the Shortcuts app. It handles sanitizing shortcut names for tool naming compatibility and supports both direct text input and file-based input for shortcuts that accept parameters.
When a shortcut is executed, any output from the shortcut is returned to the LLM, allowing for two-way communication between the model and your automation workflows.