Connect to a HuggingFace Space to use its capabilities; the specific parameters depend on the Space being used.

The HuggingFace Spaces Connector provides a seamless way to integrate HuggingFace Spaces into your AI workflows. It lets you connect Claude Desktop (or other compatible clients) to Spaces running AI models for image generation, vision processing, text-to-speech, speech-to-text, and more. With minimal configuration, the connector automatically selects the appropriate endpoints and handles file transfers between Claude and HuggingFace Spaces.
By default, the connector connects to black-forest-labs/FLUX.1-schnell, providing image generation capabilities, but you can specify any number of HuggingFace Spaces to connect to.
Ensure you have a recent version of Node.js installed on your system.
Add the connector to your Claude Desktop configuration by editing your `claude_desktop_config.json` file. Add the following to the `mcpServers` section:
```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace"
  ]
}
```
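For context, here is what a complete `claude_desktop_config.json` might look like with only this connector configured — a minimal sketch, assuming Claude Desktop's standard `mcpServers` wrapper structure:

```json
{
  "mcpServers": {
    "mcp-hfspace": {
      "command": "npx",
      "args": [
        "-y",
        "@llmindset/mcp-hfspace"
      ]
    }
  }
}
```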
The connector will automatically find the most appropriate endpoint for each HuggingFace Space you specify. You can provide a list of Spaces as arguments:
```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "shuttleai/shuttle-jaguar",
    "styletts2/styletts2",
    "Qwen/QVQ-72B-preview"
  ]
}
```
By default, the connector uses the current working directory for file uploads and downloads. It's recommended to set a specific working directory, using either the `--work-dir=/your_directory` argument or the `MCP_HF_WORK_DIR` environment variable.
Example configuration with working directory:
```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "--work-dir=/Users/username/mcp-store",
    "shuttleai/shuttle-jaguar"
  ]
}
```
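Alternatively, the same directory can be supplied via the `MCP_HF_WORK_DIR` environment variable. Assuming Claude Desktop's standard per-server `env` block, a sketch (the path is illustrative):

```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "shuttleai/shuttle-jaguar"
  ],
  "env": {
    "MCP_HF_WORK_DIR": "/Users/username/mcp-store"
  }
}
```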
To access private HuggingFace Spaces, provide your HuggingFace token, using either the `--hf-token=hf_...` argument or the `HF_TOKEN` environment variable.
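As with the working directory, the token can be passed through the server's `env` block rather than on the command line, which keeps it out of the argument list. A sketch, with the token value elided:

```json
"mcp-hfspace": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace"
  ],
  "env": {
    "HF_TOKEN": "hf_..."
  }
}
```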
If you need to specify a particular API endpoint, add it to the space name:

```
Qwen/Qwen2.5-72B-Instruct/model_chat
```
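The space-plus-endpoint string is passed as an ordinary space argument. For example, a sketch of the relevant `args` fragment:

```json
"args": [
  "-y",
  "@llmindset/mcp-hfspace",
  "Qwen/Qwen2.5-72B-Instruct/model_chat"
]
```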
Once configured, you can ask Claude to generate images using the connected spaces:
```
Generate an image of a sunset over mountains
```
Upload an image to your working directory, then ask Claude to analyze it:
```
Use paligemma to find out who is in "photo.jpg"
```
You can also provide a URL:
```
Use paligemma to detect humans in https://example.com/image.jpg
```
Ask Claude to generate speech:
```
Use styletts2 to say "Hello world" in a friendly voice
```
The audio file will be saved in your working directory.
Upload an audio file to your working directory, then ask Claude to transcribe it:
```
Use whisper to transcribe "recording.mp3"
```
Ask Claude to process an image:
```
Use omniparser to analyze "screenshot.png"
```
You can have Claude interact with other AI models:
```
Ask Qwen to solve this reasoning puzzle: [your puzzle]
```
The connector operates in Claude Desktop Mode by default. In this mode, you can use the "Available Resources" prompt to see which files and MIME types are available in your working directory.
You can run multiple server instances with different configurations if needed, for example to use separate working directories or tokens for different Spaces.
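For example, two instances, one serving a public image-generation Space and one using a token for private access, might sit side by side in the `mcpServers` section. The server names, paths, and Space choices here are illustrative:

```json
"mcp-hfspace-images": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "--work-dir=/Users/username/mcp-images",
    "shuttleai/shuttle-jaguar"
  ]
},
"mcp-hfspace-private": {
  "command": "npx",
  "args": [
    "-y",
    "@llmindset/mcp-hfspace",
    "--work-dir=/Users/username/mcp-private",
    "--hf-token=hf_...",
    "Qwen/QVQ-72B-preview"
  ]
}
```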