Chat MCP is a desktop application that leverages the Model Context Protocol (MCP) to seamlessly connect and interact with various Large Language Models. Built on Electron, the app ensures full cross-platform compatibility across Linux, macOS, and Windows. The application provides a clean, minimalistic interface for testing multiple LLM servers and models, making it an ideal tool for developers and researchers. It supports dynamic LLM configuration with all OpenAI SDK-compatible models and offers multi-client management to connect to multiple servers using MCP config.
To install and run the Chat MCP Desktop App, clone the repository, verify that Node.js and npm are available, and install the dependencies:
git clone https://github.com/AI-QL/chat-mcp
cd chat-mcp
node -v
npm -v
npm install
Before starting the application, modify the config.json file located in the src/main directory to configure your MCP servers. Ensure that the command and the path specified in args are valid.
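For reference, a minimal config.json sketch using the mcpServers layout that the Windows troubleshooting example below also follows (the filesystem server and the target directory here are illustrative placeholders):
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "D:/Github/mcp-test"
      ]
    }
  }
}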
Start the application:
npm start
The application supports various LLM configurations through JSON files. You can create configuration files for different API endpoints:
Create a gpt-api.json file with the following content:
{
  "chatbotStore": {
    "apiKey": "",
    "url": "https://api.aiql.com",
    "path": "/v1/chat/completions",
    "model": "gpt-4o-mini",
    "max_tokens_value": "",
    "mcp": true
  },
  "defaultChoiceStore": {
    "model": [
      "gpt-4o-mini",
      "gpt-4o",
      "gpt-4",
      "gpt-4-turbo"
    ]
  }
}
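The url and path fields combine into an OpenAI-compatible chat-completions endpoint. As a quick sanity check of an endpoint and key outside the app (a sketch assuming the standard OpenAI request schema; YOUR_API_KEY is a placeholder):
# Send a minimal chat request to the configured endpoint
curl https://api.aiql.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'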
For Qwen models, create a qwen-api.json file:
{
  "chatbotStore": {
    "apiKey": "",
    "url": "https://dashscope.aliyuncs.com/compatible-mode",
    "path": "/v1/chat/completions",
    "model": "qwen-turbo",
    "max_tokens_value": "",
    "mcp": true
  },
  "defaultChoiceStore": {
    "model": [
      "qwen-turbo",
      "qwen-plus",
      "qwen-max"
    ]
  }
}
For DeepInfra models, create a deepinfra.json file:
{
  "chatbotStore": {
    "apiKey": "",
    "url": "https://api.deepinfra.com",
    "path": "/v1/openai/chat/completions",
    "model": "meta-llama/Meta-Llama-3.1-70B-Instruct",
    "max_tokens_value": "32000",
    "mcp": true
  },
  "defaultChoiceStore": {
    "model": [
      "meta-llama/Meta-Llama-3.1-70B-Instruct",
      "meta-llama/Meta-Llama-3.1-405B-Instruct",
      "meta-llama/Meta-Llama-3.1-8B-Instruct"
    ]
  }
}
To build a standalone desktop application:
npm run build-app
This will package the application for your current operating system, with artifacts stored in the /artifacts directory.
For Debian/Ubuntu users experiencing RPM build issues, either modify package.json to skip the RPM build step, or install the RPM tooling with sudo apt-get install rpm.
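For the first option, a hedged sketch of the relevant package.json fragment, assuming the project uses electron-builder's standard linux.target configuration (keep any other build settings the project already defines):
{
  "build": {
    "linux": {
      "target": ["deb", "AppImage"]
    }
  }
}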
If you encounter errors when the MCP server starts, modify the config.json in src/main. On Windows, npx may not work properly, so you can invoke node directly in config.json:
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "D:/Github/mcp-test"
      ]
    }
  }
}
Always use absolute paths for better reliability.
If the installation process stalls at less than 300MB, it's likely due to a timeout during the Electron installation. This often happens because download speeds from Electron's default server are slow or the server is inaccessible in certain regions. To resolve this, set the environment variable ELECTRON_MIRROR to an accessible Electron mirror site.
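For example (a sketch; the npmmirror mirror is one commonly used option, substitute any mirror reachable from your network):
# Set the mirror before installing dependencies (use `set` in cmd or `$env:` in PowerShell on Windows)
export ELECTRON_MIRROR="https://npmmirror.com/mirrors/electron/"
npm install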
When packaging with electron-builder, it downloads several large release packages from GitHub. If your network connection is unstable, this process may time out. On Windows, clear the cache in the electron and electron-builder directories within C:\Users\YOURUSERNAME\AppData\Local before retrying. Use the default shell terminal instead of VSCode's built-in terminal to avoid permission issues.
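A hedged PowerShell sketch for clearing these caches (the paths assume the default cache locations under %LOCALAPPDATA%; verify them on your machine before deleting):
# Remove cached Electron and electron-builder downloads so they are re-fetched on the next build
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\electron\Cache" -ErrorAction SilentlyContinue
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\electron-builder\Cache" -ErrorAction SilentlyContinue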