Generate text using Alibaba Cloud's Qwen language models
Qwen Max MCP Server provides seamless integration with Alibaba Cloud's Qwen language models directly through Claude Desktop. This implementation supports all commercial Qwen models, including the Max, Plus, and Turbo variants, giving you access to context windows of up to 1 million tokens. The server handles all communication with the Dashscope API, providing proper error handling, configurable parameters, and streaming responses. Built with Node.js/TypeScript for stability and reliability, it offers a straightforward way to leverage Qwen's capabilities for complex reasoning, code generation, and creative tasks.
Before installing the Qwen Max MCP Server, ensure you have:
Node.js and npm installed
A Dashscope API key from Alibaba Cloud
Claude Desktop installed
The easiest way to install Qwen Max MCP Server is through Smithery:
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
If you prefer to install manually:
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd mcp-server-qwen-max
npm install
npm run build
Create a .env file in the project root with your API key:
DASHSCOPE_API_KEY=your-api-key-here
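The server reads this variable from the environment at startup. As a minimal sketch (the getApiKey helper below is illustrative, not the server's actual code), a fail-fast check looks like this:

```typescript
// Hypothetical startup check: fail fast if the API key is missing, so the
// server reports a clear configuration error instead of failing mid-request.
function getApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.DASHSCOPE_API_KEY;
  if (!key) {
    throw new Error("DASHSCOPE_API_KEY is not set; add it to your .env file");
  }
  return key;
}
```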
By default, this server uses the Qwen-Max model, but you can choose from several options:
To modify the model, update the model name in src/index.ts:
// For Qwen-Max (default)
model: "qwen-max"
// For Qwen-Plus
model: "qwen-plus"
// For Qwen-Turbo
model: "qwen-turbo"
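Because the model name is a plain string, a typo only surfaces as an API error at request time. One way to catch this earlier is a small guard; resolveModel below is a hypothetical helper for illustration, not part of the server's code:

```typescript
// Restrict the model name to the three supported commercial variants.
type QwenModel = "qwen-max" | "qwen-plus" | "qwen-turbo";

const SUPPORTED_MODELS: readonly string[] = ["qwen-max", "qwen-plus", "qwen-turbo"];

function resolveModel(name: string): QwenModel {
  if (SUPPORTED_MODELS.includes(name)) {
    return name as QwenModel;
  }
  // Reject misspelled or unsupported names before any API call is made.
  throw new Error(
    `Unsupported model "${name}"; expected one of: ${SUPPORTED_MODELS.join(", ")}`
  );
}
```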
The temperature parameter controls the randomness of the model's output: lower values produce more focused, deterministic responses, while higher values produce more varied, creative ones.
Recommended temperature settings by task:
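As a sketch of how such presets might be applied, the values and task names below are illustrative assumptions, not the project's official recommendations, and the accepted range of [0, 2) is likewise an assumption about Dashscope-style APIs:

```typescript
// Illustrative temperature presets by task -- these exact values are
// assumptions for demonstration purposes.
const TASK_TEMPERATURES: Record<string, number> = {
  code: 0.2,         // deterministic, repeatable output
  analysis: 0.4,     // mostly factual with slight variation
  conversation: 0.7, // balanced default
  creative: 0.9,     // more varied, exploratory output
};

// Clamp user-supplied values into the assumed valid range [0, 2).
function clampTemperature(t: number): number {
  const MIN = 0;
  const MAX = 1.99;
  return Math.min(MAX, Math.max(MIN, t));
}
```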
The server provides detailed error messages for common issues:
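One common pattern for producing such messages is mapping API failure modes to user-facing text. The status codes and wording below are illustrative, not the server's actual strings:

```typescript
// Hypothetical sketch: translate common Dashscope API failures into
// actionable messages for the user.
function describeApiError(status: number): string {
  switch (status) {
    case 400:
      return "Invalid request parameters: check the model name and settings";
    case 401:
      return "Authentication failed: verify DASHSCOPE_API_KEY in your .env file";
    case 429:
      return "Rate limit exceeded: retry after a short delay";
    default:
      return `Unexpected API error (HTTP ${status})`;
  }
}
```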
For development work on the server:
npm run dev # Watch mode
npm run build # Build
npm run start # Start server