- Google Books: search for books and retrieve information about them
- Wikipedia: search Wikipedia and retrieve page content
The watsonx.ai Flows Engine MCP server connects AI assistants to IBM's watsonx.ai Flows Engine tools through the Model Context Protocol. It exposes tools such as Google Books and Wikipedia search directly to MCP clients, letting AI models retrieve real-time information from external data sources over a standardized protocol and enhance their responses with it.
Before setting up the watsonx.ai Flows Engine MCP server, you'll need:

- Node.js and npm installed (used to build and run the server)
- the wxflows CLI installed and authenticated (used to deploy the tools)
- a watsonx.ai Flows Engine account with an API key and endpoint (added to your `.env` file later)

Clone the repository and change into the JavaScript example:
```
git clone https://github.com/IBM/wxflows.git
cd wxflows/examples/mcp/javascript
```
Navigate to the `wxflows` subdirectory of the example:

```
cd wxflows
```
Deploy the pre-configured tools to a Flows Engine endpoint:

```
wxflows deploy
```
This will deploy the endpoint and tools that will be used by the wxflows SDK in your application.
From the project's root directory, create your environment file:

```
cp .env.sample .env
```

Edit the `.env` file to add your credentials, including your watsonx.ai Flows Engine API key and endpoint.
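As a rough sketch, the finished `.env` might look like the following. The variable names here are illustrative assumptions, not guaranteed to match the project; use the names that actually appear in `.env.sample`:

```
# Hypothetical variable names — check .env.sample for the real ones
WXFLOWS_APIKEY=your-flows-engine-api-key
WXFLOWS_ENDPOINT=your-deployed-endpoint-url
```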
Install the necessary dependencies for the application:

```
npm i
```
Build the server by running:

```
npm run build
```
To use with Claude Desktop or other MCP clients, add the server configuration to your client's settings.
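For Claude Desktop, a minimal `mcpServers` entry might look like the sketch below. The server name and the path to the build output are assumptions; point `args` at wherever the built entry script lives in your local checkout:

```json
{
  "mcpServers": {
    "wxflows": {
      "command": "node",
      "args": ["/path/to/wxflows/examples/mcp/javascript/build/index.js"]
    }
  }
}
```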
Since MCP servers communicate over stdio, debugging can be challenging. The repository includes the MCP Inspector for debugging purposes:

```
npm run inspector
```
This will provide a URL to access debugging tools in your browser.
For questions or feedback, join the watsonx.ai Flows Engine Discord community at https://ibm.biz/wxflows-discord.