Wanaku's core components include:

- The Wanaku router, which routes MCP requests to appropriate LLM providers based on configuration
- A CLI that generates connector projects and other components
Wanaku is a powerful router that standardizes how applications provide context to Large Language Models (LLMs) using the Model Context Protocol (MCP). It serves as a central hub for connecting AI-enabled applications, allowing them to communicate effectively with various LLM services, and it provides a robust infrastructure for building and deploying AI-powered applications. Its name is the origin of the word "guanaco," a camelid native to South America.
This guide will help you set up and use Wanaku effectively.
There are several ways to install and run Wanaku:
The simplest way to get started is using Docker Compose:
Clone the repository:
git clone https://github.com/wanaku-ai/wanaku.git
cd wanaku
Start Wanaku using Docker Compose:
docker-compose up
This will start the Wanaku router and its dependencies.
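To verify that everything came up, you can list the service status with the standard Compose command:

docker-compose ps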
Wanaku also supports JBang for quick execution:
jbang wanaku@wanaku-ai
To build Wanaku from source:
mvn clean install
java -jar wanaku-router/target/wanaku-router-*.jar
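If a test failure blocks the build and you just want a runnable artifact, Maven's standard flag skips test execution:

mvn clean install -DskipTests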
Wanaku can be configured through several methods, including a wanaku.properties file in your working directory. The full list of configuration options is covered in the project documentation; a sketch of what such a file might contain follows.
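As a minimal sketch: Wanaku's router is built on Quarkus, so standard Quarkus-style keys such as the HTTP port should apply, but the exact keys the router honors are assumptions here, so verify them against the documentation before relying on them:

# wanaku.properties — illustrative sketch only
# standard Quarkus HTTP port setting
quarkus.http.port=8080
# raise log verbosity while experimenting
quarkus.log.level=DEBUG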
Once Wanaku is running, you can register tools and resources with the router and point MCP-capable clients at it; the CLI introduced above is the usual way to do this, as sketched below.
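A minimal sketch of registering and listing a tool through the CLI; the subcommand names and flags shown are assumptions based on the CLI's stated purpose, so check them against the CLI's help output:

# register a hypothetical HTTP-backed tool with the router
wanaku tools add --name my-tool --description "Example HTTP-backed tool" --uri "http://localhost:9000/api" --type http
# confirm it shows up
wanaku tools list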
Wanaku supports extending its functionality through custom connectors, and the CLI described at the top of this page can generate new connector projects; a hypothetical invocation is sketched below.
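Purely as an illustration, assuming the generator is exposed as a subcommand (the subcommand and flag below are hypothetical, not taken from the CLI's actual interface):

# hypothetical subcommand for scaffolding a connector project
wanaku connector create --name my-connector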
If you encounter issues, start by confirming that all services are up and by inspecting the router logs, as shown below.
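Under Docker Compose, that might look like the following; the service name and port are assumptions about the Compose file:

# tail the router logs
docker-compose logs -f wanaku-router
# check that the router answers on its HTTP port
curl -v http://localhost:8080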
For more detail, see the official documentation in the repository's docs directory.