MCP Gateway
OpalServe acts as an MCP server itself, exposing all aggregated tools and knowledge base search to any MCP-compatible client. Instead of configuring each backend server individually, your AI tool connects to OpalServe once and gets everything.
How It Works
```
                        +---------------------+
[Claude Code] <--mcp--> |                     | <--mcp--> [GitHub Server]
[Cursor]      <--mcp--> |  OpalServe Gateway  | <--mcp--> [Filesystem Server]
[Codex]       <--mcp--> |                     | <--mcp--> [Slack Server]
[Any Client]  <--mcp--> |                     | <--mcp--> [Database Server]
                        +---------------------+
```
The gateway discovers tools from all connected backend servers and re-exposes them through a single MCP interface. When a client calls a tool, OpalServe routes the request to the correct backend server.
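The discovery-and-re-expose step can be sketched in TypeScript. The types and helper below are illustrative only, not OpalServe's actual internals: each backend's tools are merged into one flat list, with the server name prefixed so names stay unique across servers.

```typescript
// Illustrative sketch of tool aggregation across backends.
// The types and function names here are hypothetical, not OpalServe's API.
interface BackendTool {
  name: string;
  description: string;
}

interface Backend {
  serverName: string;
  tools: BackendTool[];
}

// Merge tools from every backend into one flat list, prefixing each
// tool with its server name so names stay unique across servers.
function aggregateTools(backends: Backend[]): BackendTool[] {
  return backends.flatMap((backend) =>
    backend.tools.map((tool) => ({
      name: `${backend.serverName}__${tool.name}`,
      description: `[${backend.serverName}] ${tool.description}`,
    }))
  );
}

const merged = aggregateTools([
  { serverName: "github", tools: [{ name: "create_issue", description: "Create a new issue" }] },
  { serverName: "files", tools: [{ name: "read_file", description: "Read a file" }] },
]);
// Logs the two prefixed tool names, one per backend tool.
console.log(merged.map((t) => t.name));
```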
Setup
Claude Desktop
Add to your Claude Desktop config (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "opalserve": {
      "command": "npx",
      "args": ["-y", "opalserve", "start", "--mcp"]
    }
  }
}
```
Claude Code
Add to your MCP settings:
```json
{
  "mcpServers": {
    "opalserve": {
      "command": "opalserve",
      "args": ["start", "--mcp"]
    }
  }
}
```
Cursor
In Cursor settings, add an MCP server with:
- Command: `opalserve`
- Arguments: `start --mcp`
Any MCP Client
OpalServe supports stdio transport by default. Any MCP client that can launch a process and communicate via stdin/stdout will work:
```
opalserve start --mcp
```
For remote clients, OpalServe also serves an SSE endpoint:
```
http://your-server:3456/mcp/sse
```
Meta-Tools
OpalServe registers these built-in tools automatically:
opalserve_search
Search for tools across all connected servers. Useful when the AI needs to discover available capabilities.
Input: { "query": "create issue", "limit": 5 }
Output: List of matching tools with descriptions and server names
Example:
```json
{
  "query": "create issue",
  "limit": 5
}
```
Returns:
```
Found 2 tools matching "create issue":

1. github:create_issue
   Create a new issue in a repository
   Server: github

2. github:create_pull_request
   Create a new pull request
   Server: github
```
opalserve_servers
List all registered servers and their status.
Input: {}
Output: Server list with connection status and tool counts
opalserve_context_search
Search the knowledge base for team documentation and context. This is the tool that makes your team's documentation accessible to AI coding assistants.
Input: { "query": "deployment process", "limit": 5, "tags": "ops" }
Output: Matching document chunks ranked by relevance
Example:
```json
{
  "query": "how do we deploy to production",
  "limit": 3
}
```
Returns:
```
Found 3 relevant documents:

1. [0.92] Deployment Guide
   "To deploy to production, merge your PR to the main branch.
   The CI/CD pipeline will automatically run tests, build the
   Docker image, and deploy to the production cluster..."

2. [0.81] Release Checklist
   "Before each release: 1) Ensure all staging tests pass.
   2) Update the changelog. 3) Tag the release in git..."

3. [0.74] Incident Runbook
   "If a deployment fails, immediately roll back using:
   kubectl rollout undo deployment/app..."
```
This is the key feature for teams: when developers ask their AI tools questions like "how do we deploy?" or "what's our database schema?", the AI can call opalserve_context_search and get your team's actual documentation instead of generic answers.
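To make the bracketed scores above concrete, here is a toy relevance ranker in TypeScript. It uses simple term overlap purely for illustration; OpalServe's actual search pipeline (which may use embeddings) is not shown here, and all names are hypothetical.

```typescript
// Toy relevance ranking to illustrate the shape of context-search
// results. This is NOT OpalServe's real retrieval implementation.
interface Doc {
  title: string;
  text: string;
}

// Fraction of query terms that appear in the document (0..1).
function score(query: string, doc: Doc): number {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  const body = (doc.title + " " + doc.text).toLowerCase();
  const hits = terms.filter((t) => body.includes(t)).length;
  return terms.length === 0 ? 0 : hits / terms.length;
}

// Score every document, drop non-matches, sort best-first, truncate.
function search(query: string, docs: Doc[], limit: number) {
  return docs
    .map((d) => ({ score: score(query, d), title: d.title }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit);
}

const docs: Doc[] = [
  { title: "Deployment Guide", text: "To deploy to production, merge your PR to main." },
  { title: "Release Checklist", text: "Before each release, update the changelog." },
  { title: "Style Guide", text: "Use two-space indentation." },
];

// The deployment doc matches every query term, so it ranks first.
console.log(search("deploy to production", docs, 3));
```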
Proxy Tools
Every tool discovered from backend servers is registered in the gateway with prefixed names:
| Backend Tool | Gateway Tool Name |
|---|---|
| create_issue (github) | github__create_issue |
| read_file (files) | files__read_file |
| query (database) | database__query |
| send_message (slack) | slack__send_message |
The double underscore (__) separates the server name from the tool name. When an MCP client calls one of these tools, OpalServe routes the request to the correct backend server.
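Splitting the prefixed name back apart can be sketched as follows. This helper is illustrative, and it assumes server names themselves never contain a double underscore, so only the first `__` is treated as the separator:

```typescript
// Split a gateway tool name like "github__create_issue" into its
// server and tool parts. Only the FIRST "__" is treated as the
// separator, so backend tool names containing "__" survive intact.
// (Hypothetical helper for illustration, not OpalServe's internals.)
function parseGatewayToolName(gatewayName: string): { server: string; tool: string } {
  const idx = gatewayName.indexOf("__");
  if (idx === -1) {
    throw new Error(`Not a proxied tool name: ${gatewayName}`);
  }
  return {
    server: gatewayName.slice(0, idx),
    tool: gatewayName.slice(idx + 2),
  };
}

// Logs the server and tool parts for two of the table's examples.
console.log(parseGatewayToolName("github__create_issue"));
console.log(parseGatewayToolName("database__query"));
```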
Tool Descriptions
Each proxy tool includes the original tool description plus the server name, so AI tools can understand where the tool comes from:
```
Tool: github__create_issue
Description: [GitHub] Create a new issue in a repository
Input Schema: { owner: string, repo: string, title: string, body?: string }
```
Team Mode Gateway
When connected to a team server (team-client mode), the gateway proxies tool calls through the team server rather than connecting to backend MCP servers directly:
```
[Your AI Tool] <--stdio--> [OpalServe Client] <--https--> [OpalServe Team Server] <--mcp--> [Backend Servers]
```
This means:
- Developers do not need direct network access to backend servers
- API tokens are managed centrally on the team server
- All tool calls are logged and rate-limited
- The admin can add/remove servers without any client-side changes
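The forwarding hop can be sketched as building an HTTPS request to the team server. The endpoint path and JSON fields below are assumptions for illustration (modeled on the `/api/v1/servers/...` paths the troubleshooting section uses), not OpalServe's documented wire format:

```typescript
// Sketch of a tool call forwarded by the client to the team server.
// Endpoint path, payload fields, and the token placeholder are all
// illustrative assumptions, not OpalServe's documented API.
function buildForwardedCall(
  teamServerUrl: string,
  server: string,
  tool: string,
  args: Record<string, unknown>
) {
  return {
    method: "POST" as const,
    url: `${teamServerUrl}/api/v1/servers/${server}/tools/${tool}/call`,
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <team-token>", // issued centrally by the team server
    },
    body: JSON.stringify({ arguments: args }),
  };
}

const req = buildForwardedCall("https://opalserve.internal", "github", "create_issue", {
  owner: "acme",
  repo: "web",
  title: "Fix login bug",
});
// Logs the assembled endpoint URL for the forwarded call.
console.log(req.url);
```

The point of the shape above is that the client never holds GitHub credentials: it only sends the tool name and arguments, and the team server attaches the backend API token itself.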
Programmatic Usage
```typescript
import { OpalServeRegistry, McpGateway } from 'opalserve';

const registry = await OpalServeRegistry.create();
await registry.start();

// Start as a stdio MCP server
const gateway = new McpGateway(registry);
await gateway.connectStdio();

// Or get the tool list programmatically
const tools = gateway.listTools();
console.log(`Gateway exposing ${tools.length} tools`);
```
Troubleshooting
Tools not showing up in my AI client
- Make sure OpalServe is running: `opalserve status`
- Check that servers are connected: `opalserve server list`
- Verify tools are indexed: `opalserve tools list`
- Restart your MCP client after configuration changes
"Server disconnected" errors
The backend MCP server may have crashed. Check:
```
opalserve health
opalserve health --server github
```
OpalServe will automatically attempt to reconnect. You can force a reconnect:
```
# Via CLI
opalserve server reconnect github

# Via API
curl -X POST http://localhost:3456/api/v1/servers/github/reconnect
```
High latency on tool calls
Tool call latency includes:
- MCP client to OpalServe (typically <1ms for stdio)
- OpalServe to backend server (depends on transport and server)
- Backend server processing time
Check per-server latency:
```
opalserve health
```
If a specific server is slow, consider using a faster transport (stdio is fastest for local servers).