# Supervisor MCP Server

## Overview
Hybrid HTTP/MCP service that orchestrates multiple specialised agents and MCP tools using LangGraph's supervisor workflow. It runs a FastAPI app (for `/health` and SSE transport) alongside an MCP server ("Supervisor MCP Server") that exposes high-level tools to coordinate downstream agents.

Unlike the other services, this server does not have a dedicated `*.mcp.yml` entry; it is typically run as part of the BuddAI stack or invoked manually when the orchestration layer is required.
## Components

- `main.py` – boots FastAPI + FastMCP, registers the public tools, and streams progress updates back to the caller. Loads agents during the application lifespan and exposes `/health` for container health checks.
- `loaders/agent_loader.py` – discovers `agents/*.agent.yml`, builds LangChain agents with `create_react_agent`, and compiles a LangGraph supervisor with MongoDB-backed checkpoints.
- `loaders/mcp_loader.py` – reads every `mcp_servers/*.mcp.yml`, initialises a `MultiServerMCPClient`, and hydrates MCP tools for use inside LangGraph agents.
- `helpers/extract_content.py` – parses streamed LangGraph events to surface agent names and the final message for MCP responses.
- `Dockerfile` – Python 3.13 build that vendors dependencies, copies agent definitions and MCP configs, installs `docker` + `curl` for nested tool execution, and defines a `HEALTHCHECK` hitting `/health`.
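To make the role of `helpers/extract_content.py` concrete, here is a minimal sketch of that kind of event parsing. It assumes LangGraph's "updates"-style stream events, shaped as `{node_name: {"messages": [...]}}`; the real module's contract may differ, so treat the function name and event shape as illustrative.

```python
def extract_progress(event: dict) -> tuple[list, object]:
    """From one streamed LangGraph 'updates' event, pull the agent (node)
    names and the latest message content, if any.  The assumed event shape,
    {node_name: {"messages": [...]}}, is an illustration of what
    helpers/extract_content.py parses, not its exact contract."""
    agent_names = []
    final_content = None
    for node_name, payload in event.items():
        agent_names.append(node_name)
        messages = (payload or {}).get("messages") or []
        if messages:
            last = messages[-1]
            # Messages may arrive as plain dicts or LangChain message objects.
            if isinstance(last, dict):
                final_content = last.get("content")
            else:
                final_content = getattr(last, "content", None)
    return agent_names, final_content
```

The agent names feed the progress updates reported back over MCP, while the final content becomes the tool's return value.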
Exposed MCP tools
Tool | Purpose | Input |
---|---|---|
run_supervisor | Orchestrates a conversation among configured agents, streaming progress via ctx.report_progress . Returns the supervisor’s final message content. | prompt , optional user_id (defaults to last caller). |
notify_user | Generates a notification message for an agenda reminder, adapting tone based on snooze counters. | agenda dict with rememberText and status . |
Both tools rely on the supervisor graph created during startup. `run_supervisor` pulls agent definitions and the underlying MCP tools; `notify_user` reuses the same supervisor to produce a natural-language reminder.
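As a rough illustration of the snooze-aware tone adaptation, the sketch below composes the kind of instruction `notify_user` could hand to the supervisor. The `snoozeCount` field and the tone thresholds are assumptions; only `rememberText` and `status` are documented above.

```python
def build_reminder_prompt(agenda: dict) -> str:
    """Illustrative only: compose a supervisor instruction for an agenda
    reminder.  snoozeCount and the tone cut-offs are assumptions, not the
    actual notify_user implementation."""
    text = agenda["rememberText"]
    status = agenda.get("status", "pending")
    snoozes = int(agenda.get("snoozeCount", 0))  # assumed counter field
    if snoozes == 0:
        tone = "friendly"
    elif snoozes < 3:
        tone = "gently insistent"
    else:
        tone = "firm"
    return (f"Write a {tone} notification reminding the user about: "
            f"{text!r} (status: {status}, snoozed {snoozes} times).")
```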
## Configuration

Environment variables consumed by various subsystems:

- MongoDB: `MONGO_URI` – required for LangGraph checkpointing (`MongoDBSaver`).
- LLM providers: whichever secrets the `langchain` backends require (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `LANGFUSE_*`, etc.). These are passed through from `.env` via `dotenv`.
- MCP tool configs: `loaders/mcp_loader.py` reads every `*.mcp.yml` and substitutes `${VAR}` with environment values. Ensure all downstream services have their variables available when the supervisor starts, as it may spin up Docker containers for them.
- Debugging: set `LOG_LEVEL` and `LANGCHAIN_HANDLER` env vars as needed; FastAPI logging for `/health` is silenced by middleware.
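The `${VAR}` substitution described above can be sketched as a small recursive walk over the parsed YAML. This is a minimal stand-in for the behaviour of `loaders/mcp_loader.py`, not its actual code; note it leaves unresolved placeholders intact rather than raising, which may differ from the real loader.

```python
import os
import re

_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def substitute_env(value):
    """Recursively replace ${VAR} placeholders in a parsed *.mcp.yml
    structure with values from os.environ.  Unset variables are left
    as-is (an assumption; the real loader may handle them differently)."""
    if isinstance(value, str):
        return _VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), value)
    if isinstance(value, dict):
        return {k: substitute_env(v) for k, v in value.items()}
    if isinstance(value, list):
        return [substitute_env(v) for v in value]
    return value
```

Because substitution happens at load time, every variable a downstream service references must already be in the supervisor's environment when it starts.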
## Runtime expectations

- MongoDB must allow the creation of the `supervisor_memory_checkpoints` and `supervisor_memory_writes` collections.
- Agent configuration files (`agents/*.agent.yml`) must exist and reference valid tools/LLM models.
- Docker-in-Docker access is required if agents invoke MCP tools defined as Docker commands (the container installs the Docker CLI for that reason).
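For orientation, a hypothetical `agents/*.agent.yml` might look like the fragment below. Every key shown (`name`, `model`, `prompt`, `tools`) is an assumption about a plausible schema; consult `loaders/agent_loader.py` for the fields it actually requires.

```yaml
# Hypothetical agents/research.agent.yml -- field names are assumptions,
# not the schema enforced by loaders/agent_loader.py.
name: research_agent
model: anthropic:claude-3-5-sonnet-latest
prompt: |
  You are a research assistant. Answer using the tools available to you.
tools:
  - web_search   # must match a tool hydrated from a *.mcp.yml config
```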
## Running locally

- Install Python ≥ 3.11 and create a virtualenv.
- Install dependencies: run `pip install -e .` from `mcp_servers/supervisor` using the provided `pyproject.toml`/`uv.lock` (the equivalent of `pip install -r requirements`).
- Populate `.env` with `MONGO_URI`, LLM keys (OpenAI, Anthropic, etc.), and all secrets required by downstream MCP servers.
- Ensure `agents/*.agent.yml` is populated and references the correct agent models/tools.
- Start the service with `python main.py`. FastAPI listens on `:8000` and the MCP interface is exposed via Server-Sent Events (`FastMCP.sse_app()`), which clients such as `langchain_mcp_adapters` can connect to.
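Because a missing `MONGO_URI` only surfaces as a checkpointing failure deep inside LangGraph, a tiny preflight check before `python main.py` can fail fast instead. This helper is not part of the repo; the `SUGGESTED` list is illustrative and depends on which providers your agents use.

```python
import os

REQUIRED = ["MONGO_URI"]                               # hard requirement per this README
SUGGESTED = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]    # depends on your configured agents

def preflight(env=None):
    """Return (missing_required, missing_suggested) so configuration gaps
    surface before the supervisor tries to connect to MongoDB."""
    env = os.environ if env is None else env
    missing = [v for v in REQUIRED if not env.get(v)]
    warn = [v for v in SUGGESTED if not env.get(v)]
    return missing, warn
```

Call `preflight()` at the top of a launch script and abort if the first list is non-empty; the second list is only worth a warning.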
## Docker usage

Build the image (execute from the repository root so the Dockerfile paths resolve):

```
docker build -t buddai/mcp-supervisor .
```

Run with the required secrets mounted:

```
docker run --rm \
  -p 8000:8000 \
  --network buddai_net \
  -e MONGO_URI=mongodb://mongo:27017/buddai \
  -e OPENAI_API_KEY=... \
  -e ANTHROPIC_API_KEY=... \
  --env-file other-mcp.env \
  buddai/mcp-supervisor
```

The container publishes `/health` for orchestration and streams MCP responses over HTTP (`/`).
## Development notes

- Agents are re-created at startup; changes to `agents/*.agent.yml` require a restart.
- `loaders/mcp_loader.py` closes over a global `MultiServerMCPClient`; ensure Docker sockets and network permissions allow it to launch other MCP services when running in a restricted environment.
- The Dockerfile copies `mcp_servers/*.mcp.yml` into the image so the supervisor has access to local tool definitions even when run in isolation.
- Legacy tests live under `old_test/`; they reference outdated APIs and serve mainly as historical context.
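For context on the global client, `MultiServerMCPClient` takes a mapping of server names to connection configs. The sketch below shows how parsed `*.mcp.yml` documents could be folded into that mapping for stdio transports launched as Docker commands; the per-file schema (`command`/`args`/`env`) is an assumption about this repo's YAML, not a documented contract.

```python
def build_connections(mcp_configs: dict) -> dict:
    """Fold parsed *.mcp.yml documents into the {name: connection} mapping
    accepted by langchain_mcp_adapters' MultiServerMCPClient.  The input
    schema is assumed; check loaders/mcp_loader.py for the real one."""
    connections = {}
    for name, cfg in mcp_configs.items():
        connections[name] = {
            "transport": "stdio",                  # server launched as a subprocess
            "command": cfg.get("command", "docker"),
            "args": cfg.get("args", []),
            "env": cfg.get("env", {}),
        }
    return connections

# Later, roughly:
#   client = MultiServerMCPClient(build_connections(configs))
#   tools = await client.get_tools()
```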
## Troubleshooting

- Supervisor fails to start: check that MongoDB is reachable and that all referenced `agents/*.agent.yml` files exist and parse correctly.
- Tool launch errors: confirm Docker is available inside the container and that each referenced MCP service image exists (`docker pull` as needed).
- LLM authentication errors: ensure environment variables for the chosen providers are passed through; `init_chat_model` uses `configurable_fields` to set model/temperature per agent.
- No progress updates: verify that `ctx.report_progress` is called (logs showing agent names should appear) and adjust `helpers/extract_content.py` if event formats change.