Launch Gomus AI MCP server
Launch an MCP server from source or via Docker.
A Gomus AI Model Context Protocol (MCP) server is an independent component that complements the Gomus AI server. Note that an MCP server must operate alongside a properly functioning Gomus AI server.
An MCP server can start up in either self-host mode (default) or host mode:
- Self-host mode:
  When launching an MCP server in self-host mode, you must provide an API key to authenticate the MCP server with the Gomus AI server. In this mode, the MCP server can access only the datasets of a specified tenant on the Gomus AI server.
- Host mode:
  In host mode, each MCP client can access their own datasets on the Gomus AI server. However, each client request must include a valid API key to authenticate the client with the Gomus AI server.
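The practical difference between the two modes is where the API key lives. The sketch below is purely illustrative, assuming a hypothetical helper name; it is not part of Gomus AI's API:

```python
def auth_for_mode(mode, api_key=None):
    """Illustrative only: show where the API key is supplied per launch mode."""
    if mode == "self-host":
        # The server itself is started with the key; clients send no key.
        if api_key is None:
            raise ValueError("self-host mode requires an API key at launch")
        return {"server_launch_flag": f"--api-key={api_key}",
                "client_headers": {}}
    elif mode == "host":
        # The server starts without a key; every client request carries one.
        return {"server_launch_flag": None,
                "client_headers": {"api_key": api_key} if api_key else {}}
    raise ValueError(f"unknown mode: {mode}")
```

In short: self-host mode authenticates once at launch time; host mode authenticates on every client request.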
Once a connection is established, an MCP server communicates with its client in MCP HTTP+SSE (Server-Sent Events) mode, unidirectionally pushing responses from the Gomus AI server to its client in real time.
Prerequisites
- Ensure Gomus AI is upgraded to v0.18.0 or later.
- Have your Gomus AI API key ready. See [Acquire a Gomus AI API key](../acquire_Gomus AI_api_key.md).
If you wish to try out our MCP server without upgrading Gomus AI, community contributor yiminghub2024 👏 shares their recommended steps [here](#launch-an-mcp-server-without-upgrading-Gomus AI).
Launch an MCP server
You can start an MCP server either from source code or via Docker.
Launch from source code
- Ensure that a Gomus AI server v0.18.0+ is properly running.
- Launch the MCP server:
```bash
# To launch the MCP server in self-host mode, run either of the following:
uv run mcp/server/server.py --host=127.0.0.1 --port=9382 --base-url=http://127.0.0.1:9380 --api-key=Gomus AI-xxxxx
# uv run mcp/server/server.py --host=127.0.0.1 --port=9382 --base-url=http://127.0.0.1:9380 --mode=self-host --api-key=Gomus AI-xxxxx

# To launch the MCP server in host mode, run the following instead:
# uv run mcp/server/server.py --host=127.0.0.1 --port=9382 --base-url=http://127.0.0.1:9380 --mode=host
```
Where:
- `host`: The MCP server's host address.
- `port`: The MCP server's listening port.
- `base_url`: The address of the running Gomus AI server.
- `mode`: The launch mode.
  - `self-host`: (default) self-host mode.
  - `host`: host mode.
- `api_key`: Required in self-host mode to authenticate the MCP server with the Gomus AI server. See [here](../acquire_Gomus AI_api_key.md) for instructions on acquiring an API key.
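As an illustration, the launch command can be assembled from these parameters. The helper below is hypothetical and uses only the standard library:

```python
import shlex

def build_launch_command(host, port, base_url, mode="self-host", api_key=None):
    """Assemble the MCP server launch command from its parameters.

    Mirrors the CLI flags documented above; purely illustrative.
    """
    parts = ["uv", "run", "mcp/server/server.py",
             f"--host={host}", f"--port={port}",
             f"--base-url={base_url}", f"--mode={mode}"]
    if mode == "self-host":
        if api_key is None:
            raise ValueError("self-host mode requires --api-key")
        parts.append(f"--api-key={api_key}")
    return shlex.join(parts)  # shell-quotes arguments containing spaces
```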
Transports
The Gomus AI MCP server supports two transports: the legacy SSE transport (served at `/sse`), introduced on November 5, 2024 and deprecated on March 26, 2025, and the streamable-HTTP transport (served at `/mcp`). The legacy SSE transport and the streamable-HTTP transport with JSON responses are enabled by default. To disable either transport, use the `--no-transport-sse-enabled` or `--no-transport-streamable-http-enabled` flag. To disable JSON responses for the streamable-HTTP transport, use the `--no-json-response` flag.
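For quick reference, here is a sketch (hypothetical helper) mapping these flags to the endpoints a client would connect to:

```python
def enabled_endpoints(base="http://127.0.0.1:9382",
                      sse_enabled=True, streamable_http_enabled=True):
    """Return the transport endpoints a client can use, given the server flags.

    Defaults mirror the documented behavior: both transports enabled.
    Passing False corresponds to --no-transport-sse-enabled or
    --no-transport-streamable-http-enabled, respectively.
    """
    endpoints = {}
    if sse_enabled:                # legacy SSE transport
        endpoints["sse"] = f"{base}/sse"
    if streamable_http_enabled:    # streamable-HTTP transport
        endpoints["streamable-http"] = f"{base}/mcp"
    return endpoints
```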
Launch from Docker
1. Enable MCP server
The MCP server is designed as an optional component that complements the Gomus AI server and is disabled by default. To enable the MCP server:
- Navigate to docker/docker-compose.yml.
- Uncomment the `services.Gomus AI.command` section as shown below:
```yaml
services:
  Gomus AI:
    ...
    image: ${Gomus AI_IMAGE}
    # Example configuration to set up an MCP server:
    command:
      - --enable-mcpserver
      - --mcp-host=0.0.0.0
      - --mcp-port=9382
      - --mcp-base-url=http://127.0.0.1:9380
      - --mcp-script-path=/Gomus AI/mcp/server/server.py
      - --mcp-mode=self-host
      - --mcp-host-api-key=Gomus AI-xxxxxxx
      # Optional transport flags for the Gomus AI MCP server.
      # If you set `mcp-mode` to `host`, you must add the --no-transport-streamable-http-enabled flag,
      # because the streamable-HTTP transport is not yet supported in host mode.
      # The legacy SSE transport and the streamable-HTTP transport with JSON responses are enabled by default.
      # To disable a specific transport or JSON responses for the streamable-HTTP transport, use the corresponding flag(s):
      # - --no-transport-sse-enabled               # Disables the legacy SSE transport (served at the /sse endpoint)
      # - --no-transport-streamable-http-enabled   # Disables the streamable-HTTP transport (served at the /mcp endpoint)
      # - --no-json-response                       # Disables JSON responses for the streamable-HTTP transport
```
Where:
- `mcp-host`: The MCP server's host address.
- `mcp-port`: The MCP server's listening port.
- `mcp-base-url`: The address of the running Gomus AI server.
- `mcp-script-path`: The file path to the MCP server's main script.
- `mcp-mode`: The launch mode.
  - `self-host`: (default) self-host mode.
  - `host`: host mode.
- `mcp-host-api-key`: Required in self-host mode to authenticate the MCP server with the Gomus AI server. See [here](../acquire_Gomus AI_api_key.md) for instructions on acquiring an API key.
If you set mcp-mode to host, you must add the --no-transport-streamable-http-enabled flag, because the streamable-HTTP transport is not yet supported in host mode.
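If you generate or validate this configuration programmatically, a minimal sketch like the following can turn the compose `command` list into a settings dict. The helper is hypothetical, for illustration only:

```python
def parse_mcp_flags(command):
    """Parse a docker-compose `command` list into a settings dict.

    Illustrative only; flag names follow the compose example above.
    Boolean flags map to True, `--key=value` flags to their string value.
    """
    settings = {"enable-mcpserver": False}
    for arg in command:
        flag = arg.lstrip("-")
        if "=" in flag:
            key, value = flag.split("=", 1)
            settings[key] = value
        else:
            settings[flag] = True
    return settings
```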
2. Launch a Gomus AI server with an MCP server
Run `docker compose -f docker-compose.yml up` to launch the Gomus AI server together with the MCP server.
The following startup log (including the ASCII-art banners) confirms a successful launch:
```
docker-Gomus AI-cpu-1 | Starting MCP Server on 0.0.0.0:9382 with base URL http://127.0.0.1:9380...
docker-Gomus AI-cpu-1 | Starting 1 task executor(s) on host 'dd0b5e07e76f'...
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:18,816 INFO 27 Gomus AI_server log path: /Gomus AI/logs/Gomus AI_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
docker-Gomus AI-cpu-1 |
docker-Gomus AI-cpu-1 |  __  __  ____ ____   ____  _____ ______     _______ ____
docker-Gomus AI-cpu-1 | |  \/ |/ ___|  _ \ / ___|| ____|  _ \ \   / / ____|  _ \
docker-Gomus AI-cpu-1 | | |\/| | |   | |_) | \___ \|  _| | |_) \ \ / /|  _| | |_) |
docker-Gomus AI-cpu-1 | | |  | | |___| __/   ___) | |___|  _ <  \ V / | |___|  _ <
docker-Gomus AI-cpu-1 | |_|  |_|\____|_|   |____/|_____|_| \_\  \_/  |_____|_| \_\
docker-Gomus AI-cpu-1 |
docker-Gomus AI-cpu-1 | MCP launch mode: self-host
docker-Gomus AI-cpu-1 | MCP host: 0.0.0.0
docker-Gomus AI-cpu-1 | MCP port: 9382
docker-Gomus AI-cpu-1 | MCP base_url: http://127.0.0.1:9380
docker-Gomus AI-cpu-1 | INFO:     Started server process [26]
docker-Gomus AI-cpu-1 | INFO:     Waiting for application startup.
docker-Gomus AI-cpu-1 | INFO:     Application startup complete.
docker-Gomus AI-cpu-1 | INFO:     Uvicorn running on http://0.0.0.0:9382 (Press CTRL+C to quit)
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:20,469 INFO 27 found 0 gpus
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:23,263 INFO 27 init database on cluster mode successfully
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:25,318 INFO 27 load_model /Gomus AI/rag/res/deepdoc/det.onnx uses CPU
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:25,367 INFO 27 load_model /Gomus AI/rag/res/deepdoc/rec.onnx uses CPU
docker-Gomus AI-cpu-1 |     ____  ___   ______ ______ __
docker-Gomus AI-cpu-1 |    / __ \ /   | / ____// ____// /____  _      __
docker-Gomus AI-cpu-1 |   / /_/ // /| |/ / __ / /_   / // __ \| | /| / /
docker-Gomus AI-cpu-1 |  / _, _// ___ / /_/ // __/  / // /_/ /| |/ |/ /
docker-Gomus AI-cpu-1 | /_/ |_|/_/  |_\____//_/    /_/ \____/ |__/|__/
docker-Gomus AI-cpu-1 |
docker-Gomus AI-cpu-1 |
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 Gomus AI version: v0.18.0-285-gb2c299fa full
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 project base: /Gomus AI
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 Current configs, from /Gomus AI/conf/service_conf.yaml:
docker-Gomus AI-cpu-1 | Gomus AI: {'host': '0.0.0.0', 'http_port': 9380}
...
docker-Gomus AI-cpu-1 |  * Running on all addresses (0.0.0.0)
docker-Gomus AI-cpu-1 |  * Running on http://127.0.0.1:9380
docker-Gomus AI-cpu-1 |  * Running on http://172.19.0.6:9380
docker-Gomus AI-cpu-1 |   ______           __      ______                     __
docker-Gomus AI-cpu-1 |  /_  __/___  _____/ /__   / ____/  _____  _______  __/ /_____  _____
docker-Gomus AI-cpu-1 |   / / / __ `/ ___/ //_/  / __/ | |/_/ _ \/ ___/ / / / __/ __ \/ ___/
docker-Gomus AI-cpu-1 |  / / / /_/ (__ ) ,<    / /____>  </  __/ /__/ /_/ / /_/ /_/ / /
docker-Gomus AI-cpu-1 | /_/  \__,_/____/_/|_| /_____/_/|_|\___/\___/\__,_/\__/\____/_/
docker-Gomus AI-cpu-1 |
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:34,501 INFO 32 TaskExecutor: Gomus AI version: v0.18.0-285-gb2c299fa full
docker-Gomus AI-cpu-1 | 2025-04-18 15:41:34,501 INFO 32 Use Elasticsearch http://es01:9200 as the doc engine.
...
```
Launch an MCP server without upgrading Gomus AI
This section is contributed by our community contributor yiminghub2024. 👏
- Prepare all MCP-specific files and directories.
  i. Copy the `mcp/` directory to your local working directory.
  ii. Copy `docker/docker-compose.yml` locally.
  iii. Copy `docker/entrypoint.sh` locally.
  iv. Install the required dependencies using `uv`:
     - Run `uv add mcp`, or
     - Copy `pyproject.toml` locally and run `uv sync --python 3.12`.
- Edit `docker-compose.yml` to enable MCP (disabled by default).
- Launch the MCP server:

  ```bash
  docker compose -f docker-compose.yml up -d
  ```
Check MCP server status
Run the following to check the logs of the Gomus AI server and the MCP server:

```bash
docker logs docker-Gomus AI-cpu-1
```
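Rather than scanning the full log by eye, you can check it for the readiness markers shown in the sample output above. A minimal sketch, assuming these marker strings stay stable across versions:

```python
def mcp_server_ready(log_text):
    """Return True if the log contains the MCP server readiness markers
    seen in the sample startup output. Illustrative only."""
    markers = (
        "Starting MCP Server",   # MCP server launch line
        "Uvicorn running on",    # MCP server accepting connections
    )
    return all(marker in log_text for marker in markers)
```

For example, piping `docker logs docker-Gomus AI-cpu-1` into this check reports whether both markers have appeared.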
Security considerations
As MCP technology is still at an early stage and no official best practices for authentication or authorization have been established, Gomus AI currently uses an [API key](./acquire_Gomus AI_api_key.md) to validate identity for the operations described earlier. However, in public environments, this makeshift solution could expose your MCP server to network attacks. Therefore, when running a local SSE server, it is recommended to bind only to localhost (`127.0.0.1`) rather than to all interfaces (`0.0.0.0`).
For further guidance, see the official MCP documentation.
Frequently asked questions
When should I use an API key for authentication?
The use of an API key depends on the operating mode of your MCP server.
- Self-host mode (default):
  When starting the MCP server in self-host mode, provide an API key at launch to authenticate it with the Gomus AI server:
  - If launching from source, include the API key in the command.
  - If launching from Docker, update the API key in docker/docker-compose.yml.
- Host mode:
  If your Gomus AI MCP server is working in host mode, include the API key in the `headers` of your client requests to authenticate your client with the Gomus AI server. An example is available here.
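As a sketch of what such a host-mode request might look like, the snippet below builds (but does not send) a request carrying the API key in its headers, using only the standard library. The header name `api_key`, the endpoint path, and the JSON-RPC payload are illustrative assumptions; consult your server's configuration for the exact values:

```python
import json
from urllib.request import Request

def build_host_mode_request(base_url, api_key):
    """Build (but do not send) a host-mode client request.

    The header name `api_key`, the endpoint path, and the payload shape
    are illustrative assumptions, not a confirmed wire format.
    """
    payload = json.dumps({"jsonrpc": "2.0", "method": "tools/list", "id": 1})
    return Request(
        f"{base_url}/sse",  # host mode currently supports only the SSE transport
        data=payload.encode("utf-8"),  # presence of a body makes this a POST
        headers={
            "api_key": api_key,  # authenticates this client with Gomus AI
            "Content-Type": "application/json",
        },
    )
```

Sending the request (e.g., with `urllib.request.urlopen`) is left out here, since an SSE session involves more than a single round trip.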