Extending Gemini CLI with custom tools is accomplished primarily through the Model Context Protocol (MCP), which provides a standardized way to connect external services and capabilities to the AI agent. To add custom tools, you configure MCP servers in your Gemini CLI settings file, located at ~/.gemini/settings.json. This JSON file defines which MCP servers to connect to, their connection parameters, and any authentication requirements. The configuration supports both local MCP servers running on your machine and remote servers reachable over the network.
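As a sketch, a settings.json entry with one local and one remote server might look like the following. The server names, package, URL, and token are illustrative placeholders, not real endpoints:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token-here>"
      }
    },
    "my-remote-server": {
      "httpUrl": "https://example.com/mcp",
      "headers": {
        "Authorization": "Bearer <your-token-here>"
      }
    }
  }
}
```

Local servers are launched as child processes via command and args, while remote servers are addressed by URL; environment variables and headers carry any credentials the server requires.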
The process begins by identifying or creating an MCP server that exposes the functionality you want to add. Many popular services already have MCP server implementations available, including GitHub, GitLab, Firebase, databases, and media generation services. For custom functionality, you can create your own MCP server using the MCP SDK in languages like Python or TypeScript. Once you have an MCP server, you add it to your settings.json file with the appropriate configuration, including the server type, connection details, authentication tokens, and any parameters that server requires.
After configuring your MCP servers, restart Gemini CLI and use the /mcp command to verify that your servers are connected and their tools are available. The AI will automatically discover the tools provided by your MCP servers and integrate them into its available capabilities. You can use /mcp desc to see detailed descriptions of all available tools from your configured servers. The beauty of this system is that once configured, these custom tools become seamlessly integrated into Gemini CLI's workflow: the AI will automatically select and use them when appropriate based on your prompts. For advanced users, you can also create bundled extensions that combine MCP servers with specific configurations and GEMINI.md files to create complete, reusable toolsets for specific workflows or team requirements.
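As a sketch of the extension approach, an extension directory typically pairs a manifest with a GEMINI.md context file. Assuming the manifest format used by Gemini CLI extensions (gemini-extension.json), a bundle might look like this; the name and server command are placeholders:

```json
{
  "name": "word-tools-extension",
  "version": "1.0.0",
  "mcpServers": {
    "word-tools": {
      "command": "python",
      "args": ["server.py"]
    }
  },
  "contextFileName": "GEMINI.md"
}
```

Distributing a directory like this lets teammates pick up the same MCP servers and project context without hand-editing their own settings.json.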