How to Connect GitHub Copilot CLI to MCPWorks
GitHub just shipped Copilot CLI as generally available for all Copilot subscribers. It's a terminal-native coding agent that can plan tasks, edit files, run tests, and iterate autonomously — and it ships with MCP support built in.
That means Copilot CLI can connect to MCPWorks namespaces out of the box. No plugins, no adapters, no workarounds. Add your namespace endpoints to .mcp.json and Copilot CLI can start creating and executing functions on MCPWorks infrastructure.
This post walks through the setup.
What Copilot CLI brings to the table
Copilot CLI is GitHub's answer to terminal-native AI development. It runs as an interactive agent in your terminal with three operating modes: step-by-step approval, plan-first execution (Shift+Tab), and full autopilot. It supports model switching mid-session between Claude Opus 4.6, Sonnet 4.6, GPT-5.3-Codex, GPT-5 mini, and Gemini 3 Pro using the /model command.
The key detail for MCPWorks users: Copilot CLI ships with GitHub's own MCP server built in and supports connecting to any custom MCP server. That's the integration point. MCPWorks endpoints speak standard MCP over HTTPS, so Copilot CLI treats them like any other MCP server.
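Because the endpoints speak standard MCP over HTTPS, any client can discover a namespace's functions with a JSON-RPC `tools/list` request. A minimal Python sketch of that request, using the example URL and token placeholder from this post (a real session also performs the MCP `initialize` handshake first, omitted here):

```python
import json
import urllib.request

# Placeholder endpoint from the post; substitute your own namespace.
RUN_ENDPOINT = "https://acme.run.mcpworks.io/mcp"

def build_tools_list(token: str, request_id: int = 1) -> urllib.request.Request:
    """Build (but do not send) a JSON-RPC 2.0 `tools/list` request."""
    payload = {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}
    return urllib.request.Request(
        RUN_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

This is the same discovery step Copilot CLI performs when it loads the config described below; nothing about it is MCPWorks-specific.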
How to connect Copilot CLI to MCPWorks
Add your MCPWorks namespace to the .mcp.json file in your project root. If you don't have one, create it:
{
  "mcpServers": {
    "mcpworks-create": {
      "type": "http",
      "url": "https://acme.create.mcpworks.io/mcp",
      "headers": { "Authorization": "Bearer YOUR_TOKEN" }
    },
    "mcpworks-run": {
      "type": "http",
      "url": "https://acme.run.mcpworks.io/mcp",
      "headers": { "Authorization": "Bearer YOUR_TOKEN" }
    }
  }
}
Replace acme with your namespace and YOUR_TOKEN with your MCPWorks API token. The create endpoint lets Copilot build and manage functions. The run endpoint lets it execute them.
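If you manage several namespaces, generating the file beats hand-editing it. A hypothetical Python helper that emits the two-endpoint config shown above (the URL pattern and key names come from the example; the function itself is not part of any MCPWorks tooling):

```python
import json

def mcpworks_config(namespace: str, token: str) -> dict:
    """Build the two-endpoint .mcp.json structure from the example above."""
    def endpoint(role: str) -> dict:
        return {
            "type": "http",
            "url": f"https://{namespace}.{role}.mcpworks.io/mcp",
            "headers": {"Authorization": f"Bearer {token}"},
        }
    return {
        "mcpServers": {
            "mcpworks-create": endpoint("create"),
            "mcpworks-run": endpoint("run"),
        }
    }

# Print the config; redirect to .mcp.json in your project root.
print(json.dumps(mcpworks_config("acme", "YOUR_TOKEN"), indent=2))
```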
Start Copilot CLI and it picks up the MCP configuration automatically. No restart required if you're already in a session — Copilot CLI reloads .mcp.json on changes.
What this looks like in practice
Once connected, you can ask Copilot CLI to create functions on MCPWorks the same way you'd ask it to edit local files:
$ copilot "create a function on mcpworks that fetches
the latest exchange rate for CAD to USD"
✓ Connected to acme.create.mcpworks.io
✓ Created function get_exchange_rate
→ Python, 18 lines, nsjail sandbox
$ copilot "what's the current CAD/USD rate?"
✓ Executing get_exchange_rate("CAD", "USD")
→ via acme.run.mcpworks.io
The current CAD/USD exchange rate is 0.7341.
Copilot writes the function code, deploys it to your MCPWorks namespace through the create endpoint, and executes it through the run endpoint. The function runs in an nsjail-isolated sandbox on MCPWorks infrastructure — not on your machine.
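Under the hood, executing a function is a standard JSON-RPC 2.0 `tools/call` request to the run endpoint, per the MCP spec. A sketch of that payload for the transcript above (the argument names "base" and "quote" are invented for illustration; the real schema is whatever the created function declares, and session setup is omitted):

```python
def tools_call_request(name: str, arguments: dict, request_id: int = 1) -> dict:
    """JSON-RPC 2.0 `tools/call` payload as defined by the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical argument names for the function from the transcript.
request = tools_call_request("get_exchange_rate", {"base": "CAD", "quote": "USD"})
```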
Because MCPWorks uses code-mode execution, Copilot only loads lightweight function names into its context window instead of full tool schemas. According to Anthropic's Code Execution MCP research (January 2026), this achieves 70-98% token savings compared to traditional MCP tool loading.
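To see why the savings are so large, compare what each approach puts in the context window. The sketch below contrasts a verbose tool schema with the name-only listing that code-mode loads; the schema, the crude `len // 4` token approximation, and the resulting percentage are all illustrative, not Anthropic's measurement:

```python
import json

# A plausible full tool schema for the exchange-rate function (invented
# for illustration) versus the bare name that code-mode loads instead.
full_schema = json.dumps({
    "name": "get_exchange_rate",
    "description": "Fetch the latest exchange rate between two currencies.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "base": {"type": "string", "description": "ISO 4217 code, e.g. CAD"},
            "quote": {"type": "string", "description": "ISO 4217 code, e.g. USD"},
        },
        "required": ["base", "quote"],
    },
})
name_only = "get_exchange_rate"

def approx_tokens(text: str) -> int:
    """Rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

saving = 1 - approx_tokens(name_only) / approx_tokens(full_schema)
print(f"approximate context saving for one tool: {saving:.0%}")
```

Multiply that by dozens of tools and the gap between schema loading and code-mode loading dominates the context budget.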
Why this matters for Copilot users specifically
Copilot CLI already ships with GitHub's MCP server, which gives it access to GitHub-specific tools — repositories, issues, pull requests. MCPWorks adds a different capability: custom, persistent functions that any AI client in your workflow can share.
Consider a team where some developers use Copilot CLI and others use Claude Code. Both connect to the same MCPWorks namespace. Functions created by one are immediately available to the other. There's no per-tool, per-client configuration. The namespace is the shared layer.
This also solves a practical problem with Copilot CLI's model flexibility. Copilot lets you switch between Claude, GPT, and Gemini mid-session. MCPWorks functions work identically regardless of which model is driving the session because the functions execute server-side in a sandbox. The model writes the code; MCPWorks runs it. Model choice affects code quality, not function compatibility.
Copilot CLI's autopilot mode and MCPWorks
Copilot CLI's autopilot mode lets it run end-to-end without approval interruptions. Combined with MCPWorks, this creates a fully autonomous loop: Copilot plans a task, creates the functions it needs on MCPWorks, executes them, evaluates results, and iterates — all without human intervention.
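The loop described above can be sketched as plain control flow. Everything here is hypothetical: none of these callables exist in Copilot CLI or MCPWorks, they simply stand in for "plan, create and execute on MCPWorks, evaluate, iterate":

```python
from typing import Any, Callable

def agent_loop(
    plan: Callable[[str], list],        # stand-in: Copilot plans the task
    execute: Callable[[Any], Any],      # stand-in: create/run on MCPWorks
    evaluate: Callable[[list], bool],   # stand-in: check the results
    task: str,
    max_rounds: int = 5,
) -> bool:
    """Control flow only: plan -> execute each step -> evaluate -> iterate."""
    steps = plan(task)
    for _ in range(max_rounds):
        results = [execute(step) for step in steps]
        if evaluate(results):
            return True
        steps = plan(task)  # a real agent would re-plan using the results
    return False
```

The point of the sketch is the absence of a human in the loop: with autopilot enabled, approval never interrupts the cycle.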
For teams running agent fleets, this means Copilot CLI can serve as an orchestrator that builds and maintains shared infrastructure on MCPWorks. Functions it creates or updates propagate instantly to every client connected to the namespace.
The background delegation feature (prefix any prompt with &) makes this even more practical. Offload a complex MCPWorks task to a cloud agent while you keep working in your terminal.
Getting started
- Sign up for MCPWorks — Developer Preview is free at the Builder tier for up to 90 days
- Create your namespace and grab your API token
- Add the .mcp.json config shown above to your project
- Open Copilot CLI and start building
MCPWorks works with any MCP client, but Copilot CLI's combination of model flexibility, autopilot mode, and background delegation makes it a particularly strong match for teams that want autonomous function creation and execution.
If you're already using Copilot CLI and want to extend it beyond GitHub tools, MCPWorks is the fastest path to custom, shared, persistent AI functions.
Questions? Reach out at [email protected] or find us on Bluesky.
Frequently asked questions
Does GitHub Copilot CLI support MCP servers?
Yes. Copilot CLI ships with MCP support built in, including GitHub's own MCP server and the ability to connect custom MCP servers. Any server that speaks standard MCP over HTTPS — including MCPWorks — works out of the box via .mcp.json configuration.
Which AI models does Copilot CLI support?
Copilot CLI supports Claude Opus 4.6, Sonnet 4.6, and Haiku 4.5, GPT-5.3-Codex, GPT-5 mini, and Gemini 3 Pro. You can switch models mid-session using the /model command. All models work with MCPWorks functions because execution happens server-side in a sandbox.
Can Copilot CLI and Claude Code share the same MCPWorks functions?
Yes. Both clients connect to the same MCPWorks namespace endpoints. Functions created by Copilot CLI are immediately available to Claude Code, and vice versa. The namespace is the shared layer — any MCP-compatible client can use it.
What is code-mode execution?
Code-mode execution is how MCPWorks reduces AI token consumption. Instead of loading full tool schemas into the AI's context window, the AI receives lightweight function names and writes code that runs in a secure sandbox. Intermediate data stays in the sandbox and never enters the context. Anthropic's research measured 70-98% token savings with this approach.
Is MCPWorks free to try?
Developer Preview is free at the Builder tier for up to 90 days, with no credit card required. The Builder tier includes unlimited functions, 25,000 executions per month, and 3 namespaces.
Further reading
- GitHub Copilot CLI is now generally available — GitHub's announcement
- MCPWorks Functions: Developer Preview is Open — What we built and why
- What is Code-Mode Execution in MCP? — How code-mode reduces token costs
- MCP Server Hosting: Self-Hosted vs Managed — Infrastructure comparison
- Model Context Protocol specification — The open standard
MCPWorks is open source.
Self-host free forever, or try MCPWorks Cloud — 14-day Pro trial, no credit card.