Introducing MCPWorks: Infrastructure for the Agentic Era

Simon Carr

We're at an inflection point in software.

AI assistants are becoming the interface through which developers and knowledge workers interact with their tools. Claude Code, GitHub Copilot, and dozens of other AI-powered environments are changing how we write code, automate tasks, and build systems.

But there's a gap. These AI assistants are powerful reasoners, but they need infrastructure to act on. They need secure places to execute code. They need functions to call. They need tools that don't leak credentials or expose sensitive data.

That's what MCPWorks is building.

What MCPWorks Is

MCPWorks is a namespace-based function hosting platform for AI assistants. Developers create Python or TypeScript functions, MCPWorks hosts them in secure sandboxes, and any MCP-compatible AI client can invoke them directly over HTTPS.

The tagline says it simply: Create functions. We host them. Your AI uses them.

We're not building another AI product. We're building the infrastructure where AI assistants can safely create and invoke custom tools.

The BYOAI Philosophy

BYOAI: Bring Your Own AI.

This is the core principle behind everything we build. MCPWorks is infrastructure, not an AI product.

| What We Provide | What You Bring |
| --- | --- |
| MCP infrastructure | Your AI (any LLM, API keys, or AI services) |
| Code Execution Sandbox | Your reasoning and tokens |
| Function backends | Your prompts and intent |
| Namespace endpoints (HTTPS) | Any MCP-compatible client |

Why does this matter?

AI-Agnostic. We don't favor any AI engine. MCPWorks works with Claude, Copilot, GPT, or any MCP-compatible client. Use whatever AI fits your needs.

No Token Costs From Us. You control your AI spend. We bill for infrastructure, not inference. We're not in the business of reselling tokens at a markup.

Future-Proof. As new AI assistants emerge, any that support MCP will work with MCPWorks. We're building on an open protocol, not proprietary lock-in.

No Vendor Lock-in. Connect your own LLMs and AI services. Switch providers whenever you want. Your functions keep working.

The Product Underneath the Product

What's the real product we're building? Efficiency.

Our infrastructure enables 70-98% reduction in AI token usage compared to traditional approaches. Here's why that matters:

AI inference is expensive. Every token your AI processes costs money, and every unnecessary piece of context loaded into a prompt is waste. As AI usage scales, the difference between efficient and inefficient tooling becomes the difference between sustainable and unsustainable operations.

MCPWorks reduces token usage through code-mode execution — two architectural principles working together:

Progressive Disclosure. Instead of loading all tool definitions into AI context upfront (often 150K+ tokens for large toolsets), MCPWorks loads only function names initially. Full schemas are retrieved on-demand when the AI actually needs them. This alone can reduce overhead by 98%.
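As a minimal sketch of progressive disclosure (the function names, schemas, and helper functions below are illustrative, not MCPWorks APIs): the AI's initial context holds only names, and a full schema is fetched only when the AI decides to use that tool.

```python
# Hypothetical tool registry. Names are cheap (a few tokens each);
# full schemas are comparatively large and fetched on demand.
FULL_SCHEMAS = {
    "create_order": {"params": {"sku": "string", "qty": "integer"},
                     "doc": "Create a new order."},
    "refund_order": {"params": {"order_id": "string"},
                     "doc": "Refund an existing order."},
}

def list_tools() -> list[str]:
    """Initial context payload: names only."""
    return sorted(FULL_SCHEMAS)

def get_schema(name: str) -> dict:
    """Retrieved only when the AI actually needs this tool."""
    return FULL_SCHEMAS[name]
```

With hundreds of tools, the upfront cost stays proportional to the name list, not to the sum of every schema.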

Code Execution Isolation. When AI writes code to call functions, intermediate data stays in our execution environment. It never enters the AI context unless needed for decisions. Process 50 orders? Only the final summary ("Processed 50 orders") returns to the AI, not 50 order objects.
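The 50-orders example can be sketched as follows, assuming a hypothetical `fetch_orders` function backend; the point is that the intermediate objects never get serialized into the prompt.

```python
def fetch_orders() -> list[dict]:
    # Stand-in for a function backend call; real data stays in the sandbox.
    return [{"id": i, "total": 10.0 + i} for i in range(50)]

def run_in_sandbox() -> str:
    orders = fetch_orders()            # 50 order objects live only here
    for order in orders:
        order["processed"] = True      # per-order work happens in the sandbox
    return f"Processed {len(orders)} orders"  # only this summary reaches the AI
```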

Today, this is a nice optimization. Tomorrow, as AI costs rise and usage scales, it becomes essential infrastructure for cost-effective AI operations.

What We're Building

Code Execution Sandbox

The cornerstone of MCPWorks is secure sandboxed execution for AI-generated code.

Your AI writes Python or TypeScript. We run it safely in an isolated environment with:

  • Linux namespaces for process isolation
  • cgroups for resource limits
  • seccomp for syscall filtering
  • Network isolation and controlled access

The AI sees one tool: run_code. That tool is a portal into a secure execution environment where functions can be composed, data can be processed, and sensitive information never leaks into the AI context.
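As a rough illustration of resource-limited execution (a simplified stand-in, not the MCPWorks implementation, which uses namespaces, cgroups, and seccomp), a child process can be capped with POSIX rlimits before running untrusted code:

```python
import resource
import subprocess
import sys

def run_code(code: str, timeout: float = 5.0) -> str:
    """Run untrusted Python in a child process with CPU and memory caps.
    Illustrative only: no namespaces, no seccomp, no network isolation."""
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))             # 2 s CPU
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)  # 256 MB
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no site/env
        preexec_fn=apply_limits,
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return proc.stdout
```

A real sandbox layers the Linux primitives listed above on top of limits like these, so a runaway or hostile snippet is contained rather than trusted.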

Direct HTTPS Namespace Endpoints

Every MCPWorks account gets dedicated namespace endpoints — no local software to install, no proxy, no Docker setup:

  • {namespace}.create.mcpworks.io — for creating and managing functions
  • {namespace}.run.mcpworks.io — for executing functions

Your AI assistant connects directly over HTTPS using the standard MCP streamable HTTP transport. Authentication is handled via bearer tokens. Add your namespace to .mcp.json and your AI is connected.
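A connection entry might look like the following sketch; the `acme` namespace and the token environment variable are placeholders, and the field layout follows the common MCP client convention for streamable HTTP servers rather than a documented MCPWorks schema.

```json
{
  "mcpServers": {
    "mcpworks-run": {
      "type": "http",
      "url": "https://acme.run.mcpworks.io",
      "headers": {
        "Authorization": "Bearer ${MCPWORKS_TOKEN}"
      }
    }
  }
}
```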

Function Backends

MCPWorks supports multiple execution backends behind a unified MCP interface. The Code Execution Sandbox is the foundation, but functions can also be backed by automation platforms, AI models, and external repositories — all accessible through the same namespace endpoints.

Our Values

Fear and Respect What You Build

We're creating infrastructure that enables AI assistants to craft and invoke arbitrary tools. Users bring their own reasoning engines. We provide the workshop. The combinations are infinite and unpredictable.

This is powerful. This should make us uncomfortable.

If we're not slightly afraid of our own platform, we're not thinking hard enough about what it enables. Fear keeps us from shipping recklessly. Respect keeps us from underestimating consequences. Humility keeps us from assuming we know all the use cases.

Infrastructure Over Intelligence

We don't pretend to be smarter than your AI. We provide reliable, secure, well-documented infrastructure. The intelligence comes from your chosen LLM. Our job is to not get in the way.

Simplicity Enables Power

The most powerful tools are often the simplest. Namespace endpoints. A sandbox. Function backends. The power comes from composition, not complexity.

Every layer we don't add is a dependency we don't have, an attack surface we don't expose, a failure mode we don't debug, and a cost we don't pay.

But simplicity never compromises security. We use Linux primitives correctly: namespaces for isolation, cgroups for limits, seccomp for syscall filtering. Simple doesn't mean naive.

The Current Moment

2026 is the year of agentic AI. Autonomous agents are going mainstream. MCP has become the industry standard for AI-to-tool connectivity, adopted by Google, Microsoft, Salesforce, and governed by the Linux Foundation.

But there's a governance gap. Organizations are deploying AI agents faster than they can secure them. Open-source personal AI agents are leaking credentials. Supply chain attacks target agent skill ecosystems. Agent-to-agent prompt injection is a real attack vector.

MCPWorks sits at the intersection of these trends: secure, hosted MCP infrastructure for the age of autonomous agents.

We're not trying to replace the AI assistants or the open-source agent ecosystems. We're building the secure execution layer that makes them deployable in production.

What's Next

We're in the early stages. Our first milestone, A0, covers the Code Execution Sandbox, function backends, namespace endpoints, and a basic dashboard. We're targeting 5-10 pilot users to validate the approach.

If you're building with AI assistants and need secure infrastructure for your functions, we'd love to talk. If you're running into token costs that don't scale, we might be able to help. If you're deploying AI agents and your security team is asking hard questions, we're building answers.

Create functions. We host them. Your AI uses them.

Questions or feedback? Reach out at [email protected].

MCPWorks is open source.

Self-host free forever, or try MCPWorks Cloud — 14-day Pro trial, no credit card.

View on GitHub · Cloud Trial (coming soon)