MCPWorks Blog

MCPWorks vs. LangChain: Where MCP Hosting Fits in the AI Stack

The AI tooling landscape has five distinct layers. Orchestration frameworks, tool platforms, serverless compute, agent builders, and MCP registries all solve different problems. MCPWorks sits in a gap none of them fill.

How Code-Mode Works: Zero-Context Tool Discovery for AI Agents

Code-mode generates lightweight function wrappers at execution time, letting AI agents call functions and remote MCP tools as native Python or TypeScript imports — with zero context window overhead.
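The wrapper idea can be sketched in a few lines. This is a minimal illustration, not MCPWorks' actual implementation: `make_tool_wrapper`, the endpoint path, and the payload shape are all assumptions, but it shows how a generated callable can forward to a remote tool while keeping every schema out of the model's context.

```python
import json
import urllib.request

def make_tool_wrapper(base_url: str, tool_name: str):
    """Build a plain Python callable that forwards to a remote MCP tool.

    The wrapper carries no schema text, so nothing about the tool enters
    the model's context window until the call actually executes.
    (Hypothetical endpoint layout for illustration only.)
    """
    def wrapper(**kwargs):
        req = urllib.request.Request(
            f"{base_url}/tools/{tool_name}",
            data=json.dumps(kwargs).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    wrapper.__name__ = tool_name
    return wrapper

# The agent's generated script then treats the tool as a native function:
#   send_message = make_tool_wrapper("https://proxy.example", "send_message")
#   send_message(channel="#general", text="deploy finished")
```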

OAuth 2.0 for MCP Servers: How AI Agents Authenticate to External Services

When an AI agent needs to read your Gmail or post to Slack, someone has to approve it. We built OAuth 2.0 into our MCP server proxy using RFC 8628 device flow — the same 'enter this code' pattern you use to sign into your TV.
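The polling half of the device flow is simple enough to sketch. This is a generic RFC 8628 loop, not MCPWorks code; `request_token` is an injected callable (an assumption made here so the logic stays self-contained), and the `authorization_pending` / `slow_down` error codes come straight from the spec.

```python
import time

def poll_for_token(request_token, device_code: str, interval: int = 5, timeout: int = 300):
    """Poll the token endpoint per RFC 8628 until the user approves.

    `request_token` is any callable that performs the HTTP POST and
    returns the parsed JSON response; it is injected so the flow logic
    can be exercised without a real authorization server.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = request_token(device_code)
        if "access_token" in resp:
            return resp["access_token"]
        error = resp.get("error")
        if error == "authorization_pending":
            time.sleep(interval)       # user hasn't entered the code yet
        elif error == "slow_down":
            interval += 5              # server asked us to back off
            time.sleep(interval)
        else:
            raise RuntimeError(f"device flow failed: {error}")
    raise TimeoutError("user never approved the device code")
```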

How MCPWorks Uses Grafana and Prometheus to Monitor AI Agent Infrastructure

Running AI agent infrastructure means watching things that traditional APM tools weren't designed for — sandbox execution durations, token savings, MCP proxy latencies, and prompt injection attempts. Here's how we built our observability stack.

When Your AI Lies About Lying: Anatomy of a Compounding Hallucination

A real toolkit. A wrong method name. An overcorrection that spiraled through four sessions and three report drafts. This is what compounding hallucination looks like in production — and why 'trust but verify' needs to become just 'verify.'

Persistent Memory for AI Agents with MemPalace

AI agents are stateless by default. Every session starts from zero. MemPalace is an open-source memory system that plugs into any MCP-compatible agent and gives it persistent, searchable recall across sessions. It works with MCPWorks today.

We Built a Pluggable Prompt Injection Defense — Not a Scanner, a Framework

No single prompt injection defense works. We built the framework instead — a configurable pipeline where you plug in your own scanners alongside our built-in defaults. Webhook any external service, load any Python classifier, observe every decision.

Why 40% of AI Agent Projects Will Fail — And How Proper Infrastructure Prevents It

Gartner predicts 40% of agentic AI projects will be cancelled by 2027. The gap between a working demo and a reliable production system is where projects are dying. Six threat categories, six defense layers, and the infrastructure most teams are missing.

The MCP Security Crisis: 1,800 Servers Without Authentication and What Comes Next

Over 1,800 public MCP servers have been discovered running without authentication. Five real breaches in 2025 alone. The industry is normalizing dangerous practices with MCP servers before a major incident forces everyone to take security seriously.

Per-Agent Access Control: Least Privilege for AI Agents

AI agents should not have unrestricted access to everything in your namespace. MCPWorks now supports per-agent function and state access rules with glob patterns and deny-takes-precedence semantics.
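Deny-takes-precedence glob matching fits in one function. A minimal sketch using the stdlib `fnmatch` module; the rule names and patterns here are illustrative, not MCPWorks' actual rule syntax.

```python
from fnmatch import fnmatch

def is_allowed(name: str, allow: list[str], deny: list[str]) -> bool:
    """Deny takes precedence: any matching deny rule blocks access;
    otherwise the name must match at least one allow rule."""
    if any(fnmatch(name, pat) for pat in deny):
        return False
    return any(fnmatch(name, pat) for pat in allow)

# Hypothetical example: an agent allowed "billing.*" but denied refunds.
# is_allowed("billing.report",     ["billing.*"], ["billing.refund*"])  -> True
# is_allowed("billing.refund_all", ["billing.*"], ["billing.refund*"])  -> False
```

Checking deny rules first is what makes the semantics safe: a broad allow pattern can never accidentally override a narrow deny.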

Path-Based Routing: Self-Host MCPWorks With Just an IP Address

We replaced wildcard subdomains with path-based routing. Self-hosting MCPWorks now requires zero DNS configuration — just an IP address and a port.

Why Prompt Injection Can't Steal Your API Keys on MCPWorks

MCPWorks prevents prompt injection from becoming data theft through four architectural layers: keys never enter AI context, agents can't author functions, output is scanned for leaked secrets, and trust boundaries wrap untrusted data.

Procedures: Auditable Execution Pipelines That Eliminate Agent Hallucination

When an AI agent is told to post to Bluesky, it can hallucinate calling the function — claiming success without executing anything. MCPWorks Procedures enforce that every step actually runs, with full audit trails and immutable versioning.

Agent Clusters: Scale Any Agent to N Replicas with One Tool Call

One tool call scales any MCPWorks agent to N replicas. Each replica gets full tier resources, a unique verb-animal name, and coordinated scheduling — all on existing PostgreSQL and Redis. No new infrastructure required.

Prompt Injection Defense: How MCPWorks Protects AI Agents from Adversarial Content

When your AI agent reads emails, any email could contain 'ignore previous instructions.' MCPWorks now detects and flags prompt injection attacks at the platform level — before the data reaches your AI.

MCPWorks is Now Open Source — Plus Third-Party MCP Server Plugins

MCPWorks is now open source. Self-host with docker compose up. This post explains the architecture behind 70-98% token savings and the new MCP server plugin system that makes any third-party MCP server callable from the sandbox.

Agent Intelligence Update: Conversation Memory, Self-Programming Heartbeats, and Context Budgets

Agents now remember across sessions and can leave themselves instructions for the next run. Conversation history persists with LLM compaction, heartbeat agents can write their own next instructions, and new state search tools mean agents no longer forget where they put things.

MCPWorks Agents: LLM-Agnostic Autonomous Agents on Managed Infrastructure

MCPWorks Agents are autonomous containers with persistent state, multi-trigger architecture, and heartbeat mode — where your agent wakes on interval and decides whether to act. BYOAI, MCP-native, no infrastructure to manage.

TypeScript Functions Are Live on MCPWorks

MCPWorks now supports TypeScript alongside Python. Your AI creates functions in either language, and they can call each other across the language boundary. Here's how it works, what you can build, and what the sandbox actually looks like.

Build an Ethereum Price Tracker Agent with MCPWorks

Build an autonomous agent that tracks ETH prices, stores history in a free Turso database, triggers on price swings, and sends AI-written market summaries to Discord — all configured through natural language in Claude Code.

MCPWorks Functions: Developer Preview is Open

MCPWorks Functions is live in Developer Preview with a 14-day Pro trial. Here's what we built, why it matters, and what code-mode execution means for your AI infrastructure costs.

How to Connect GitHub Copilot CLI to MCPWorks

GitHub Copilot CLI shipped with MCP support on day one. Connect it to your MCPWorks namespace and Copilot can create and run functions in a secure sandbox — with the same code-mode token savings you get from any MCP client.

Horses with Motors: Why AI Transformation Needs Proper Tools, Not Band-Aids

When a legal tech executive dismissed building an MCP server because 'no customer requested it,' a Siemens AI expert called it the core problem with corporate AI transformation. Here's why she's right — and what it means for MCP infrastructure.

MCP Server Hosting: Self-Hosted vs Managed

Should you run your own MCP servers or use managed hosting? We compare both approaches across infrastructure, security, cost, and scaling to help you decide.

What is the MCP Tool Overload Problem?

Every MCP server you connect adds tool definitions to your AI's context. With enough servers connected, those definitions can consume the majority of every interaction's context window. This is the tool overload problem.
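Back-of-the-envelope arithmetic shows how fast this compounds. All three numbers below are illustrative assumptions, not measurements:

```python
# Illustrative only: assumed counts, not measured values.
servers = 10           # connected MCP servers
tools_per_server = 20  # tools each server exposes
tokens_per_tool = 400  # tokens per tool definition in context

overhead = servers * tools_per_server * tokens_per_tool
print(overhead)  # 80000 tokens loaded before the user types a word
```

At those rates the tool definitions alone dwarf a typical prompt, and the cost is paid again on every single request.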

What is Code-Mode Execution in MCP?

Code-mode execution flips the traditional MCP model: instead of loading every tool definition into the AI's context, the AI writes code that runs in a sandbox. Here's how it works and why it matters.

Introducing MCPWorks: Infrastructure for the Agentic Era

We're building infrastructure, not intelligence. MCPWorks hosts secure execution environments for AI-driven function execution while you bring your own AI. Here's our philosophy and why it matters.