Blog
Agent Clusters: Scale Any Agent to N Replicas with One Tool Call
One tool call scales any MCPWorks agent to N replicas. Each replica gets full-tier resources, a unique verb-animal name, and coordinated scheduling — all on existing PostgreSQL and Redis. No new infrastructure required.
Prompt Injection Defense: How MCPWorks Protects AI Agents from Adversarial Content
When your AI agent reads emails, any email could contain 'ignore previous instructions.' MCPWorks now detects and flags prompt injection attacks at the platform level — before the data reaches your AI.
MCPWorks is Now Open Source — Plus Third-Party MCP Server Plugins
MCPWorks is now open source. Self-host with docker compose up. This post explains the architecture behind 70-98% token savings and the new MCP server plugin system that makes any third-party MCP server callable from the sandbox.
Agent Intelligence Update: Conversation Memory, Self-Programming Heartbeats, and Context Budgets
Agents now remember across sessions and can leave themselves instructions for the next run. Conversation history persists with LLM compaction, heartbeat agents can write their own next instructions, and new state search tools mean agents no longer forget where they put things.
MCPWorks Agents: LLM-Agnostic Autonomous Agents on Managed Infrastructure
MCPWorks Agents are autonomous containers with persistent state, multi-trigger architecture, and heartbeat mode — where your agent wakes on an interval and decides whether to act. BYOAI, MCP-native, no infrastructure to manage.
TypeScript Functions Are Live on MCPWorks
MCPWorks now supports TypeScript alongside Python. Your AI creates functions in either language, and they can call each other across the language boundary. Here's how it works, what you can build, and what the sandbox actually looks like.
Build an Ethereum Price Tracker Agent with MCPWorks
Build an autonomous agent that tracks ETH prices, stores history in a free Turso database, triggers on price swings, and sends AI-written market summaries to Discord — all configured through natural language in Claude Code.
MCPWorks Functions: Developer Preview is Open
MCPWorks Functions is live in Developer Preview. Free Builder access for up to 90 days. Here's what we built, why it matters, and what code-mode execution means for your AI infrastructure costs.
How to Connect GitHub Copilot CLI to MCPWorks
GitHub Copilot CLI shipped with MCP support on day one. Connect it to your MCPWorks namespace and Copilot can create and run functions in a secure sandbox — with the same code-mode token savings you get from any MCP client.
Horses with Motors: Why AI Transformation Needs Proper Tools, Not Band-Aids
When a legal tech executive dismissed building an MCP server because 'no customer requested it,' a Siemens AI expert called it the core problem with corporate AI transformation. Here's why she's right — and what it means for MCP infrastructure.
MCP Server Hosting: Self-Hosted vs Managed
Should you run your own MCP servers or use managed hosting? We compare both approaches across infrastructure, security, cost, and scaling to help you decide.
What is the MCP Tool Overload Problem?
Every MCP server you connect adds tool definitions to your AI's context. With enough servers, those definitions can consume the majority of the context window in every interaction. This is the tool overload problem.
What is Code-Mode Execution in MCP?
Code-mode execution flips the traditional MCP model: instead of loading every tool definition into the AI's context, the AI writes code that runs in a sandbox. Here's how it works and why it matters.
Introducing MCPWorks: Infrastructure for the Agentic Era
We're building infrastructure, not intelligence. MCPWorks hosts secure execution environments for AI-driven function execution while you bring your own AI. Here's our philosophy and why it matters.