MCPWorks vs. LangChain: Where MCP Hosting Fits in the AI Stack
We get asked a version of this question every week: "How is MCPWorks different from LangChain?" The short answer is that LangChain is a library you import and MCPWorks is infrastructure you connect to. But the long answer is more interesting, because the AI tooling landscape has fractured into five distinct layers and most people conflate at least two of them.
This post maps out those layers, places MCPWorks in context, and is honest about where we overlap with other tools and where we don't.
The five layers of AI tooling
Before comparing specific products, it helps to see the full picture:
| Layer | What it does | Examples |
|---|---|---|
| Orchestration frameworks | Client-side libraries for chaining LLM calls, defining agents, managing memory | LangChain, LangGraph, CrewAI, AutoGen, Semantic Kernel |
| Tool connector platforms | Pre-built integrations to SaaS APIs, managed OAuth | Composio, Toolhouse |
| Serverless AI compute | On-demand execution environments, often with GPUs | Modal, Replicate, Baseten, RunPod |
| Agent builder platforms | No-code/low-code tools for building end-user agents | Lindy.ai, Relevance AI, AutoGPT Platform |
| MCP infrastructure | Hosting, discovery, and execution of MCP-protocol tools | MCPWorks, Arcade AI, MCP registries |
MCPWorks lives in the fifth layer. Most of the tools people compare us to live in the first or second.
Orchestration frameworks: LangChain, CrewAI, AutoGen
LangChain is a Python and JavaScript framework. You pip install langchain, import it, define chains and agents in your code, and run it on your infrastructure. LangGraph extends this with stateful graph-based workflows where nodes are functions and edges handle conditional routing. CrewAI takes a role-based approach — you define agents with backstories and goals and they collaborate like a team. AutoGen (now AG2) uses conversational multi-agent patterns where a GroupChat selector decides who speaks next. Microsoft's Semantic Kernel targets enterprise C# and Python shops with strong Azure integration.
These are all excellent tools for what they do. What they have in common is that they are client-side orchestration. They help you write the logic that decides which LLM to call, which tools to invoke, and how to handle the results. But they don't host the tools themselves.
When a LangChain agent calls a tool, something has to execute that tool. If the tool is "query a database," your LangChain app needs database access. If it's "run untrusted Python code," you need a sandbox. If it's "call an external MCP server," you need credentials and a proxy. The framework gives you the plumbing to define and invoke tools, but the tools need to live somewhere.
That's the gap MCPWorks fills. We host the tools — sandboxed Python and TypeScript functions, proxied MCP server connections, procedures that chain them together — and expose them as MCP endpoints. A LangChain agent could connect to a MCPWorks namespace and call tools without knowing anything about how they're sandboxed, authenticated, or billed.
The relationship is complementary, not competitive. LangChain orchestrates. MCPWorks hosts what gets orchestrated.
Tool connector platforms: Composio, Toolhouse
Composio is the market leader in managed AI tool integrations. They have 500+ pre-built connectors to services like Slack, GitHub, Salesforce, and Google Workspace, with a managed OAuth vault and SOC 2 compliance. Toolhouse takes a similar approach but targets enterprise deployments with on-premise options and granular permissions.
These platforms answer the question: "How does my AI agent talk to existing SaaS products?" They handle the OAuth flows, API rate limits, and schema translations so you don't have to build a Slack integration from scratch every time.
MCPWorks answers a different question: "Where does my custom tool logic run?" We're not a catalog of pre-built integrations. We host arbitrary user-authored code in nsjail sandboxes with namespace isolation, usage tracking, and subscription billing. When you need a tool that doesn't exist in any catalog — a proprietary data pipeline, a domain-specific calculation, a custom workflow — you write it as a function and MCPWorks runs it.
That said, MCPWorks also proxies third-party MCP servers. You can register an external MCP server on your namespace, and MCPWorks handles OAuth, token refresh, trust scoring, and prompt injection defense for every tool call that passes through. So the two approaches converge at the edges: Composio connects you to pre-built integrations, MCPWorks hosts your custom code and wraps external servers in a security layer.
If anything, they're complementary. Composio recently started exposing their integrations as MCP servers through their MCP Hub. A MCPWorks namespace could proxy a Composio-hosted MCP server alongside locally-authored functions, giving an agent a unified toolkit.
Serverless AI compute: Modal, Replicate
Modal is the closest thing to MCPWorks in the compute layer. It's serverless Python execution with a clean decorator-based API, GPU support, and per-second billing. Replicate hosts open-source models. Baseten and RunPod focus on GPU inference.
If you squint, MCPWorks looks like Modal with MCP protocol support bolted on. But the differences run deep:
Protocol-native vs. protocol-agnostic. Modal runs arbitrary Python functions and returns results over HTTP. MCPWorks runs functions and returns results over MCP — tool schemas, structured content types, resource URIs, the full protocol. An MCP client discovers and calls MCPWorks tools without any adapter layer.
Multi-tenant namespaces vs. per-user deployments. Modal gives each user isolated deployments. MCPWorks organizes tools into namespaces that can have multiple services, connected MCP servers, procedures, agents, and fine-grained access control. A namespace is a logical unit of capability, not just a deployment target.
Built-in agent runtime. MCPWorks runs persistent agents with containers, state, schedules, heartbeats, and AI orchestration. Modal could run an agent if you built the scheduling, state management, and container lifecycle yourself — but that's a significant amount of infrastructure to replicate.
Security scanning pipeline. Every function execution in MCPWorks passes through a configurable security scanner pipeline that checks for prompt injection, validates outputs against trust policies, and fires security events. This is built into the platform, not bolted on after the fact.
Modal is a better choice if you need GPUs, arbitrary container images, or general-purpose Python execution. MCPWorks is purpose-built for the specific problem of hosting tools that AI agents consume via MCP.
The MCP ecosystem: registries and beyond
The MCP ecosystem has matured significantly since the protocol's introduction. The official MCP Registry at registry.modelcontextprotocol.io stores standardized metadata about MCP servers — namespaces, package locations, execution instructions — but doesn't host server code. Smithery indexes over 7,000 servers with an app-store interface. Glama provides a curated registry with daily updates.
These registries solve discovery: "What MCP servers exist and how do I install them?" They don't solve hosting or execution. Finding a server in a registry still means you need to run it somewhere — install dependencies, manage credentials, handle scaling, deal with security.
MCPWorks is the hosting layer that sits beneath registries. We also support MCP Server Cards (.well-known/mcp.json) so that namespaces with discovery enabled can be found by AI platforms that crawl for available tool providers. Discovery and hosting are separate concerns, and the ecosystem needs both.
Arcade AI is worth mentioning as the closest architectural cousin. It's an open-source execution runtime for AI tools with around 112 first-party integrations. The difference is scope: Arcade focuses on providing a secure execution layer for pre-built tools. MCPWorks is a full platform with namespaces, multi-service architectures, procedures, agent runtime, MCP server proxying, and subscription billing.
Agent builder platforms: Lindy, Relevance AI
Lindy.ai and Relevance AI are no-code platforms for building end-user AI agents — lead qualification bots, inbox triage, CRM automation. They provide drag-and-drop interfaces, built-in integrations, and business-user-friendly UX.
These platforms are potential customers of MCPWorks, not competitors. A Lindy-style product needs somewhere to run custom tool logic that doesn't fit their built-in integrations. MCPWorks provides that execution layer. We're infrastructure for developers building agent-powered products, not the products themselves.
Where does this leave us?
The honest positioning:
MCPWorks is MCP-native infrastructure for hosting tools, running agents, and managing namespaces.
We are not an orchestration framework. We don't help you write agent logic — use LangChain, CrewAI, or Semantic Kernel for that. We are not a SaaS connector catalog. We don't ship 500 pre-built integrations — use Composio for that. We are not a general-purpose compute platform. We don't offer GPUs or arbitrary container images — use Modal for that. We are not a no-code agent builder. We don't have drag-and-drop workflows — use Lindy for that.
What we do is host the things those tools need:
- Functions that run in sandboxed isolation with namespace-level multi-tenancy
- MCP server proxying with OAuth, trust scoring, and prompt injection defense
- Procedures that chain functions and MCP tools into multi-step workflows
- Agents with persistent state, scheduled execution, and AI orchestration
- Discovery via MCP Server Cards so AI platforms find your tools automatically
- Usage tracking and billing so you can offer tools to others as a service
The AI stack needs all five layers. We think the infrastructure layer — the one that actually runs the tools — has been underserved. Most of the investment has gone into orchestration frameworks and SaaS connectors. MCPWorks is our bet that as MCP adoption grows, developers will need a dedicated platform for hosting the tool backends that power their agents.
If you're building AI agents and need somewhere to host the tools they use, take a look at the MCPWorks GitHub repo or connect a namespace on api.mcpworks.io.
MCPWorks is open source.
Self-host free forever, or try MCPWorks Cloud — 14-day Pro trial, no credit card.