MCPWorks

Horses with Motors: Why AI Transformation Needs Proper Tools, Not Band-Aids

Simon Carr

A Siemens principal AI expert recently argued that most corporate AI transformation is the equivalent of strapping a motor to a horse — bolting new technology onto broken processes instead of building something fundamentally better. Her article, "Horses with Motors: The Reality of AI Transformation," includes an anecdote about MCP infrastructure that captures the exact problem managed MCP hosting platforms like MCPWorks are designed to solve.

What happened at the legal tech company

Maria Sukhareva, a computational linguist and Principal AI Expert at Siemens, describes a conversation with a Head of AI at a legal tech company. Sukhareva suggested the company build an MCP server to enable AI integration with their API. The executive rejected the idea outright:

"What is the use case? No customer has requested an MCP server."

Sukhareva's response cuts to the heart of the problem: users request outcomes, not infrastructure. Nobody asks for an MCP server, just as nobody asked for REST APIs before REST existed. They ask to get things done. The infrastructure that makes "getting things done" possible is a platform decision, not a customer request.

This is status quo bias in action. Research in cognitive science shows that people naturally avoid challenging existing processes and default to optimizing what already exists. The legal tech executive wasn't being unreasonable — they were being human. But as Sukhareva argues, that instinct produces "horses with motors" instead of cars.

The band-aid pattern in AI integration

Sukhareva identifies a pattern that anyone working in enterprise AI will recognize: organizations build AI applications that entrench legacy systems rather than replace them.

Her example is striking. A client requested a "document analysis" AI tool. After investigation, the actual need was natural language database queries. The documents being "analyzed" were auto-generated reports — workarounds created because users couldn't write SQL. The AI was being asked to parse workarounds for a bad interface, adding a third layer of indirection instead of fixing the root problem.

This pattern repeats everywhere:

  • Chatbots on broken systems — AI handles complaints about bad UX instead of fixing the UX
  • "AI-powered query optimizers" — patching messy databases with inference instead of cleaning the data model
  • "AI-assisted ticketing" — routing tickets faster through a broken workflow instead of eliminating the workflow

Each of these becomes a permanent band-aid. The AI layer makes the underlying problem just tolerable enough that nobody fixes it.

Why this matters for MCP infrastructure

The MCP server anecdote isn't just a story about one executive's status quo bias. It reveals a structural gap in how AI tooling gets adopted.

The Model Context Protocol (MCP) — now an open standard governed by the Linux Foundation and supported by Google, Microsoft, and Anthropic — gives AI assistants a standard way to call external tools. But building and hosting MCP servers requires infrastructure decisions: provisioning servers, configuring transport, managing authentication, implementing security isolation, and maintaining the whole stack.
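To make "a standard way to call external tools" concrete: MCP messages are JSON-RPC 2.0, so at its core a server is a dispatcher over a small set of methods such as `tools/list` and `tools/call`. Here is a minimal illustrative sketch, not a real SDK — it skips the initialization handshake, transport, auth, and error handling, and the `lookup_case` tool is an invented stand-in for an existing service:

```python
import json

# Invented example tool; in a real server this would wrap an existing service/API.
def lookup_case(case_id: str) -> str:
    return f"Case {case_id}: status=open"

TOOLS = {
    "lookup_case": {
        "description": "Look up a legal case by ID",
        "inputSchema": {"type": "object",
                        "properties": {"case_id": {"type": "string"}}},
        "handler": lookup_case,
    }
}

def handle(message: str) -> str:
    """Dispatch one JSON-RPC 2.0 message the way an MCP server would
    (heavily simplified: no handshake, transport, or error handling)."""
    req = json.loads(message)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name,
                             "description": tool["description"],
                             "inputSchema": tool["inputSchema"]}
                            for name, tool in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        text = tool["handler"](**req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A client (an AI assistant) calling the tool:
reply = handle(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
                           "params": {"name": "lookup_case",
                                      "arguments": {"case_id": "42"}}}))
```

The point of the sketch is the shape of the decision: the dispatcher is trivial, but everything around it — transport, auth, isolation, scaling — is the infrastructure work the rest of this section is about.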

When that infrastructure decision lands on an executive's desk as "should we build an MCP server?", the answer is predictably "no customer asked for it." The decision never gets made — not because it's wrong, but because it requires proactive investment in infrastructure that users can't articulate a need for.

This is the gap that managed MCP hosting fills. When the infrastructure already exists — when functions can be deployed in minutes through a namespace endpoint rather than requiring a build-or-buy committee decision — the status quo bias disappears. There's nothing to decide. You just deploy.

Proper tools, not band-aids

Sukhareva's core argument is that AI's real value isn't automating repetitive tasks — it's enabling entirely new capabilities. But new capabilities require proper infrastructure, not AI wrappers on legacy systems.

For MCP specifically, that means:

  • Don't build a chatbot on your broken API. Give AI assistants proper tool access through MCP so they can call your services directly.
  • Don't wait for customers to ask for MCP integration. They'll ask for outcomes that require it. Build the infrastructure proactively.
  • Don't strap a motor to the horse. If your AI integration plan involves adding an LLM layer on top of workarounds, step back and ask what the actual problem is.

The legal tech executive who dismissed MCP because "no customer requested it" will eventually face a competitor whose AI assistants can directly interact with legal APIs through MCP tools — not through a chatbot that parses PDFs that were generated from a database that could have been queried directly.

That's the difference between a horse with a motor and a car.

How MCPWorks approaches this

MCPWorks exists specifically to remove the infrastructure decision that blocks MCP adoption. Each account gets namespace-based HTTPS endpoints — {namespace}.create.mcpworks.io for function management, {namespace}.run.mcpworks.io for execution — with security isolation, authentication, and scaling handled by the platform.

Functions are created through the MCP interface itself. Your AI assistant connects and deploys directly. No committee decision, no infrastructure provisioning, no "what is the use case" gatekeeping.
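As a rough illustration of what "connecting" looks like from the assistant's side, here is a hypothetical client configuration pointing at the two namespace endpoints. The exact config keys vary by MCP client, and the `acme` namespace is invented — treat this as a sketch of the shape, not a copy-paste config:

```json
{
  "mcpServers": {
    "mcpworks-create": { "url": "https://acme.create.mcpworks.io" },
    "mcpworks-run":    { "url": "https://acme.run.mcpworks.io" }
  }
}
```

Once the client is pointed at those URLs, function creation and execution both happen through ordinary MCP tool calls — which is what makes the "no committee decision" claim above possible.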

Combined with code-mode execution — which achieves 70-98% token savings by letting AI write code that runs in a sandbox rather than loading tool schemas into context — this approach treats MCP infrastructure as a utility rather than a project.
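The token-saving mechanism can be sketched in miniature: instead of serializing every tool schema into the model's context and round-tripping each tool result through it, the model emits a short program that runs in a sandbox where tools are plain functions. The sketch below is illustrative only — the "sandbox" is a bare `exec` with a whitelisted namespace, far weaker than real isolation, and `query_db` is an invented stand-in for a real tool:

```python
# Tool exposed to generated code as an ordinary function.
def query_db(sql: str) -> list:
    # Stand-in for a real database call.
    return [("ACME-42", "open")]

# Code the model would generate: it calls the tool directly, so only this
# short program and the final answer occupy context, not every tool schema
# and intermediate result.
generated = """
rows = query_db("SELECT id, status FROM cases")
open_cases = [r for r in rows if r[1] == 'open']
answer = len(open_cases)
"""

# Minimal "sandbox": only whitelisted names are visible to the generated code.
scope = {"__builtins__": {"len": len}, "query_db": query_db}
exec(generated, scope)
result = scope["answer"]
```

In a real code-mode setup the sandbox, available tools, and result extraction are all managed by the platform; the 70-98% figure cited above comes from avoiding the schema-and-result round trips, not from the sandboxing itself.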

The article from Siemens validates a principle we've built on from the start: the organizations that give AI proper tools will outperform those bolting AI onto broken processes. Managed MCP hosting makes "proper tools" the path of least resistance.

Frequently asked questions

What is the "horses with motors" problem in AI? The term comes from Maria Sukhareva, Principal AI Expert at Siemens. It describes organizations that add AI to broken processes rather than fixing the underlying systems — like strapping a motor to a horse instead of building a car. The result is permanent workarounds instead of genuine transformation.

Why did a legal tech company reject building an MCP server? The Head of AI said "no customer has requested an MCP server." This illustrates status quo bias — users request outcomes, not infrastructure. No one explicitly asks for API standards or hosting platforms, but those decisions determine what outcomes are possible.

How does managed MCP hosting solve the adoption problem? Platforms like MCPWorks remove the infrastructure decision entirely. Instead of requiring a build-or-buy committee discussion, teams get pre-built namespace endpoints with security isolation, authentication, and scaling. There's nothing to approve — you just deploy functions.

What is code-mode execution in MCP? Code-mode execution lets AI assistants write and execute code in a secure sandbox rather than loading every tool schema into the context window. According to Anthropic's research, this reduces token usage by 70-98% compared to traditional MCP tool loading.

Further reading

MCPWorks is open source.

Self-host free forever, or try MCPWorks Cloud — 14-day Pro trial, no credit card.

View on GitHub

Cloud Trial — Coming Soon