Intelligence Without Action
Large Language Models can write code, analyse contracts, summarise research, and reason through complex problems. But ask one to check your calendar, update a Jira ticket, or query your production database — and it hits a wall.
LLMs are brains without hands. They can think, but they can't do. Every action requires a human to copy-paste output into the right system, or a developer to wire up custom API integrations one by one.
The Copy-Paste Tax
Your team spends hours shuttling AI-generated content between tools. The AI writes the email — you manually send it. The AI analyses data — you manually update the dashboard.
The Integration Burden
Every tool your AI needs to use requires custom API code. Authentication, error handling, rate limiting, data transformation — multiplied by every service.
The Context Problem
Your AI can't see what's in your CRM, your project management tool, or your knowledge base unless you manually feed it. It's working with incomplete information.
The Trust Gap
Even when AI can technically take action, there's no standardised way to control permissions, audit actions, or limit scope. So teams don't let it.
What Is MCP?
The Model Context Protocol (MCP) is an open standard, developed by Anthropic, that creates a universal interface between AI models and external tools. Think of it as USB for AI — a single, standardised way for any model to connect to any service.
Before MCP, every AI-to-tool connection was bespoke. A developer had to write custom code for each integration — handle authentication, define request formats, parse responses, manage errors. MCP replaces this with a common protocol that both sides understand.
MCP doesn't replace your AI model. It extends what your model can do. The model still does the thinking — MCP gives it the ability to reach out, gather information, and take actions in the real world.
How MCP Servers Work
An MCP server is a lightweight program that exposes capabilities — tools, resources, and prompts — to any MCP-compatible AI client. It acts as a translator between what the AI wants to do and what the external service understands.
Tools
Actions the AI can take — send an email, create a ticket, run a query. Each tool has a defined schema the model understands.
Resources
Data the AI can read — files, database records, API responses. Resources give the model context without requiring the user to provide it.
Prompts
Pre-built interaction templates — specialised workflows the server offers. Think of them as "recipes" the AI can follow.
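Concretely, a tool is just three things: a name, a description the model reads to decide when to use it, and a JSON Schema for its inputs. The sketch below shows what a hypothetical `send_email` tool definition might look like — the name and fields are illustrative, not taken from any real MCP server:

```python
import json

# A hypothetical "send_email" tool definition. The name, description,
# and fields are invented for illustration.
send_email_tool = {
    "name": "send_email",
    "description": (
        "Send an email to a single recipient. Use this when the user "
        "explicitly asks to send an email, not merely draft one."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "string", "description": "Recipient address"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
}

# The definition is plain JSON, so it serialises directly into the
# manifest the server sends during discovery.
manifest_entry = json.dumps(send_email_tool, indent=2)
```

Note how the description doubles as an instruction to the model: it says not just what the tool does, but when to reach for it.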
The protocol uses JSON-RPC 2.0, carried over standard input/output (stdio) or over HTTP with Server-Sent Events (SSE). This means MCP servers can run locally on your machine, on a remote server, or embedded inside an orchestration platform — the transport is flexible.
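On the wire, every exchange is an ordinary JSON-RPC 2.0 message. A minimal sketch of a `tools/list` request and a matching one-tool response — the `query_db` tool here is hypothetical:

```python
import json

# A JSON-RPC 2.0 request asking the server to list its tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A matching response carrying a (hypothetical) one-tool manifest.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_db",
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

# Over stdio, each message travels as a single serialised JSON payload.
wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/list"
```

The `id` field is what ties a response back to its request, which is all the framing the transport needs.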
Anatomy of an MCP Server
An MCP server has three core responsibilities: discover (tell the client what it can do), execute (perform the requested action), and respond (return results in a format the model understands).
The Request Lifecycle
Discovery
The client asks the server: "What tools do you have?" The server responds with a manifest — a list of available tools, their parameters, and descriptions.
Selection
The AI model reads the tool descriptions and decides which tool to call based on the user's request. The model generates a structured tool call with the right parameters.
Execution
The MCP client sends the tool call to the server. The server validates the parameters, calls the external service, and handles errors.
Response
The server returns the result to the client, which passes it back to the model. The model uses the result to continue its reasoning or generate a response.
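The four steps above can be sketched end to end. The dispatcher below is a toy stand-in for a real MCP server — the `lookup_order` tool and its data are invented for illustration:

```python
# Step 1 -- Discovery: the server advertises its tools in a manifest.
TOOLS = [{
    "name": "lookup_order",
    "description": "Fetch the status of an order by its ID",
    "inputSchema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}]

# Step 3 -- Execution: handlers do the actual work.
def lookup_order(order_id: str) -> dict:
    fake_db = {"A-1001": "shipped", "A-1002": "processing"}  # stand-in data
    return {"order_id": order_id, "status": fake_db.get(order_id, "unknown")}

HANDLERS = {"lookup_order": lookup_order}

def handle(message: dict) -> dict:
    """Route a JSON-RPC request through the lifecycle."""
    if message["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif message["method"] == "tools/call":
        params = message["params"]
        result = HANDLERS[params["name"]](**params["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": message["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    # Step 4 -- Response: results go back tagged with the request id.
    return {"jsonrpc": "2.0", "id": message["id"], "result": result}

# Step 2 -- Selection happens inside the model: it reads the manifest
# and emits a structured call like this one.
call = {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
        "params": {"name": "lookup_order",
                   "arguments": {"order_id": "A-1001"}}}
print(handle(call)["result"])  # {'order_id': 'A-1001', 'status': 'shipped'}
```

Real servers add parameter validation and error reporting on top, but the shape — manifest in, structured call in, tagged result out — is exactly this.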
Real-World MCP Servers
MCP isn't theoretical. There are already production-grade MCP servers for major platforms, and the ecosystem is growing rapidly.
Google Drive MCP
Search, read, and create documents. Your AI can find that proposal from last quarter without you digging through folders.
Slack MCP
Read messages, post updates, search channels. The AI participates in your team's communication without you relaying information.
PostgreSQL MCP
Query databases directly. The AI can look up customer records, check inventory, or pull analytics — safely, with read-only access.
GitHub MCP
Read repos, create issues, review PRs. The AI becomes a participant in your development workflow.
Linear MCP
Create, update, and query issues. Your AI can triage bug reports and update project status automatically.
Filesystem MCP
Read and write local files. The AI can process documents, generate reports, and manage file-based workflows.
The MCP server ecosystem has grown from a handful of reference implementations to hundreds of community-built servers. The open specification means anyone can build an MCP server for any service — and they are.
Security and Trust
Giving AI the ability to take actions raises legitimate concerns. MCP addresses these through structured permission models, audit trails, and scope limitations.
Principle of Least Privilege
Each MCP server defines exactly what it can do. A database server can be configured as read-only. A Slack server can be limited to specific channels. The AI never gets more access than the server explicitly grants.
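A read-only database server, for instance, can enforce that scope before anything reaches the database. A toy guard — deliberately simplistic; a production server would pair this with a database-level read-only role:

```python
def guard_read_only(sql: str) -> str:
    """Reject anything that isn't a plain SELECT before it runs."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if first_word != "SELECT":
        raise PermissionError(f"read-only server refused: {first_word}")
    return sql

guard_read_only("SELECT name FROM customers")  # allowed through
try:
    guard_read_only("DROP TABLE customers")
except PermissionError as e:
    print(e)  # read-only server refused: DROP
```

The point is where the check lives: in the server, not the model. The AI can ask for anything; the server only ever grants what it was configured to grant.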
Every Action Is Logged
MCP servers can log every tool call — what was requested, what parameters were used, and what was returned. This creates a complete audit trail for compliance and debugging.
MCP clients can require user confirmation before executing sensitive actions. The AI proposes an action, the user approves or denies it, and only then does the server execute. This preserves human oversight while still automating the research and preparation work.
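That approval gate can be sketched as a thin wrapper around tool execution — the tool names and the `approve` callback here are illustrative:

```python
from typing import Callable

# Tools that change state require explicit sign-off; reads do not.
SENSITIVE_TOOLS = {"send_email", "delete_record"}

def execute_with_approval(tool: str, args: dict,
                          run: Callable[[dict], str],
                          approve: Callable[[str, dict], bool]) -> str:
    """Run a tool call, pausing for human approval on sensitive actions."""
    if tool in SENSITIVE_TOOLS and not approve(tool, args):
        return "denied by user"
    return run(args)

# Example: the user rejects an email send...
always_no = lambda tool, args: False
denied = execute_with_approval("send_email", {"to": "x@example.com"},
                               run=lambda a: "sent", approve=always_no)
print(denied)  # denied by user

# ...but a read-only lookup sails through without a prompt.
allowed = execute_with_approval("query_db", {"sql": "SELECT 1"},
                                run=lambda a: "rows", approve=always_no)
print(allowed)  # rows
```

In a real client, `approve` would surface a confirmation dialog; the pattern of gating by tool sensitivity is the same.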
MCP Meets Orchestration
MCP becomes even more powerful when combined with workflow orchestration platforms like Make.com or n8n. The orchestrator handles the macro flow — triggers, scheduling, error handling — while MCP-enabled LLM steps handle dynamic tool use within each step.
Static Workflows
Every tool call must be pre-defined in the workflow. Adding a new service means modifying the workflow itself. The LLM follows a fixed script.
Dynamic Tool Use
The LLM decides which tools to call based on the input. Add a new MCP server and the model can use it immediately — no workflow changes needed.
When you combine orchestration (predictable macro flow) with MCP (dynamic micro-actions), you get systems that are both reliable and intelligent. The workflow ensures things run on schedule and errors are handled. MCP ensures the AI can adapt its behaviour to each unique input.
Building Your First MCP Server
Building an MCP server is surprisingly straightforward. The TypeScript and Python SDKs handle the protocol layer — you just define your tools and implement the logic.
Choose Your SDK
The official TypeScript SDK (@modelcontextprotocol/sdk) and Python SDK (mcp) provide all the protocol handling. Pick the language your team knows.
Define Your Tools
Each tool needs a name, description (for the AI to understand when to use it), and an input schema (JSON Schema). Be specific in descriptions — the model relies on them.
Implement the Handlers
Write the actual logic for each tool. This is where you call your APIs, query your databases, or interact with your services. Standard application code.
Configure Transport
Choose stdio (for local servers) or HTTP with SSE (for remote servers). Stdio is simpler for development; HTTP is better for production.
Test with Claude Desktop
Add your server to Claude Desktop's configuration and test it. The fastest way to validate that your tools work correctly with a real AI model.
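Registration is a short entry in Claude Desktop's `claude_desktop_config.json` — the server name and path below are placeholders for your own:

```json
{
  "mcpServers": {
    "my-first-server": {
      "command": "python",
      "args": ["/path/to/my_server.py"]
    }
  }
}
```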
Your first MCP server doesn't need to do everything. Start with one or two tools for a single service your team uses daily. A read-only database query tool or a Slack message poster is enough to demonstrate the value — and to learn the pattern.
Where MCP Is Heading
MCP is still early, but its trajectory is clear. The protocol is moving toward a future where every software service exposes an MCP interface — making AI integration as simple as adding a URL.
Remote MCP Servers
The shift from local-only to remote servers with OAuth authentication means MCP servers can be deployed as cloud services, accessible from anywhere.
Server Discovery
Imagine a registry where AI models can discover available MCP servers automatically — like a DNS for AI capabilities.
Multi-Tenant Security
Enterprise-grade permission models are coming — per-user scoping, team-level access controls, and integration with existing identity providers.
Vendor Adoption
As more LLM providers adopt MCP, the protocol becomes a universal standard. Your MCP servers work with Claude, GPT, Gemini, and any future model.
The Bottom Line
MCP solves the last-mile problem of AI deployment. Your models are smart enough. Your tools are capable enough. What was missing was a standard way to connect them. MCP provides that standard — and the teams that adopt it early will have a significant advantage in building AI systems that don't just think, but act.
Ready to give your AI the tools it needs?
From custom MCP servers to full orchestration pipelines — we design, build, and deploy AI systems that think and act.
Let's Build Together →