
In The Loop Episode 8 | Model Context Protocol (MCP): The Newest AI Buzzword Explained

Published by

Jack Houghton
Anna Kocsis

Published on

March 26, 2025

Read time

7 min read

Category

Podcast

Everyone around the world is talking about the Model Context Protocol, or MCP, but few truly understand why. This might be the most important invention for AI agents in the last three years. The internet is a vast web of communication held together by rules and conventions known as protocols. Without these protocols, the internet as we know it simply wouldn’t exist—and MCP might play the same role for AI.

AI agents are widely seen as the future of technology, but a major question remains unanswered: how do we connect agents to the real world in a universally accepted way? These are the same kinds of challenges people faced in the 1980s when trying to make the internet universally accepted and adopted.

In today’s episode, I’ll break down why protocols are so essential when building technology ecosystems. I’ll explain, in simple terms, what the Model Context Protocol is and why it’s so important for the future of AI agents.

The importance of protocols

Let’s start with why protocols matter. They establish rules and frameworks that people worldwide can adopt to build technology systems that grow more powerful over time. When new innovations emerge—every year, every month, even every day—it can seem like progress just happens on its own. But in reality, breakthroughs often rely on shared standards.

Take the web, for example. You’ve probably seen "http://" or "https://" in your browser’s address bar. That’s the HTTP protocol in action—a revolution that laid the foundations for the modern internet. Tim Berners-Lee, the creator of the World Wide Web, introduced HTTP (Hypertext Transfer Protocol) in the late 1980s. It was a simple way to fetch and display HTML documents in a web browser.

At first glance, it might not seem particularly exciting, but that protocol unified developers worldwide around a straightforward method of delivering web pages. It became the backbone of the internet we rely on today.

Protocols are especially important when a technology is still emerging. Right now, AI lacks a universally accepted way of doing things. Everyone is innovating in different directions, which can make it difficult to collaborate or integrate systems. A shared protocol simplifies this, allowing diverse players to align around a single approach.

The same story applies to email. Before standardized protocols, email typically stayed within local networks—computers connected by physical wires or even just a single machine. That was fine if you only needed to communicate with people in the same building. But in the 1980s, Jon Postel formalized a new framework: SMTP (Simple Mail Transfer Protocol). Suddenly, people everywhere had a standard way to send emails across different servers. That breakthrough allowed email to go mainstream.

All of this underscores why protocols are so crucial. They empower builders and innovators—from startups like Mindset AI to industry giants like IBM—to create technology that anyone in the world can use. If you’ve ever wondered how systems as vast as the internet evolve and scale, open standards and protocols are a huge part of the answer. They make it possible for technology to grow and adapt in ways that no single company or developer could achieve alone.

What is MCP used for in AI?

How does this relate to large language models and AI agents?

Before MCP, a major bottleneck prevented AI agents from becoming widely adopted. Let’s go back a few years to understand why.

In 2022, large language models (LLMs) made a huge leap into the mainstream. Tools like ChatGPT started generating coherent paragraphs, summarizing text, writing code, and answering questions. By 2023, public hype around ChatGPT was at an all-time high. Suddenly, anyone with internet access had a powerful AI system at their fingertips.

But there was a problem. While these models could handle complex questions—like explaining quantum physics—they couldn’t take action in the real world. If you asked one to book a flight for next Thursday, it couldn’t do it. That’s because there was no standard way for LLMs to interact with external systems—whether a flight booking API, an email service, or a calendar.

This is where function calling came in. Function calling allowed developers to build systems that let LLMs call specific tools or integrations. So, a chatbot could engage in a conversation and, through function calling, trigger an action—like searching the web or retrieving data.
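To make this concrete, here’s a minimal sketch of the function-calling pattern, independent of any one vendor’s API. The tool names and the exact shape of the model’s output are illustrative assumptions: the model is assumed to emit a structured “tool call” (a tool name plus JSON arguments), and the developer’s code looks the tool up and executes it.

```python
import json

# Hypothetical tools the developer has wired up by hand.
def search_web(query: str) -> str:
    return f"Top results for '{query}'"

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"search_web": search_web, "get_weather": get_weather}

def dispatch(tool_call: str) -> str:
    """Parse the model's structured tool call and run the matching function."""
    call = json.loads(tool_call)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# The model decides it needs to search, and emits this structured call:
model_output = '{"name": "search_web", "arguments": {"query": "MCP protocol"}}'
print(dispatch(model_output))  # Top results for 'MCP protocol'
```

The key point is that the loop—model proposes a call, code executes it, result goes back to the model—is simple; the pain, as described next, came from wiring up each tool by hand.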

Before MCP, integrating these capabilities was a highly manual process. Here’s how it typically worked:

  1. A developer or product team identified a user need—say, the ability to search the web.
  2. They built an internet search tool and connected it to an API, like Microsoft Bing or Google Search.
  3. Then, if the user wanted to integrate with Slack, they had to build a separate Slack integration.
  4. If the user needed to connect to a database, that meant writing yet another integration.

The result? A patchwork of ad hoc implementations. If you had 15 different tools, you needed 15 separate integrations. And if any external service changed—like an update to the search API or a change in Slack’s integration—things could break.
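In code, that patchwork might look something like this—every external service gets its own bespoke integration with its own request format, auth, and quirks. The class and method names here are purely illustrative, not a real SDK:

```python
class BingSearchIntegration:
    """Bespoke client for one search API."""
    def call(self, query: str) -> str:
        # Custom request format, custom auth, custom error handling...
        return f"bing:{query}"

class SlackIntegration:
    """Entirely separate client, with a different interface."""
    def call(self, channel: str, text: str) -> str:
        # A second, unrelated request format and auth scheme.
        return f"slack:{channel}:{text}"

class DatabaseIntegration:
    """And a third, unrelated interface for the database."""
    def call(self, sql: str) -> str:
        return f"db:{sql}"

# 15 tools means 15 of these classes, and any upstream API change
# can break its integration independently of all the others.
```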

Even worse, LLMs themselves evolved rapidly. If OpenAI released a major update, it could suddenly impact how tools functioned. We’ve experienced this firsthand while experimenting with different LLMs—switching between models like GPT, Claude (Anthropic), and LLaMA (Meta). Each model interprets instructions differently, which means an AI agent that worked perfectly with one model might not function the same way with another.

This was completely unscalable—similar to the early internet before widely accepted protocols were introduced.

And that’s where Model Context Protocol (MCP) comes in.

What is MCP in AI agents?

By now, you’ve probably got the gist of it—MCP (Model Context Protocol) is an emerging standard designed to unify how AI models, agents, and products interact with external systems.

How does MCP work?

Instead of requiring custom integrations for every tool, MCP introduces a single layer—the MCP layer—through which AI can communicate with various databases, applications, or services.

Imagine this as three columns:

  • Companies like Mindset AI: We make it easy to build, manage, and deploy AI agents connected to large language models.
  • MCP (Model Context Protocol): A standardized protocol that translates AI requests into commands that services can understand.
  • External Services: Google Sheets, Slack, databases, or any other application an AI might need to interact with.

So, if we wanted to integrate our AI agents into Google Sheets or Google Slides, as long as Google supports MCP, it becomes seamless. No matter what changes Google makes in the future—or even if a different LLM is used—the connection remains stable and secure.

How does this work?

  • External services (like Slack or Google) would have an MCP server—essentially software that understands and processes MCP commands.
  • Companies like Mindset can then integrate effortlessly, without needing custom-built solutions for every service.
  • This standardization ensures AI requests (e.g., “Fetch user data from Google Sheets”) result in consistent, structured responses—without breaking when services update or models change.
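The flow above can be sketched as a single, standardized message shape. MCP is built on JSON-RPC 2.0, and the method and field names below follow the shape of its “tools/call” request—though treat the details as illustrative rather than a definitive implementation:

```python
import json

def make_mcp_request(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 message asking an MCP server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The same message shape works whether the server fronts Google Sheets,
# Slack, or a database. That is the point of the shared protocol: the
# AI side never needs service-specific request code.
request = make_mcp_request(1, "fetch_user_data", {"sheet": "Q1 Signups"})
print(request)
```

Each service’s MCP server is responsible for translating that standard message into whatever its own backend needs, which is why the AI side stays stable when the service changes.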

What are the benefits of MCP?

A recent research paper by Anthropic on AI agent capabilities highlights why MCP could be a game-changer for AI adoption:

  • Fewer integration headaches: MCP makes it simple for anyone to connect with external services.
  • More reliable AI systems: Updates to Google Sheets (or any other service) won’t break integrations because everything follows a shared standard.
  • Accelerated AI adoption: If MCP is widely accepted, it could dramatically expand the AI ecosystem. Lower barriers to entry mean more entrepreneurs, innovators, and major companies incorporating AI into their products.

This last point is key. The faster and cheaper it becomes to integrate AI, the more widespread its use across industries, devices, and applications.

The AI acceleration trend: what can MCP do?

You’ve probably heard of Moore’s Law, which predicted that computing power would double every two years. Well, a recent research report suggests that AI agent capabilities might be doubling every three months.

Researchers tested this by evaluating AI performance on various tasks—coding challenges, reasoning puzzles, cybersecurity tests—comparing them to expert human performance.

Here’s what they found:

  • 2019 – LLMs could handle tasks equivalent to just a few seconds of human effort.
  • 2023 – They reached tasks that take a human up to four minutes.
  • 2024 – AI models can now complete tasks requiring up to an hour of human effort.

And if that trend continues—if AI keeps doubling in capability every few months—then in just a couple of years, we’re talking about AI agents that can complete multi-hour or even multi-day tasks, fully autonomously.

Now, of course, there are some criticisms of this study. Some argue that it counts a task as within an AI’s ability even when the model succeeds only half the time, while others say these benchmarks don’t fully reflect real-world complexity. But even with those caveats, the trend is clear—AI is improving at an exponential rate. And if something like MCP becomes widely adopted, that progress could accelerate even further.

Closing thoughts

What really interests me is that we’re seeing real data showing an exponential curve in AI agent capabilities. And if we circle back to the Model Context Protocol, this progress is happening without the kind of universal protocols that fueled the explosion of the internet. So just imagine what happens if something like MCP gets widely adopted.

If that happens, things are going to move really fast. Of course, companies like OpenAI and others might introduce competing protocols that align with their own long-term goals. But regardless of how it plays out, as these standards emerge, AI agents are only going to become more powerful and more useful—especially for companies like us that need seamless integration into the tools people already use every day.

And I’ll always bet on one thing: the cheaper and easier something becomes, the more people will use it and build on top of it.

So that’s it for this week. Thanks for listening! And if you know a friend, colleague, or even a particularly tech-savvy dog that might enjoy this episode, share it with them.
