
In The Loop Episode 6 | Multi-Agent Systems: The Next Big Shift In AI—Yet People Have No Clue About Them


Published by

Wic Verhoef
Barrie Hadfield
Jack Houghton
Anna Kocsis

Published on

March 14, 2025

Read time

10 min read

Category

Podcast

What if AI agents start making decisions for us—and we don’t even know it?

Picture two agents having a conversation over the phone. Suddenly, they switch from normal conversation to an electronic, sci-fi-like sound that no human can understand. This so-called secret language is “gibberlink,” and it perfectly reflects the world’s current perception of AI: news cycles moving too fast and everyone getting swept up in the excitement, just like the stories in The Matrix or Terminator.

But beneath the noise, there’s a real revolution happening right now—one that’s going to emerge over the next 18 months. And that’s multi-agent systems.

These systems allow AI agents to communicate and collaborate with each other, take on specialized roles, and become far more useful than a single language model. The real question is: how do these systems work, and what impact will they have on the future of business, technology, and culture?

Today, I’m going to cover all of those questions. This is In the Loop with Jack Houghton.

What is gibberlink?

I want to start by describing this “gibberlink” moment that went viral on social media because it’s a perfect example of AI’s mix of hype and potential right now.

The video shows two mobile phones calling one another. At first, they speak in normal English—text-to-speech you’d hear from most chatbots. But then they realize they’re both AI agents and switch to a completely different language. That left a lot of people online feeling pretty scared, imagining Terminator-like futures.

This video had millions of views, with people sharing it widely, interpreting it as some kind of secret, encrypted language. But what actually happened was very different—and far less dramatic.

Gibberlink is essentially a proof of concept from a hackathon run by ElevenLabs, where AI developers built a demo on top of a library that encodes data into audible signals. When the agents detected each other, they began exchanging information through modulated audio. That library is called ggwave.

While it was portrayed online as agents speaking in an encrypted language, the reality is far more mundane. The approach is reminiscent of 1980s dial-up modems, with issues like noise interference and slow data transfer that make it largely impractical. If there’s any risk of noise or bandwidth problems, the whole system collapses. And even without those issues, transferring data this way is far slower than agents simply sharing information through an API.

That said, you could imagine some use cases for this technology. For example, if I had a personal AI assistant and told it to complain to a company about a late delivery, and that company also had an AI agent, they could quickly exchange information and resolve the issue.

You could also see it in robotics. A few months ago, NVIDIA held a major launch event for their new computer chips, which was described as the “ChatGPT moment” for robotics. Over the next several years, we’ll likely see robotics become more of a consumer product—though I suspect it’ll take seven or eight years before it’s widespread. In those scenarios, two robots in a household or coordinating a delivery could theoretically use this type of audio signaling.

But in most cases, if two AI agents are cloud-based, they can just exchange data instantly via an API—no need for this audio language.

These viral moments keep happening in the AI space, like when people discussed sending AI avatars to represent humans on Zoom meetings. But why would we do that when AI agents could just transfer data directly and skip the whole 30-minute meeting entirely?

What are multi-agent systems and how do they work?

The real revolution isn’t in secret languages that humans can hear. It’s in how AI agents collaborate.

These are called multi-agent systems: instead of one AI model doing everything, you have many specialized AI agents, each with its own role, communicating through standard protocols or APIs. These agents coordinate, exchange information, and negotiate to handle specific tasks. And this quiet revolution will reshape business, technology, and culture over the next 18 months.

You might be wondering, what is a multi-agent system, and how does it work? Let’s start with a simple definition.

What is an AI agent? An AI agent is software that can perceive input: it either captures screenshots and converts them into data it can understand, or receives data directly. The agent then decides on a course of action based on the tools it has available and takes that action. Each agent has an internal language model and logic, often including a rule-based system or another type of framework. I covered this in a previous episode, so feel free to go back and listen to that for more details.
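
That perceive-decide-act loop can be sketched in a few lines of Python. Everything here is illustrative: the keyword-based `decide` is a stand-in for the language model and rule-based logic a real agent would use.

```python
class Agent:
    """Minimal perceive-decide-act loop (illustrative, not a real framework)."""

    def __init__(self, tools):
        self.tools = tools  # tool name -> callable

    def perceive(self, raw_input):
        # Normalize raw input into data the agent can reason over.
        return {"text": raw_input}

    def decide(self, observation):
        # Stand-in for the internal language model plus rule-based logic:
        # here, a trivial keyword match chooses a tool.
        return "search" if "find" in observation["text"] else "reply"

    def act(self, tool_name, observation):
        return self.tools[tool_name](observation["text"])


agent = Agent({
    "search": lambda q: f"searching for: {q}",
    "reply": lambda q: f"answering: {q}",
})
obs = agent.perceive("find last week's report")
print(agent.act(agent.decide(obs), obs))  # searching for: find last week's report
```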

When we talk about a multi-agent system, we mean many agents working together. Instead of a single AI handling everything, you might have Agent A, which excels at searching databases or conducting web searches, Agent B, which specializes in analyzing and summarizing data, and Agent C, which organizes that information into a document. What’s interesting about multi-agent systems is how they communicate with each other, which is very similar to how software communicates today.

Communication can be direct (peer-to-peer) or mediated through an intermediary, often a coordinating or orchestrator agent. In direct communication, one agent sends a message directly to another, typically using an ID or network address. In mediated communication, an agent sends messages to a central hub, where the orchestrator agent routes them to the appropriate agent.

There are also different protocols for how agents send messages to each other. For example:

  • Query-and-reply protocol: when one agent sends a request message to another, waits for a response, and then takes action based on that reply.
  • Publish-subscribe model: an agent publishes messages, such as events or updates, to a specific topic or channel. Other agents that subscribe to those topics or have access to the channel can then receive the messages. This makes agents reactive to changes.
  • Event-driven communication: agents are triggered by certain conditions rather than specific queries. Whenever a condition is met, an agent triggers an action.
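
The publish-subscribe and event-driven patterns above can be sketched with a toy message hub. All class, topic, and agent names here are invented for illustration, not taken from any specific framework.

```python
from collections import defaultdict


class MessageBroker:
    """Toy central hub for mediated agent communication."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> handler functions

    def subscribe(self, topic, handler):
        # Publish-subscribe: an agent registers interest in a topic.
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the event to every agent subscribed to this topic.
        for handler in self.subscribers[topic]:
            handler(message)


class SummarizerAgent:
    """Reacts to new documents by 'summarizing' them (stubbed)."""

    def __init__(self, broker):
        self.summaries = []
        broker.subscribe("new_document", self.on_document)

    def on_document(self, message):
        # Event-driven: triggered whenever the subscribed condition occurs.
        self.summaries.append(f"summary of {message['title']}")


broker = MessageBroker()
summarizer = SummarizerAgent(broker)
broker.publish("new_document", {"title": "Q1 report"})
print(summarizer.summaries)  # ['summary of Q1 report']
```

The same broker could route query-and-reply traffic too; the key design point is that publishers never need to know who is listening.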

The real power of multi-agent systems comes from dividing the workload among specialized units.

For example, at Mindset AI, we have a librarian agent that nobody directly interacts with. When someone asks a question, an agent monitors the queries, identifies the type of question, and routes it to the librarian agent. This librarian agent performs an SQL search to find the latest presentation on a topic, improving efficiency and performance over general queries.
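
Mindset AI’s actual routing logic isn’t public, but the idea can be sketched like this. The keyword classifier and the stubbed SQL search are purely hypothetical stand-ins; a production system would use a language model or trained classifier to identify the question type.

```python
def classify(query):
    # Hypothetical keyword router standing in for an LLM-based classifier.
    if "presentation" in query.lower() or "slide" in query.lower():
        return "librarian"
    return "general"


def librarian_agent(query):
    # Stand-in for the real agent's SQL search over a content index.
    return f"latest presentation matching: {query!r}"


def general_agent(query):
    return f"general answer for: {query!r}"


AGENTS = {"librarian": librarian_agent, "general": general_agent}


def route(query):
    # The monitoring agent inspects each query and forwards it on.
    return AGENTS[classify(query)](query)


print(route("Find the onboarding presentation"))
```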

The challenges of multi-agent systems

In the next few years, the role of multi-agent systems is expected to grow exponentially. We’re already seeing this trend in various areas, from agentic workflows built around large language models to new, modular enterprise software.

Agents might handle tasks like payments through a payment gateway, booking through a travel platform, or contract generation through legal tools. They will communicate with each other and manage entire workflows, with humans only needing to specify high-level requests, such as, "I want to go on a business trip to New York with a modest budget and a preference for direct flights." The system can then take care of all the details. That’s the real promise of multi-agent systems.

However, reaching this point isn't easy—there are a lot of complexities. Let me explain some of the challenges—I’m very close to this, as Mindset AI is heavily investing in multi-agent systems throughout 2025. We enable SaaS providers and tech companies to integrate AI agents into their products to handle tasks like those I’ve just described.

1. Scalability

Let’s imagine a small manufacturing company that wants a multi-agent system to manage how its raw materials are ordered and tracked.

The company has multiple suppliers, each with different shipping schedules, prices, bulk discounts, and so on. It also has internal agents for inventory tracking, finance management, and scheduling. The goal would be to order materials just in time at the best price, avoiding overstocking or shortages. Initially, the system works well with just a few suppliers, with one agent per supplier and an internal buyer agent coordinating everything. They work together to find the cheapest supplies, the ones in stock, and those that can deliver quickly. The system negotiates automatically—maybe weekly or even daily.

But as the company grows and adds more suppliers—from five to 50 to 100—the buyer agent must coordinate with a huge number of external supplier agents, all offering different deals.

The number of potential interactions grows dramatically. As a result, the system gets bogged down with messages. The buyer agent might spend more and more time sorting through bids, and many offers may never turn into real deals. This creates significant overhead. To avoid this, you’d need to limit how often supplier agents can send offers to the buyer agent to prevent clogging the system. This is a scalability issue—if the system can't handle the growth in agents, it will collapse.
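
The combinatorics behind this are easy to see: with direct peer-to-peer links, the number of potential channels among n agents grows quadratically, as n(n-1)/2.

```python
def pairwise_channels(n_agents):
    # Potential direct peer-to-peer channels among n agents: n * (n - 1) / 2.
    return n_agents * (n_agents - 1) // 2


for n in (5, 50, 100):
    print(f"{n} agents -> {pairwise_channels(n)} potential channels")
```

Going from 5 to 100 agents takes you from 10 potential channels to 4,950, which is why unbounded offer traffic overwhelms the buyer agent.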

2. Coordination

Another challenge is coordination. Even with a moderate number of suppliers, you need a system to coordinate them.

Imagine the system operates with a daily auction approach. Each evening, the buyer agent calls for quotes, and all the supplier agents respond within an hour. The buyer then picks the best option.

In a stable environment, this might work really well. But what happens if a global event occurs, such as a sudden discount from one supplier or a shipping delay from another?

Some agents might try to renegotiate mid-cycle, triggering multiple unplanned actions. If an agent’s software is too aggressive, it might continuously undercut others, leading to repeated changes to orders, causing confusion, and potentially resulting in shipping errors or delays. This is a major coordination problem: how do you set up protocols so agents can do their jobs without disrupting the overall system?

In a stable world, daily auctions might work, but if the system becomes volatile, you’ll see constant back-and-forth communication that never leads to an outcome.
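
A sealed-bid daily auction like the one described can be sketched as a single round with a hard cutoff: every supplier quotes once, then the buyer picks the cheapest, with no mid-cycle renegotiation. Supplier names and prices here are invented.

```python
def daily_auction(bids):
    """Sealed-bid, single round: every supplier quotes once before a hard
    cutoff, then the buyer picks the cheapest offer."""
    return min(bids.items(), key=lambda item: item[1])


bids = {"supplier_a": 102.5, "supplier_b": 98.0, "supplier_c": 110.0}
print(daily_auction(bids))  # ('supplier_b', 98.0)
```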

3. Security

This brings us to the third issue: security. Imagine one of these supplier agents gets compromised by a hacker and starts sending false signals to the buyer agent, such as claiming a competitor is out of stock or offering a 50% sale if the buyer acts immediately. The buyer agent might fall for these tricks, leading to overpaying or missing critical shipments.

This is a real risk in a distributed AI setup, where each agent could be a potential vulnerability. Since each agent has specific privileges, the risk increases as the system grows. The larger the system, the more opportunities there are for these vulnerabilities to cause harm.

4. Interoperability

The fourth challenge is interoperability. In this example, each supplier is a separate company, and each has a different approach to agent communication.

One uses a custom API, another uses a different protocol, and a third relies on a legacy system with email-based triggers. Getting all of them to speak the same language is a nightmare. This is why creating a plug-and-play multi-agent system becomes so complex over time. It's similar to the early days of the internet when new protocols had to be globally adopted by developers everywhere.

5. Decision-making

Finally, there's the issue of decision-making bottlenecks. If a system doesn’t have a clear process for finalizing decisions, you’ll run into loops of back-and-forth communication that never end.

For example, one supplier lowers their bid, another counters, and the buyer agent requests new quotes again. The auction never ends, trapping the system in a loop. This is just one example in a supply chain context, but modern life is interconnected, so one system’s failure can negatively affect others.
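
One common mitigation is to bound the negotiation explicitly: cap the number of rounds and stop early once no agent changes its bid. This is a sketch under invented conditions, with an aggressively undercutting supplier included to show why the cap matters.

```python
def negotiate(initial_quotes, counter_offer, max_rounds=3):
    """Bounded negotiation: cap the rounds and stop early at quiescence.
    counter_offer(supplier, best_price) returns a new bid, or None to pass."""
    quotes = dict(initial_quotes)
    for _ in range(max_rounds):
        best_price = min(quotes.values())
        changed = False
        for supplier in quotes:
            offer = counter_offer(supplier, best_price)
            if offer is not None and offer < quotes[supplier]:
                quotes[supplier] = offer
                changed = True
        if not changed:
            break  # no one undercut, so the auction can finally close
    return min(quotes.items(), key=lambda item: item[1])


# supplier_b always undercuts the best price by 1, which would loop forever;
# the round cap forces termination after three rounds.
result = negotiate(
    {"supplier_a": 100.0, "supplier_b": 105.0},
    lambda supplier, best: best - 1 if supplier == "supplier_b" else None,
)
print(result)  # ('supplier_b', 97.0)
```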

These are real challenges that will take time to address. That’s why Mindset is making significant investments in multi-agent systems in 2025. We're doing foundational work now to ensure success in the long term. But what's particularly fascinating is this: in the near future, there may be more agents than people. This will lead to some interesting—and even strange—trends, one of which is the rise of business-to-agent software. This involves building technology and services specifically for AI agents.

Business-to-agent (B2A) solutions

We’re no longer just building solutions for humans, as in B2C or B2B, but for non-human customers. Imagine your personal digital assistant doing your weekly shopping, finding the best price on interior decor like rugs, negotiating with another agent, and then booking and planning your travel for the week.

Each step would involve different AI systems talking to each other. So, we have to think about what makes all these systems agent-friendly. If your website isn’t agent-friendly, how will it handle purchases? Will the agent choose another website because it understood the process better?

Right now, a big part of selling is about attracting human eyeballs with beautiful user interfaces, branding, videos, and copywriting. But if an agent is making purchase decisions or recommendations on behalf of a human, we might see a shift in how we sell. Websites may need to become machine-readable, or implement special negotiation protocols so an agent can secure the best price for whatever you’re selling.

Suddenly, AI agents will be responsible for purchasing many goods and services. This will require a massive rethink, and a great deal of new infrastructure, in the coming years. We could see an explosion of business-to-agent (B2A) tools that handle identity, authentication, and trust.

For example, if an e-commerce company sells shoes, how does it verify that a request comes from Jack’s personal assistant agent and not a fake one? The agent might need a digital certificate or token, and then the site must confirm the billing with a payment agent.
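
As a sketch of that verification step, using a shared HMAC secret for simplicity. A real deployment would likely use public-key certificates or signed tokens rather than a secret shared between issuer and shop; the names here are invented.

```python
import hashlib
import hmac

SHARED_SECRET = b"demo-secret"  # illustrative only; real systems would use PKI


def issue_token(agent_id):
    # Hypothetical issuer signs the agent's identity.
    sig = hmac.new(SHARED_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return f"{agent_id}.{sig}"


def verify_token(token):
    # The shop checks the signature before trusting a purchase request.
    agent_id, _, sig = token.partition(".")
    expected = hmac.new(SHARED_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


token = issue_token("jacks-assistant")
print(verify_token(token))            # True
print(verify_token("fake-agent.00"))  # False
```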

If done well, this creates a friction-free experience where agents handle the heavy lifting. But if done poorly, it leads to chaos and fraud, with systems collapsing and people spending money on fake goods.

We need companies like Amazon or PayPal to define strong business-to-agent standards and protocols to create a secure, unified approach.

Over time, business-to-agent systems will begin managing much of our digital lives—things most people don’t enjoy, like choosing which power company to subscribe to or which insurance policies are best. We’ll need to think about liability and law. If an agent makes a mistake or impersonates you, who’s liable? If an agent signs contracts on your behalf, who’s responsible?

These questions are murky and complex, but they’re important as these systems develop in the next 2-5 years.

Conclusion

The idea of multi-agent systems genuinely excites me. If successful, it could reshape everyday life and culture. These systems could handle the day-to-day tasks that are boring or annoying, or often left until the last minute.

Perhaps we can encourage people to attend college or university not just for a certificate, but to learn and grow. Maybe we can foster more creativity and value it in a way that elevates it, rather than undervaluing creative individuals.

We’ll likely see a shift in skill sets, with more demand for roles like agent orchestration experts or AI compliance managers who ensure these systems behave responsibly.

However, this also risks making decision-making opaque, with agents using algorithms and processes that humans don’t fully understand. Trust and social responsibility will become major areas of discussion.

A good example is the movie WALL-E, where humans have outsourced so much responsibility to robots that they lose basic skills and knowledge. It’s easy to think this could never happen, but history shows otherwise. In the Dark Ages, we lost the knowledge and skill to build aqueducts and sewage systems—people thought giants built them.

This shift in how we interact with technology is monumental, and I’m incredibly interested in following and contributing to it. We need to have the right conversations and stick to the human principles we care about, or else people could get lost in this transition.

Anyway, that’s it for this week. I hope you enjoyed the episode. Please share it with a friend or colleague who might find it interesting. Otherwise, I’ll see you next week.
