
In The Loop Episode 4 | Why Microsoft's CEO Thinks Everyone's Wrong About AI Agents & AGI

Published by

Wic Verhoef
Barrie Hadfield
Jack Houghton
Anna Kocsis

Published on

February 26, 2025

Read time

7 min read

Category

Podcast

This week, Satya Nadella sat down with Dwarkesh Patel for over an hour and shared his views on the future of AI. I think you’re going to find this conversation fascinating.

On one hand, we're seeing the convergence of technologies that, just a few years ago, would have felt like sci-fi: autonomous AI agents taking care of large portions of white-collar knowledge work, industrial supercomputers simulating the weather, disease, and even human behavior to predict the future. It’s almost like the European Renaissance. These technologies aren’t new; they’ve been around for years, but suddenly, they’re all converging at the same time.

On the other hand, growth in the West has been stagnant for years. People are still dying from preventable diseases, wages aren't increasing, and it’s starting to feel more like a dystopia than the utopia that these technological innovations should create. This tension frames the hour-long conversation with Satya Nadella. He questions our obsession with AGI as a benchmark for success—the idea that AI can learn any cognitive task that a human can—and, more importantly, believes we’re not paying enough attention to the realities of law and people. If we want AI to benefit the many, these are the things we should focus on.

This is In The Loop with Jack Houghton.

Let me set the scene. Today’s podcast dives into a fascinating conversation between one of my favorite podcasters, Dwarkesh Patel, and the CEO of Microsoft, Satya Nadella.

For those who haven’t come across the Dwarkesh Podcast, it’s blown up in the last 15 months. It’s an academic, thinker’s podcast: long-form interviews with fascinating people, well-researched, direct, and asking the right big questions. It’s been a big inspiration for In The Loop as well. And Satya Nadella leads one of the most influential players in AI right now.

With the release of their brand-new quantum chip, this podcast episode could have easily been a marketing piece, but Nadella opened up about the near-term future of knowledge workers and the complexities of building quantum computers. He voiced a lot of skepticism around the hype and obsession with AGI.

As I’ve said before, I love history, and Nadella draws fantastic comparisons to things like the Industrial Revolution, which I found compelling. It’s an interesting opportunity to look at AI from a different perspective. I’m passionate about understanding the utility of something new. With AI, the utility is obvious, but what isn’t clear is how it generates value for everyone.

This has caused some tension for me personally, because Mindset AI is at the forefront of the AI agent revolution, yet many things must still happen before everyone can experience the benefits of AI.

More time for important things is an obvious benefit, but currently, many global workplaces are set up in ways that limit the real value for most people. This conversation was brilliant because it explored many of the popular questions and beliefs—including mine—and provided fresh perspectives. Let’s dive in.

AGI, Industrial Revolution, and the shifting landscape of knowledge work

What’s interesting about Nadella’s perspective is that he criticizes the comparison between AI and the Industrial Revolution. The idea that AI, or AGI, is a massive revolution like steam power in the 1800s is, in his view, the wrong benchmark.

He points out that if AI is to live up to that comparison, shouldn't we be seeing growth of 10% across many Western nations? The Industrial Revolution saw roughly that kind of growth, but right now, Western growth sits at about 2%, and not even that when adjusted for inflation.
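To make that gap concrete, here's a quick back-of-the-envelope sketch. The starting index of 100 and the ten-year window are my own illustrative assumptions, not figures from the podcast:

```python
# A back-of-the-envelope sketch: how 10% annual growth (the Industrial
# Revolution-style benchmark Nadella cites) compounds against today's ~2%.
# The starting index of 100 and the ten-year window are illustrative only.

def compound(rate: float, years: int, start: float = 100.0) -> float:
    """Value of an economy indexed at `start` after `years` of annual growth at `rate`."""
    return start * (1 + rate) ** years

print(f"10% for a decade: {compound(0.10, 10):.1f}")  # ~259.4, more than 2.5x
print(f" 2% for a decade: {compound(0.02, 10):.1f}")  # ~121.9, barely a fifth larger
```

Over just ten years, the benchmark Nadella has in mind more than doubles an economy; today's trajectory barely moves it. That's the scale of the claim being made when people invoke the Industrial Revolution.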

Nadella isn’t saying AI won’t deliver that kind of growth; he argues that growth itself should be the primary target, not AGI for its own sake. I agree with this reasoning. We risk achieving the wrong goal if we’re chasing the wrong milestone.

If we define AGI as something that takes over all cognitive labor, is that even achievable? Is it the right goal? Cognitive labor is always shifting.

He mentioned AI handling emails, meeting prep, or entering data. Once that’s automated, humans move on to the next task. This is the difference between knowledge work and knowledge workers. AI triaging emails is great, but that’s not the life of a knowledge worker, and it’s not necessarily knowledge work—it’s just part of it. Knowledge workers will then focus on a new type of knowledge work, and when AI can handle that too, humans will move on yet again.

This speaks to me personally because I believe humans have a critical role in taking something from inception to foundation and then handing it off to AI to scale or improve.

The 10x human: Why AI won’t replace humans

Satya Nadella argues that humans operate under bounded rationality and limited time. When we free ourselves from one burden, we step up to a new level of complexity. Not everyone agrees, though. Some people compare this to horses: they’re still useful in certain situations, but most people use cars today.

Maybe that’s true for AI in the future, but I prefer Nadella’s optimistic view. He sees this as a cycle. AI takes on tasks, and we shift roles. As AI improves, it takes on new tasks, and we shift again. There’s no immediate finish line where human roles vanish. Some might call this corporate positivity, but Nadella grounds his view in history. The Industrial Revolution didn’t eliminate human work—it changed its form.

Mindset AI and I truly believe in the idea of the 10x human. If a person can produce ten times more with AI agents and teams of agents, why get rid of them? A team that produces ten times more than its competitors has a huge advantage. So, hire more people, make them more efficient, and rethink systems to get the most out of new technologies.

I believe this will also lead to the re-evaluation of other types of labor, like healthcare, social care, and teaching. These professions will gain higher wages and more value. I’m ashamed we’ve seen so much devaluation of these fields in many Western societies, and governments should have done more to prevent it.

The changes we’re about to experience will reshape society and the economy, and they’ll take time.

Barriers to the AI revolution: workflows, lawmaking, social trust

There are many barriers between us and that breakthrough. Dwarkesh’s podcast discusses several of these hurdles, and I’ll touch on a few of them now.

Workflows and assets

The first barrier discussed is the re-imagining of workflows and assets. Nadella drew a remarkable parallel to how email and spreadsheets changed office behavior.

Before spreadsheets, financial forecasting in a large company involved faxes, handwritten notes, and piecing things together. Once Excel and email became widespread, workflows and information organization changed completely.

I see this happening now with Google’s AI assistant in Google Docs. It isn’t yet good enough, and it doesn’t make me want to use it, because Google hasn’t reimagined the workflow; they’ve just added a new AI tool to the existing system. Human-AI agent user interface design is something I talked about in the previous episode of In The Loop. Product builders must reimagine experiences from the ground up, not just bolt AI onto existing workflows. The best solutions will fully integrate AI into the experience, not treat it as an add-on.

Nadella also provided an interesting analogy with lean manufacturing, which changed factories. Lean manufacturing systematically identifies and eliminates activities or resources that add no value. This approach spread from Toyota to industries worldwide, focusing on continuous improvement, cutting unnecessary steps, reducing inventory, and eliminating bottlenecks. The result is lower costs, higher quality, and better alignment with the customer.

Nadella suggests that AI could do for knowledge work what lean manufacturing did for factories: it could force organizations to rethink processes, data sharing, decision-making, and more. The real challenge is that organizations need to adopt new managerial thinking and systems to reach scale. This process will take time, just like lean manufacturing did.

Lawmaking, liability, and social trust

The second major barrier to widespread AI adoption is the need to rethink liability law and social trust.

Nadella made an important point: if an AI system makes a harmful mistake, the world doesn’t accept the excuse that it was the AI’s fault. Someone has to be held accountable, especially in high-stakes domains like finance, medicine, and national security. Laws aren’t designed to blame software—they’re designed to blame people or entities. Without clear liability, large AI deployments will be slow.

If AI ever reaches the out-of-control threshold that some fear, governments won’t just sit back and let it happen. Society will likely punish bad actors, just as we do now. This issue will slow down AI adoption and reshape it in important ways. And it all ties back to the broader point: if we’re on the brink of an AGI revolution, it has to show up in the economy.

I hope it happens: global growth skyrocketing, wages increasing, and big social problems getting solved. That’s when we’ll know it’s a game changer. Until then, large language model providers can call their models AGI, but it’s just a marketing move. Nadella referred to this as “nonsensical benchmark hacking,” where companies rush to declare AGI based on arbitrary tasks that don’t reflect societal impact. What we should measure is real-world improvement, like that 10% growth we touched on at the beginning of this post.

Conclusion—and criticism

If you want to be cynical, you could argue that Nadella says all this because Microsoft’s partnership with OpenAI ends when OpenAI reaches AGI, and then Microsoft won’t have access. But I think there’s merit to the argument that AI is just a tool—albeit a powerful one—that must operate within the world’s constraints, including business models, regulations, and public trust. Even the best technology can’t ignore these.

If we compare it to the Industrial Revolution, you might recall that a lot more changed than just the invention of the steam engine. People reorganized factories, labor laws evolved very slowly and painfully, and new social and economic norms emerged.

So, stepping away from this conversation, I feel both reassured and challenged. I’m reassured that an important CEO is saying these tools are excellent, but let’s not lose our heads or jump to simple conclusions. At the same time, I feel challenged because it reminds me that if we want to see these benefits, we can’t just stand back and watch them happen. We need to create new policies, be brave in how we redesign laws, and figure out what people’s roles in this new economy will be.

We need to question whether we're measuring success by the right metrics. Perhaps the real measure isn't how close we are to replicating human cognition but whether people's living conditions are actually improving.

Anyway, that's it for today. I really enjoyed covering Dwarkesh’s podcast episode. It was a fantastic interview—I recommend you have a listen now. The fact that I never even got to cover Microsoft's new quantum computing breakthrough shows just how many fantastic insights there were in a single conversation. Maybe next time—see you then.
