
In The Loop Episode 3 | The Real AI Challenge: Designing Human-Agent Interfaces That Work


Published by: Jack Houghton, Anna Kocsis
Published on: February 19, 2025 (updated March 11, 2025)
Read time: 8 min read
Category: Podcast

As humans, we think progress is inevitable and that technology will just get better. Today, you're probably reading this post on a phone or a laptop, devices that are pretty much just extensions of our bodies. Yet there was a time when using computers was reserved for experts. Without breakthroughs in design and user experience, computers would have remained as calculators, and phones would have never fit in our pockets.

Now, it feels like we're in a similar position with AI. OpenAI's chatbot may appear polished and ready for primetime, but in reality, we're just scratching the surface. The true challenge with AI isn't just building more powerful and capable models; it's building user interfaces and experiences that make engaging with agents more simple, intuitive, and trustworthy.

Designing these human-agent interfaces is perhaps one of the most critical innovations yet to happen. It isn't immediately obvious to most people why this is so important but so difficult. If we fail to build tools that enable us to communicate easily with agents, AI will never deliver on its promise.

Today, I'll explore why this gap exists. We're going to take a trip through time, all the way back to Douglas Engelbart's 1968 demo, which led to Xerox PARC and, eventually, to Steve Jobs' Apple redefining the user interface for computing. And in doing so, I hope to unpack the real challenges of building user interfaces that enable humans and agents to work together.

Welcome to the show. Listen to In The Loop now.

The new UX challenges of agentic AI

Right now, there's a big problem with AI: how agents and humans communicate with each other. I don't think most people grasp how big and important a challenge this is to work through. People look at ChatGPT and think "job done." We are so far away from that being the truth. When I actually think about how difficult this is, it makes me even more inspired by the people who can reimagine things.

Creativity isn't easy. It's far easier to replicate exactly what's come before—letters became emails, and even the icons are basically identical. Analogies, and how we understand them in the context of new technology, become extra important because an analogy forces us to think in a certain way.

That can make us more or less creative. Take the email example: you can either recreate email as it was, or try to create something entirely new. Either could be the right direction, and either could also end up diminishing creativity. So, when it comes to human-agent collaboration, it's not easy to create experiences that make communication simple.

How much information should be provided to a user, for example? How are we supposed to know what an agent can actually do? How do we understand its processes or limitations? These are all big challenges, and at Mindset AI, our team is working through them all the time.

So, I'm going to try to break these down and give you an indication of where we think this world of user interface for agent-human collaboration is going.

I absolutely love history, and when I was thinking about this episode, it was almost impossible not to reflect on the history of computing and the user interface—everything from Engelbart to Xerox PARC to Apple, and all the amazing things that happened along the way.

So I'm going to use this episode as a complete excuse to, I guess, nerd out and go on a journey of computing. I think you'll find it interesting, and it really does relate to where we're at with agent-human collaboration.

The mother of all demos

Let's start in 1968, a pivotal year. A man called Douglas Engelbart introduced what came to be called "The Mother of All Demos" to a big, packed-out crowd in San Francisco. It was basically a machine that allowed people to edit text, navigate through documents, and video chat, all using a strange little wooden block he called a mouse.

Before this, the computer was basically just a big calculator. You had to be a programmer and be technical. The idea of using computers wasn't widespread at that point. Just imagine how astonishing and powerful this was. Engelbart's vision went so much further than the computer as a simple calculator. He saw it as an extension of the human mind and body - a tool that could help us think more clearly and achieve more ambitious things.

At the time, Engelbart's system required massive servers and equipment. It was very expensive and, obviously, very new as well, so it was really limited by the technology of the day. I think this lesson is so relevant to today's AI systems. It reminds me of how OpenAI gave LLMs a new face—the chatbot—a new way to engage with this incredible technology. But it's still very expensive to run, it requires prompting know-how, and it still faces a lot of challenges around consistency and trust.

I'm not saying that millions of people are not getting value from it, but it reminds me of the early days of computing.

Let’s fast forward a few years to the early 1970s, to a place called Xerox PARC. Xerox was basically a printing services company, and PARC is the Palo Alto Research Center. This is where a team from Xerox got together with some fantastic engineers and began turning the vision from Engelbart into something very tangible.

At PARC, the Alto workstation was born. This machine suddenly enabled people to have overlapping windows on their computers—that you probably recognize today—icons for their applications, and a mouse-driven interface. They also built bitmap displays that managed to render text and images that, at the time, were incredibly crisp. They even did some successful experiments with networking many different computers together into one system. Again, reimagining the interface and experience around something that's incredibly transformative. Because remember, at that point, computers weren't on everybody's desk—they weren't this widespread, highly adopted thing that revolutionized society.


All of this should have made the Alto workstation one of the most successful products in the world, and it should have made Xerox incredibly successful. It was truly that game-changing. But people just didn't understand how important this new user interface and these new capabilities were.

So despite this huge breakthrough at PARC, the corporate leadership at Xerox didn't see any potential. They were obviously super focused on their core copier business. They treated the Alto workstations as just an expensive research project rather than an incredible product that could transform everything.

Everything changed when Steve Jobs got a sneak peek in 1979. Jobs and his team made a three-day visit to PARC that was, you could argue, one of the most influential three days in tech history. And if you listen to the clip of Steve Jobs in the episode, you'll understand why.

Now, Jobs obviously didn't want to just copy what he saw; he wanted to simplify it. Apple's designers took a lot of those innovations, stripped away the unnecessary complexity, got rid of any clunkiness, came up with an incredibly sleek, single-button mouse, and turned the originally technical application icons into friendly images.

Apple made everything easier to understand and use for everyday people. All of this became part of the Mac. I think this is such an important lesson for human-agent collaboration and for building its user interfaces and experiences. Apple showed that the most advanced technology only becomes truly powerful when it's understandable and easy to use.

Want a deeper look at the history of AI? Check out our blog article about the evolution of agentic AI systems.

Today’s UX challenges

If we fast forward to today, this is where the challenges of AI agents truly begin. Most AI systems are powerful but often mysterious and unpredictable. This is a problem because research shows that when people don't trust a system, they won't use it.

For example, Nielsen found that 68% of people don't trust AI responses because they don’t understand how the AI arrived at an answer. This highlights the need for innovation in user interfaces to improve communication between AI and users.

Transparency and verification are key to addressing these challenges. AI agents must find ways to present their decision-making processes clearly. We're starting to see this with reasoning models like DeepSeek's R1 and OpenAI's o1 and o3, though much of it feels like marketing rather than true transparency. Seeing an AI's reasoning allows users to verify its decisions and develop a shared understanding of its process.

Conveying transparency effectively is difficult. Even when AI systems offer reasoning breakdowns, few users actively re-prompt or influence the system based on them. The process is often time-consuming and frustrating, meaning there’s still a lot of room for improvement.

Another challenge is control and consistency. Technology needs to behave predictably to maintain trust. AI, by nature, can be unpredictable due to how it generates responses. If users can’t rely on consistency, they struggle to form an understanding of how the system works. Erratic behavior makes users uneasy and less likely to engage with the technology. For AI providers, the challenge is in conveying consistency and ensuring users feel confident in the system’s actions.

Another key factor is adaptive detail. Not every user needs the same level of information at all times. Some prefer a quick summary, while others want detailed explanations at every step. The ideal AI system must adjust dynamically, providing the right amount of context without overwhelming or under-informing the user.

Memory is another significant challenge—specifically, how AI retains and utilizes past interactions. When an AI remembers previous conversations, it creates a more personalized experience, reducing repetition and building rapport. However, too much memory storage can clutter interactions, while too little makes the AI feel impersonal or disconnected. If the AI remembers irrelevant details or stores excessive information, responses slow down. Striking the right balance is a nuanced challenge that requires careful design.

At the core of these challenges is the need for effective two-way communication between humans and AI. Users want to state their goals and have the AI understand their preferences while keeping them informed, offering explanations when necessary, and flagging unexpected events. Achieving this remains a crucial area of innovation that hasn't yet been perfected.

Frameworks for human-AI interface design

At Mindset AI, our team researches these challenges deeply. As an AI agent platform, we enable users to build, manage, and deploy AI agents. Fortunately, researchers are also exploring promising design patterns.

One approach is agent cards—concise summaries outlining an AI’s capabilities, limitations, and ethical guidelines. Another is planned presentation and approval, where AI presents its intended actions for user confirmation before proceeding. This could include visual step breakdowns, allowing users to inspect and refine decisions.
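
To make these patterns a little more concrete, here is a minimal sketch of what an agent card and a plan-approval step could look like in code. It's purely illustrative: the AgentCard and PlannedStep structures, their field names, and the command-line approval prompt are assumptions of mine, not part of the Mindset AI platform or any standard.

```python
from dataclasses import dataclass, field


@dataclass
class AgentCard:
    """A concise, user-facing summary of what an agent can and cannot do."""
    name: str
    capabilities: list[str]
    limitations: list[str]
    ethical_guidelines: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the card as plain text for display in a UI."""
        lines = [f"Agent: {self.name}", "Can do:"]
        lines += [f"  - {c}" for c in self.capabilities]
        lines += ["Cannot do:"]
        lines += [f"  - {l}" for l in self.limitations]
        if self.ethical_guidelines:
            lines += ["Guidelines:"]
            lines += [f"  - {g}" for g in self.ethical_guidelines]
        return "\n".join(lines)


@dataclass
class PlannedStep:
    """One step of the agent's intended plan, awaiting user confirmation."""
    description: str
    approved: bool = False


def present_plan_for_approval(steps: list[PlannedStep]) -> bool:
    """Show each intended step and ask the user to confirm before the agent acts."""
    for i, step in enumerate(steps, start=1):
        answer = input(f"Step {i}: {step.description} - approve? [y/n] ")
        step.approved = answer.strip().lower() == "y"
    return all(step.approved for step in steps)


if __name__ == "__main__":
    card = AgentCard(
        name="Research Assistant",
        capabilities=["Summarize documents", "Draft emails"],
        limitations=["Cannot send emails without approval"],
        ethical_guidelines=["Cites sources for factual claims"],
    )
    print(card.render())

    plan = [PlannedStep("Search the knowledge base"),
            PlannedStep("Draft a summary email")]
    if present_plan_for_approval(plan):
        print("Plan approved; the agent can proceed.")
    else:
        print("Plan rejected; the agent waits for new instructions.")
```

The point isn't the code itself but the interaction pattern: the user sees what the agent claims it can do, sees what it intends to do, and gets a chance to intervene before anything happens.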

Dialogue-based inspection is another concept, enabling users to ask AI directly why it took a specific action and receive a clear explanation. Feedback from these exchanges could even shape future AI behavior. Post hoc explanations, where the AI summarizes its key actions after completing them, are another approach to improving transparency.
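
In the same illustrative spirit, dialogue-based inspection and post hoc explanations could both be served by a simple log of the agent's actions that the interface can query. Again, the ActionLog and AgentAction names and fields below are hypothetical sketches of the idea, not an existing API.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class AgentAction:
    """A single action the agent took, recorded together with its rationale."""
    action_id: str
    description: str
    rationale: str
    timestamp: datetime = field(default_factory=datetime.now)


@dataclass
class ActionLog:
    """A log of agent actions that the interface can query for explanations."""
    actions: list[AgentAction] = field(default_factory=list)

    def record(self, action: AgentAction) -> None:
        self.actions.append(action)

    def explain(self, action_id: str) -> str:
        """Dialogue-based inspection: answer 'why did you do that?' from the log."""
        for action in self.actions:
            if action.action_id == action_id:
                return f"I did '{action.description}' because {action.rationale}."
        return f"I have no record of an action with id '{action_id}'."

    def post_hoc_summary(self) -> str:
        """Post hoc explanation: summarize the key actions after the task is done."""
        lines = ["Here is what I did:"]
        lines += [f"  {i}. {a.description} (reason: {a.rationale})"
                  for i, a in enumerate(self.actions, start=1)]
        return "\n".join(lines)


if __name__ == "__main__":
    log = ActionLog()
    log.record(AgentAction("a1", "searched the knowledge base",
                           "the user asked for background on the topic"))
    log.record(AgentAction("a2", "drafted a summary",
                           "the user wanted a short briefing, not raw documents"))
    print(log.explain("a1"))
    print(log.post_hoc_summary())
```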

Conclusion

As we develop these systems, the success of modern AI won’t depend on raw power alone—it will depend on its ability to communicate with humans effectively. The history of technology, from Engelbart to Xerox PARC to Steve Jobs, underscores the same point: technology must be designed with people in mind. If AI doesn’t communicate well, its capabilities are meaningless.

We’re at a pivotal moment where our choices will shape the future of AI and, by extension, the future of much of the world. This is an opportunity to create AI systems that are not only powerful but also accessible, transparent, and trustworthy—true partners to humans.

I hope you enjoyed this deep dive. I certainly did, and I’ll be doing this more often. See you next Wednesday—in the meantime, check out my predictions about the future of agentic AI.
