How We Built The Agents On Our Website
How much of the agent demo on our website would you actually have to build? Seven small pieces, each useful on its own. Here's the complete list, in order, with what each one unlocks.
The course performance agent on our website analyzes course data, diagnoses why learners are dropping off at specific lessons, and recommends concrete fixes. It's embedded in an admin dashboard. It knows which team the admin is looking at. It acts on the page. It starts working without being asked.
This piece walks through every component that went into making it work, so you can see what it would take to build the same on your platform and decide whether the approach makes sense before you talk to anyone here.
The line between what's ours and what's yours
Some pieces are ours: the agent runtime; LLM orchestration; streaming; multi-turn conversation management; the configuration studio where product teams build and iterate on agents under engineering-defined governance; expert knowledge injection (skills); rich visual rendering of tool results in the chat (widgets); and persistent memory across sessions and devices.
Some pieces are yours. Your data, your UI, your business logic. Below is the complete list of what was built on the "your side" of that line.
Each item is independent. You can build them in any order. Each one is useful on its own. Together they compound. If you only ever build A, you have a working diagnostic agent. Add B and the agent finds problems across your catalog. Add C, D, E, F, G in any order and each one extends what the agent can do without invalidating what came before.
A. Course detail endpoint
What you build. An API endpoint that, given a course ID, returns the course structure with per-lesson engagement data.
Most learning platforms already have an endpoint that returns course details. The addition is engagement metrics alongside the content metadata.
For each lesson (or slide, module, activity, whatever your platform calls its content units), the endpoint returns:
- position and title
- content type (video, text, quiz, interactive)
- type-specific characteristics: video duration and chapters, text word count and images, quiz question count and pass threshold, interaction type for interactive content
- how many learners completed this lesson
- how many learners stopped here (the last lesson they engaged with before not returning), and that number as a percentage
- median time spent
What this unlocks. The agent can diagnose why a specific course is underperforming. It connects engagement numbers to content characteristics. "Lesson 4 is a 6-minute video with no chapters and no interaction points. 38% of learners who reach it leave the course entirely. Median watch time is 2 minutes 22 seconds out of 6 minutes. Recommend splitting into three shorter segments with a knowledge check between each."
The agent doesn't just say "lesson 4 has high drop-off." It reads the lesson metadata, identifies the structural problem, and names the fix.
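To make the shape concrete, here's a minimal sketch of what one lesson in the response might look like, using the numbers from the example above. Every field name here is an assumption for illustration, not a required schema; the drop-off calculation is simplified to completed-plus-stopped as the "reached" population.

```javascript
// Illustrative payload shape only — field names are assumptions, not a spec.
const courseDetail = {
  courseId: "course-182",
  title: "Forklift Safety Basics",
  lessons: [
    {
      position: 4,
      title: "Pre-shift inspection walkthrough",
      contentType: "video",
      characteristics: { durationSeconds: 360, chapters: 0, interactionPoints: 0 },
      completedCount: 412,
      stoppedHereCount: 253, // last lesson engaged with before not returning
      medianTimeSpentSeconds: 142,
    },
  ],
};

// Derive the percentage the agent reasons over. Simplification: learners who
// "reached" the lesson = those who completed it + those who stopped here.
function withDropOffPct(lesson) {
  const reached = lesson.completedCount + lesson.stoppedHereCount;
  return { ...lesson, dropOffPct: Math.round((lesson.stoppedHereCount / reached) * 100) };
}
```

With these numbers, `withDropOffPct` yields the 38% drop-off and the 2:22 median watch time cited above — exactly the pairing of engagement data and content metadata the agent needs to name a fix.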
B. Cross-course query endpoint
What you build. An endpoint that accepts filter, sort, and limit parameters and returns courses matching the criteria with headline performance metrics.
The agent needs to answer questions like "which courses have completion rates below 50%?" or "what are the highest-rated courses?" or "show me mandatory courses for the warehouse team sorted by drop-off rate."
The endpoint returns per course: title, type, whether it's mandatory, which teams or groups it's assigned to, enrollment and completion counts, completion rate, average rating, and the maximum single-lesson drop-off.
This endpoint is wrapped in an MCP server: a lightweight service that makes the endpoint available as a tool the agent can call. We provide a specification and guide for building MCP servers. If you've never built one before, the wrapping itself is a few hours of work; the endpoint underneath is the part that's specific to you.
The wrapping pattern is generic. The endpoint underneath can be a database query, a third-party API, a custom service, or a Snowflake warehouse query. The agent doesn't care; it calls the tool, the tool runs whatever's wrapped underneath.
What this unlocks. The agent finds problems the admin didn't know to look for. Instead of browsing course by course, the admin asks a question and the agent queries across the entire catalog, identifies the outliers, and presents a prioritized list. Combined with A, this gives you the full triage-then-diagnosis flow: the agent finds the problem courses, then drills into each one to explain why.
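A minimal sketch of the query logic, assuming an in-memory catalog. In production the same parameters would drive a database, third-party API, or warehouse query, as noted above; the course data and function names here are invented for illustration.

```javascript
// Invented sample catalog — stands in for your real data store.
const catalog = [
  { title: "Forklift Safety Basics", mandatory: true,  teams: ["warehouse"],
    enrolled: 700, completed: 308, avgRating: 4.1, maxLessonDropOffPct: 38 },
  { title: "Onboarding 101",         mandatory: false, teams: ["all"],
    enrolled: 500, completed: 410, avgRating: 4.6, maxLessonDropOffPct: 9 },
];

// Accepts filter, sort, and limit parameters; returns headline metrics per course.
function queryCourses({ filter = () => true, sortBy, desc = true, limit = 10 } = {}) {
  const rows = catalog
    .map((c) => ({ ...c, completionRate: c.completed / c.enrolled }))
    .filter(filter);
  if (sortBy) rows.sort((a, b) => (desc ? b[sortBy] - a[sortBy] : a[sortBy] - b[sortBy]));
  return rows.slice(0, limit);
}

// "Which courses have completion rates below 50%?"
const lowCompletion = queryCourses({ filter: (c) => c.completionRate < 0.5 });
```

The MCP wrapper around this is generic plumbing; the filter, sort, and limit semantics are the part that's specific to your catalog.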
C. Embed the agent
What you build. A `<mindset-agent>` element on your page and a one-line SDK initialization call.
The agent renders in your page as a panel, a sidebar, or a full-screen experience. It runs on Mindset AI. The agent's appearance (colors, typography) is configured to match your product's look and feel.
What this unlocks. Your admins (or learners, or managers) have an AI agent available inside your product. They can ask questions, get analysis, and receive recommendations without leaving the page they're working on. The agent has access to any MCP tools you've built (A, B) and any skills and widgets configured in the Agent Management Studio. This is the foundation for D, E, F, and G below. Those capabilities enhance the embedded agent, but each is optional and independent.
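Concretely, the embed might look something like this. The element name comes from the description above; the script URL, global object, and init options are placeholders for whatever the SDK documentation specifies, not the actual API.

```html
<!-- Element name per the docs above; everything else is a placeholder. -->
<mindset-agent id="course-agent" mode="sidebar"></mindset-agent>

<!-- Placeholder script path — use the URL from the SDK docs. -->
<script src="https://example.invalid/mindset-sdk.js"></script>
<script>
  // Hypothetical one-line initialization; real option names will differ.
  MindsetSDK.init({ agentId: "course-performance-agent" });
</script>
```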
D. Page actions
What you build. Short JavaScript functions that wrap actions your admin UI already supports.
If your admin interface has a "flag for review" button, a navigation link to the course editor, or a team filter dropdown, you can make those same actions available to the agent. Each one is 10 to 20 lines of JavaScript. No backend changes. No deployment. The function runs in the browser, in the admin's authenticated session, using whatever your UI already uses when the admin clicks the button manually.
You control exactly which actions the agent can take, and you can vary them by page, by role, or by application state. A tool that isn't registered doesn't exist to the agent.
What this unlocks. The agent moves from advisor to actor. After diagnosing a problem, it offers to flag the course for review (a badge appears in the UI), open the editor at the problem lesson (the editor opens), or filter the view to a different team (the filter changes). The admin sees things happen on the page.
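A sketch of one page action. The `registerTool` method is a stand-in for whatever the SDK actually exposes, stubbed here so the example is self-contained; the tool name and page state are invented. The point is that the tool body reuses the exact code path your "flag for review" button already runs.

```javascript
// Stub of a hypothetical SDK surface — real method names will differ.
const tools = new Map();
const agent = { registerTool: (name, fn) => tools.set(name, fn) };

// Page state your admin UI already manages.
const pageState = { flaggedCourses: new Set() };

// ~10 lines, browser-side, no backend changes: the tool wraps the same
// action the admin triggers by clicking the button manually.
agent.registerTool("flag_course_for_review", ({ courseId }) => {
  pageState.flaggedCourses.add(courseId); // badge appears in the UI
  return { ok: true, courseId };
});

// What happens when the agent invokes the tool after a diagnosis:
const result = tools.get("flag_course_for_review")({ courseId: "course-182" });
```

Because a tool that isn't registered doesn't exist to the agent, varying the registered set by page, role, or application state is just ordinary conditional logic around calls like this.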
E. Context
What you build. A few lines of JavaScript that tell the agent what the admin is currently looking at. Which page they're on, which filters are active, which item is selected. This updates live as the admin navigates.
What this unlocks. The admin changes a filter. The agent picks up the new context and re-analyzes without being asked. Different team, different problems, different recommendations. No reloading, no new session. The admin never has to type "I'm looking at the warehouse team's courses"; the agent already knows.
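The mechanism above might be sketched like this. The `setContext` call is a hypothetical stand-in for the real SDK method, stubbed so the example runs standalone; the context fields are examples, not a required shape.

```javascript
// Stubbed hypothetical SDK — the real context API will look different.
let lastContext = null;
const agent = { setContext: (ctx) => { lastContext = ctx; } };

// Call this from your existing navigation / filter-change handlers so the
// agent's view of the page stays live as the admin moves around.
function onViewChanged({ page, filters, selectedCourseId }) {
  agent.setContext({ page, filters, selectedCourseId });
}

// Admin switches the team filter to "warehouse":
onViewChanged({ page: "team-dashboard", filters: { team: "warehouse" }, selectedCourseId: null });
```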
F. Identity and multi-tenancy
What you build. The customer account ID and user role, passed as headers to your MCP endpoints.
This enforces multi-tenancy: one agent serves all your customers, but each admin only sees their own data. The scoping is enforced by your endpoints, not by the AI.
What this unlocks. One agent configuration serves your entire customer base. Data isolation is enforced server-side. You don't need a separate agent per customer. The same agent, the same skills, the same widgets, scoped to the right data by identity headers that your endpoints verify.
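A server-side sketch of that enforcement. The header names and data are invented for illustration; what matters is that the scoping lives in your endpoint handler, where every request is filtered by the verified account identity regardless of what the agent asks for.

```javascript
// Invented sample data spanning two tenant accounts.
const courses = [
  { accountId: "acme",   title: "Forklift Safety Basics" },
  { accountId: "globex", title: "Ledger Hygiene" },
];

// Your MCP endpoint handler. Header names here are placeholders.
function handleCourseQuery(headers) {
  const accountId = headers["x-account-id"];
  const role = headers["x-user-role"];
  if (!accountId || role !== "admin") throw new Error("forbidden");
  // Tenancy enforced in your code, on every request — never by the AI.
  return courses.filter((c) => c.accountId === accountId);
}

// An admin at "acme" only ever sees acme's courses:
const rows = handleCourseQuery({ "x-account-id": "acme", "x-user-role": "admin" });
```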
G. Programmatic triggers
What you build. One-line JavaScript calls that send a message to the agent from your application.
Any event on your page can trigger the agent: a button click, a page load, a filter change, a record selection. The trigger can be visible in the chat or silent (the agent responds but the trigger message is hidden).
What this unlocks. The page loads and the agent immediately starts analyzing the current view. The admin doesn't type anything. The admin clicks "analyze this course" and the agent begins a detailed lesson-by-lesson diagnosis. The admin changes the team filter and the agent automatically re-analyzes for the new team. The application orchestrates the agent, not just the user. Proactive, event-driven agent experiences become possible without the admin needing to know what to ask.
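The trigger calls above might look like the following. `sendMessage` and the `silent` flag are assumptions standing in for the real SDK call, stubbed here so the sketch runs standalone.

```javascript
// Stubbed hypothetical SDK — captures messages so the behavior is visible.
const sent = [];
const agent = {
  sendMessage: (text, opts = {}) => sent.push({ text, silent: !!opts.silent }),
};

// Page load: silent trigger — the agent starts analyzing, no visible message.
function onPageLoad() {
  agent.sendMessage("Analyze the courses currently in view.", { silent: true });
}

// Button click: visible trigger — the message appears in the chat.
function onAnalyzeClick(courseId) {
  agent.sendMessage(`Run a lesson-by-lesson diagnosis of ${courseId}.`);
}

onPageLoad();
onAnalyzeClick("course-182");
```

Any page event — load, click, filter change, record selection — can route through one-liners like these, which is all "the application orchestrates the agent" amounts to in practice.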
How they compose
| Combination | What the agent can do |
|---|---|
| A alone | Diagnose why a specific course is failing |
| A + B | Find problem courses across your catalog, then diagnose each one |
| C + A + B | Agent embedded in your product, finding and diagnosing courses |
| C + D + A + B | Agent acts on the page after diagnosis: flags courses, opens the editor |
| C + E + A + B | Agent knows context, scopes analysis to the admin's current view |
| C + F + A + B | Agent serves multiple customers from a single configuration |
| C + G + A + B | Agent starts working on page load, re-analyzes on filter changes |
This demo shows all seven working together.
Summary
| Capability | What you build | What the agent can do |
|---|---|---|
| A. Course detail | One enriched endpoint | Diagnose why a specific course is failing |
| B. Cross-course query | One endpoint + MCP wrapper | Find problem courses across your catalog |
| C. Embed the agent | Agent element + SDK init on your page | AI agent available inside your product |
| D. Page actions | JavaScript functions wrapping existing UI actions | Act on the page: flag, navigate, filter |
| E. Context | JavaScript on the host page | Agent knows what the admin is looking at |
| F. Identity | Headers passed to your MCP endpoints | Multi-tenant: one agent, many customers |
| G. Triggers | JavaScript calls to send messages to the agent | Proactive, event-driven agent behavior |
Each is independently valuable.
What Mindset AI provides
Everything not listed above. The agent runtime, LLM orchestration, streaming, multi-turn conversation management, the configuration studio where product teams build and iterate on agents under engineering-defined governance, expert knowledge injection (skills), rich visual rendering of tool results in the chat (widgets), persistent memory across sessions and devices, and the infrastructure to run all of this at scale across your customer base.
You build the domain pieces that only you can build. The platform handles the rest.
What this looks like for a different domain
The same primitives compose for any in-app agent. Our customer success demo uses A and B against account data instead of course data, adds skills (expert knowledge configured in the Agent Management Studio) for CS playbook interpretation, and uses page actions to mark up a portfolio chart and draft outreach messages. Same building blocks, different domain. The endpoints change, the page actions change, the knowledge changes. The pattern doesn't.
Where to go from here
If you want to see the same pattern applied to a use case closer to yours, the customer stories page has worked examples from teams who built on Mindset AI.
If you want the technical specification for the MCP wrapper in B, it's in our docs. For worked MCP examples across healthcare, fintech, logistics, edtech, insurance and travel commerce, see the integration patterns doc.
If you want to see what the studio looks like when product teams iterate on agents without engineering involvement, that's a separate demo we're happy to walk through.