The Model Context Protocol (MCP) can now render interactive HTML interfaces directly in AI chat windows. Not summaries of data—actual maps, charts, and dashboards running JavaScript.
MCP Apps, the first official MCP extension (SEP-1865), shipped January 26. Claude Desktop and VS Code added support the same week. Early implementations include ISS trackers with live maps, flame graph visualizers in VS Code, and Storybook component previews. JetBrains, AWS, and Figma announced IDE and workflow integrations days later.
How it works
MCP servers register tools with UI metadata pointing to HTML resources served via ui:// URIs. When the model calls such a tool, the client renders the response in a sandboxed iframe instead of as plain text. The @modelcontextprotocol/ext-apps library handles server-side registration and client-side rendering through JSON-RPC.
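A minimal sketch of what that registration looks like, as a plain data shape. The field names here (meta, ui, resourceUri) are illustrative assumptions, not the published ext-apps schema; the point is that the tool carries a pointer to an HTML template addressed by a ui:// URI.

```typescript
// Illustrative only: field names are assumptions, not the exact ext-apps schema.
// A tool registration bundles the usual name/description with UI metadata
// pointing at an HTML resource served under a ui:// URI.
interface UiToolRegistration {
  name: string;
  description: string;
  meta: {
    ui: { resourceUri: string }; // hypothetical key for the HTML template
  };
}

const issTracker: UiToolRegistration = {
  name: "track_iss",
  description: "Show the current ISS position on a live map",
  meta: { ui: { resourceUri: "ui://iss-tracker/map.html" } },
};
```

When the client sees this metadata on a tool result, it fetches the ui:// resource and renders it instead of printing text.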
Servers declare Content Security Policy rules for external domains—critical for loading libraries like Leaflet or D3. The security model is auditable but untested at enterprise scale.
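A sketch of how such a declaration might translate into an actual header, assuming the server supplies a list of allowed origins that the host turns into a Content-Security-Policy for the sandboxed iframe. The directive choices below are illustrative, not the spec's exact policy.

```typescript
// Hypothetical: the host builds a CSP for the iframe from the origins the
// server declared. Without the declared origins in script-src, a CDN-hosted
// library like Leaflet or D3 would be blocked by the sandbox.
function buildCsp(origins: string[]): string {
  return [
    `default-src 'none'`,
    `script-src 'unsafe-inline' ${origins.join(" ")}`,
    `img-src data: ${origins.join(" ")}`,
  ].join("; ");
}

const csp = buildCsp(["https://unpkg.com", "https://tile.openstreetmap.org"]);
```

The tight default-src with explicit allow-listed origins is what makes the model auditable: everything the UI can reach is declared up front.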
One sharp edge: static <script src> tags don't work inside srcdoc iframes, so external libraries must be loaded dynamically via document.createElement('script'). This trips up developers expecting standard HTML behavior.
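The workaround is a small promise-wrapped loader, sketched here; the helper name and the Leaflet URL in the usage comment are illustrative.

```typescript
// Inject an external library at runtime, since static <script src> tags
// are ignored inside a srcdoc iframe.
function loadScript(src: string): Promise<HTMLScriptElement> {
  return new Promise((resolve, reject) => {
    const el = document.createElement("script");
    el.src = src;
    el.onload = () => resolve(el);
    el.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(el); // browser starts the fetch here
  });
}

// Usage inside the iframe, e.g. before rendering a map:
// await loadScript("https://unpkg.com/leaflet@1.9.4/dist/leaflet.js");
// const map = L.map("map");
```

Awaiting the promise before touching the library's globals avoids the race that a fire-and-forget injection would introduce.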
Why this matters
MCP standardized how AI agents access external tools and data. Apps extends that to visual output. For enterprise teams, this collapses context-switching in development, design, and data workflows. Instead of asking an AI to describe API responses, you see the dashboard inline.
Microsoft integrated MCP Apps into Visual Studio 2026 preview for Azure automation. The promise: developers manipulate cloud infrastructure through conversational interfaces with visual confirmation—no tab-switching to portals.
The real test
Anthropic and its partners are optimistic, but the real validation will come from production applications developers actually ship. Early examples span 3D visualizations, PDF viewers, and geographic data: use cases where text responses fall short.
MCP Apps supports OpenAI SDK patterns, with Goose and ChatGPT support planned. The question isn't technical capability; it's whether teams build UIs worth embedding. History suggests developers gravitate to new interfaces when they solve real friction. We'll see if dashboard-in-chat crosses that threshold.