News: Google Open Sources A2UI

Don't Just Chat.
Speak UI.

A2UI is the open standard protocol connecting Large Language Models (LLMs) to your frontend.
Stop generating fragile HTML. Start building safe, native, interactive Generative UI experiences.

Trusted by Developers building the Next Gen of Apps

The "Handshake" for Generative UI

A2UI (Agent-to-User Interface) isn't a library component—it's a protocol. It solves the biggest problem in AI today: How do intelligent agents safely control visual interfaces?

Instead of writing dangerous HTML, A2UI agents negotiate intent via structured JSON. Your app says "I can render a Chart," and the Agent says "Here is the data."
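As a rough sketch, that negotiation can be pictured as two structured messages. The field names below are illustrative, not the official A2UI wire format:

```typescript
// Illustrative sketch of the A2UI "handshake". Field names here are
// hypothetical, not the official wire format.

// 1. App -> Agent: advertise a renderable capability.
const capability = {
  component: "Chart",
  props: { title: "string", series: "number[]" },
};

// 2. Agent -> App: structured data that fits the advertised component.
const agentMessage = {
  component: "Chart",
  props: { title: "Weekly revenue", series: [120, 340, 560] },
};

// The client renders only component types it advertised; anything else
// is ignored rather than executed.
function isAllowed(msg: { component: string }): boolean {
  return msg.component === capability.component;
}
```

Because the agent can only reference components the app declared, there is nothing to inject: unknown types never reach the renderer.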

🧱

JSON, Not Code

Safe by design. The Agent sends a schema; your codebase renders the view. No injections.

⚡️

Native Performance

Zero visual lag. Renders as native React/Vue/Flutter components, indistinguishable from handwritten code.

🔄

Full Control

You define the design system. The Agent just uses the "Lego bricks" you provide.

Powering Next-Gen Agentic Experiences

✈️

Travel Agents

Don't just chat about flights. Show real-time availability, seat maps, and booking flows directly in the conversation.

🛍️

Conversational Commerce

Turn browsing into buying. Display product carousels, size guides, and checkout buttons without leaving the chat.

📊

Data Analysts

Visualise SQL queries instantly. Render interactive line charts, pivot tables, and heatmaps from raw data.

How A2UI Works

1

Define

Create a component registry in your codebase mapping JSON types to React/Vue components.

2

Prompt

Instruct your LLM (GPT-4, Gemini) to output JSON matching your schema.

3

Render

The `<AIOutput />` component parses the stream and renders native UI instantly.
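The three steps above can be sketched in a few lines. The `registry` shape and string-returning renderers are simplifications for illustration; a real implementation would return React/Vue components:

```typescript
// Minimal sketch of Define -> Prompt -> Render. Names like `registry`
// and `renderNode` are illustrative, not the official A2UI API.
type UINode = { type: string; props: Record<string, unknown> };

// 1. Define: map JSON types to components you trust.
const registry: Record<string, (props: Record<string, unknown>) => string> = {
  WeatherCard: (p) => `WeatherCard(${p.city})`,
  StockChart: (p) => `StockChart(${p.symbol})`,
};

// 2. Prompt: the LLM is instructed to emit JSON of this shape.
const llmOutput: UINode = { type: "WeatherCard", props: { city: "Berlin" } };

// 3. Render: look up the component; unknown types are never executed.
function renderNode(node: UINode): string {
  const component = registry[node.type];
  if (!component) throw new Error(`Unknown component: ${node.type}`);
  return component(node.props);
}
```

The registry lookup is the whole security model in miniature: the agent names components, and only code you wrote ever runs.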

Protocol Flow

A2UI End-to-End Data Flow

Agent-generated UIs cross trust boundaries safely using declarative schemas.

Deep Dive

How Generative UI Actually Works

Unlike traditional chat UIs, where the LLM simply streams markdown text, the A2UI Protocol establishes a two-way, type-safe communication channel between your AI agents and your client-side application.

1. The Negotiation Phase

When a user session starts, the Client Application (React, Vue, Flutter, etc.) sends a system_prompt containing the Component Schema. This tells the LLM exactly which UI elements are available (e.g., WeatherCard, StockChart, BookingForm) and their prop definitions.
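One minimal way to picture this step is serializing the component schema into the system prompt. The prompt wording and schema shape below are assumptions for illustration, not the official format:

```typescript
// Hypothetical negotiation step: turn the component schema into a
// system prompt (wording and shape are illustrative, not official).
const componentSchema = {
  WeatherCard: { city: "string", tempC: "number" },
  BookingForm: { date: "string", guests: "number" },
};

function buildSystemPrompt(schema: Record<string, unknown>): string {
  return [
    "You may respond with UI by emitting JSON of the form",
    '{ "type": <component>, "props": <props> }.',
    "Available components and their prop types:",
    JSON.stringify(schema, null, 2),
  ].join("\n");
}
```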

2. The Generation Phase

Instead of hallucinating HTML tags (which is a security risk), the Agent generates Structured JSON adhering strictly to your schema. This ensures that the AI can only "speak" in valid UI components that you have explicitly allowed.
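A sketch of that guarantee, assuming a simple hand-rolled prop-type map (a real implementation might use JSON Schema validation instead):

```typescript
// Illustrative schema enforcement: reject any node whose type or props
// fall outside the allowed schema. Required-prop checks are omitted
// for brevity; this only type-checks the props that were supplied.
const schema: Record<string, Record<string, string>> = {
  WeatherCard: { city: "string", tempC: "number" },
};

function validate(node: { type: string; props: Record<string, unknown> }): boolean {
  const propTypes = schema[node.type];
  if (!propTypes) return false; // component not in the allowlist
  return Object.entries(node.props).every(
    ([key, value]) => propTypes[key] === typeof value
  );
}
```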

3. The Rendering Phase

Your frontend application receives this JSON stream. A dedicated A2UI Renderer mapped to your design system (Tailwind, Material UI, Shadcn) instantly hydrates these JSON nodes into real, interactive DOM elements. The result is a Native User Interface that feels indistinguishable from handwritten code.
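In spirit, the renderer is a lookup from node type to your design system. The Tailwind-flavored class strings below are made-up examples, not A2UI output:

```typescript
// Illustrative hydration through a design-system mapping. Class names
// are hypothetical examples of your own trusted styles.
const designSystem: Record<string, string> = {
  Button: "rounded-lg bg-blue-600 px-4 py-2 text-white",
  Card: "rounded-xl border p-6 shadow-sm",
};

function hydrate(node: { type: string; children?: string }): string {
  const classes = designSystem[node.type];
  if (classes === undefined) throw new Error(`Unmapped type: ${node.type}`);
  return `<div class="${classes}">${node.children ?? ""}</div>`;
}
```

Because the classes live in your codebase, the output is always on-brand regardless of which model generated the JSON.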

Generic HTML vs. A2UI Protocol

| Standard LLM Output | A2UI Standard |
| --- | --- |
| Generates raw HTML/JS | Generates typed JSON |
| Security risk: high (XSS injection) | Enterprise safe: zero eval execution |
| Performance: slow (DOM parsing) | Visuals: 60 FPS native render |
| Maintenance: hard (unpredictable) | Scalability: design system sync |
💡

Why this matters for SEO

Implementing A2UI makes your AI application accessible, indexable, and faster, improving your Core Web Vitals and search rankings compared to iframe-based solutions.

Why Developers Choose A2UI

The standard interface for building production-grade Agentic Applications. Stop fighting with prompt engineering for HTML—start using a deterministic protocol.

🔒

Type-Safe Protocol

End-to-end type safety between your Agent's thoughts and your UI. If the LLM hallucinates an invalid prop, A2UI catches it before render.

🎨

Design System Native

Don't settle for generic AI styles. A2UI uses your existing Tailwind, Shadcn, or Material UI components so everything looks on-brand.

Streaming Support

Render UI components incrementally as the LLM generates them. Reduce perceived latency and keep users engaged with real-time feedback.
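A minimal sketch of how a client might surface components incrementally. This helper is hypothetical; brace counting ignores braces inside strings, so a production renderer would use a proper streaming JSON parser:

```typescript
// Illustrative incremental rendering: buffer streamed chunks and yield
// each complete top-level JSON object as soon as its closing brace
// arrives, instead of waiting for the full response.
function* completeNodes(chunks: Iterable<string>): Generator<any> {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    let depth = 0;
    let start = -1;
    for (let i = 0; i < buffer.length; i++) {
      if (buffer[i] === "{") {
        if (depth === 0) start = i;
        depth++;
      } else if (buffer[i] === "}") {
        depth--;
        if (depth === 0 && start >= 0) {
          yield JSON.parse(buffer.slice(start, i + 1));
          buffer = buffer.slice(i + 1); // drop the consumed object
          i = -1; // rescan the remaining buffer from the top
          start = -1;
        }
      }
    }
    // Anything left in `buffer` is an incomplete node awaiting the
    // next chunk.
  }
}
```

The first component can render while the model is still generating the second, which is where the perceived-latency win comes from.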

📱

Cross-Platform

One agent thought, everywhere. Render the same JSON response on Web (React/Vue), Mobile (React Native/Flutter), and Desktop.

🤖

Model Agnostic

Works with OpenAI GPT-4, Anthropic Claude 3.5, Google Gemini, Meta Llama 3, and any model capable of structured JSON output.

🔌

Framework Agnostic

Astro, Next.js, Remix, Nuxt, SvelteKit... A2UI is just a protocol standard. Implementation adapters exist for all modern stacks.

The Problem: Agents need to speak UI

Text-only interactions are slow and inefficient for complex tasks. Instead of a clunky back-and-forth about availability, A2UI allows agents to instantly render a bespoke booking interface.

Comparison: reserving a table via text chat vs. an A2UI interface

A2UI in Action

Watch an agent generate a complete landscape architect application interface from a single photo upload.

Trusted by Industry Leaders

"

A2UI was a great fit for Flutter's GenUI SDK because it ensures that every user, on every platform, gets a high quality native feeling experience.

V
Vijay Menon
Engineering Director, Dart & Flutter
"

It gives us the flexibility to let the AI drive the user experience in novel ways... Its declarative nature and focus on security allow us to experiment quickly and safely.

D
Dimitri Glazkov
Principal Engineer, Google Opal Team
"

A2UI changes that with a 'native-first' approach: Agents send a description of UI components, not code. Your app maps these to its own trusted design system, maintaining perfect brand consistency and security.

M
Minko Gechev
Product Lead, Angular @ Google

Frequently Asked Questions

Is A2UI safe to use?
Yes. Unlike solutions that generate and execute raw HTML or JavaScript (which pose severe XSS risks), A2UI is purely data-driven. The LLM produces JSON, and your client application strictly controls which components can be rendered.
Does it work with Next.js / the Vercel AI SDK?
Absolutely. A2UI is a protocol, not a framework lock-in. You can use it alongside the Vercel AI SDK, LangChain, or any other agent orchestration tool. We provide adapters for easy integration.
Is it totally free?
Yes. A2UI is an open-source standard (Apache 2.0). You can use it in personal and commercial projects without restriction. Check out the GitHub repository.