Migrating to the Latest A2UI and Flutter GenUI: A Step-by-Step Guide

Last updated: 2026-05-16 08:31:06 · Programming

Introduction

Generative UI (GenUI) is a design pattern where an agent not only generates content but also decides how that content is displayed and made interactive. For Flutter developers, implementing GenUI means using A2UI, an open protocol that defines how agents and renderers collaborate on UI composition and state. The Flutter team built the genui package to connect agents with a catalog of widgets via A2UI.

Recently, both the genui package and the A2UI protocol received major updates. Version 0.9 of the protocol shifts the philosophy from Structured Output First (where A2UI messages were streamed through structured output APIs) to a Prompt First approach, where agents embed blocks of JSON as plain text in their responses. This decouples UI generation from the model integration, giving you direct control over how your app talks to Large Language Models (LLMs).
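
To make the shift concrete, below is a minimal, hypothetical sketch of the app-side work that Prompt First implies: the model replies with plain text, and your code extracts any embedded JSON block before handing it to the renderer. The assumption that the agent wraps its A2UI message in a fenced JSON block is purely illustrative; the actual message format is defined by the A2UI specification.

// Illustrative helper (not part of the genui API): extracts the first
// fenced ```json ... ``` block from a plain-text model response.
String? extractA2uiJson(String response) {
  final match = RegExp(r'```json\s*([\s\S]*?)```').firstMatch(response);
  return match?.group(1)?.trim();
}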

If you are migrating an app from genui v0.7.0 to v0.9.0, this guide covers every necessary step, from dependency cleanup to wiring up the new chat loop.

What You Need

  • An existing Flutter project using genui v0.7.0
  • Flutter SDK (3.x or later)
  • Familiarity with Dart and Flutter widgets
  • Access to an LLM API (e.g., Firebase AI, Google Generative AI, or any provider)
  • A code editor (VS Code or Android Studio recommended)

Step 1: Update Dependencies

Open your pubspec.yaml and bump the genui package to ^0.9.0. Remove any provider-specific packages like genui_dartantic, genui_google_generative_ai, or genui_firebase_ai—they are no longer needed.

dependencies:
  genui: ^0.9.0
  # Remove old provider packages:
  # genui_dartantic: ...
  # genui_google_generative_ai: ...
  # genui_firebase_ai: ...

After updating, run flutter pub get to install the new version.

Step 2: Remove Old ContentGenerator Classes

In v0.7.0, you likely used a ContentGenerator subclass (e.g., FirebaseAiContentGenerator) to wrap LLM interactions. In v0.9.0, ContentGenerator is removed entirely. Search your codebase for any references to ContentGenerator and delete them. Instead, you will now manage the LLM connection yourself.

For example, old code like this:

final generator = FirebaseAiContentGenerator(model: 'gemini-pro');

must be replaced with a direct call to your LLM provider’s API (see Step 4).

Step 3: Set Up the Engine – SurfaceController

The new architecture splits responsibilities into three layers: Engine, Transport, and Facade. Start by creating a SurfaceController (the engine) that manages UI state and rendering.

import 'package:genui/genui.dart';

final surfaceController = SurfaceController();

You can configure it with custom renderers or use the default ones provided by the package.

Step 4: Implement Transport – A2uiTransportAdapter

The transport layer streams messages between the agent and renderer. Use the built-in A2uiTransportAdapter and implement the required methods. This is where you connect to your LLM and handle the message exchange.

class MyTransportAdapter implements A2uiTransportAdapter {
  @override
  Future<String> sendMessage(String message) async {
    // Call your LLM API here (e.g., Firebase AI, Google Generative AI, etc.)
    final response = await yourLLMService.generate(message);
    return response;
  }

  @override
  Stream<String> messageStream() {
    // Optional: implement streaming if your LLM supports it
    return Stream.empty();
  }
}

Because the framework no longer wraps your agent, you have full control over prompt construction, retry logic, and error handling. You can use any provider and customize generation settings (temperature, topP, etc.) without going through an intermediary API.
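
For illustration, the sendMessage body above could grow into the sketch below. The yourLLMService placeholder is the same stand-in used earlier, and the temperature and topP parameters are assumptions about your provider's SDK; the retry pattern is the part that carries over regardless of provider.

@override
Future<String> sendMessage(String message) async {
  const maxAttempts = 3;
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // yourLLMService is a placeholder for your provider SDK;
      // temperature/topP are assumed parameters you can now tune directly.
      return await yourLLMService.generate(
        message,
        temperature: 0.2,
        topP: 0.95,
      );
    } catch (e) {
      if (attempt == maxAttempts) rethrow;
      // Simple exponential backoff for transient failures.
      await Future.delayed(Duration(milliseconds: 200 * (1 << attempt)));
    }
  }
  throw StateError('unreachable');
}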

Step 5: Create Facade – Conversation

The Conversation class provides a high-level API for managing chat state. It ties together the engine and the transport.

final conversation = Conversation(
  surfaceController: surfaceController,
  transportAdapter: MyTransportAdapter(),
);

You can also manage chat history by passing a ChatHistory instance to Conversation.
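
For example, assuming a hypothetical history parameter (check the genui v0.9.0 API docs for the actual constructor signature):

final conversation = Conversation(
  surfaceController: surfaceController,
  transportAdapter: MyTransportAdapter(),
  history: ChatHistory(), // hypothetical parameter name and constructor
);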

Step 6: Wire Up Your Chat Loop

Now integrate the Conversation into your UI. Typically, you’ll use a ConversationBuilder widget that listens to state changes and rebuilds the UI accordingly.

ConversationBuilder(
  conversation: conversation,
  builder: (context, snapshot) {
    if (!snapshot.hasData) return CircularProgressIndicator();
    final messages = snapshot.data!;
    return ListView.builder(
      itemCount: messages.length,
      itemBuilder: (context, index) => MessageWidget(message: messages[index]),
    );
  },
);

To send a new user message:

conversation.sendMessage('Hello');

You are now responsible for handling retries or errors—wrap the call in a try-catch if needed.
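
A minimal sketch, assuming submitUserMessage is a helper in your own widget code:

// Hypothetical helper in your own widget code.
void submitUserMessage(String text) {
  try {
    conversation.sendMessage(text);
  } catch (e) {
    // Surface the failure instead of crashing the chat flow; add retry
    // logic or a user-visible error state here as needed.
    debugPrint('Send failed: $e');
  }
}

If sendMessage returns a Future in your version of the package, await it inside the try block so that asynchronous errors are caught as well.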

Step 7: Test and Validate

Run your app and verify that:

  • Messages are exchanged correctly with the LLM.
  • UI updates as expected.
  • No dependency conflicts remain.

If you encounter errors, check that your A2uiTransportAdapter returns valid A2UI JSON blocks. Use the A2uiMessageParser utility to debug incoming data.

Tips and Best Practices

  • Leverage Decoupling: Now that the architecture is split, you can easily swap LLM providers or update rendering logic without touching the core chat flow.
  • Control History: Manage chat history explicitly. The Conversation class allows you to inject a custom ChatHistory instance, giving you full control over what the agent remembers.
  • Error Handling: Wrap your LLM calls in try-catch and implement retry logic (e.g., exponential backoff) for transient failures.
  • Think in “Prompt First”: With the new paradigm, include A2UI JSON directives inside your prompts as plain text. This makes your prompts more readable and easier to debug.
  • Test with a Simple Agent: Before integrating a complex LLM, test your setup with a mock transport that returns fixed A2UI JSON, as in the sketch below.
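
As a concrete example of that last tip, a mock transport can mirror the adapter shape from Step 4 and return a canned response. The placeholder string below is not a valid A2UI message; substitute JSON captured from a working session so the renderer has something real to draw.

class MockTransportAdapter implements A2uiTransportAdapter {
  // Replace with a real A2UI JSON block captured from a working session.
  static const _canned = '{"placeholder": "fixed A2UI JSON goes here"}';

  @override
  Future<String> sendMessage(String message) async => _canned;

  @override
  Stream<String> messageStream() => const Stream.empty();
}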

By following these steps, your app will be fully migrated to the latest genui and A2UI protocol, ready to take advantage of the new flexibility and control.