Stream Agent Real-Time UI

User Intent

"How do I build a real-time streaming chat UI with AI responses?"

Operation

SDK Method: streamAgent()
Use Case: Real-time streaming chat interface with tool calling


Complete Code Example (TypeScript)

import { Graphlit } from 'graphlit-client';
import { AgentStreamEvent } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

// UI State
let conversationId: string | undefined;
let currentMessage = '';
let isTyping = false;

await graphlit.streamAgent(
  'What are the key findings in the research papers?',
  async (event: AgentStreamEvent) => {
    switch (event.type) {
      case 'conversation_started':
        // UI: Store conversation ID, show typing indicator
        conversationId = event.conversationId;
        isTyping = true;
        updateUI({ showTyping: true });
        break;
        
      case 'message_update':
        if (event.isStreaming) {
          // UI: Append text chunk to message bubble (real-time)
          currentMessage += event.message.message;
          updateMessageBubble(currentMessage);
        } else {
          // UI: Final event carries the complete message; don't append it again
          isTyping = false;
          finalizeMessage({
            text: event.message.message,
            tokens: event.message.tokens,
            model: event.message.model
          });
          currentMessage = ''; // Reset for next message
        }
        break;
        
      case 'tool_update':
        // UI: Show tool execution card with status
        updateToolCard({
          name: event.toolCall.name,
          status: event.status, // 'preparing' | 'executing' | 'completed' | 'failed'
          arguments: event.toolCall.arguments,
          result: event.result,
          error: event.error
        });
        break;
        
      case 'reasoning_update':
        // UI: Show expandable "Thinking..." section (Claude extended thinking)
        updateReasoningBlock({
          content: event.reasoning,
          isVisible: true
        });
        break;
        
      case 'conversation_completed':
        // UI: Hide typing indicator, show token count badge
        isTyping = false;
        updateUI({
          showTyping: false,
          metadata: {
            tokens: event.message.tokens,
            model: event.message.model,
            throughput: event.message.throughput
          }
        });
        break;
        
      case 'error':
        // UI: Show error toast, enable retry button
        showError({
          message: event.error.message,
          code: event.error.code,
          recoverable: event.error.recoverable
        });
        if (event.error.recoverable) {
          showRetryButton();
        }
        break;
    }
  },
  conversationId,  // Continue existing conversation
  undefined,       // Use default specification
  [],              // Tools (optional)
  {}               // Tool handlers (optional)
);

Event Types → UI Patterns

1. conversation_started

When: Conversation begins
UI Actions:

  • Store conversationId for subsequent turns

  • Show typing indicator

  • Scroll to bottom of chat

2. message_update

When: Message chunks arrive (streaming) and when the message completes
UI Actions:

  • Append each chunk to message bubble

  • When isStreaming is false, finalize the message (the final event carries the complete text)

3. tool_update

When: AI calls a tool/function
UI Actions:

  • Show tool execution card

  • Update status: preparing → executing → completed/failed

  • Display result or error

4. reasoning_update

When: Model is thinking (Claude extended thinking)
UI Actions:

  • Show expandable "Thinking..." section

  • Display reasoning content

5. conversation_completed

When: Full response is complete
UI Actions:

  • Hide typing indicator

  • Enable input field

  • Show token count badge

  • Update conversation metadata

6. error

When: An error occurs
UI Actions:

  • Show error toast/banner

  • If recoverable: true, show retry button

  • Log error for debugging


Multi-Turn Conversation


Tool Calling UI


Cancellation


Production Pattern (React Example)


Key Differences: streamAgent vs promptConversation

| Feature       | streamAgent             | promptConversation             |
|---------------|-------------------------|--------------------------------|
| Streaming     | ✅ Real-time chunks     | ❌ Waits for complete response |
| Tool calling  | ✅ Supported            | ❌ Not supported               |
| Citations     | ❌ Not available        | ✅ Returns citations           |
| UI complexity | Higher (event handling) | Lower (single response)        |
| Use case      | Chat UI, streaming      | Simple Q&A, citations          |

When to use streamAgent:

  • Building chat UI with real-time streaming

  • Need tool/function calling

  • Want to show AI "thinking" process

When to use promptConversation:

  • Simple Q&A without streaming

  • Need citations/sources

  • Don't need tool calling


Common Issues

Issue: Events arrive out of order
Solution: The SDK delivers events in order. If UI state appears out of order, check that your event handler isn't mutating shared state from overlapping async work (e.g. two in-flight streams).

Issue: Message chunks duplicated
Solution: Only append text while isStreaming is true; the final message_update arrives with isStreaming: false and carries the complete message.

Issue: Conversation ID not available
Solution: Wait for the conversation_started event before reading conversationId.

Issue: Tools not executing
Solution: Verify that both the tools array and the toolHandlers object are passed, and that each handler key matches a tool name.

Issue: Can't cancel streaming
Solution: Pass an abortSignal in the options parameter and call abort() on its AbortController.
