Stream Agent
User Intent
"I want real-time streaming responses from AI with tool calling support"
Operation
SDK Method: `graphlit.streamAgent()`
GraphQL: N/A (uses a streaming protocol)
Entity Type: Conversation
Common Use Cases: Chat UI with streaming, real-time AI responses, tool calling, agentic workflows
TypeScript (Canonical)
```typescript
import { Graphlit } from 'graphlit-client';
import { Types, AgentStreamEvent } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

await graphlit.streamAgent(
  'What are the key findings in the uploaded documents?', // prompt
  async (event: AgentStreamEvent) => {
    // Handle different event types
    switch (event.type) {
      case 'conversation_started':
        console.log(`Conversation ID: ${event.conversationId}`);
        break;
      case 'message_update':
        // Streaming message chunks
        process.stdout.write(event.message.message);
        if (!event.isStreaming) {
          console.log('\n[Message complete]');
        }
        break;
      case 'tool_update':
        console.log(`Tool: ${event.toolCall.name} - ${event.status}`);
        break;
      case 'conversation_completed':
        console.log(`\nTotal tokens: ${event.usage?.tokens || 0}`);
        break;
    }
  },
  undefined, // conversationId (optional - creates new if omitted)
  undefined, // specification (optional - uses project default)
  [],        // tools (optional - for function calling)
  {}         // toolHandlers (optional - tool implementations)
);
```

Parameters
streamAgent
- `prompt` (string): User's question or message
- `eventHandler` (function): Callback for streaming events
  - Called for each event (message chunks, tool calls, completion)
  - Must be an async function
- `conversationId` (string): Optional conversation ID
  - If omitted: creates a new conversation
  - If provided: continues the existing conversation
- `specification` (EntityReferenceInput): Optional LLM configuration
- `tools` (ToolDefinitionInput[]): Tools for function calling
- `toolHandlers` (Record<string, Function>): Tool implementations
Response (via Events)
Stream events sent to the `eventHandler`:

- `conversation_started` — emitted once with the new conversation's ID
- `message_update` — streaming message chunks; `isStreaming` is `false` on the final, complete message
- `tool_update` — tool call status changes (preparing → executing → completed/failed)
- `conversation_completed` — final event, including token usage
Developer Hints
Streaming vs Non-Streaming

|             | streamAgent              | promptAgent              |
| ----------- | ------------------------ | ------------------------ |
| Real-time   | Token-by-token           | Wait for complete        |
| Tool calling | Supported               | Use promptAgent          |
| Use case    | Chat UI, streaming       | Simple Q&A               |
| Complexity  | Higher (event handling)  | Lower (single response)  |
Event Handler Must Be Async
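`streamAgent` expects the callback to return a promise, so declare it `async` even when the body is synchronous. A minimal sketch (the simplified event type below is an illustration, not the SDK's full `AgentStreamEvent`):

```typescript
// Simplified event shape for illustration; the real AgentStreamEvent
// from graphlit-client carries more fields.
type StreamEvent = { type: string; conversationId?: string };

let conversationId: string | undefined;

// Correct: an async handler, so the SDK can await each event in order.
const eventHandler = async (event: StreamEvent): Promise<void> => {
  if (event.type === 'conversation_started') {
    conversationId = event.conversationId; // capture the new conversation's ID
  }
};
```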
Message Streaming Pattern
Messages stream in chunks, then complete:
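A sketch of the pattern, using the `message_update` fields from the canonical example above (`event.message.message` and `event.isStreaming`):

```typescript
// Illustrative message_update shape, simplified from the canonical example.
type MessageUpdate = {
  type: 'message_update';
  message: { message: string };
  isStreaming: boolean;
};

// Renders the latest chunk; returns true once the final message has arrived
// (the update with isStreaming: false).
function handleMessageUpdate(
  event: MessageUpdate,
  onChunk: (text: string) => void
): boolean {
  onChunk(event.message.message);
  return !event.isStreaming;
}
```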
🛠 Tool Calling Lifecycle
Tools go through: preparing → executing → completed/failed:
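One way to surface those statuses in a `tool_update` handler (the status strings follow the lifecycle named above; the helper itself is illustrative):

```typescript
// Tool call statuses per the lifecycle above.
type ToolStatus = 'preparing' | 'executing' | 'completed' | 'failed';

// Maps a tool_update into a human-readable progress line.
function describeToolUpdate(name: string, status: ToolStatus): string {
  switch (status) {
    case 'preparing':
      return `${name}: building arguments`;
    case 'executing':
      return `${name}: running handler`;
    case 'completed':
      return `${name}: finished`;
    case 'failed':
      return `${name}: errored`;
  }
}
```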
Variations
1. Basic Streaming Chat
Simple streaming without tools:
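A minimal sketch following the canonical example's signature; only the prompt and event handler are passed, so the remaining parameters take their defaults:

```typescript
import { Graphlit } from 'graphlit-client';
import { AgentStreamEvent } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

await graphlit.streamAgent(
  'Summarize the latest uploaded report.',
  async (event: AgentStreamEvent) => {
    if (event.type === 'message_update') {
      process.stdout.write(event.message.message); // print chunks as they arrive
    }
  }
  // conversationId, specification, tools, and toolHandlers are all omitted:
  // a new conversation is created using the project's default model.
);
```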
2. Multi-Turn Streaming Conversation
Continue conversation across multiple prompts:
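A sketch of the multi-turn pattern: capture the ID from the `conversation_started` event on the first call, then pass it as the third argument to continue the same conversation:

```typescript
import { Graphlit } from 'graphlit-client';
import { AgentStreamEvent } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();
let conversationId: string | undefined;

const handler = async (event: AgentStreamEvent) => {
  if (event.type === 'conversation_started') {
    conversationId = event.conversationId; // capture on the first turn
  }
  if (event.type === 'message_update' && !event.isStreaming) {
    console.log(event.message.message); // log each complete message
  }
};

// Turn 1: no conversationId, so a new conversation is created.
await graphlit.streamAgent('What topics does the document cover?', handler);

// Turn 2: pass the captured ID to continue the same conversation.
await graphlit.streamAgent('Go deeper on the first topic.', handler, conversationId);
```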
3. Streaming with Tool Calling
Implement tools for function calling:
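A sketch wiring `tools` and `toolHandlers` together. The exact `ToolDefinitionInput` fields shown here (`name`, `description`, `schema`) are assumptions; check the generated SDK types. Handlers are keyed by tool name, and the `currentWeather` implementation is a stub:

```typescript
import { Graphlit } from 'graphlit-client';
import { AgentStreamEvent } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

// Tool definition with a JSON Schema for its arguments (field names assumed).
const tools = [
  {
    name: 'currentWeather',
    description: 'Get the current weather for a city',
    schema: JSON.stringify({
      type: 'object',
      properties: { city: { type: 'string' } },
      required: ['city'],
    }),
  },
];

// Implementations keyed by tool name; the return value is sent back to the model.
const toolHandlers = {
  currentWeather: async ({ city }: { city: string }) => {
    return { city, tempC: 21, conditions: 'sunny' }; // stub implementation
  },
};

await graphlit.streamAgent(
  'What is the weather in Paris?',
  async (event: AgentStreamEvent) => {
    if (event.type === 'tool_update') {
      console.log(`Tool: ${event.toolCall.name} - ${event.status}`);
    }
  },
  undefined, // conversationId
  undefined, // specification
  tools,
  toolHandlers
);
```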
4. Streaming with Custom Model
Use specific LLM:
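A sketch of overriding the project default. It assumes a specification entity was created beforehand (e.g. via `createSpecification`) and that `EntityReferenceInput` is a reference by ID:

```typescript
import { Graphlit } from 'graphlit-client';
import { AgentStreamEvent } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

// ID of an existing specification entity (placeholder value).
const specificationId = 'YOUR_SPECIFICATION_ID';

await graphlit.streamAgent(
  'Explain the main argument of the document.',
  async (event: AgentStreamEvent) => {
    if (event.type === 'message_update') {
      process.stdout.write(event.message.message);
    }
  },
  undefined,               // conversationId: start a new conversation
  { id: specificationId }  // specification: overrides the project default LLM
);
```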
5. Collect Full Message from Stream
Buffer chunks to get complete message:
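One way to collect the complete text: keep the latest `message_update` text and mark completion when `isStreaming` is `false` (per "Common Issues" below, that final update carries the full message). The collector here is illustrative:

```typescript
// message_update shape from the canonical example (simplified).
type MessageUpdate = {
  type: 'message_update';
  message: { message: string };
  isStreaming: boolean;
};

// Keeps the latest text seen; the update with isStreaming: false is complete.
function createMessageCollector() {
  let text = '';
  let complete = false;
  return {
    onEvent(event: MessageUpdate) {
      text = event.message.message;
      if (!event.isStreaming) complete = true;
    },
    result() {
      return { text, complete };
    },
  };
}
```

Wire `collector.onEvent` into the `streamAgent` event handler for `message_update` events, then read `result()` after the call resolves.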
6. Track Streaming Metrics
Monitor performance during streaming:
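A small illustrative tracker for two metrics that matter in streaming UIs, time to first chunk and total duration; the clock is injectable so it can be tested deterministically:

```typescript
// Tracks simple streaming metrics; call onChunk() for each message_update.
function createStreamMetrics(now: () => number = Date.now) {
  const start = now();
  let firstChunkAt: number | undefined;
  let chunkCount = 0;
  return {
    onChunk() {
      chunkCount += 1;
      if (firstChunkAt === undefined) firstChunkAt = now();
    },
    summary() {
      return {
        chunkCount,
        timeToFirstChunkMs:
          firstChunkAt === undefined ? undefined : firstChunkAt - start,
        totalMs: now() - start,
      };
    },
  };
}
```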
Common Issues
**Issue:** Events not firing / no response
**Solution:** Ensure `eventHandler` is async. Check for errors in the console/logs.

**Issue:** Messages streaming out of order
**Solution:** This shouldn't happen; if it does, make sure the event handler isn't mutating shared state incorrectly.

**Issue:** Tool calls not executing
**Solution:** Verify that both `tools` and `toolHandlers` are provided, and that the tool schema is valid JSON.

**Issue:** "Streaming not supported" error
**Solution:** Some models don't support streaming; the client falls back to `promptAgent` automatically.

**Issue:** Conversation ID not available immediately
**Solution:** Wait for the `conversation_started` event to get the conversation ID.

**Issue:** Multiple `message_update` events with the same text
**Solution:** Check the `isStreaming` flag; the final, complete message arrives with `isStreaming: false`.
Production Example
SSE (Server-Sent Events) pattern:
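A sketch of the relay helper: each streamed event is formatted as an SSE frame and written to the HTTP response. The framing function is standard SSE; the server wiring in the comments is illustrative:

```typescript
// Minimal SSE helper for relaying streamAgent events to a browser.
// In an HTTP handler you would set the response headers:
//   Content-Type: text/event-stream
//   Cache-Control: no-cache
//   Connection: keep-alive
// and res.write() one frame per stream event, ending the response
// after conversation_completed.

// Each SSE frame: an `event:` line, a `data:` line of JSON, then a blank line.
function toSseFrame(eventType: string, payload: unknown): string {
  return `event: ${eventType}\ndata: ${JSON.stringify(payload)}\n\n`;
}

// Example: forwarding a message chunk (field names follow the canonical example).
const frame = toSseFrame('message_update', { text: 'Hello', isStreaming: true });
```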