Prompt Agent

User Intent

"I want to use AI with tool calling but don't need real-time streaming"

Operation

  • SDK Method: graphlit.promptAgent()

  • GraphQL: N/A (composite operation)

  • Entity Type: Conversation

  • Common Use Cases: Batch processing, background tasks, tool calling without streaming, server-side AI

TypeScript (Canonical)

import { Graphlit } from 'graphlit-client';
import { ToolDefinitionInput } from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

// Define tools
const tools: ToolDefinitionInput[] = [
  {
    name: 'searchDocuments',
    description: 'Search through ingested documents',
    schema: JSON.stringify({
      type: 'object',
      properties: {
        query: { type: 'string', description: 'Search query' }
      },
      required: ['query']
    })
  }
];

// Implement tool handlers
const toolHandlers: Record<string, Function> = {
  searchDocuments: async (args: { query: string }) => {
    const results = await graphlit.queryContents({
      search: args.query,
      limit: 5
    });
    
    // Generated GraphQL types are nullable, so guard before mapping
    const contents = results.contents?.results ?? [];
    return {
      count: contents.length,
      results: contents.map(c => c?.name)
    };
  }
};

// Prompt agent with tools
const result = await graphlit.promptAgent(
  'Find information about API rate limiting',  // prompt
  undefined,  // conversationId (optional - creates new)
  undefined,  // specification (optional - uses default)
  tools,      // tool definitions
  toolHandlers // tool implementations
);

console.log(`AI Response: ${result.message}`);
console.log(`Tools called: ${result.toolCalls?.length || 0}`);
console.log(`Tokens used: ${result.usage?.totalTokens || 0}`);

Parameters

promptAgent

  • prompt (string): User's question or message

  • conversationId (string): Optional conversation ID

    • If omitted: Creates new conversation

    • If provided: Continues existing conversation

  • specification (EntityReferenceInput): Optional LLM configuration

  • tools (ToolDefinitionInput[]): Tool definitions for function calling

  • toolHandlers (Record<string, Function>): Tool implementations

    • Key = tool name

    • Value = async function that executes tool

Response
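
promptAgent returns a single result object once the agent loop completes. A hedged sketch of the fields used throughout this page, assuming the result from the canonical example above (conversationId is an assumed field name; confirm the exact shape against your SDK version's return type):

const { message, conversationId, toolCalls, usage } = result;

console.log(message);                  // final assistant response text
console.log(conversationId);           // conversation created or continued (assumed field)
console.log(toolCalls?.length || 0);   // tool calls made during the agent loop
console.log(usage?.totalTokens || 0);  // token usage across all rounds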

Developer Hints

promptAgent vs streamAgent

| Feature | promptAgent | streamAgent |
| --- | --- | --- |
| Streaming | Returns complete response | Token-by-token streaming |
| Tool calling | Supported | Supported |
| Latency | Higher (wait for full response) | Lower (first token faster) |
| Use case | Background, batch, server-side | Chat UI, real-time |
| Result | Single return value | Event callbacks |

Tool Execution is Automatic

Unlike promptConversation, which does not call tools, promptAgent automatically:

  1. Detects when tools should be called (based on prompt)

  2. Calls your tool handlers

  3. Sends results back to LLM

  4. Returns final response

Checking Tool Calls
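
To inspect which tools ran, iterate over result.toolCalls. A minimal sketch; the name and arguments field names are assumptions based on typical tool-call shapes, so confirm them against your SDK's return type:

if (result.toolCalls?.length) {
  for (const call of result.toolCalls) {
    console.log(`Tool: ${call.name}`);
    console.log(`Arguments: ${call.arguments}`);  // JSON string of the arguments the LLM supplied
  }
} else {
  console.log('No tools were called');
}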

⚡ Performance Metrics

Rounds: Number of LLM calls (1 = no tools, 2+ = tool calling with follow-ups)

Variations

1. Simple Agent without Tools

Use as a synchronous alternative to promptConversation:
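
A minimal sketch: with tools and toolHandlers omitted, promptAgent behaves like a plain one-shot completion.

const result = await graphlit.promptAgent(
  'Summarize the key themes across my ingested content'
);

console.log(result.message);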

2. Multi-Tool Agent

Provide multiple tools for complex tasks:
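
A sketch that registers two tools: the searchDocuments tool from the canonical example plus a purely local helper (getCurrentDate, a hypothetical tool used here only for illustration). The agent decides which tools to call based on the prompt.

const multiTools: ToolDefinitionInput[] = [
  ...tools,  // searchDocuments from the canonical example
  {
    name: 'getCurrentDate',
    description: 'Get the current date in ISO format',
    schema: JSON.stringify({ type: 'object', properties: {} })
  }
];

const multiHandlers: Record<string, Function> = {
  ...toolHandlers,
  getCurrentDate: async () => ({ date: new Date().toISOString() })
};

const result = await graphlit.promptAgent(
  'Find documents about rate limiting and note today\'s date in your answer',
  undefined,
  undefined,
  multiTools,
  multiHandlers
);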

3. Multi-Turn Conversation with Tools

Continue conversation across multiple prompts:
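
A sketch assuming the result exposes the ID of the conversation it created (result.conversationId is an assumed field; see Common Issues below). Pass that ID back in so the agent keeps context and tool results across turns.

// First turn: no conversationId, so a new conversation is created
const first = await graphlit.promptAgent(
  'Search for documents about API rate limiting',
  undefined,
  undefined,
  tools,
  toolHandlers
);

// Second turn: reuse the conversation so the agent remembers the first answer
const second = await graphlit.promptAgent(
  'Which of those documents is the most recent?',
  first.conversationId,  // assumed field; verify against your SDK version
  undefined,
  tools,
  toolHandlers
);

console.log(second.message);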

4. Agent with Custom Model

Use specific LLM:
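
A sketch assuming you have already created a completion specification (via createSpecification or the Graphlit dashboard) and have its ID; promptAgent accepts it as an EntityReferenceInput.

const result = await graphlit.promptAgent(
  'Find information about API rate limiting',
  undefined,
  { id: 'YOUR_SPECIFICATION_ID' },  // hypothetical ID of an existing specification
  tools,
  toolHandlers
);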

5. Error Handling for Tool Failures

Handle tool execution errors gracefully:
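
A sketch: catch failures inside the tool handler and return a structured error payload, so the LLM can acknowledge the problem in its answer instead of the whole promptAgent call throwing.

const safeHandlers: Record<string, Function> = {
  searchDocuments: async (args: { query: string }) => {
    try {
      const results = await graphlit.queryContents({ search: args.query, limit: 5 });
      const contents = results.contents?.results ?? [];
      return { count: contents.length, results: contents.map(c => c?.name) };
    } catch (error) {
      // Returning an error payload lets the LLM explain the failure gracefully
      return { error: `Search failed: ${error instanceof Error ? error.message : String(error)}` };
    }
  }
};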

6. Batch Processing with promptAgent

Process multiple queries in parallel:
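
A sketch using Promise.all; since promptAgent waits for the full response, parallelizing independent queries is the main way to cut wall-clock time. Each call here creates its own conversation.

const queries = [
  'Find information about API rate limiting',
  'Find information about authentication',
  'Find information about webhooks'
];

const results = await Promise.all(
  queries.map(query =>
    graphlit.promptAgent(query, undefined, undefined, tools, toolHandlers)
  )
);

results.forEach((res, i) => {
  console.log(`Q: ${queries[i]}`);
  console.log(`A: ${res.message}`);
});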

Common Issues

Issue: Tool not being called even though it seems relevant
Solution: Ensure the tool description clearly explains when and why to use it. Add more specific use cases to the description.

Issue: Tool called but result not used in response
Solution: Check that the tool handler returns data in the expected format. Log tool results to debug.

Issue: toolHandlers function not executing
Solution: Verify the function name matches the tool name exactly (case-sensitive). Ensure the function is async.

Issue: Response takes longer than expected
Solution: Tools are called synchronously, so slow tools (API calls, DB queries) increase total time. Use streamAgent for better UX.

Issue: Conversation ID changes unexpectedly
Solution: If conversationId is undefined, promptAgent creates a new conversation. Always pass conversationId for multi-turn use.

Issue: Tool parameters not matching schema
Solution: The LLM may provide parameters differently than expected. Add validation in the tool handler:
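
A sketch of defensive parsing inside a handler, validating the LLM-provided arguments before trusting them:

const validatedHandlers: Record<string, Function> = {
  searchDocuments: async (args: { query?: unknown }) => {
    // Validate the argument shape before using it
    if (typeof args.query !== 'string' || args.query.trim().length === 0) {
      return { error: 'Expected a non-empty string "query" parameter' };
    }

    const results = await graphlit.queryContents({ search: args.query, limit: 5 });
    return { results: (results.contents?.results ?? []).map(c => c?.name) };
  }
};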

Production Example

Batch processing with tool calling:
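
A hedged end-to-end sketch: a background job that runs a list of research questions through promptAgent with the searchDocuments tool from the canonical example, collects answers, and records basic metrics. Names like processBatch are illustrative, not part of the SDK.

async function processBatch(questions: string[]) {
  const answers: Array<{ question: string; answer: string; tokens: number }> = [];

  // Sequential on purpose, to stay within provider rate limits
  for (const question of questions) {
    const result = await graphlit.promptAgent(
      question,
      undefined,
      undefined,
      tools,        // searchDocuments tool from the canonical example
      toolHandlers
    );

    answers.push({
      question,
      answer: result.message,
      tokens: result.usage?.totalTokens || 0
    });
  }

  return answers;
}

const report = await processBatch([
  'What does our documentation say about API rate limits?',
  'How are webhooks authenticated?'
]);

console.table(report);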
