Prompt Agent
User Intent
"I want to use AI with tool calling but don't need real-time streaming"
Operation
SDK Method: graphlit.promptAgent()
GraphQL: N/A (composite operation)
Entity Type: Conversation
Common Use Cases: Batch processing, background tasks, tool calling without streaming, server-side AI
TypeScript (Canonical)
import { Graphlit } from 'graphlit-client';
import { ModelServiceTypes, SpecificationTypes, AnthropicModels, ToolDefinitionInput } from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Define tools
const tools: ToolDefinitionInput[] = [
  {
    name: 'searchDocuments',
    description: 'Search through ingested documents',
    schema: JSON.stringify({
      type: 'object',
      properties: {
        query: { type: 'string', description: 'Search query' }
      },
      required: ['query']
    })
  }
];
// Implement tool handlers
const toolHandlers: Record<string, Function> = {
  searchDocuments: async (args: { query: string }) => {
    const results = await graphlit.queryContents({
      search: args.query,
      limit: 5
    });
    return {
      count: results.contents.results.length,
      results: results.contents.results.map(c => c.name)
    };
  }
};
// Prompt agent with tools
const result = await graphlit.promptAgent(
  'Find information about API rate limiting', // prompt
  undefined,    // conversationId (optional - creates new)
  undefined,    // specification (optional - uses default)
  tools,        // tool definitions
  toolHandlers  // tool implementations
);
console.log(`AI Response: ${result.message}`);
console.log(`Tools called: ${result.toolCalls?.length || 0}`);
console.log(`Tokens used: ${result.usage?.totalTokens || 0}`);

Parameters
promptAgent
prompt (string): User's question or message
conversationId (string): Optional conversation ID
  - If omitted: Creates new conversation
  - If provided: Continues existing conversation
specification (EntityReferenceInput): Optional LLM configuration
tools (ToolDefinitionInput[]): Tool definitions for function calling
toolHandlers (Record<string, Function>): Tool implementations
  - Key = tool name
  - Value = async function that executes tool
Response
{
  conversationId: string;      // Conversation ID (new or existing)
  message: string;             // AI's complete response
  conversationMessage?: {      // Detailed message info
    message: string,
    role: 'ASSISTANT',
    tokens: number,
    model: string,
    completionTime: number
  },
  toolCalls?: ToolCall[];      // Tools that were called
  toolResults?: ToolResult[];  // Results from tool executions
  usage?: {                    // Token usage
    totalTokens: number,
    promptTokens: number,
    completionTokens: number,
    cost?: number
  },
  metrics?: {                  // Performance metrics
    totalTime: number,
    ttft: number,
    llmTime: number,
    toolTime: number,
    toolExecutions: number,
    rounds: number
  }
}

Developer Hints
promptAgent vs streamAgent

| Feature | promptAgent | streamAgent |
|---|---|---|
| Streaming | Returns complete response | Token-by-token streaming |
| Tool calling | Supported | Supported |
| Latency | Higher (wait for full response) | Lower (first token faster) |
| Use case | Background, batch, server-side | Chat UI, real-time |
| Result | Single return value | Event callbacks |
// Use promptAgent when:
// - Processing in background
// - Batch operations
// - Don't need streaming
// - Simpler code structure
// Use streamAgent when:
// - Chat UI with streaming
// - Need real-time progress
// - User-facing interactions

Tool Execution is Automatic
Unlike promptConversation, which does not execute tools, promptAgent automatically:
1. Detects when tools should be called (based on the prompt)
2. Calls your tool handlers
3. Sends results back to the LLM
4. Returns the final response
// You provide tools + handlers
const result = await graphlit.promptAgent(prompt, undefined, undefined, tools, toolHandlers);
// Graphlit automatically:
// 1. LLM decides to call searchDocuments
// 2. Calls your searchDocuments handler
// 3. Passes result back to LLM
// 4. Returns final response incorporating tool results

Checking Tool Calls
const result = await graphlit.promptAgent(prompt, undefined, undefined, tools, toolHandlers);
// Check which tools were called
if (result.toolCalls && result.toolCalls.length > 0) {
  console.log(`Tools called: ${result.toolCalls.length}`);
  result.toolCalls.forEach((toolCall, index) => {
    const toolResult = result.toolResults?.[index];
    console.log(`Tool: ${toolCall.name}`);
    console.log(`Args: ${JSON.stringify(toolCall.arguments)}`);
    console.log(`Result: ${JSON.stringify(toolResult?.result)}`);
    if (toolResult?.error) {
      console.error(`Error: ${toolResult.error}`);
    }
  });
}

⚡ Performance Metrics
const result = await graphlit.promptAgent(prompt, undefined, undefined, tools, toolHandlers);
console.log('Performance:');
console.log(` Total time: ${result.metrics?.totalTime}ms`);
console.log(` LLM time: ${result.metrics?.llmTime}ms`);
console.log(` Tool time: ${result.metrics?.toolTime}ms`);
console.log(` Tool executions: ${result.metrics?.toolExecutions}`);
console.log(`  Rounds: ${result.metrics?.rounds}`);

Rounds: Number of LLM calls (1 = no tools, 2+ = tool calling with follow-ups)
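The metrics above can drive simple monitoring. Below is a minimal sketch (the helper name `summarizeAgentRun` is hypothetical, not part of the SDK) that uses `rounds` and `toolTime` from the response shape documented above to report whether a run involved tool calling:

```typescript
// Hypothetical monitoring helper: summarizes a promptAgent result
// using the `metrics` and `toolCalls` fields from the Response shape.
function summarizeAgentRun(result: {
  metrics?: { totalTime: number; llmTime: number; toolTime: number; rounds: number };
  toolCalls?: unknown[];
}): string {
  const rounds = result.metrics?.rounds ?? 1;
  const usedTools = rounds > 1 || (result.toolCalls?.length ?? 0) > 0;
  // Share of wall-clock time spent inside tool handlers
  const toolShare = result.metrics && result.metrics.totalTime > 0
    ? Math.round((result.metrics.toolTime / result.metrics.totalTime) * 100)
    : 0;
  return usedTools
    ? `${rounds} LLM rounds, ${toolShare}% of time in tools`
    : 'single round, no tools called';
}
```

This makes slow agents easy to triage: a high tool-time share points at your handlers rather than the LLM.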
Variations
1. Simple Agent without Tools
Use as a non-streaming alternative to promptConversation:
const result = await graphlit.promptAgent(
  'What are the key findings in the document?'
  // No tools - just RAG
);
console.log(result.message);

2. Multi-Tool Agent
Provide multiple tools for complex tasks:
const tools = [
  {
    name: 'searchDocuments',
    description: 'Search ingested documents',
    schema: JSON.stringify({
      type: 'object',
      properties: {
        query: { type: 'string' }
      },
      required: ['query']
    })
  },
  {
    name: 'getCurrentDate',
    description: 'Get current date and time',
    schema: JSON.stringify({
      type: 'object',
      properties: {}
    })
  },
  {
    name: 'calculateSum',
    description: 'Add numbers together',
    schema: JSON.stringify({
      type: 'object',
      properties: {
        numbers: { type: 'array', items: { type: 'number' } }
      },
      required: ['numbers']
    })
  }
];
const toolHandlers = {
  searchDocuments: async (args: { query: string }) => {
    // Implementation
  },
  getCurrentDate: async () => {
    return { date: new Date().toISOString() };
  },
  calculateSum: async (args: { numbers: number[] }) => {
    return { sum: args.numbers.reduce((a, b) => a + b, 0) };
  }
};
const result = await graphlit.promptAgent(
  'Search for information about pricing, then calculate the total cost',
  undefined,
  undefined,
  tools,
  toolHandlers
);

3. Multi-Turn Conversation with Tools
Continue conversation across multiple prompts:
let conversationId: string | undefined;
// First turn
const result1 = await graphlit.promptAgent(
  'Search for recent news about AI',
  conversationId,
  undefined,
  tools,
  toolHandlers
);
conversationId = result1.conversationId;
console.log(result1.message);
// Second turn (with context from first)
const result2 = await graphlit.promptAgent(
  'Summarize the top 3 articles you found',
  conversationId, // Same conversation
  undefined,
  tools,
  toolHandlers
);
console.log(result2.message);

4. Agent with Custom Model
Use specific LLM:
// Create specification
const specResponse = await graphlit.createSpecification({
  name: 'Claude Sonnet',
  type: SpecificationTypes.Completion,
  serviceType: ModelServiceTypes.Anthropic,
  anthropic: {
    model: AnthropicModels.Claude_3_5_Sonnet
  }
});
// Use with promptAgent
const result = await graphlit.promptAgent(
  'Analyze this data and provide insights',
  undefined,
  { id: specResponse.createSpecification.id }, // Custom model
  tools,
  toolHandlers
);

5. Error Handling for Tool Failures
Handle tool execution errors gracefully:
const toolHandlers = {
  riskyOperation: async (args: any) => {
    try {
      // Operation that might fail
      const data = await fetchExternalAPI(args.url);
      return { success: true, data };
    } catch (error) {
      // Return error info for LLM to handle
      return {
        success: false,
        error: error.message
      };
    }
  }
};
const result = await graphlit.promptAgent(
  'Fetch data from external API',
  undefined,
  undefined,
  tools,
  toolHandlers
);
// Check if any tools failed
const failedTools = result.toolResults?.filter(tr => tr.error);
if (failedTools && failedTools.length > 0) {
  console.warn(`${failedTools.length} tools failed`);
  // LLM's response will acknowledge the failures
}
console.log(result.message);

6. Batch Processing with promptAgent
Process multiple queries in parallel:
const prompts = [
  'What is the main topic of document 1?',
  'Summarize document 2',
  'Extract key dates from document 3'
];
const results = await Promise.all(
  prompts.map(prompt =>
    graphlit.promptAgent(prompt, undefined, undefined, tools, toolHandlers)
  )
);
results.forEach((result, index) => {
  console.log(`Query ${index + 1}: ${result.message}`);
  console.log(`Tokens: ${result.usage?.totalTokens}`);
});

Common Issues
Issue: Tool not being called even though it seems relevant
Solution: Ensure the tool description clearly explains when and why to use it. Add more specific use cases to the description.
Issue: Tool called but result not used in response
Solution: Check that the tool handler returns data in the expected format. Log tool results to debug.
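When a result-format problem is suspected, a small logging wrapper makes every tool call visible without touching the handlers themselves. This is a hypothetical debugging helper (`withLogging` is not part of the SDK), sketched under the assumption that handlers take a single args object and return a promise:

```typescript
// Hypothetical debugging helper: wraps each tool handler so that every
// invocation logs its arguments and its returned result.
type ToolHandler = (args: any) => Promise<unknown>;

function withLogging(handlers: Record<string, ToolHandler>): Record<string, ToolHandler> {
  const wrapped: Record<string, ToolHandler> = {};
  for (const [name, handler] of Object.entries(handlers)) {
    wrapped[name] = async (args) => {
      console.log(`[tool] ${name} args:`, JSON.stringify(args));
      const result = await handler(args);
      console.log(`[tool] ${name} result:`, JSON.stringify(result));
      return result;
    };
  }
  return wrapped;
}

// Usage: pass the wrapped handlers to promptAgent in place of the originals.
// const result = await graphlit.promptAgent(prompt, undefined, undefined, tools, withLogging(toolHandlers));
```

The wrapper is transparent: results pass through unchanged, so it can be left in place during development and removed for production.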
Issue: toolHandlers function not executing
Solution: Verify function name matches tool name exactly (case-sensitive). Ensure function is async.
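Because the tool-name-to-handler match is case-sensitive, a preflight check before calling promptAgent catches mismatches early. A minimal sketch (the helper `findMissingHandlers` is hypothetical, not an SDK function):

```typescript
// Hypothetical preflight check: returns the names of tool definitions
// that have no matching handler key (the match is case-sensitive).
function findMissingHandlers(
  tools: { name: string }[],
  handlers: Record<string, unknown>
): string[] {
  return tools
    .map(t => t.name)
    .filter(name => typeof handlers[name] !== 'function');
}

const missing = findMissingHandlers(
  [{ name: 'searchDocuments' }],
  { searchdocuments: async () => ({}) } // wrong case: handler will never run
);
// missing === ['searchDocuments']
```

Running this check at startup (and throwing if `missing` is non-empty) turns a silent no-op at request time into an immediate, debuggable failure.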
Issue: Response takes longer than expected
Solution: Tool handlers run inline as part of the request, so slow tools (API calls, DB queries) increase total time. Use streamAgent for better perceived latency.
Issue: Conversation ID changes unexpectedly
Solution: If conversationId is undefined, promptAgent creates a new conversation. Always pass conversationId for multi-turn use.
Issue: Tool parameters not matching schema
Solution: The LLM may provide parameters differently than expected. Add validation in the tool handler:
const toolHandlers = {
  myTool: async (args: any) => {
    if (!args.requiredParam) {
      return { error: 'Missing required parameter' };
    }
    // ... rest of implementation
  }
};

Production Example
Batch processing with tool calling:
const result = await graphlit.promptAgent(
  userPrompt,
  conversationId,
  { id: specificationId }, // specification expects an EntityReferenceInput
  tools as ToolDefinitionInput[],
  toolHandlers
);
// Track conversation ID (may be created by promptAgent)
if (result.conversationId && result.conversationId !== conversationId) {
  conversationId = result.conversationId;
  console.log(`New conversation created: ${conversationId}`);
}
// Log tool execution
if (result.toolCalls && result.toolCalls.length > 0) {
  console.log(`Executed ${result.toolCalls.length} tool calls:`);
  for (const toolCall of result.toolCalls) {
    const toolResult = result.toolResults?.find(tr => tr.name === toolCall.name);
    console.log(`  - ${toolCall.name}: ${toolResult?.error ? 'failed' : 'completed'}`);
    if (toolResult?.error) {
      console.error(`    Error: ${toolResult.error}`);
    }
  }
}
// Return response with metrics
return {
  message: result.message,
  tokens: result.usage?.totalTokens,
  cost: result.usage?.cost,
  duration: result.metrics?.totalTime
};