Quickstart: Your First Agent
Build an AI agent with semantic memory in 7 minutes
⏱️ Time: 7 minutes 🎯 Level: Beginner 💻 SDK: All SDKs (Step 4 streaming: TypeScript only)
What You'll Build
An AI agent that:
✅ Ingests documents into semantic memory
✅ Searches by meaning (not just keywords)
✅ Answers questions with citations
✅ Streams responses in real-time (TypeScript)
✅ Calls tools to extend capabilities
Prerequisites
Required setup:
Sign up (30 seconds)
Create project (1 minute)
Configure credentials (1 minute)
✅ Verify: Run hello.ts from the credentials page
For Step 4 (TypeScript streaming): OpenAI API key
Install dependencies:
npm install graphlit-client dotenv
npm install openai # For Step 4 only

Step 1: Ingest Content
Add a document to semantic memory:
import 'dotenv/config'; // Load Graphlit credentials from .env
import { Graphlit } from 'graphlit-client';
const graphlit = new Graphlit();
async function main() {
const content = await graphlit.ingestUri(
'https://arxiv.org/pdf/1706.03762.pdf',
'Attention Paper',
undefined,
undefined,
true, // Wait for processing to complete
);
console.log(`✅ Document ingested: ${content.ingestUri.id}`);
}
main();

What happens: Graphlit downloads the PDF, extracts the text, generates embeddings, and stores the result in semantic memory.
Expected output:
✅ Document ingested: 01234567-89ab-cdef-0123-456789abcdef
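If you would rather not block on the fifth isSynchronous argument, you can ingest without waiting and poll instead. This is a sketch: the isContentDone response shape shown here is an assumption, so check it against your SDK version.

```typescript
// Polling alternative to isSynchronous: ingest without waiting, then poll
// until processing finishes. The `client` parameter only needs the one call
// used here, which also makes the helper easy to test with a stub.
async function waitForContent(
  client: { isContentDone(id: string): Promise<any> },
  id: string,
  intervalMs = 2000,
  maxAttempts = 60,
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await client.isContentDone(id);
    // Assumed response shape: { isContentDone: { result: boolean } }
    if (status.isContentDone?.result) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Content ${id} not done after ${maxAttempts} polls`);
}
```

Typical usage would be `await graphlit.ingestUri(url)` followed by `await waitForContent(graphlit, content.ingestUri.id)`.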
Step 2: Search Your Memory
Query ingested content by meaning:
import 'dotenv/config'; // Load Graphlit credentials from .env
import { Graphlit } from 'graphlit-client';
const graphlit = new Graphlit();
async function main() {
const results = await graphlit.queryContents({
filter: { search: 'transformer architecture innovations' },
});
console.log(`Found ${results.contents.results.length} documents:`);
for (const item of results.contents.results) {
console.log(`📄 ${item.name}`);
}
}
main();

Semantic search: Finds documents by meaning, not just keyword matching. Try searching for "attention mechanism" and see it find the transformer paper.
Expected output:
Found 1 documents:
📄 Attention Paper

Step 3: RAG Conversation
Ask questions about your content:
import 'dotenv/config'; // Load Graphlit credentials from .env
import { Graphlit } from 'graphlit-client';
const graphlit = new Graphlit();
async function main() {
// Reference the document from Step 1
const content = await graphlit.ingestUri(
'https://arxiv.org/pdf/1706.03762.pdf',
'Attention Paper',
undefined,
undefined,
true,
);
// Create conversation scoped to this document
const conversation = await graphlit.createConversation({
name: 'Q&A Session',
filter: { contents: [{ id: content.ingestUri.id }] }
});
// Ask questions
const answer = await graphlit.promptConversation(
'What are the key innovations in this paper?',
conversation.createConversation.id,
);
console.log(answer.promptConversation.message?.message);
}
main();

What happens: Graphlit retrieves relevant sections, injects context into the LLM, and generates an answer with citations.
Expected output:
The paper introduces the Transformer architecture, which relies entirely on
self-attention mechanisms rather than recurrence or convolutions. Key innovations
include multi-head attention and positional encodings.
Step 4: Real-Time Streaming (TypeScript)
Setup
Add to your .env:
OPENAI_API_KEY=your_openai_key

Get your key from platform.openai.com/api-keys.
Code
import 'dotenv/config'; // Load credentials (including OPENAI_API_KEY) from .env
import { Graphlit } from 'graphlit-client';
import { OpenAI } from 'openai';
import {
SpecificationTypes,
ModelServiceTypes,
OpenAiModels,
} from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Enable streaming with OpenAI client
graphlit.setOpenAIClient(new OpenAI());
async function main() {
const spec = await graphlit.createSpecification({
name: 'Assistant',
type: SpecificationTypes.Completion,
serviceType: ModelServiceTypes.OpenAi,
openAI: {
model: OpenAiModels.Gpt4O_128K,
temperature: 0.7
}
});
await graphlit.streamAgent(
'Explain transformer attention in simple terms',
(event) => {
if (event.type === 'message_update') {
process.stdout.write(event.message.message);
if (!event.isStreaming) {
console.log('\n[complete]');
}
}
},
undefined,
{ id: spec.createSpecification.id },
);
}
main();

What happens: Tokens stream in real-time as the AI generates the response (like ChatGPT's typing effect).
Expected output:
Transformer attention is a mechanism that allows the model to focus on different
parts of the input when processing each token. Think of it like highlighting the
most relevant words in a sentence when trying to understand each word's meaning.
[complete]

Step 5: Add Tool Calling
Give your agent functions to call:
import 'dotenv/config'; // Load credentials (including OPENAI_API_KEY) from .env
import { Graphlit } from 'graphlit-client';
import { OpenAI } from 'openai';
import {
SpecificationTypes,
ModelServiceTypes,
OpenAiModels,
ToolDefinitionInput,
} from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
graphlit.setOpenAIClient(new OpenAI());
// Define tool
const searchTool: ToolDefinitionInput = {
name: 'search_memory',
description: 'Search semantic memory for documents',
schema: JSON.stringify({
type: 'object',
properties: {
query: { type: 'string', description: 'Search query' },
},
required: ['query'],
}),
};
// Tool implementation
const toolHandlers = {
search_memory: async (args: { query: string }) => {
const results = await graphlit.queryContents({
filter: { search: args.query },
});
return results.contents.results.map((c) => c.name);
},
};
async function main() {
const spec = await graphlit.createSpecification({
name: 'Agent with Tools',
type: SpecificationTypes.Completion,
serviceType: ModelServiceTypes.OpenAi,
openAI: { model: OpenAiModels.Gpt4O_128K }
});
await graphlit.streamAgent(
'Find documents about attention mechanisms',
(event) => {
if (event.type === 'tool_update' && event.status === 'completed') {
console.log(`\n🔧 Called ${event.toolCall.name}`);
} else if (event.type === 'message_update') {
process.stdout.write(event.message.message);
if (!event.isStreaming) {
console.log('\n[complete]');
}
}
},
undefined,
{ id: spec.createSpecification.id },
[searchTool],
toolHandlers,
);
}
main();

What happens: The agent decides when to call your function, executes it, and uses the results in its response.
Tool calling works in all SDKs: Python, TypeScript, and C# all support defining tools and handlers.
What You've Built
In 7 minutes, you created an AI agent with:
Semantic memory
Ingest and search documents by meaning
RAG conversations
Q&A grounded in your content
Real-time streaming
TypeScript token-by-token responses
Agentic behavior
AI that calls functions to accomplish tasks
Data Flow Summary
Ingest Content → Semantic memory indexes files, messages, and pages
Create Specification → Pick the LLM and parameters for the agent
Create Conversation → Optionally scope retrieval with filters
promptConversation (all SDKs) or streamAgent (TypeScript) → Get responses
Tool Handlers → Agent can call functions when needed
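The flow above can be condensed into one function. This is a sketch, not SDK code: FlowClient and askAboutDocument are illustrative names, the interface only declares the subset of calls used here, and the call shapes mirror Steps 1 through 3.

```typescript
// Minimal shape of the Graphlit client calls used below (illustrative subset)
interface FlowClient {
  ingestUri(uri: string, name?: string, a?: unknown, b?: unknown, sync?: boolean): Promise<any>;
  createConversation(input: { name: string; filter?: unknown }): Promise<any>;
  promptConversation(prompt: string, conversationId: string): Promise<any>;
}

// Ingest → scope a conversation to the document → prompt, returning the answer text
async function askAboutDocument(
  client: FlowClient,
  uri: string,
  question: string,
): Promise<string | undefined> {
  // 1. Ingest and wait for processing (fifth argument)
  const content = await client.ingestUri(uri, undefined, undefined, undefined, true);

  // 2. Scope retrieval to just this document
  const conversation = await client.createConversation({
    name: 'Q&A',
    filter: { contents: [{ id: content.ingestUri.id }] },
  });

  // 3. Ask; works in all SDKs (streamAgent is the TypeScript streaming alternative)
  const answer = await client.promptConversation(question, conversation.createConversation.id);
  return answer.promptConversation.message?.message;
}
```

Taking the client as a parameter also makes the flow easy to unit test with a stub, without hitting the API.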
Production Notes
Timeouts: For very large files, ingestUri(..., true) may exceed default timeouts. Consider wrapping in Promise.race with a timeout or polling via isContentDone.
Logging: Replace console.log with structured logging (Pino/Winston) in production services.
Secrets: Keep .env out of version control; use platform secret stores in deployment.
Rate limits: OpenAI streaming respects your account quotas. Handle 429 responses with retries.
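The timeout note above can be sketched with a generic Promise.race wrapper. This helper is not part of the SDK; it works with any promise.

```typescript
// Reject if `promise` does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Usage: `await withTimeout(graphlit.ingestUri(url, name, undefined, undefined, true), 120_000)`. Note the underlying ingestion keeps running server-side even if the client-side wait is abandoned.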
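Handling 429s with retries might look like the sketch below. The `err?.status === 429` check is an assumption about the error shape; adapt it to what your SDK or HTTP client actually throws.

```typescript
// Retry an async operation with exponential backoff on rate-limit errors.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Assumed error shape; replace with your SDK's rate-limit check
      const isRateLimit = err?.status === 429;
      if (!isRateLimit || attempt >= maxAttempts) throw err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}
```

Non-rate-limit errors are rethrown immediately so real failures surface fast.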
Next Steps
Learn Advanced Patterns
AI Agents with Memory - Multi-agent systems, advanced tool patterns (15 min)
Knowledge Graph - Extract entities and relationships (20 min)
MCP Integration - Connect to your IDE (10 min)
Explore Sample Applications
📓 60+ Colab Notebooks - Run Python examples instantly
RAG & Conversations (15+ examples)
Ingestion & Preparation (6+ examples)
Knowledge Graph & Extraction (7+ examples)
🚀 Next.js Apps - Deploy-ready applications
Full-featured chat with streaming
Chat with knowledge graph visualization
Document extraction interface
💻 Streamlit Apps - Interactive Python UIs
Add More Capabilities
Different AI Models:
// Use Claude instead
import { ModelServiceTypes, AnthropicModels } from 'graphlit-client/dist/generated/graphql-types';
serviceType: ModelServiceTypes.Anthropic,
anthropic: {
model: AnthropicModels.Claude_4_5Sonnet
}

Multiple Documents:
// Upload multiple PDFs
const urls = [
'https://example.com/doc1.pdf',
'https://example.com/doc2.pdf',
];
const ids = [];
for (const url of urls) {
const content = await graphlit.ingestUri(url, undefined, undefined, undefined, true);
ids.push(content.ingestUri.id);
}
// Create conversation with all documents
const conversation = await graphlit.createConversation({
name: 'Multi-Document Chat',
filter: { contents: ids.map(id => ({ id })) }
});

Custom Tools:
// Add a database query tool
const dbTool: ToolDefinitionInput = {
name: 'query_database',
description: 'Query the customer database',
schema: JSON.stringify({
type: 'object',
properties: {
query: { type: 'string', description: 'SQL query' },
},
required: ['query'],
}),
};

Complete Examples
Full working code:
TypeScript SDK README - All examples tested and verified
Sample Repository - 60+ working examples
Next.js Apps - Full-stack applications
Troubleshooting
"streamAgent is not a function" (Python/C#)
Use prompt_conversation() (Python) or PromptConversation() (C#). Streaming is TypeScript-only. See Step 3 for the universal pattern.
"OpenAI API key not found"
Only needed for TypeScript streamAgent() (Step 4). Add to .env:
OPENAI_API_KEY=your_key

Get your key from platform.openai.com/api-keys.
"Content not finished processing"
Use isSynchronous: true (fifth parameter) in ingestUri() to wait for completion:
await graphlit.ingestUri(url, name, undefined, undefined, true);

"Module not found: dotenv"
Install dotenv:
npm install dotenv

Need Help?
Discord Community - Get help from the Graphlit team and community
Ask Graphlit - AI code assistant for instant SDK code examples
TypeScript SDK Docs - Complete API reference
Sample Gallery - Browse working examples