Quickstart: Your First Agent
Build an AI agent with semantic memory in 7 minutes
⏱️ Time: 7 minutes 🎯 Level: Beginner 💻 SDK: All SDKs (Step 4 streaming: TypeScript only)
What You'll Build
An AI agent that:
✅ Ingests documents into semantic memory
✅ Searches by meaning (not just keywords)
✅ Answers questions with citations
✅ Streams responses in real-time (TypeScript)
✅ Calls tools to extend capabilities
Prerequisites
Required setup:
Sign up (30 seconds)
Create project (1 minute)
Configure credentials (1 minute)
✅ Verify: Run hello.ts from the credentials page
For Step 4 (TypeScript streaming): OpenAI API key
Install dependencies:
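For the TypeScript path, that's the graphlit-client package plus dotenv for loading credentials from .env:

```shell
npm install graphlit-client dotenv
```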
Step 1: Ingest Content
Add a document to semantic memory:
What happens: Graphlit downloads the PDF, extracts the text, generates embeddings, and stores them in semantic memory.
Step 2: Search Your Memory
Query ingested content by meaning:
Semantic search: Finds documents by meaning, not just keyword matching. Try searching for "attention mechanism" and see it find the transformer paper.
Step 3: RAG Conversation
Ask questions about your content:
What happens: Graphlit retrieves relevant sections, injects context into the LLM, and generates an answer with citations.
Step 4: Real-Time Streaming (TypeScript)
Setup
Add to your .env:
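For example, with a placeholder value:

```
OPENAI_API_KEY=sk-...
```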
Get your key from platform.openai.com/api-keys.
Code
What happens: Tokens stream in real-time as the AI generates the response (like ChatGPT's typing effect).
Step 5: Add Tool Calling
Give your agent functions to call:
What happens: The agent decides when to call your function, executes it, and uses the results in its response.
Tool calling works in all SDKs: Python, TypeScript, and C# all support defining tools and handlers.
What You've Built
In 7 minutes, you created an AI agent with:
Semantic memory: ingest and search documents by meaning
RAG conversations: Q&A grounded in your content
Real-time streaming: TypeScript token-by-token responses
Agentic behavior: AI that calls functions to accomplish tasks
Data Flow Summary
Ingest Content → Semantic memory indexes files, messages, and pages
Create Specification → Pick the LLM and parameters for the agent
Create Conversation → Optionally scope retrieval with filters
promptConversation (all SDKs) or streamAgent (TypeScript) → Get responses
Tool Handlers → Agent can call functions when needed
Production Notes
Timeouts: For very large files, ingestUri(..., true) may exceed default timeouts. Consider wrapping in Promise.race with a timeout or polling via isContentDone.
Logging: Replace console.log with structured logging (Pino/Winston) in production services.
Secrets: Keep .env out of version control; use platform secret stores in deployment.
Rate limits: OpenAI streaming respects your account quotas. Handle 429 responses with retries.
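The Promise.race timeout advice above can be sketched as a small generic helper (the name withTimeout is ours, not part of the SDK):

```typescript
// Hypothetical helper: reject if a promise takes longer than `ms` milliseconds.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
    ),
  ]);
}

// Usage sketch: cap a synchronous ingest at two minutes, e.g.
// await withTimeout(client.ingestUri(uri, undefined, undefined, undefined, true), 120_000);
```

Note that the underlying ingest keeps running server-side even if the client-side wait times out; pair this with isContentDone polling if you need to pick the result up later.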
Next Steps
Learn Advanced Patterns
AI Agents with Memory - Multi-agent systems, advanced tool patterns (15 min)
Knowledge Graph - Extract entities and relationships (20 min)
MCP Integration - Connect to your IDE (10 min)
Explore Sample Applications
📓 60+ Colab Notebooks - Run Python examples instantly
RAG & Conversations (15+ examples)
Ingestion & Preparation (6+ examples)
Knowledge Graph & Extraction (7+ examples)
🚀 Next.js Apps - Deploy-ready applications
Full-featured chat with streaming
Chat with knowledge graph visualization
Document extraction interface
💻 Streamlit Apps - Interactive Python UIs
Add More Capabilities
Different AI models: pick a different LLM in your agent's specification
Multiple documents: ingest several URIs in a loop
Custom tools: define additional tool schemas and handlers
Complete Examples
Full working code:
TypeScript SDK README - All examples tested and verified
Sample Repository - 60+ working examples
Next.js Apps - Full-stack applications
Troubleshooting
"streamAgent is not a function" (Python/C#)
Use prompt_conversation() (Python) or PromptConversation() (C#). Streaming is TypeScript-only. See Step 3 for the universal pattern.
"OpenAI API key not found"
Only needed for TypeScript streamAgent() (Step 4). Add to .env:
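For example, with a placeholder value:

```
OPENAI_API_KEY=sk-...
```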
Get your key from platform.openai.com/api-keys.
"Content not finished processing"
Use isSynchronous: true (fifth parameter) in ingestUri() to wait for completion:
"Module not found: dotenv"
Install dotenv:
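```shell
npm install dotenv
```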
Need Help?
Discord Community - Get help from the Graphlit team and community
Ask Graphlit - AI code assistant for instant SDK code examples
TypeScript SDK Docs - Complete API reference
Sample Gallery - Browse working examples