Quickstart: Your First Agent

Build an AI agent with semantic memory in 7 minutes

โฑ๏ธ Time: 7 minutes ๐ŸŽฏ Level: Beginner ๐Ÿ’ป SDK: All SDKs (Step 4 streaming: TypeScript only)

What You'll Build

An AI agent that:

  • ✅ Ingests documents into semantic memory
  • ✅ Searches by meaning (not just keywords)
  • ✅ Answers questions with citations
  • ✅ Streams responses in real-time (TypeScript)
  • ✅ Calls tools to extend capabilities


Prerequisites


Install dependencies:
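For the TypeScript path, a minimal install (package names assume the graphlit-client npm package plus dotenv for loading credentials; Python and C# use their pip/NuGet equivalents):

```bash
npm install graphlit-client dotenv
```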


Step 1: Ingest Content

Add a document to semantic memory:
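A minimal TypeScript sketch, assuming graphlit-client with your project credentials in .env; the arXiv URL and name are placeholders for whichever PDF you want to ingest:

```typescript
import "dotenv/config";
import { Graphlit } from "graphlit-client";

// Reads GRAPHLIT_ORGANIZATION_ID, GRAPHLIT_ENVIRONMENT_ID, and GRAPHLIT_JWT_SECRET from .env
const client = new Graphlit();

async function main() {
  // Fifth argument (isSynchronous: true) waits until processing finishes
  const result = await client.ingestUri(
    "https://arxiv.org/pdf/1706.03762.pdf", // the transformer paper, as an example
    "Attention Is All You Need",
    undefined,
    undefined,
    true
  );

  console.log("Ingested content:", result.ingestUri?.id);
}

main();
```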

What happens: Graphlit downloads the PDF, extracts the text, generates embeddings, and stores the result in semantic memory.

Expected output:


Step 2: Search Your Memory

Query ingested content by meaning:
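A hedged sketch of a semantic query via queryContents; the response field names follow the SDK's generated GraphQL types and may differ slightly between versions:

```typescript
import "dotenv/config";
import { Graphlit } from "graphlit-client";

const client = new Graphlit();

async function main() {
  // Finds content whose meaning matches the query, not just keyword hits
  const results = await client.queryContents({ search: "attention mechanism" });

  for (const content of results.contents?.results ?? []) {
    console.log(content?.name);
  }
}

main();
```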

Semantic search: Finds documents by meaning, not just keyword matching. Try searching for "attention mechanism" and see it find the transformer paper.

Expected output:


Step 3: RAG Conversation

Ask questions about your content:
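A sketch using promptConversation; it assumes that omitting the conversation ID lets Graphlit create one for you, and the response fields shown follow the GraphQL schema:

```typescript
import "dotenv/config";
import { Graphlit } from "graphlit-client";

const client = new Graphlit();

async function main() {
  const response = await client.promptConversation(
    "What are the key ideas behind the transformer architecture?"
  );

  const message = response.promptConversation?.message;
  console.log(message?.message); // grounded answer

  // Citations point back to the ingested sources
  for (const citation of message?.citations ?? []) {
    console.log("-", citation?.content?.name);
  }
}

main();
```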

What happens: Graphlit retrieves relevant sections, injects context into the LLM, and generates an answer with citations.

Expected output:


Step 4: Real-Time Streaming (TypeScript)


TypeScript SDK only: the Python and C# SDKs use the non-streaming promptConversation() call from Step 3. Real-time streaming is TypeScript-specific.

Setup

Add to your .env:
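Assuming the standard OPENAI_API_KEY variable name is what the SDK looks for:

```
OPENAI_API_KEY=sk-...
```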

Get your key from platform.openai.com/api-keys.

Code
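A sketch of the streaming loop; the streamAgent event names and payload shape here follow recent graphlit-client examples and may differ in your SDK version:

```typescript
import "dotenv/config";
import { Graphlit } from "graphlit-client";

const client = new Graphlit();

async function main() {
  await client.streamAgent(
    "Summarize the transformer paper in three sentences.",
    (event) => {
      if (event.type === "message_update") {
        // Assumed payload: the message text accumulated so far
        console.log(event.message.message);
      } else if (event.type === "conversation_completed") {
        console.log("Done.");
      }
    }
  );
}

main();
```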

What happens: Tokens stream in real-time as the AI generates the response (like ChatGPT's typing effect).

Expected output:


Step 5: Add Tool Calling

Give your agent functions to call:
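A hedged sketch of tool calling with streamAgent; the tool definition shape (JSON Schema passed as a string) and the positional tools/toolHandlers parameters follow recent graphlit-client examples, and the weather lookup is a stand-in for your own function:

```typescript
import "dotenv/config";
import { Graphlit } from "graphlit-client";

const client = new Graphlit();

// Describe the tool so the LLM knows when to call it
const tools = [
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: JSON.stringify({
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    }),
  },
];

// Map tool names to the functions that actually run
const toolHandlers = {
  get_weather: async (args: any) => {
    return { city: args.city, temperatureF: 68, conditions: "partly cloudy" }; // stub result
  },
};

async function main() {
  await client.streamAgent(
    "What's the weather like in San Francisco right now?",
    (event) => {
      if (event.type === "message_update") {
        console.log(event.message.message);
      }
    },
    undefined, // conversation ID
    undefined, // specification
    tools,
    toolHandlers
  );
}

main();
```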

What happens: The agent decides when to call your function, executes it, and uses the results in its response.


What You've Built

In 7 minutes, you created an AI agent with:

  • Semantic memory: ingest and search documents by meaning
  • RAG conversations: Q&A grounded in your content
  • Real-time streaming: token-by-token responses (TypeScript)
  • Agentic behavior: AI that calls functions to accomplish tasks

Data Flow Summary

  1. Ingest Content → Semantic memory indexes files, messages, and pages
  2. Create Specification → Pick the LLM and parameters for the agent
  3. Create Conversation → Optionally scope retrieval with filters
  4. promptConversation (all SDKs) or streamAgent (TypeScript) → Get responses
  5. Tool Handlers → Agent can call functions when needed


Production Notes

Timeouts: For very large files, ingestUri(url, name, undefined, undefined, true) may exceed default timeouts. Consider wrapping in Promise.race with a timeout or polling via isContentDone.
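One way to cap a long-running synchronous ingest, assuming the client, url, and name from Step 1; withTimeout is a local helper, not an SDK function:

```typescript
// Local helper: reject if the wrapped promise takes longer than `ms`
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
    ),
  ]);
}

// Cap the synchronous ingest at five minutes; alternatively, ingest
// asynchronously and poll isContentDone until processing completes.
const result = await withTimeout(
  client.ingestUri(url, name, undefined, undefined, true),
  5 * 60 * 1000
);
```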

Logging: Replace console.log with structured logging (Pino/Winston) in production services.

Secrets: Keep .env out of version control; use platform secret stores in deployment.

Rate limits: OpenAI streaming respects your account quotas. Handle 429 responses with retries.


Next Steps

Learn Advanced Patterns

AI Agents with Memory - Multi-agent systems, advanced tool patterns (15 min)

Knowledge Graph - Extract entities and relationships (20 min)

MCP Integration - Connect to your IDE (10 min)

Explore Sample Applications

📓 60+ Colab Notebooks - Run Python examples instantly

  • RAG & Conversations (15+ examples)

  • Ingestion & Preparation (6+ examples)

  • Knowledge Graph & Extraction (7+ examples)

🚀 Next.js Apps - Deploy-ready applications

  • Full-featured chat with streaming

  • Chat with knowledge graph visualization

  • Document extraction interface

💻 Streamlit Apps - Interactive Python UIs

Add More Capabilities

Different AI Models: create a specification that selects a different LLM and pass it when creating your conversation (see step 2 of the Data Flow Summary).

Multiple Documents: call ingestUri() once per URL, as in the sketch after this list.

Custom Tools: extend the Step 5 pattern with additional tool definitions and handlers.
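A sketch for the multiple-documents case, reusing the client and the ingestUri call from Step 1; the URLs are placeholders:

```typescript
const urls = [
  "https://example.com/whitepaper.pdf",
  "https://example.com/roadmap.pdf",
];

// Ingest each document and wait for processing (fifth argument = isSynchronous)
for (const url of urls) {
  await client.ingestUri(url, undefined, undefined, undefined, true);
}
```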


Complete Examples

Full working code:


Troubleshooting

"streamAgent is not a function" (Python/C#)

Use prompt_conversation() (Python) or PromptConversation() (C#). Streaming is TypeScript-only. See Step 3 for the universal pattern.

"OpenAI API key not found"

Only needed for TypeScript streamAgent() (Step 4). Add to .env:

Get your key from platform.openai.com/api-keys.

"Content not finished processing"

Use isSynchronous: true (fifth parameter) in ingestUri() to wait for completion:
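Using the call shape shown in the Production Notes, with your existing client, url, and name:

```typescript
// Fifth argument (isSynchronous: true) waits until the content is fully processed
await client.ingestUri(url, name, undefined, undefined, true);
```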

"Module not found: dotenv"

Install dotenv:
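Assuming npm as the package manager:

```bash
npm install dotenv
```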


Need Help?

Discord Community - Get help from the Graphlit team and community

Ask Graphlit - AI code assistant for instant SDK code examples

TypeScript SDK Docs - Complete API reference

Sample Gallery - Browse working examples
