Key Concepts
Core concepts for building AI agents with semantic memory using Graphlit
Graphlit provides semantic memory for AI agents. Understanding the core concepts helps you build production AI applications that remember, understand, and reason about information over time.
Data Model Overview
Everything in Graphlit flows through this pipeline: sources → ingestion → processing → memory formation → retrieval.
Common Confusions Clarified
"What's the difference between Content and Feed?"
Content = Any document/file/text in Graphlit (the data itself)
Feed = A connection that continuously adds new content (the sync mechanism)
"Observable vs Entity - same thing?"
Entity = A thing (person, company, place)
Observable = An entity + all places it appears across content
Think: Observable = Entity with observation history
"Specification vs Workflow - both configure things?"
Workflow = How to process content (extraction, preparation)
Specification = Which AI model to use (GPT-5, Claude, etc.)
"Conversation vs Content - both have text?"
Content = Your data (PDFs, emails, docs)
Conversation = Q&A session about your content with AI
"When do I need a Workflow?"
Don't need: Basic ingestion and search (default works)
Need: Extract entities, use vision models, custom processing
"When do I need a Specification?"
Don't need: Default (latest OpenAI) is fine for most use cases
Need: Use different model (Claude, Gemini), custom prompts, token limits
About IDs
All Graphlit entities (content, collections, workflows, specifications, conversations, etc.) have unique identifiers:
Format: GUIDs (Globally Unique Identifiers), also known as UUIDs
Example: 550e8400-e29b-41d4-a716-446655440000
In code examples throughout this documentation, you'll see placeholder IDs like:
'content-id', 'collection-id', 'workflow-id'
Replace these with actual GUID values returned from Graphlit API operations.
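For example, a minimal sketch of fetching a content object by its GUID with the Python SDK (the get_content operation and response shape shown here are assumptions based on the SDK's generated operations):
from graphlit import Graphlit
graphlit = Graphlit()
# Replace with a real GUID returned from an ingest or query operation
content_id = "550e8400-e29b-41d4-a716-446655440000"
response = await graphlit.client.get_content(content_id)
content = response.content
print(content.id, content.name)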
Content: The Foundation
What is Content?
In semantic memory systems, knowledge exists in unstructured formats:
Documents (PDFs, Word, PowerPoint, Excel)
Audio (MP3, podcasts, meetings, calls)
Video (MP4, recordings, demos)
Web pages (HTML, markdown)
Messages (Slack, Teams, Discord)
Emails (Gmail, Outlook)
Issues (Jira, Linear, GitHub)
Social posts (Twitter, Reddit)
When you ingest any of these into Graphlit, we create a content object that tracks:
Original source and metadata
Extracted text and structured data
Entities found (people, organizations, events)
Relationships to other content
Temporal context (when created, when ingested)
Content with Context
Each piece of content preserves its full context:
Source metadata: Where it came from, when it was created
Temporal context: When ingested, last modified
Structural context: Relationships to other content
Semantic context: Entities and facts extracted from it
Some content types are episodic-like (specific events in time):
"This meeting recording from Oct 15, 2pm"
"This email sent from Sarah to Mike on Tuesday"
"This Slack message posted in #engineering yesterday"
Other content is more knowledge-based:
"This documentation about our API"
"This web page explaining GraphQL"
"This PDF white paper on RAG"
Key insight: Graphlit preserves the full context of each piece of content - not just text chunks, but metadata, relationships, and extracted knowledge.
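To make this concrete, here is a minimal sketch of ingesting a single document and reading back the resulting content object (is_synchronous is an assumption meaning "wait for processing to finish before returning"):
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Ingest one document; Graphlit creates a content object carrying
# extracted text, metadata, and (with a workflow) extracted entities
response = await graphlit.client.ingest_uri(
    uri="https://example.com/report.pdf",
    is_synchronous=True  # assumption: block until processing completes
)
content = response.ingest_uri
print(content.id)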
Feeds: Continuous Data Ingestion
What are Feeds?
Feeds are automated connectors that continuously ingest content from data sources.
Instead of manually uploading each file, create a feed that monitors:
Cloud storage (S3, Azure Blob, Google Cloud, Dropbox, Box, OneDrive, SharePoint)
Communication tools (Slack, Teams, Discord, Twitter/X)
Email (Gmail, Outlook)
Issue trackers (Jira, Linear, GitHub)
Knowledge bases (Notion)
Content (RSS feeds, Reddit, podcasts)
Web (crawling, search, screenshots)
Sync Modes
One-time sweep: Ingest everything once
Good for: Initial knowledge base population
Example: "Import all existing SharePoint documents"
Recurring sync: Check for new content periodically
Good for: Keeping memory up-to-date
Example: "Check Slack #engineering every 5 minutes"
Example: "Monitor Gmail inbox every hour"
Real-World Pattern: Zine
Zine uses 20+ feeds to continuously sync:
Slack channels
Gmail
Google Calendar
Notion pages
Linear issues
GitHub repos
Meeting recordings
This creates a living semantic memory of everything your team does.
Workflows: Memory Formation Pipeline
What are Workflows?
As content enters Graphlit, workflows control how raw data becomes semantic memory.
This is the memory formation cycle:
Workflow Stages
1. Ingestion
Filter what content to accept
Configure source-specific settings
Example: "Only ingest PDFs from /docs folder"
2. Indexing
Extract metadata automatically
Document: author, creation date, title
Email: from/to, subject, timestamp
Audio: duration, speaker
Issue: reporter, assignee, status
Index for semantic search (embeddings)
Store raw content
3. Preparation
Extract text from various formats
Use vision models for PDFs (GPT-4 Vision, Claude 3.5 Sonnet)
Transcribe audio (Deepgram, AssemblyAI, Whisper)
Parse HTML/markdown from web pages
Extract structured data
4. Extraction (Key to Semantic Memory)
Entity extraction: Identify people, organizations, places, events
Relationship mapping: Connect entities to each other
Summarization: Create concise representations
Knowledge graph: Build semantic memory layer
This is where raw content becomes semantic memory (structured knowledge with entities and relationships).
5. Enrichment
Enrich entities with external data (Crunchbase, Wikipedia)
Add domain-specific knowledge
Link to existing entities
Example Workflow
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Create workflow with vision model for OCR and entity extraction
response = await graphlit.client.create_workflow(
WorkflowInput(
name="PDF with Vision",
preparation=PreparationWorkflowStageInput(
jobs=[
PreparationWorkflowJobInput(
connector=FilePreparationConnectorInput(
type=FilePreparationServiceTypes.MODEL_DOCUMENT
)
)
]
),
extraction=ExtractionWorkflowStageInput(
jobs=[
ExtractionWorkflowJobInput(
connector=EntityExtractionConnectorInput(
type=EntityExtractionServiceTypes.MODEL_TEXT
)
)
]
)
)
)
workflow = response.create_workflow
import { Graphlit } from 'graphlit-client';
import { FilePreparationServiceTypes, EntityExtractionServiceTypes } from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Create workflow with vision model for OCR and entity extraction
const workflow = await graphlit.createWorkflow({
name: "PDF with Vision",
preparation: {
jobs: [{
connector: {
type: FilePreparationServiceTypes.ModelDocument
}
}]
},
extraction: {
jobs: [{
connector: {
type: EntityExtractionServiceTypes.ModelText
}
}]
}
});
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Create workflow with vision model for OCR and entity extraction
var input = new WorkflowInput(
name: "PDF with Vision",
preparation: new PreparationWorkflowStageInput(
jobs: new[] {
new PreparationWorkflowJobInput(
connector: new FilePreparationConnectorInput(
type: FilePreparationServiceTypes.ModelDocument
)
)
}
),
extraction: new ExtractionWorkflowStageInput(
jobs: new[] {
new ExtractionWorkflowJobInput(
connector: new EntityExtractionConnectorInput(
type: EntityExtractionServiceTypes.ModelText
)
)
}
)
);
var response = await client.CreateWorkflow.ExecuteAsync(input);
response.EnsureNoErrors();
var workflow = response.Data?.CreateWorkflow;
Conversations: Accessing Memory
What are Conversations?
Conversations let AI agents access your content and knowledge graph to answer questions, complete tasks, and reason about information.
This isn't just "Retrieval Augmented Generation (RAG)" - it's semantic memory:
Stateful: Conversation history preserved
Entity-aware: Understands who/what you're asking about
Context-aware: Retrieves relevant memories
Temporal: Knows when things happened
How It Works
# Create conversation
response = await graphlit.client.create_conversation(
    ConversationInput(name="Acme Corp Analysis")
)
conversation = response.create_conversation
# Ask questions - memory retrieval automatic
response = await graphlit.client.prompt_conversation(
    prompt="What are Acme Corp's main technical concerns?",
    id=conversation.id
)
# Behind the scenes:
# 1. Parses entities: "Acme Corp" (organization)
# 2. Queries knowledge graph for related content
# 3. Retrieves relevant content (emails, meetings, documents)
# 4. Injects semantic memory (entities, relationships)
# 5. Generates answer with citations
// Create conversation
const conversation = await graphlit.createConversation({
name: "Acme Corp Analysis"
});
// Ask questions - memory retrieval automatic
const response = await graphlit.promptConversation({
prompt: "What are Acme Corp's main technical concerns?",
id: conversation.createConversation.id
});
// Behind the scenes:
// 1. Parses entities: "Acme Corp" (organization)
// 2. Queries knowledge graph for related content
// 3. Retrieves relevant content (emails, meetings, documents)
// 4. Injects semantic memory (entities, relationships)
// 5. Generates answer with citations
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Create conversation
var convInput = new ConversationInput(name: "Acme Corp Analysis");
var convResponse = await client.CreateConversation.ExecuteAsync(convInput);
convResponse.EnsureNoErrors();
var conversation = convResponse.Data?.CreateConversation;
// Ask questions - memory retrieval automatic
var response = await client.PromptConversation.ExecuteAsync(
    prompt: "What are Acme Corp's main technical concerns?",
    id: conversation.Id
);
response.EnsureNoErrors();
// Behind the scenes:
// 1. Parses entities: "Acme Corp" (organization)
// 2. Queries knowledge graph for related content
// 3. Retrieves relevant content (emails, meetings, documents)
// 4. Injects semantic memory (entities, relationships)
// 5. Generates answer with citations
Conversations as Working Memory
While the conversation is active:
Working memory: Current conversation context (in LLM window)
Long-term memory: Content, entities, relationships (in knowledge graph)
Retrieval: Pull long-term memories into working memory as needed
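Reusing the conversation from the Python example above shows this in practice - because history is preserved server-side, a follow-up prompt can refer back to earlier turns without restating them:
# Same conversation id, so earlier turns stay in working memory
# and "those concerns" resolves against the previous answer
followup = await graphlit.client.prompt_conversation(
    prompt="Which of those concerns came up most recently?",
    id=conversation.id
)
# Assumption: the assistant reply is exposed on the returned message
print(followup.prompt_conversation.message.message)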
Specifications: Configuring AI Models
What are Specifications?
Specifications configure how AI models process and generate information.
What You Can Configure
Model Selection:
OpenAI (GPT-5, GPT-4o, o4, GPT-4 Turbo)
Anthropic (Claude 4.5 Sonnet, Claude 4 Opus, Claude 3.5)
Google (Gemini 2.5 Pro, Gemini 2.0 Flash)
xAI (Grok 4, Grok 3)
Others (Groq, Mistral, Cohere, DeepSeek)
Tool Calling:
Define tools/functions the LLM can call
Enable agentic workflows
Connect to external APIs
Conversation Strategies (a configuration sketch follows this list):
Windowed: Keep last N messages
Summarized: Summarize old messages
Full: Keep everything (until context limit)
Prompt Strategies:
Rewriting: Improve user prompts
Planning: Break complex tasks into steps
RAG: Configure retrieval parameters
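For the conversation strategies above, here is a hedged sketch of configuring a windowed strategy on a specification; the strategy and messageLimit field names are assumptions to verify against your SDK's generated types:
# Windowed strategy: keep only the last N messages in working memory
# (strategy/messageLimit names are assumptions; verify in graphlit_api)
response = await graphlit.client.create_specification(
    SpecificationInput(
        name="Windowed Analysis",
        type=SpecificationTypes.COMPLETION,
        serviceType=ModelServiceTypes.ANTHROPIC,
        anthropic=AnthropicModelPropertiesInput(
            model=AnthropicModels.CLAUDE_4_5_SONNET
        ),
        strategy=ConversationStrategyInput(
            type=ConversationStrategyTypes.WINDOWED,
            messageLimit=10
        )
    )
)
spec = response.create_specification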
Example
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Create specification with Claude
response = await graphlit.client.create_specification(
SpecificationInput(
name="Claude 4.5 for Analysis",
type=SpecificationTypes.COMPLETION,
serviceType=ModelServiceTypes.ANTHROPIC,
anthropic=AnthropicModelPropertiesInput(
model=AnthropicModels.CLAUDE_4_5_SONNET,
temperature=0.2
)
)
)
spec = response.create_specification
# Use in conversation
response = await graphlit.client.create_conversation(
ConversationInput(
name="Technical Analysis",
specification=EntityReferenceInput(id=spec.id)
)
)
conversation = response.create_conversation
import { Graphlit } from 'graphlit-client';
import { SpecificationTypes, ModelServiceTypes, AnthropicModels } from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Create specification with Claude
const specResponse = await graphlit.createSpecification({
name: "Claude 4.5 for Analysis",
type: SpecificationTypes.Completion,
serviceType: ModelServiceTypes.Anthropic,
anthropic: {
model: AnthropicModels.Claude_4_5Sonnet,
temperature: 0.2
}
});
const spec = specResponse.createSpecification;
// Use in conversation
const convResponse = await graphlit.createConversation({
name: "Technical Analysis",
specification: { id: spec.id }
});
const conversation = convResponse.createConversation;
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Create specification with Claude
var specInput = new SpecificationInput(
name: "Claude 4.5 for Analysis",
type: SpecificationTypes.Completion,
serviceType: ModelServiceTypes.Anthropic,
anthropic: new AnthropicModelPropertiesInput(
model: AnthropicModels.Claude_4_5Sonnet,
temperature: 0.2
)
);
var specResponse = await client.CreateSpecification.ExecuteAsync(specInput);
specResponse.EnsureNoErrors();
var spec = specResponse.Data?.CreateSpecification;
// Use in conversation
var convInput = new ConversationInput(
name: "Technical Analysis",
specification: new EntityReferenceInput(id: spec.Id)
);
var convResponse = await client.CreateConversation.ExecuteAsync(convInput);
convResponse.EnsureNoErrors();
var conversation = convResponse.Data?.CreateConversation;
Collections: Organizing Memory
What are Collections?
Collections group related content for organization and filtering.
Think of them as:
Folders (but content can be in multiple collections)
Tags (but more structured)
Projects (grouping related work)
Use Cases
By Topic:
"Product Documentation"
"Customer Feedback"
"Engineering Discussions"
By Source:
"Acme Corp Content" (all emails, meetings, docs)
"Q4 2024 Planning"
"Architecture Decisions"
By Workflow:
"Needs Review"
"Published"
"Archived"
Example
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Create collection
response = await graphlit.client.create_collection(
CollectionInput(
name="Acme Corp"
)
)
collection = response.create_collection
# Add content during ingestion
response = await graphlit.client.ingest_uri(
uri="https://example.com/acme-doc.pdf",
collections=[EntityReferenceInput(id=collection.id)]
)
content = response.ingest_uri
# Query by collection
response = await graphlit.client.query_contents(
filter=ContentFilter(
collections=[EntityReferenceFilter(id=collection.id)]
)
)
results = response.query_contents.results
import { Graphlit } from 'graphlit-client';
const graphlit = new Graphlit();
// Create collection
const collResponse = await graphlit.createCollection({
name: "Acme Corp"
});
const collection = collResponse.createCollection;
// Add content during ingestion
const ingestResponse = await graphlit.ingestUri(
"https://example.com/acme-doc.pdf",
undefined,
undefined,
undefined,
false,
undefined,
[{ id: collection.id }]
);
const content = ingestResponse.ingestUri;
// Query by collection
const queryResponse = await graphlit.queryContents({
collections: [{ id: collection.id }]
});
const results = queryResponse.queryContents?.results;
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Create collection
var collInput = new CollectionInput(name: "Acme Corp");
var collResponse = await client.CreateCollection.ExecuteAsync(collInput);
collResponse.EnsureNoErrors();
var collection = collResponse.Data?.CreateCollection;
// Add content during ingestion
var ingestResponse = await client.IngestUri.ExecuteAsync(
uri: "https://example.com/acme-doc.pdf",
collections: new[] { new EntityReferenceInput(id: collection.Id) }
);
ingestResponse.EnsureNoErrors();
var content = ingestResponse.Data?.IngestUri;
// Query by collection
var filter = new ContentFilter(
collections: new[] { new EntityReferenceFilter(id: collection.Id) }
);
var queryResponse = await client.QueryContents.ExecuteAsync(filter);
queryResponse.EnsureNoErrors();
var results = queryResponse.Data?.QueryContents?.Results;
Knowledge Graph: Semantic Memory Layer
What is the Knowledge Graph?
The knowledge graph is Graphlit's semantic memory - it stores entities and their relationships, not just documents.
This is the key difference between Graphlit and simple RAG systems:
RAG: Stores documents, searches by similarity
Semantic Memory: Stores entities, searches by meaning and relationships
Schema.org Foundation
Graphlit uses Schema.org (JSON-LD) as the knowledge graph foundation:
Why Schema.org?
Industry standard (Google, Microsoft use it)
Rich vocabulary (Person, Organization, Event, Place, Product, etc.)
Interoperable with other systems
Extensible
Example Entity:
{
"@context": "https://schema.org",
"@type": "Person",
"name": "Sarah Chen",
"jobTitle": "CTO",
"worksFor": {
"@type": "Organization",
"name": "Acme Corp"
}
}
Observations of Observable Entities
How the graph is built:
LLM reads content: "Sarah Chen from Acme Corp mentioned pricing concerns"
Identifies entities:
Person: Sarah Chen
Organization: Acme Corp
Topic: "pricing concerns"
Creates observations:
Sarah mentioned in this document
Acme Corp mentioned in this document
Sarah works_at Acme Corp (relationship)
Links to source: Observations point to specific content, pages, timestamps
This enables queries like:
"Show me all content mentioning Sarah Chen"
"Who from Acme Corp have we talked to?"
"What technical issues did CTOs raise in Q4?"
Observable Types
Graphlit extracts these entity types:
Person: Sarah Chen, Mike Rodriguez (track people across sources)
Organization: Acme Corp, Google (company mentions)
Place: San Francisco, HQ (location context)
Event: Q4 Planning Meeting (temporal events)
Product: Graphlit API, iPhone (product mentions)
Software: PostgreSQL, Python (tech stack)
Repo: github.com/org/repo (code references)
Label: "bug", "feature-request" (generic tags)
Category: PII classifications (data categorization)
Graph Relationships
As more content is ingested, relationships become more valuable:
Example:
Sarah Chen extracted from emails ✓
Sarah Chen extracted from Slack messages ✓
Sarah Chen extracted from SharePoint docs ✓
Query: "Show me all content related to Sarah Chen" Result: Emails + Slack + SharePoint + any other mentions
Query: "Show me collaboration between Sarah and Mike" Result: All content where both appear
This is auto-categorization through entity recognition.
GraphRAG: Enhanced Context Retrieval
When you ask a question, Graphlit uses the knowledge graph for better context:
Traditional RAG:
User asks: "What did we discuss about the recent earnings?"
Vector search for similar content
Return chunks
Hope it's relevant
GraphRAG (Graphlit):
User asks: "What did we discuss about the recent earnings?"
Extract entities from query: "earnings" (topic)
Semantic search finds documents
Identify commonly observed entities: "CFO" person
Also retrieve content linked to CFO (Slack, emails, meetings)
Inject expanded context into LLM
Generate answer with full context
Result: More relevant, complete answers.
Content Repurposing
Summarization
Generate summaries of content using LLMs:
Built-in methods:
Summary Paragraphs
Bullet Points
Headlines
Social Media Posts
Follow-up Questions
Custom prompts:
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Summarize all architecture content
response = await graphlit.client.summarize_contents(
summarizations=[
SummarizationStrategyInput(
type=SummarizationTypes.CUSTOM,
prompt="Create a technical summary for engineering team"
)
],
filter=ContentFilter(search="architecture")
)
summary = response.summarize_contents
import { Graphlit } from 'graphlit-client';
import { SummarizationTypes } from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Summarize all architecture content
const response = await graphlit.summarizeContents(
[
{
type: SummarizationTypes.Custom,
prompt: "Create a technical summary for engineering team"
}
],
{ search: "architecture" }
);
const summary = response.summarizeContents;
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Summarize all architecture content
var summarizations = new[] {
new SummarizationStrategyInput(
type: SummarizationTypes.Custom,
prompt: "Create a technical summary for engineering team"
)
};
var filter = new ContentFilter(search: "architecture");
var response = await client.SummarizeContents.ExecuteAsync(summarizations, filter);
response.EnsureNoErrors();
var summary = response.Data?.SummarizeContents;
Publishing
Transform content into new formats:
Two-step process:
Summarization: Each piece of content summarized individually
Publishing: Summaries combined with publishing prompt
Example:
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Publish blog post from Q4 collection
response = await graphlit.client.publish_contents(
publish_prompt="Write a blog post about our Q4 achievements",
connector=ContentPublishingConnectorInput(
type=ContentPublishingServiceTypes.MARKDOWN
),
filter=ContentFilter(
collections=[EntityReferenceFilter(id=q4_collection_id)]
)
)
published = response.publish_contents
import { Graphlit } from 'graphlit-client';
import { ContentPublishingServiceTypes } from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Publish blog post from Q4 collection
const response = await graphlit.publishContents(
"Write a blog post about our Q4 achievements",
{ type: ContentPublishingServiceTypes.Markdown },
undefined, // summaryPrompt
undefined, // summarySpecification
undefined, // publishSpecification
undefined, // name
{ collections: [{ id: q4_collection_id }] }
);
const published = response.publishContents;
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Publish blog post from Q4 collection
var response = await client.PublishContents.ExecuteAsync(
publishPrompt: "Write a blog post about our Q4 achievements",
connector: new ContentPublishingConnectorInput(
type: ContentPublishingServiceTypes.Markdown
),
filter: new ContentFilter(
collections: new[] { new EntityReferenceFilter(id: q4_collection_id) }
)
);
response.EnsureNoErrors();
var published = response.Data?.PublishContents;
Or publish as audio:
Use ElevenLabs text-to-speech
Generate AI podcasts
Create audio summaries
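A hedged sketch of audio publishing, assuming the API exposes an ElevenLabs connector type and an MP3 format enum (verify the exact enum names in your SDK's generated types):
# Publish an audio recap instead of markdown (enum names below are
# assumptions; check graphlit_api for the exact values)
response = await graphlit.client.publish_contents(
    publish_prompt="Narrate a two-minute recap of this week's key decisions",
    connector=ContentPublishingConnectorInput(
        type=ContentPublishingServiceTypes.ELEVEN_LABS_AUDIO,
        format=ContentPublishingFormats.MP3
    ),
    filter=ContentFilter(search="decisions")
)
published = response.publish_contents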
Alerts
Semantic alerts are automated, recurring publications:
Use cases:
Daily email summary of overnight messages
Weekly Slack post of key decisions
Hourly monitoring of customer feedback
Example:
from graphlit import Graphlit
from graphlit_api import *
graphlit = Graphlit()
# Alert: Summarize overnight emails every morning at 9am
response = await graphlit.client.create_alert(
AlertInput(
name="Overnight Email Summary",
publishPrompt="Summarize key emails with action items",
connector=ContentPublishingConnectorInput(
type=ContentPublishingServiceTypes.MARKDOWN
),
filter=ContentFilter(
types=[ContentTypes.EMAIL],
createdInLast="PT12H" # Last 12 hours (ISO 8601 duration)
),
schedulePolicy=AlertSchedulePolicyInput(
recurrenceType=TimedPolicyRecurrenceTypes.ONCE_PER_DAY
)
)
)
alert = response.create_alert
import { Graphlit } from 'graphlit-client';
import { ContentPublishingServiceTypes, ContentTypes, TimedPolicyRecurrenceTypes } from 'graphlit-client/dist/generated/graphql-types';
const graphlit = new Graphlit();
// Alert: Summarize overnight emails every morning at 9am
const response = await graphlit.createAlert({
name: "Overnight Email Summary",
publishPrompt: "Summarize key emails with action items",
connector: { type: ContentPublishingServiceTypes.Markdown },
filter: {
types: [ContentTypes.Email],
createdInLast: "PT12H" // Last 12 hours (ISO 8601 duration)
},
schedulePolicy: {
recurrenceType: TimedPolicyRecurrenceTypes.OncePerDay
}
});
const alert = response.createAlert;
using GraphlitClient;
using System.Net.Http;
using StrawberryShake;
using var httpClient = new HttpClient();
var client = new Graphlit(httpClient);
// Alert: Summarize overnight emails every morning at 9am
var alertInput = new AlertInput(
name: "Overnight Email Summary",
publishPrompt: "Summarize key emails with action items",
connector: new ContentPublishingConnectorInput(
type: ContentPublishingServiceTypes.Markdown
),
filter: new ContentFilter(
types: new[] { ContentTypes.Email },
createdInLast: "PT12H" // Last 12 hours (ISO 8601 duration)
),
schedulePolicy: new AlertSchedulePolicyInput(
recurrenceType: TimedPolicyRecurrenceTypes.OncePerDay
)
);
var response = await client.CreateAlert.ExecuteAsync(alertInput);
response.EnsureNoErrors();
var alert = response.Data?.CreateAlert;
Key Takeaways
Memory, Not Just Documents
Graphlit transforms unstructured content into structured memory:
Content = Long-term storage (documents, messages, recordings)
Knowledge Graph = Semantic memory (entities, facts, relationships)
Conversations = Working memory (active context with LLM)
Episodic context = Preserved for temporal content (emails, meetings, messages)
Automated Formation
Feeds + Workflows = continuous memory formation:
No manual data entry
Always up-to-date
Scales to millions of documents
Entity-Centric
Knowledge graph enables queries by meaning:
"Show me everything about Acme Corp" (not keyword "Acme")
"Who from enterprise customers raised concerns?" (multi-hop)
"Technical discussions in Q4" (temporal + semantic)
Production-Ready
Built for scale:
Multi-tenant isolation
Real-time ingestion
30+ feeds
20+ AI models
Learn More
Understand the Concepts:
Semantic Memory - Deep dive on memory vs RAG
Platform Overview - Complete platform capabilities
Connectors - All data sources
AI Models - Model options
Build with Graphlit:
Quickstart: Your First Agent - Build a streaming agent in 7 minutes
AI Agents - Build agents with memory
Knowledge Graph - Extract entities
See It in Production:
Zine Case Study - Real-world patterns
Sample Repository - 60+ examples
Give your AI semantic memory. Start with the core concepts. Build with Graphlit.