# Quickstart: Your First Agent

⏱️ **Time**: 7 minutes\
🎯 **Level**: Beginner\
💻 **SDK**: All SDKs (Step 4 streaming: TypeScript only)

## What You'll Build

An AI agent that:

* ✅ Ingests documents into semantic memory
* ✅ Searches by meaning (not just keywords)
* ✅ Answers questions with citations
* ✅ Streams responses in real-time (TypeScript)
* ✅ Calls tools to extend capabilities

***

## Prerequisites

**You need**: Node.js 20+ ([download](https://nodejs.org/)) and a free Graphlit account.

{% hint style="warning" %}
**Don't have an account yet?**

1. [Sign up](https://docs.graphlit.dev/account-setup-one-time/signup) (30 seconds)
2. [Create project](https://docs.graphlit.dev/account-setup-one-time/create-project) (1 minute)
3. Copy your credentials from [API Settings](https://docs.graphlit.dev/account-setup-one-time/credentials)
{% endhint %}

### Project setup

Create a new project folder and initialize it:

```bash
mkdir graphlit-quickstart && cd graphlit-quickstart
npm init -y
npm install graphlit-client dotenv tsx
npm install openai  # Needed for Steps 4-5 (streaming)
```

Create a `.env` file with your credentials from the [Developer Portal](https://portal.graphlit.dev):

```env
GRAPHLIT_ORGANIZATION_ID=your_org_id
GRAPHLIT_ENVIRONMENT_ID=your_env_id
GRAPHLIT_JWT_SECRET=your_jwt_secret
OPENAI_API_KEY=your_openai_key  # Needed for Steps 4-5 (streaming)
```

{% hint style="info" %}
The SDK automatically loads your `.env` file — no import needed. Get your Graphlit credentials from your project's **API Settings** page in the [Developer Portal](https://portal.graphlit.dev).
{% endhint %}
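If you prefer to fail fast on missing credentials rather than debug a connection error later, a small guard before constructing the client can help. This is a sketch, not part of the SDK; the variable names come from the `.env` file above, and `assertCredentials` is a hypothetical helper:

```typescript
// Hypothetical helper: fail fast if any required credential is missing.
const required = [
  'GRAPHLIT_ORGANIZATION_ID',
  'GRAPHLIT_ENVIRONMENT_ID',
  'GRAPHLIT_JWT_SECRET',
];

function assertCredentials(env: Record<string, string | undefined> = process.env): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}
```

Call `assertCredentials()` at the top of your script; a clear error message beats a cryptic authentication failure.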

### Verify your setup

Create `hello.ts` and run it:

```typescript
import { Graphlit } from 'graphlit-client';

const graphlit = new Graphlit();

async function main() {
  const project = await graphlit.getProject();
  console.log(`Connected to: ${project.project.name}`);
}

main();
```

```bash
npx tsx hello.ts
```

**Expected output:**

```
Connected to: My Project
```

If you see your project name, you're ready. If you get an error, double-check that the values in your `.env` file match what's shown in the Developer Portal.

***

## Step 1: Ingest Content

Add a document to semantic memory. Save this as `step1.ts`:

```typescript
import { Graphlit } from 'graphlit-client';

const graphlit = new Graphlit();

async function main() {
  const content = await graphlit.ingestUri(
    'https://arxiv.org/pdf/1706.03762.pdf',
    'Attention Paper',
    undefined,
    undefined,
    true, // Wait for processing to complete
  );

  console.log(`✅ Document ingested: ${content.ingestUri.id}`);
}

main();
```

```bash
npx tsx step1.ts
```

**What happens**: Graphlit downloads the PDF, extracts the text, generates embeddings, and stores the result in semantic memory. This may take 30-60 seconds.

**Expected output:**

```
✅ Document ingested: 01234567-89ab-cdef-0123-456789abcdef
```

{% hint style="success" %}
**Python/.NET**: Get this code in your language instantly:

* [Convert to Python](https://ask.graphlit.dev/?prompt=Convert%20this%20TypeScript%20code%20to%20Python%3A%0A%0Aimport%20%7B%20Graphlit%20%7D%20from%20%27graphlit-client%27%3B%0A%0Aconst%20graphlit%20%3D%20new%20Graphlit%28%29%3B%0A%0Aasync%20function%20main%28%29%20%7B%0A%20%20const%20content%20%3D%20await%20graphlit.ingestUri%28%0A%20%20%20%20%27https%3A//arxiv.org/pdf/1706.03762.pdf%27%2C%0A%20%20%20%20%27Attention%20Paper%27%2C%0A%20%20%20%20undefined%2C%0A%20%20%20%20undefined%2C%0A%20%20%20%20true%0A%20%20%29%3B%0A%0A%20%20console.log%28%60%E2%9C%85%20Document%20ingested%3A%20%24%7Bcontent.ingestUri.id%7D%60%29%3B%0A%7D%0A%0Amain%28%29%3B)
* [Convert to .NET](https://ask.graphlit.dev/?prompt=Convert%20this%20TypeScript%20code%20to%20C%23/.NET%3A%0A%0Aimport%20%7B%20Graphlit%20%7D%20from%20%27graphlit-client%27%3B%0A%0Aconst%20graphlit%20%3D%20new%20Graphlit%28%29%3B%0A%0Aasync%20function%20main%28%29%20%7B%0A%20%20const%20content%20%3D%20await%20graphlit.ingestUri%28%0A%20%20%20%20%27https%3A//arxiv.org/pdf/1706.03762.pdf%27%2C%0A%20%20%20%20%27Attention%20Paper%27%2C%0A%20%20%20%20undefined%2C%0A%20%20%20%20undefined%2C%0A%20%20%20%20true%0A%20%20%29%3B%0A%0A%20%20console.log%28%60%E2%9C%85%20Document%20ingested%3A%20%24%7Bcontent.ingestUri.id%7D%60%29%3B%0A%7D%0A%0Amain%28%29%3B)
{% endhint %}

***

## Step 2: Search Your Memory

Query ingested content by meaning. Save this as `step2.ts`:

```typescript
import { Graphlit } from 'graphlit-client';

const graphlit = new Graphlit();

async function main() {
  const results = await graphlit.queryContents({
    search: 'transformer architecture innovations',
  });

  console.log(`Found ${results.contents.results.length} documents:`);

  for (const item of results.contents.results) {
    console.log(`📄 ${item.name}`);
  }
}

main();
```

```bash
npx tsx step2.ts
```

**Semantic search**: Finds documents by meaning, not just keyword matching. Try searching for "attention mechanism" and see it find the transformer paper.

**Expected output:**

```
Found 1 documents:
📄 Attention Paper
```

***

## Step 3: RAG Conversation

Ask questions about your content. Save this as `step3.ts`:

```typescript
import { Graphlit } from 'graphlit-client';

const graphlit = new Graphlit();

async function main() {
  // Ingest the document (if you already ran Step 1, this returns the existing content)
  const content = await graphlit.ingestUri(
    'https://arxiv.org/pdf/1706.03762.pdf',
    'Attention Paper',
    undefined,
    undefined,
    true,
  );

  // Create conversation scoped to this document
  const conversation = await graphlit.createConversation({
    name: 'Q&A Session',
    filter: { contents: [{ id: content.ingestUri.id }] }
  });

  // Ask questions
  const answer = await graphlit.promptConversation(
    'What are the key innovations in this paper?',
    conversation.createConversation.id,
  );

  console.log(answer.promptConversation.message?.message);
}

main();
```

```bash
npx tsx step3.ts
```

**What happens**: Graphlit retrieves relevant sections, injects context into the LLM, and generates an answer with citations.

**Expected output:**

```
The paper introduces the Transformer architecture, which relies entirely on 
self-attention mechanisms rather than recurrence or convolutions. Key innovations 
include multi-head attention and positional encodings.
```
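The response message also carries the citation data behind that answer. Here is a minimal sketch for printing it; the `Citation` type and the `citations` field path are assumptions based on the SDK's generated types, so verify the exact field names against your SDK version:

```typescript
// Hypothetical shape for illustration — check the SDK's generated
// graphql-types for the exact citation fields in your version.
type Citation = { index?: number | null; content?: { name?: string | null } | null };

// Format a citations array into printable reference lines,
// skipping null entries and filling in fallbacks for missing fields.
function formatCitations(citations?: (Citation | null)[] | null): string[] {
  return (citations ?? [])
    .filter((c): c is Citation => c != null)
    .map((c) => `[${c.index ?? '?'}] ${c.content?.name ?? 'Unknown source'}`);
}

// Usage with the Step 3 response (assumed field path — verify):
// console.log(formatCitations(answer.promptConversation.message?.citations).join('\n'));
```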

{% hint style="success" %}
**Get this code in your language**:

* [Python version](https://ask.graphlit.dev/?prompt=Convert%20this%20TypeScript%20to%20Python%3A%0A%0Aconst%20conversation%20%3D%20await%20graphlit.createConversation%28%7B%0A%20%20name%3A%20%27Q%26A%20Session%27%2C%0A%20%20filter%3A%20%7B%20contents%3A%20%5B%7B%20id%3A%20content.ingestUri.id%20%7D%5D%20%7D%0A%7D%29%3B%0A%0Aconst%20answer%20%3D%20await%20graphlit.promptConversation%28%0A%20%20%27What%20are%20the%20key%20innovations%3F%27%2C%0A%20%20conversation.createConversation.id%0A%29%3B)
* [.NET version](https://ask.graphlit.dev/?prompt=Convert%20this%20TypeScript%20to%20C%23/.NET%3A%0A%0Aconst%20conversation%20%3D%20await%20graphlit.createConversation%28%7B%0A%20%20name%3A%20%27Q%26A%20Session%27%2C%0A%20%20filter%3A%20%7B%20contents%3A%20%5B%7B%20id%3A%20content.ingestUri.id%20%7D%5D%20%7D%0A%7D%29%3B%0A%0Aconst%20answer%20%3D%20await%20graphlit.promptConversation%28%0A%20%20%27What%20are%20the%20key%20innovations%3F%27%2C%0A%20%20conversation.createConversation.id%0A%29%3B)
{% endhint %}

***

## Step 4: Real-Time Streaming (TypeScript)

{% hint style="info" %}
**TypeScript SDK only**: Real-time streaming is TypeScript-specific. The Python and C# SDKs use the synchronous `prompt_conversation()` (Python) / `PromptConversation()` (C#) pattern from Step 3.
{% endhint %}

{% hint style="warning" %}
**Requires OpenAI API key.** If you haven't already, add `OPENAI_API_KEY=your_key` to your `.env` file. Get a key from [platform.openai.com/api-keys](https://platform.openai.com/api-keys).
{% endhint %}

Save this as `step4.ts`:

```typescript
import { Graphlit } from 'graphlit-client';
import { OpenAI } from 'openai';
import {
  SpecificationTypes,
  ModelServiceTypes,
  OpenAiModels,
} from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();

// Enable streaming with OpenAI client
graphlit.setOpenAIClient(new OpenAI());

async function main() {
  const spec = await graphlit.createSpecification({
    name: 'Assistant',
    type: SpecificationTypes.Completion,
    serviceType: ModelServiceTypes.OpenAi,
    openAI: {
      model: OpenAiModels.Gpt4O_128K,
      temperature: 0.7
    }
  });

  await graphlit.streamAgent(
    'Explain transformer attention in simple terms',
    (event) => {
      if (event.type === 'message_update') {
        process.stdout.write(event.message.message);
        if (!event.isStreaming) {
          console.log('\n[complete]');
        }
      }
    },
    undefined,
    { id: spec.createSpecification.id },
  );
}

main();
```

```bash
npx tsx step4.ts
```

**What happens**: Tokens stream in real-time as the AI generates the response (like ChatGPT's typing effect).

**Expected output:**

```
Transformer attention is a mechanism that allows the model to focus on different
parts of the input when processing each token. Think of it like highlighting the
most relevant words in a sentence when trying to understand each word's meaning.
[complete]
```

***

## Step 5: Add Tool Calling

Give your agent functions to call. Save this as `step5.ts`:

```typescript
import { Graphlit } from 'graphlit-client';
import { OpenAI } from 'openai';
import {
  SpecificationTypes,
  ModelServiceTypes,
  OpenAiModels,
  ToolDefinitionInput,
} from 'graphlit-client/dist/generated/graphql-types';

const graphlit = new Graphlit();
graphlit.setOpenAIClient(new OpenAI());

// Define tool
const searchTool: ToolDefinitionInput = {
  name: 'search_memory',
  description: 'Search semantic memory for documents',
  schema: JSON.stringify({
    type: 'object',
    properties: {
      query: { type: 'string', description: 'Search query' },
    },
    required: ['query'],
  }),
};

// Tool implementation
const toolHandlers = {
  search_memory: async (args: { query: string }) => {
    const results = await graphlit.queryContents({ search: args.query });
    return results.contents.results.map((c) => c.name);
  },
};

async function main() {
  const spec = await graphlit.createSpecification({
    name: 'Agent with Tools',
    type: SpecificationTypes.Completion,
    serviceType: ModelServiceTypes.OpenAi,
    openAI: { model: OpenAiModels.Gpt4O_128K }
  });

  await graphlit.streamAgent(
    'Find documents about attention mechanisms',
    (event) => {
      if (event.type === 'tool_update' && event.status === 'completed') {
        console.log(`\n🔧 Called ${event.toolCall.name}`);
      } else if (event.type === 'message_update') {
        process.stdout.write(event.message.message);
        if (!event.isStreaming) {
          console.log('\n[complete]');
        }
      }
    },
    undefined,
    { id: spec.createSpecification.id },
    [searchTool],
    toolHandlers,
  );
}

main();
```

```bash
npx tsx step5.ts
```

**What happens**: The agent decides when to call your function, executes it, and uses the results in its response.

{% hint style="success" %}
**Tool calling works in all SDKs**: Python, TypeScript, and C# all support defining tools and handlers.
{% endhint %}

***

## What You've Built

In 7 minutes, you created an AI agent with:

<table><thead><tr><th width="200">Capability</th><th>Why It Matters</th></tr></thead><tbody><tr><td><strong>Semantic memory</strong></td><td>Ingest and search documents by meaning</td></tr><tr><td><strong>RAG conversations</strong></td><td>Q&#x26;A grounded in your content</td></tr><tr><td><strong>Real-time streaming</strong></td><td>TypeScript token-by-token responses</td></tr><tr><td><strong>Agentic behavior</strong></td><td>AI that calls functions to accomplish tasks</td></tr></tbody></table>

### Data Flow Summary

{% @mermaid/diagram content="graph LR
A\[Ingest Content] -->|ingestUri| B\[Semantic Memory]
C\[Create Specification] --> D\[Conversation]
B --> D
D -->|promptConversation| E\[Synchronous Response]
D -->|streamAgent| F\[Streaming Response]
F -->|Optional| G\[Tool Handlers]" %}

1. **Ingest Content** → Semantic memory indexes files, messages, and pages
2. **Create Specification** → Pick the LLM and parameters for the agent
3. **Create Conversation** → Optionally scope retrieval with filters
4. **promptConversation** (all SDKs) or **streamAgent** (TypeScript) → Get responses
5. **Tool Handlers** → Agent can call functions when needed

***

## Production Notes

**Timeouts**: For very large files, `ingestUri(url, name, undefined, undefined, true)` may exceed default timeouts. Consider wrapping in `Promise.race` with a timeout or polling via `isContentDone`.
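One way to do the `Promise.race` approach is a generic timeout wrapper. `withTimeout` below is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper (not part of the SDK): reject if a promise
// does not settle within `ms` milliseconds.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer);
  }
}

// Usage with the Step 1 ingest call (same signature as above):
// const content = await withTimeout(
//   graphlit.ingestUri(url, name, undefined, undefined, true),
//   120_000, // 2 minutes
// );
```

Note that the timeout only rejects your `await`; it does not cancel the underlying ingestion, so poll `isContentDone` afterwards if you still need the result.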

**Logging**: Replace `console.log` with structured logging (Pino/Winston) in production services.

**Secrets**: Keep `.env` out of version control; use platform secret stores in deployment.

**Rate limits**: OpenAI streaming respects your account quotas. Handle `429` responses with retries.
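A minimal retry sketch for those `429` responses follows. `retryOn429` is a hypothetical helper, and the error-shape check (`error.status` / `error.response.status`) is an assumption — adapt it to how your HTTP client actually surfaces status codes:

```typescript
// Hypothetical helper: retry a call with exponential backoff when it
// fails with HTTP 429; rethrow any other error immediately.
async function retryOn429<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      const status = error?.status ?? error?.response?.status; // assumed error shape
      if (status !== 429 || attempt >= maxRetries) throw error;
      const delay = baseDelayMs * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage (sketch — wrap any SDK call that may hit a rate limit):
// const answer = await retryOn429(() =>
//   graphlit.promptConversation('What are the key innovations?', conversationId),
// );
```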

***

## Next Steps

### Learn Advanced Patterns

[**AI Agents with Memory**](https://docs.graphlit.dev/tutorials/ai-agents) - Multi-agent systems, advanced tool patterns (15 min)

[**Knowledge Graph**](https://docs.graphlit.dev/tutorials/knowledge-graph) - Extract entities and relationships (20 min)

[**MCP Integration**](https://docs.graphlit.dev/mcp-integration/mcp-integration) - Connect to your IDE (10 min)

### Explore Sample Applications

[**📓 60+ Colab Notebooks**](https://github.com/graphlit/graphlit-samples/tree/main/python/Notebook%20Examples) - Run Python examples instantly

* RAG & Conversations (15+ examples)
* Ingestion & Preparation (6+ examples)
* Knowledge Graph & Extraction (7+ examples)

[**🚀 Next.js Apps**](https://github.com/graphlit/graphlit-samples/tree/main/nextjs) - Deploy-ready applications

* Full-featured chat with streaming
* Chat with knowledge graph visualization
* Document extraction interface

[**💻 Streamlit Apps**](https://github.com/graphlit/graphlit-samples/tree/main/python/Streamlit) - Interactive Python UIs

### Add More Capabilities

**Different AI Models:**

```typescript
// Use Claude instead — same createSpecification call as Step 4,
// with the Anthropic service and model
import {
  SpecificationTypes,
  ModelServiceTypes,
  AnthropicModels,
} from 'graphlit-client/dist/generated/graphql-types';

const spec = await graphlit.createSpecification({
  name: 'Claude Assistant',
  type: SpecificationTypes.Completion,
  serviceType: ModelServiceTypes.Anthropic,
  anthropic: {
    model: AnthropicModels.Claude_4_5Sonnet
  }
});
```

**Multiple Documents:**

```typescript
// Upload multiple PDFs
const urls = [
  'https://example.com/doc1.pdf',
  'https://example.com/doc2.pdf',
];

const ids = [];
for (const url of urls) {
  const content = await graphlit.ingestUri(url, undefined, undefined, undefined, true);
  ids.push(content.ingestUri.id);
}

// Create conversation with all documents
const conversation = await graphlit.createConversation({
  name: 'Multi-Document Chat',
  filter: { contents: ids.map(id => ({ id })) }
});
```

**Custom Tools:**

```typescript
// Add a database query tool
const dbTool: ToolDefinitionInput = {
  name: 'query_database',
  description: 'Query the customer database',
  schema: JSON.stringify({
    type: 'object',
    properties: {
      query: { type: 'string', description: 'SQL query' },
    },
    required: ['query'],
  }),
};
```

***

## Complete Examples

**Full working code**:

* [**TypeScript SDK README**](https://github.com/graphlit/graphlit-client-typescript#readme) - All examples tested and verified
* [**Sample Repository**](https://github.com/graphlit/graphlit-samples) - 60+ working examples
* [**Next.js Apps**](https://github.com/graphlit/graphlit-samples/tree/main/nextjs) - Full-stack applications

***

## Troubleshooting

**"streamAgent is not a function" (Python/C#)**

Use `prompt_conversation()` (Python) or `PromptConversation()` (C#). Streaming is TypeScript-only. See Step 3 for the universal pattern.

**"OpenAI API key not found"**

Only needed for the TypeScript streaming examples (Steps 4-5). Add to `.env`:

```env
OPENAI_API_KEY=your_key
```

Get your key from [platform.openai.com/api-keys](https://platform.openai.com/api-keys).

**"Content not finished processing"**

Pass `true` as the fifth parameter (`isSynchronous`) to `ingestUri()` so it waits for processing to complete:

```typescript
await graphlit.ingestUri(url, name, undefined, undefined, true);
```

**"Module not found: dotenv"**

Install dotenv:

```bash
npm install dotenv
```

***

## Need Help?

[**Discord Community**](https://discord.gg/ygFmfjy3Qx) - Get help from the Graphlit team and community

[**Ask Graphlit**](https://docs.graphlit.dev/resources/ask-graphlit) - AI code assistant for instant SDK code examples

[**TypeScript SDK Docs**](https://github.com/graphlit/graphlit-client-typescript) - Complete API reference

[**Sample Gallery**](https://github.com/graphlit/graphlit-samples) - Browse working examples
