Prompt with Citations
User Intent
"How do I get AI responses with source citations and page references?"
Operation
SDK Method: promptConversation()
Use Case: Q&A with source attribution (synchronous, not streaming)
Complete Code Example (TypeScript)
import { Graphlit } from 'graphlit-client';
const graphlit = new Graphlit();
const response = await graphlit.promptConversation(
'What are the key findings from the research papers?',
conversationId // Optional: undefined for new conversation
);
// Extract conversation ID
const convId = response.promptConversation?.conversation?.id;
// Extract message
const message = response.promptConversation?.message?.message;
// Extract citations with page numbers
const citations = response.promptConversation?.message?.citations;
console.log('Answer:', message);
console.log('\nSources:');
citations?.forEach((citation, i) => {
console.log(`[${i + 1}] ${citation.content?.name}`);
console.log(` Type: ${citation.content?.type}`);
console.log(` Page: ${citation.startPage || 'N/A'}`);
console.log(` Relevance: ${citation.score?.toFixed(2)}`);
});
Response Structure
{
promptConversation: {
conversation: {
id: string, // Conversation ID (use for follow-ups)
name?: string,
state: 'ENABLED' | 'DISABLED'
},
message: {
message: string, // AI response text
role: 'ASSISTANT',
citations: [ // Source references
{
index: number, // Citation number (e.g., [1])
text: string, // Excerpt from source
startPage?: number, // Page number (for documents)
endPage?: number,
score: number, // Relevance score (0-1)
content: {
id: string,
name: string, // Document name
type: string, // 'FILE', 'PAGE', 'EMAIL', etc.
uri?: string // Source URI
}
}
],
tokens?: number, // Token count
completionTokens?: number,
promptTokens?: number
}
}
}
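The index field above matches the bracketed markers (e.g. [1]) that can appear in the answer text, so you can print a numbered source list that lines up with them. A minimal sketch using the response from the example above:
// Build a numbered source list keyed by each citation's index
const answerText = response.promptConversation?.message?.message ?? '';
const sourceList = (response.promptConversation?.message?.citations ?? [])
  .map(c => `[${c.index}] ${c.content?.name}${c.startPage ? ` (p. ${c.startPage})` : ''}`)
  .join('\n');
console.log(`${answerText}\n\nSources:\n${sourceList}`);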
Multi-Turn Conversation with Citations
const graphlit = new Graphlit();
// First question
const response1 = await graphlit.promptConversation(
'What is machine learning?'
);
const conversationId = response1.promptConversation?.conversation?.id;
const answer1 = response1.promptConversation?.message?.message;
const citations1 = response1.promptConversation?.message?.citations;
console.log('Q: What is machine learning?');
console.log('A:', answer1);
console.log('Sources:', citations1?.length, 'documents');
// Follow-up question (with context)
const response2 = await graphlit.promptConversation(
'Can you give an example?',
conversationId // Same conversation = has context
);
const answer2 = response2.promptConversation?.message?.message;
const citations2 = response2.promptConversation?.message?.citations;
console.log('\nQ: Can you give an example?');
console.log('A:', answer2);
console.log('Sources:', citations2?.length, 'documents');
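The two calls above can also be wrapped in a small helper that threads the conversation ID through each turn, which keeps longer sessions tidy. A sketch built only from the calls already shown (the ask name is just illustrative):
// Ask a question, reusing an existing conversation when an ID is passed in
async function ask(prompt: string, conversationId?: string) {
  const response = await graphlit.promptConversation(prompt, conversationId);
  return {
    conversationId: response.promptConversation?.conversation?.id,
    answer: response.promptConversation?.message?.message,
    citations: response.promptConversation?.message?.citations ?? []
  };
}
// Usage: each turn passes the ID returned by the previous one
const first = await ask('What is machine learning?');
const second = await ask('Can you give an example?', first.conversationId);
console.log(second.answer, `(${second.citations.length} sources)`);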
Display Citations
function displayCitationsMarkdown(citations?: Citation[]) {
console.log('\n## Sources\n');
citations?.forEach((citation, i) => {
const num = i + 1;
const name = citation.content?.name || 'Unknown';
const page = citation.startPage
? ` (p. ${citation.startPage})`
: '';
const score = (citation.score * 100).toFixed(0);
console.log(`${num}. **${name}**${page} - ${score}% relevance`);
if (citation.text) {
console.log(` > "${citation.text.substring(0, 100)}..."`);
}
console.log();
});
}
// Usage
const response = await graphlit.promptConversation('Explain AI ethics');
displayCitationsMarkdown(response.promptConversation?.message?.citations);
Output:
## Sources
1. **AI Ethics Guidelines.pdf** (p. 12) - 92% relevance
> "Ethical AI systems must prioritize transparency, fairness, and accountability..."
2. **Research Paper on Bias.pdf** (p. 5) - 87% relevance
> "Bias in machine learning models can lead to discriminatory outcomes..."Filter Citations by Type
const response = await graphlit.promptConversation('Summarize the documents');
const citations = response.promptConversation?.message?.citations || [];
// Group by content type
const pdfCitations = citations.filter(c => c.content?.type === 'FILE');
const webCitations = citations.filter(c => c.content?.type === 'PAGE');
const emailCitations = citations.filter(c => c.content?.type === 'EMAIL');
console.log(`PDFs: ${pdfCitations.length}`);
console.log(`Web pages: ${webCitations.length}`);
console.log(`Emails: ${emailCitations.length}`);
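If you don't know the content types up front, the same filtering generalizes to a single pass that groups citations by whatever content.type values come back (plain TypeScript, no extra SDK calls):
// Group citations by content type without hardcoding the type list
const byType = new Map<string, typeof citations>();
for (const citation of citations) {
  const type = citation.content?.type ?? 'UNKNOWN';
  byType.set(type, [...(byType.get(type) ?? []), citation]);
}
byType.forEach((group, type) => console.log(`${type}: ${group.length}`));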
Sort Citations by Relevance
const response = await graphlit.promptConversation('What are the main points?');
const citations = response.promptConversation?.message?.citations || [];
// Sort by relevance score (highest first)
const sortedCitations = [...citations].sort((a, b) =>
(b.score || 0) - (a.score || 0)
);
console.log('Top 3 most relevant sources:');
sortedCitations.slice(0, 3).forEach((citation, i) => {
console.log(`${i + 1}. ${citation.content?.name} (${(citation.score * 100).toFixed(0)}%)`);
});
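When several chunks of the same document are cited, you may also want one entry per source. A follow-up sketch that keeps only the highest-scoring citation per content ID (it relies on sortedCitations from above already being ordered by score):
// Keep the best citation per source document
const bestPerContent = new Map<string, (typeof citations)[number]>();
for (const citation of sortedCitations) {
  const id = citation.content?.id;
  if (id && !bestPerContent.has(id)) {
    bestPerContent.set(id, citation); // first hit wins because the list is pre-sorted by score
  }
}
console.log(`${bestPerContent.size} unique sources`);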
Link to Source Content
const response = await graphlit.promptConversation('Find information about X');
const citations = response.promptConversation?.message?.citations || [];
citations?.forEach(citation => {
const contentId = citation.content?.id;
const contentName = citation.content?.name;
const page = citation.startPage;
// Generate link to view full document
const viewUrl = `https://app.example.com/content/${contentId}${page ? `#page=${page}` : ''}`;
console.log(`[${contentName}](${viewUrl})`);
});
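The response structure also exposes an optional content.uri; when it is present you can link straight to the original source instead of the app page (the app URL below is a placeholder, as in the example above):
// Prefer the source URI when the citation exposes one, otherwise fall back to the app link
citations?.forEach(citation => {
  const fallback = `https://app.example.com/content/${citation.content?.id}`;
  const target = citation.content?.uri ?? fallback;
  console.log(`[${citation.content?.name}](${target})`);
});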
Use Custom Model
// Create specification for specific model
const specResponse = await graphlit.createSpecification({
name: 'GPT-4o',
type: SpecificationTypes.Completion,
serviceType: ModelServiceTypes.OpenAI,
openAI: {
model: OpenAiModels.Gpt4O_128K
}
});
// Use custom specification
const response = await graphlit.promptConversation(
'Analyze this with more detail',
conversationId,
{
id: specResponse.createSpecification.id // Custom model
}
);
Filter by Collection
// Query specific collection
const response = await graphlit.promptConversation(
'What are the findings?',
conversationId,
undefined, // specification
undefined, // correlationId
{
collections: [
{ id: 'research-papers-collection-id' }
]
} // filter
);
// Only returns citations from the specified collection
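The collection ID above is a placeholder. If you only know the collection's name, you can look the ID up first; a hedged sketch, assuming the SDK's queryCollections method accepts a name filter (verify the exact field names against your SDK version):
// Look up a collection ID by name before filtering the conversation to it (assumed API shape)
const collectionsResponse = await graphlit.queryCollections({ name: 'Research Papers' });
const collectionId = collectionsResponse.collections?.results?.[0]?.id;
if (!collectionId) {
  throw new Error('Collection not found - create it or check the name');
}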
Key Differences: promptConversation vs streamAgent
| Feature | promptConversation | streamAgent |
| --- | --- | --- |
| Citations | ✅ Returns citations | ❌ No citations |
| Streaming | ❌ Waits for complete response | ✅ Real-time chunks |
| Tool calling | ❌ Not supported | ✅ Supported |
| Page numbers | ✅ Exact page refs | ❌ N/A |
| Use case | Q&A with sources | Chat UI streaming |
When to use promptConversation:
Need source citations
Need exact page numbers
Don't need streaming
Simple Q&A
When to use streamAgent:
Building chat UI with streaming
Need tool/function calling
Don't need citations
Common Issues
Issue: No citations returned
Solution: Citations depend on retrieval quality. Check that (a quick verification sketch follows this list):
Content is ingested and indexed
Search query matches content
Specification supports retrieval
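One way to rule out ingestion problems is to confirm that matching content exists before prompting. A hedged sketch, assuming the SDK's queryContents method and a search filter (check the exact fields against your SDK version):
// Sanity check: does any indexed content match the topic you're asking about? (assumed API shape)
const check = await graphlit.queryContents({ search: 'machine learning' });
const hits = check.contents?.results ?? [];
console.log(`${hits.length} matching content items found`);
if (hits.length === 0) {
  console.warn('No matching content - ingest or re-index before expecting citations');
}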
Issue: Page numbers missing
Solution: Page numbers are only available for (a fallback sketch follows this list):
PDFs with page structure
Documents (not web pages or emails)
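Rendering code should therefore treat page numbers as optional. A small sketch that degrades gracefully when startPage is absent:
// Show a page reference only when the citation actually has one
function formatSource(citation: { content?: { name?: string }; startPage?: number }) {
  const name = citation.content?.name ?? 'Unknown source';
  return citation.startPage != null ? `${name}, p. ${citation.startPage}` : name;
}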
Issue: Too many citations
Solution: Filter by relevance score or limit the count:
const topCitations = citations
?.filter(c => c.score > 0.7)
?.slice(0, 5);
Issue: Citations from wrong collection
Solution: Pass a collection filter in the filter parameter (see Filter by Collection above).
Issue: Want streaming with citations
Solution: Not possible. Citations are only returned by the synchronous promptConversation. Use streamAgent for streaming (no citations) or promptConversation for citations (no streaming).