Specifications

Create, manage and query LLM specifications.

Last updated 10 months ago


Overview

When creating a conversation about content, you are utilizing a Large Language Model (LLM), such as OpenAI GPT-4o. Each LLM provides a number of configuration settings, which allow you to tune the completion of the user prompt in various ways.

LLM specifications are a way to select a specific LLM and save its configuration, so it can be reused across multiple conversations.

Graphlit supports LLMs from a wide variety of hosted services:

  • Azure-hosted OpenAI

  • OpenAI

  • Anthropic

  • Mistral

  • Groq

  • Deepseek

  • Replicate

For more information on configuring these models in your specification, or on using your own model deployment or API key, see the service-specific pages: Azure OpenAI, OpenAI, Anthropic, Mistral, Groq, Deepseek, and Replicate.

In addition to model-specific parameters, specifications store the model's system prompt, which provides instructions to the LLM for how to complete the user prompt. This allows you to create multiple specifications as templates for various personas, such as a call-center agent or academic researcher.


Create Specification

The createSpecification mutation creates a specification from the provided name, serviceType and systemPrompt, and returns essential details of the new specification, including its ID, name, state, type and service type.

The LLM systemPrompt is a text input provided to the model to instruct it on what kind of content or responses to generate. It serves as the initial message or query that sets the context and guides the model's behavior, helping it generate text that is relevant, informative, or creative, depending on the user's needs.

When using the OPEN_AI service type, you can assign the openAI parameters for the model, model temperature, model probability and the completionTokenLimit.

When using the AZURE_OPEN_AI service type, you can assign the azureOpenAI parameters for the model, model temperature, model probability and the completionTokenLimit.

When using the ANTHROPIC service type, you can assign the anthropic parameters for the model, model temperature, model probability and the completionTokenLimit.

When using the REPLICATE service type, you can assign the replicate parameters for the model, model temperature, model probability and the completionTokenLimit.

Model probability (also known as top-p, or nucleus sampling) is an alternative to sampling with temperature, where the model considers only the tokens comprising the given cumulative probability mass. The default value is 1; a value of 0.1 means only the tokens comprising the top 10% probability mass are considered.

Completion token limit is the maximum number of tokens which the LLM will return in the prompt completion.
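As a sketch of how these tuning fields fit together, the helper below assembles the `openAI` block of a `SpecificationInput` and validates the documented ranges. The function name and validation logic are illustrative conveniences, not part of the Graphlit API.

```python
def openai_parameters(model: str,
                      temperature: float = 0.0,
                      probability: float = 1.0,
                      completion_token_limit: int = 512) -> dict:
    """Assemble the `openAI` block of a SpecificationInput (illustrative helper).

    Temperature ranges over [0, 2]; probability (top-p) ranges over (0, 1],
    where the default of 1 applies no nucleus truncation.
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0 and 2")
    if not 0.0 < probability <= 1.0:
        raise ValueError("probability must be in (0, 1]")
    return {
        "model": model,
        "temperature": temperature,
        "probability": probability,
        "completionTokenLimit": completion_token_limit,
    }
```

For example, `openai_parameters("GPT35_TURBO_16K_0613", probability=0.2)` produces the `openAI` object used in the mutation variables below.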

Mutation:

mutation CreateSpecification($specification: SpecificationInput!) {
  createSpecification(specification: $specification) {
    id
    name
    state
    type
    serviceType
  }
}

Variables:

{
  "specification": {
    "type": "COMPLETION",
    "serviceType": "OPEN_AI",
    "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant pages numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
    "openAI": {
      "model": "GPT35_TURBO_16K_0613",
      "temperature": 0.0,
      "probability": 0.2,
      "completionTokenLimit": 512
    },
    "name": "ML Researcher"
  }
}

Response:

{
  "type": "COMPLETION",
  "serviceType": "OPEN_AI",
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "name": "ML Researcher",
  "state": "ENABLED"
}
Update Specification

The updateSpecification mutation updates an existing specification, identified by its id, accepting a new name and/or any other fields you want to change.

Here we are changing the model service type to AZURE_OPEN_AI and increasing the completionTokenLimit to 768 tokens, as well as enabling the embedding of citations in the conversation response with embedCitations.

Note that updating a specification overwrites the stored configuration with the provided fields, rather than merging the supplied fields into it. For example, if the system prompt was provided on create but omitted from the update, the specification will end up with no system prompt assigned.
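Given these overwrite semantics, a cautious client-side pattern is to read the current specification first, apply your changes locally, and send the full object back. The `merged_update` helper below is an illustrative sketch, not part of the Graphlit API; fetching the current specification is left to your client.

```python
def merged_update(current: dict, changes: dict) -> dict:
    """Overlay the fields to change onto the stored specification, so that
    fields omitted from the update are not silently cleared (illustrative helper)."""
    update = dict(current)   # start from everything already stored
    update.update(changes)   # then overlay the new values
    return update
```

For example, merging `{"serviceType": "AZURE_OPEN_AI"}` into a previously fetched specification preserves its `systemPrompt`.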

Mutation:

mutation UpdateSpecification($specification: SpecificationUpdateInput!) {
  updateSpecification(specification: $specification) {
    id
    name
    state
    type
  }
}

Variables:

{
  "specification": {
    "serviceType": "AZURE_OPEN_AI",
    "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant pages numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
    "azureOpenAI": {
      "model": "GPT35_TURBO_16K",
      "temperature": 0.0,
      "completionTokenLimit": 768
    },
    "strategy": {
      "embedCitations": true
    },
    "id": "e652c758-4eaf-492e-9b68-16471fdb8f85"
  }
}

Response:

{
  "type": "COMPLETION",
  "serviceType": "AZURE_OPEN_AI",
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "name": "ML Researcher",
  "state": "ENABLED"
}
Delete Specification

The deleteSpecification mutation allows the deletion of a specification by utilizing the id parameter, and it returns the ID and state of the deleted specification.

Mutation:

mutation DeleteSpecification($id: ID!) {
  deleteSpecification(id: $id) {
    id
    state
  }
}

Variables:

{
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85"
}

Response:

{
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "state": "DELETED"
}
Get Specification

The specification query allows you to retrieve specific details of a specification by providing the id parameter, including the ID, name, creation date, state, owner ID, and type associated with the specification. You can also retrieve the openAI details for the LLM specification.

Query:

query GetSpecification($id: ID!) {
  specification(id: $id) {
    id
    name
    creationDate
    owner {
      id
    }
    state
    type
    serviceType
    systemPrompt
    openAI {
      tokenLimit
      completionTokenLimit
      model
      temperature
    }
  }
}

Variables:

{
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85"
}

Response:

{
  "type": "COMPLETION",
  "serviceType": "OPEN_AI",
  "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant pages numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
  "openAI": {
    "model": "GPT35_TURBO_16K_0613",
    "temperature": 0.0,
    "completionTokenLimit": 512
  },
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "name": "ML Researcher",
  "state": "ENABLED",
  "creationDate": "2023-12-16T20:51:16Z",
  "owner": {
    "id": "5a9d0a48-e8f3-47e6-b006-3377472bac47"
  }
}
Query Specifications

The specifications query allows you to retrieve all specifications. It returns a list of specification results, including the ID, name, creation date, state, owner ID, and type for each specification. It also allows you to retrieve the specific model parameters.


Query:

query QuerySpecifications($filter: SpecificationFilter!) {
  specifications(filter: $filter) {
    results {
      id
      name
      creationDate
      state
      owner {
        id
      }
      type
      serviceType
      systemPrompt
      openAI {
        tokenLimit
        completionTokenLimit
        modelName
        temperature
      }
    }
  }
}

Variables:

{
  "filter": {
    "offset": 0,
    "limit": 100
  }
}

Response:

{
  "results": [
    {
      "type": "COMPLETION",
      "serviceType": "OPEN_AI",
      "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant pages numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
      "openAI": {
        "modelName": "gpt-3.5-turbo",
        "temperature": 0.0,
        "completionTokenLimit": 256
      },
      "id": "bf20d121-8332-405f-bfe2-7789b9e19215",
      "name": "Machine Learning Researcher",
      "state": "ENABLED",
      "creationDate": "2023-07-04T01:12:31Z",
      "owner": {
        "id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
      }
    }
  ]
}
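The offset and limit filter fields above support paging through large numbers of specifications. The sketch below shows one way to iterate every page; `fetch_page` is a placeholder for whatever function executes the query and returns one page's `results` list.

```python
from typing import Callable, Iterator

def iterate_specifications(fetch_page: Callable[[int, int], list],
                           page_size: int = 100) -> Iterator[dict]:
    """Page through the `specifications` query using its offset/limit filter.

    `fetch_page(offset, limit)` runs the query for one page and returns
    the `results` list (empty when the offset is past the end).
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        yield from page
        if len(page) < page_size:
            break  # short page: no more results
        offset += page_size
```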

Query Specifications By Name

The specifications query also allows you to retrieve specifications matching specific filter criteria, via the name parameter. In this example, the name is set to "Researcher". It returns a list of specification results containing the ID, name, creation date, state, owner ID, and type for each matching specification.

Query:

query QuerySpecifications($filter: SpecificationFilter!) {
  specifications(filter: $filter) {
    results {
      id
      name
      creationDate
      state
      owner {
        id
      }
      type
      serviceType
      systemPrompt
      openAI {
        tokenLimit
        completionTokenLimit
        modelName
        temperature
      }
    }
  }
}

Variables:

{
  "filter": {
    "name": "Researcher",
    "offset": 0,
    "limit": 100
  }
}

Response:

{
  "results": [
    {
      "type": "COMPLETION",
      "serviceType": "OPEN_AI",
      "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant pages numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
      "openAI": {
        "modelName": "gpt-3.5-turbo",
        "temperature": 0.0,
        "completionTokenLimit": 256
      },
      "id": "bf20d121-8332-405f-bfe2-7789b9e19215",
      "name": "Machine Learning Researcher",
      "state": "ENABLED",
      "creationDate": "2023-07-04T01:12:31Z",
      "owner": {
        "id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
      }
    }
  ]
}

Specifications provide several advanced configuration properties, such as the conversation strategy and the ability to tune the semantic search of content sources which get formatted into the LLM prompt.

Writing your system prompt to get the expected LLM output can take some trial and error; OpenAI has published best practices for prompt engineering along with prompt engineering techniques, and there are also published system message framework and template recommendations for LLMs.

All models accept the model sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
