Specifications
Create, manage and query LLM specifications.
Overview
When creating a conversation about content, you are utilizing a Large Language Model (LLM), such as OpenAI GPT-4o. Each LLM provides a number of configuration settings that allow you to tune the completion of the user prompt in various ways.
LLM specifications are a way to select a specific LLM and save its configuration, so it can be reused across multiple conversations.
Graphlit supports LLMs from a wide variety of hosted services:
Azure-hosted OpenAI
OpenAI
Anthropic
Mistral
Groq
Deepseek
Replicate
For more information on configuring these models in your specification, or on using your own model deployment or API key, see the model configuration documentation.
In addition to model-specific parameters, specifications store the model's system prompt, which provides instructions to the LLM for how to complete the user prompt. This allows you to create multiple specifications as templates for various personas, such as a call-center agent or academic researcher.
Specifications provide several advanced configuration properties, such as conversation strategy, and the ability to tune the semantic search of content sources which get formatted into the LLM prompt.
Writing your system prompt to get the expected LLM output can take some trial and error; OpenAI has published best practices for prompt engineering.
Create Specification
The createSpecification mutation enables the creation of a specification by accepting the specification name, serviceType and systemPrompt, and it returns essential details, including the ID, name, state, type and service type of the newly created specification.
The LLM systemPrompt is a text input provided to the model to instruct it on what kind of content or responses to generate. It serves as the initial message or query that sets the context and guides the model's behavior, helping it generate text that is relevant, informative, or creative, depending on the user's needs.
When using the OPEN_AI service type, you can assign the openAI parameters for the model, model temperature, model probability and the completionTokenLimit. The same pattern applies to the other service types: AZURE_OPEN_AI takes the azureOpenAI parameters, ANTHROPIC takes the anthropic parameters, and REPLICATE takes the replicate parameters, each supporting the model, temperature, probability and completionTokenLimit settings.
All models accept a sampling temperature between 0 and 2. Higher values such as 0.8 make the output more random, while lower values such as 0.2 make it more focused and deterministic.
Model probability (also known as top-p, or nucleus sampling) is an alternative to sampling with temperature, where the model considers only the tokens comprising the top probability mass. The default value is 1; a value of 0.1 means only the tokens comprising the top 10% probability mass are considered.
The completion token limit is the maximum number of tokens which the LLM will return in the prompt completion.
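As a minimal sketch of the ranges described above, the helper below builds an openAI parameter block and validates the sampling settings before they are sent in a specification. The function name and defaults are illustrative, not part of the Graphlit API.

```python
def sampling_params(temperature=0.0, probability=1.0, completion_token_limit=512):
    """Build the openAI parameter block for a specification input.

    Validates the documented ranges: temperature in [0, 2],
    probability (top-p) in (0, 1].
    """
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0 and 2")
    if not 0.0 < probability <= 1.0:
        raise ValueError("probability (top-p) must be in (0, 1]")
    return {
        "temperature": temperature,
        "probability": probability,
        "completionTokenLimit": completion_token_limit,
    }
```

Note that temperature and probability are alternatives; in practice you would typically tune one and leave the other at its default.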
Mutation:
mutation CreateSpecification($specification: SpecificationInput!) {
  createSpecification(specification: $specification) {
    id
    name
    state
    type
    serviceType
  }
}
Variables:
{
  "specification": {
    "type": "COMPLETION",
    "serviceType": "OPEN_AI",
    "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant page numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
    "openAI": {
      "model": "GPT35_TURBO_16K_0613",
      "temperature": 0.0,
      "probability": 0.2,
      "completionTokenLimit": 512
    },
    "name": "ML Researcher"
  }
}
Response:
{
  "type": "COMPLETION",
  "serviceType": "OPEN_AI",
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "name": "ML Researcher",
  "state": "ENABLED"
}
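The mutation above can be executed over HTTP like any GraphQL request. The sketch below builds such a request with the Python standard library; the endpoint URL and bearer-token authentication are assumptions for illustration, not confirmed Graphlit details.

```python
import json
import urllib.request

CREATE_SPECIFICATION = """
mutation CreateSpecification($specification: SpecificationInput!) {
  createSpecification(specification: $specification) {
    id
    name
    state
    type
    serviceType
  }
}
"""

def build_request(url, token, specification):
    """Build an HTTP POST carrying the mutation and its variables."""
    payload = json.dumps({
        "query": CREATE_SPECIFICATION,
        "variables": {"specification": specification},
    }).encode("utf-8")
    return urllib.request.Request(
        url,  # hypothetical GraphQL endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # assumed auth scheme
        },
    )

# Passing the request to urllib.request.urlopen(...) would execute the mutation.
```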
Update Specification
The updateSpecification mutation enables updating a specification by accepting its id along with any fields you want to change, such as the name.
Here we are changing the model service type to AZURE_OPEN_AI and increasing the completionTokenLimit to 768 tokens, as well as enabling the embedding of citations in the conversation response with embedCitations.
Note that updating the specification will overwrite all provided fields, rather than merging the supplied fields. For example, if the system prompt was provided in the create but not in the update, the specification will end up having no system prompt assigned.
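Because of this overwrite behavior, a safe pattern is to read the existing specification first, merge your changes on top, and send the full object back. A minimal sketch, assuming you have already fetched the existing specification as a dictionary:

```python
def merged_update(existing, changes):
    """Return a full update payload: copy every field from the existing
    specification, then apply the changed fields on top, so fields that
    were not changed are not silently dropped by the overwrite."""
    update = dict(existing)
    update.update(changes)
    return update

# Example: switch service type without losing the system prompt.
existing = {
    "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
    "serviceType": "OPEN_AI",
    "systemPrompt": "You are a Machine Learning researcher...",
}
payload = merged_update(existing, {"serviceType": "AZURE_OPEN_AI"})
```

The merge is shallow; nested objects such as openAI would need the same treatment if you change only part of them.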
Mutation:
mutation UpdateSpecification($specification: SpecificationUpdateInput!) {
  updateSpecification(specification: $specification) {
    id
    name
    state
    type
  }
}
Variables:
{
  "specification": {
    "serviceType": "AZURE_OPEN_AI",
    "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant page numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
    "azureOpenAI": {
      "model": "GPT35_TURBO_16K",
      "temperature": 0.0,
      "completionTokenLimit": 768
    },
    "strategy": {
      "embedCitations": true
    },
    "id": "e652c758-4eaf-492e-9b68-16471fdb8f85"
  }
}
Response:
{
  "type": "COMPLETION",
  "serviceType": "AZURE_OPEN_AI",
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "name": "ML Researcher",
  "state": "ENABLED"
}
Delete Specification
The deleteSpecification mutation allows the deletion of a specification by utilizing the id parameter, and it returns the ID and state of the deleted specification.
Mutation:
mutation DeleteSpecification($id: ID!) {
  deleteSpecification(id: $id) {
    id
    state
  }
}
Variables:
{
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85"
}
Response:
{
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "state": "DELETED"
}
Get Specification
The specification query allows you to retrieve specific details of a specification by providing the id parameter, including the ID, name, creation date, state, owner ID, and type associated with the specification. You can also retrieve the openAI details for the LLM specification.
Query:
query GetSpecification($id: ID!) {
  specification(id: $id) {
    id
    name
    creationDate
    owner {
      id
    }
    state
    type
    serviceType
    systemPrompt
    openAI {
      tokenLimit
      completionTokenLimit
      model
      temperature
    }
  }
}
Variables:
{
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85"
}
Response:
{
  "type": "COMPLETION",
  "serviceType": "OPEN_AI",
  "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant page numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
  "openAI": {
    "model": "GPT35_TURBO_16K_0613",
    "temperature": 0.0,
    "completionTokenLimit": 512
  },
  "id": "e652c758-4eaf-492e-9b68-16471fdb8f85",
  "name": "ML Researcher",
  "state": "ENABLED",
  "creationDate": "2023-12-16T20:51:16Z",
  "owner": {
    "id": "5a9d0a48-e8f3-47e6-b006-3377472bac47"
  }
}
Query Specifications
The specifications query allows you to retrieve all specifications. It returns a list of specification results, including the ID, name, creation date, state, owner ID, and type for each specification. It also allows you to retrieve the specific model parameters.
Query:
query QuerySpecifications($filter: SpecificationFilter!) {
  specifications(filter: $filter) {
    results {
      id
      name
      creationDate
      state
      owner {
        id
      }
      type
      serviceType
      systemPrompt
      openAI {
        tokenLimit
        completionTokenLimit
        modelName
        temperature
      }
    }
  }
}
Variables:
{
  "filter": {
    "offset": 0,
    "limit": 100
  }
}
Response:
{
  "results": [
    {
      "type": "COMPLETION",
      "serviceType": "OPEN_AI",
      "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant page numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
      "openAI": {
        "modelName": "gpt-3.5-turbo",
        "temperature": 0.0,
        "completionTokenLimit": 256
      },
      "id": "bf20d121-8332-405f-bfe2-7789b9e19215",
      "name": "Machine Learning Researcher",
      "state": "ENABLED",
      "creationDate": "2023-07-04T01:12:31Z",
      "owner": {
        "id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
      }
    }
  ]
}
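The filter's offset and limit fields support paging through large result sets. A minimal sketch of that loop, where run_query stands in for whatever function you use to send the specifications query and return its payload (an assumption, not a Graphlit client API):

```python
def fetch_all_specifications(run_query, page_size=100):
    """Page through the specifications query using the offset/limit filter.

    run_query(filter) is assumed to send the GraphQL query with the given
    filter and return the specifications payload: {"results": [...]}.
    """
    results, offset = [], 0
    while True:
        page = run_query({"offset": offset, "limit": page_size})["results"]
        results.extend(page)
        if len(page) < page_size:  # short page means we reached the end
            return results
        offset += page_size
```

A short final page (fewer results than the limit) signals that all specifications have been retrieved.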
Query Specifications By Name
The specifications query allows you to retrieve specifications based on specific filter criteria, via the name parameter. In this example, the name is set to "Researcher". It returns a list of specification results containing the ID, name, creation date, state, owner ID, and type for each matching specification.
Query:
query QuerySpecifications($filter: SpecificationFilter!) {
  specifications(filter: $filter) {
    results {
      id
      name
      creationDate
      state
      owner {
        id
      }
      type
      serviceType
      systemPrompt
      openAI {
        tokenLimit
        completionTokenLimit
        modelName
        temperature
      }
    }
  }
}
Variables:
{
  "filter": {
    "name": "Researcher",
    "offset": 0,
    "limit": 100
  }
}
Response:
{
  "results": [
    {
      "type": "COMPLETION",
      "serviceType": "OPEN_AI",
      "systemPrompt": "You are a Machine Learning researcher, who is intelligent, experienced and detail oriented. Use the provided content sources to answer the request the user has sent. Please cite the sources and relevant page numbers with your answer, as if you were writing technical documentation. Combine any sources within the same document.",
      "openAI": {
        "modelName": "gpt-3.5-turbo",
        "temperature": 0.0,
        "completionTokenLimit": 256
      },
      "id": "bf20d121-8332-405f-bfe2-7789b9e19215",
      "name": "Machine Learning Researcher",
      "state": "ENABLED",
      "creationDate": "2023-07-04T01:12:31Z",
      "owner": {
        "id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
      }
    }
  ]
}