Conversations
Create, manage and query Conversations.
Overview
Once you have ingested content into Graphlit, and built a knowledge graph from your data, we make it easy to have a conversation about content.
Conversations can be started in two ways:
Create a conversation, specifying the details of the Large Language Model (LLM) you wish to use and assigning a content filter to constrain the content to be 'conversed over', and then prompt the newly created conversation.
Or, jump right into a conversation across all the content in your project by providing a user prompt to start the conversation. This will use the default LLM specification for your project.
Conversations are named, but the name is not required to be unique across the Graphlit project. Each conversation has a unique id field, in GUID format.
Conversations are created in the OPENED state, which means that messages can be added to or removed from them. See the closeConversation mutation for the ability to close (or lock) a conversation, so that no messages can be added or removed.
For more information on LLM specifications, see here.
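Since the Graphlit API is GraphQL, a client ultimately posts a query document plus variables as JSON. Here is a minimal sketch of assembling that request body for the promptConversation mutation shown below; the helper name and the way you would transmit the body (e.g. an HTTP POST with a bearer token) are assumptions for illustration, not part of the documented API.

```python
# Sketch: assemble a GraphQL request body for promptConversation.
# build_prompt_request is a hypothetical helper, not part of the Graphlit SDK.
import json
from typing import Optional

PROMPT_CONVERSATION = """
mutation PromptConversation($prompt: String!, $id: ID) {
  promptConversation(prompt: $prompt, id: $id) {
    conversation { id }
    message { role message tokens completionTime }
    messageCount
  }
}
"""

def build_prompt_request(prompt: str, conversation_id: Optional[str] = None) -> dict:
    """Build the JSON body; omit 'id' to start a brand-new conversation."""
    variables = {"prompt": prompt}
    if conversation_id is not None:
        variables["id"] = conversation_id
    return {"query": PROMPT_CONVERSATION, "variables": variables}

# Starting a new conversation: no 'id' variable is sent.
body = build_prompt_request("Provide a summary of the term 'unstructured data'.")
print(json.dumps(body["variables"]))
```

Passing a conversation ID as the second argument adds the `id` variable, which continues an existing conversation instead of creating a new one.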
Operations
Prompt Conversation
The promptConversation mutation enables the creation of a conversation by accepting the user prompt. It returns essential details, including the conversation ID, the LLM's response to the user prompt (including token usage and time to complete the prompt), and the total message count of the conversation.
If you don't provide a conversation ID as an input parameter to the mutation, a new conversation will be created.
Prompting a conversation adds a message with the USER role (the user-provided prompt) and a message with the ASSISTANT role (the LLM completion). So, each turn of the conversation adds two messages to the conversation.
Mutation:
mutation PromptConversation($prompt: String!, $id: ID) {
promptConversation(prompt: $prompt, id: $id) {
conversation {
id
}
message {
role
author
message
tokens
completionTime
}
messageCount
}
}
Variables:
{
"prompt": "Provide a summary of the term 'unstructured data', in 200 words or less."
}
Response:
{
"conversation": {
"id": "fc1d1795-1e51-463c-8f83-b60531a94b71"
},
"message": {
"role": "ASSISTANT",
"message": "Unstructured data refers to any data that does not have a predefined data model or structure. This type of data is typically not organized in a database or spreadsheet, making it difficult to analyze and process using traditional data analysis tools. Examples of unstructured data include text documents, images, audio and video files, social media posts, and sensor data from IoT devices. Unstructured data is often generated in large volumes and at high velocity, making it a challenge to store and manage effectively. However, advances in machine learning and natural language processing have enabled organizations to extract valuable insights from unstructured data, which can be used to improve decision-making and gain a competitive advantage. Effective management and analysis of unstructured data can provide valuable insights into customer behavior, market trends, and other key business metrics, making it a valuable asset for organizations across a range of industries.",
"tokens": 170,
"completionTime": "PT4.497279S"
},
"messageCount": 2
}
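Note that completionTime values such as "PT4.497279S" are ISO-8601 time durations. A small stdlib-only parser for the hour/minute/second form used in these responses might look like this (a sketch; a full ISO-8601 parser would also handle date designators):

```python
# Sketch: convert ISO-8601 time durations (PT...H...M...S) to seconds.
import re

_DURATION = re.compile(
    r"^PT(?:(?P<h>\d+(?:\.\d+)?)H)?"
    r"(?:(?P<m>\d+(?:\.\d+)?)M)?"
    r"(?:(?P<s>\d+(?:\.\d+)?)S)?$"
)

def duration_seconds(value: str) -> float:
    """Parse an ISO-8601 time duration such as 'PT4.497279S' into seconds."""
    match = _DURATION.match(value)
    if match is None:
        raise ValueError(f"unsupported duration: {value!r}")
    hours = float(match.group("h") or 0)
    minutes = float(match.group("m") or 0)
    seconds = float(match.group("s") or 0)
    return hours * 3600 + minutes * 60 + seconds

print(duration_seconds("PT4.497279S"))  # 4.497279
```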
Continue Conversation
Once you have an existing conversation, the promptConversation mutation enables the continuation of the conversation by accepting the conversation ID and a user prompt.
The mutation returns essential details, including the conversation ID, the LLM's response to the user prompt (including token usage and time to complete the prompt), and the total message count of the conversation.
Mutation:
mutation PromptConversation($prompt: String!, $id: ID) {
promptConversation(prompt: $prompt, id: $id) {
conversation {
id
}
message {
role
author
message
tokens
completionTime
}
messageCount
}
}
Variables:
{
"prompt": "Sorry I meant, can you provide a summary of the term 'unstructured data' in 50 words or less?",
"id": "fc1d1795-1e51-463c-8f83-b60531a94b71"
}
Response:
{
"conversation": {
"id": "fc1d1795-1e51-463c-8f83-b60531a94b71"
},
"message": {
"role": "ASSISTANT",
"message": "Unstructured data refers to data that lacks a predefined structure or organization, making it difficult to analyze using traditional data analysis tools. This type of data includes text documents, images, videos, and social media posts, and is often generated in large volumes and at high velocity. Advances in machine learning and natural language processing have enabled organizations to extract valuable insights from unstructured data.",
"tokens": 74,
"completionTime": "PT2.5219146S"
},
"messageCount": 4
}
Prompt with Citations
When you configure citations in the specification and then prompt a conversation, you can ask for the citations with the completed response.
Each citation contains an entity reference to the content, an index that matches the footnotes added to the completion (i.e. [0]), the text that was referenced in the citation, and either the pageNumber, if the source is a document or text, or a startTime and endTime pair, if the source was audio or video.
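A client rendering these citations might turn each one into a footnote line, choosing the page number for document/text sources and the time range for audio/video sources. Here is a minimal sketch (the format_citation helper and output layout are assumptions, not part of the API):

```python
# Sketch: render a citation object as a footnote line, picking pageNumber
# for documents/text and the startTime/endTime pair for audio/video.
def format_citation(citation: dict) -> str:
    index = citation["index"]
    if citation.get("pageNumber") is not None:
        where = f"page {citation['pageNumber']}"
    else:
        where = f"{citation['startTime']}-{citation['endTime']}"
    return f"[{index}] content {citation['content']['id']} ({where})"

# Sample data shaped like the citations array in the response below.
citations = [
    {"content": {"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"},
     "index": 0, "pageNumber": None, "startTime": "PT35M", "endTime": "PT36M"},
    {"content": {"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"},
     "index": 1, "pageNumber": None, "startTime": "PT22M", "endTime": "PT23M"},
]
for citation in citations:
    print(format_citation(citation))
```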
Mutation:
mutation PromptConversation($prompt: String!, $id: ID) {
promptConversation(prompt: $prompt, id: $id) {
conversation {
id
}
message {
role
author
message
citations {
content {
id
}
index
text
startTime
endTime
pageNumber
}
tokens
completionTime
timestamp
}
messageCount
}
}
Variables:
{
"prompt": "Provide a summary of the term 'unstructured data', in 200 words or less.",
"id": "2982096f-8a3a-4482-ac6d-b0c917bafff3"
}
Response:
{
"conversation": {
"id": "ba3d0b58-98df-4ba1-92a2-1df3896a9a6d"
},
"message": {
"role": "ASSISTANT",
"message": "Unstructured data encompasses a broad range of data types such as imagery, audio, 3D, documents, and email, lacking a predefined data model or structure. It poses challenges for analysis and utilization as it doesn't neatly fit into traditional databases or spreadsheets. This type of data originates from diverse sources like satellite data, imaging, IoT, drones, robots, and mobile phones. Unstructured data also includes metadata, which can be categorized into first, second, and third order metadata, providing contextualization and inferences. Geospatial data adds an extra layer of structure due to its geographical context. While unstructured data presents opportunities for insights and value, it requires specialized tools and techniques for organization, analysis, and extraction of meaningful information. [0][1][2][3][4]\n\n",
"citations": [
{
"content": {
"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"
},
"index": 0,
"text": "Trying to pull in other data sources.\nThat's our long term vision. I mean, it's really this kind of knowledge hub for\nthe real world,\nin in a in a business case,\nenterprise\nand business sense essentially. But when you think about the the big drivers of unstructured data today, What do you what do you think about? Do you think about satellite data? Do you think about imaging? Do you think about IoT?\nWhere do you go? Yeah. I I think, I mean, we typically look at the 3 main sources of data,\nfor image imaging video we get, and even what generates 3 d is. It's It's drones, robots, or mobile phones. So it'd be like a spot robot, a drone, or just somebody with an iPhone walking around. Those are, like, the three main sources of data that we get other than documentation,\nor CAD drawings and things like that. So But, typically, those\nare data about\nyour real world assets, and so they're sort of they're the documents are like maintenance reports, or there might be a Zoom meeting that was recorded about, say, an inspection going on.",
"startTime": "PT35M",
"endTime": "PT36M"
},
{
"content": {
"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"
},
"index": 1,
"text": "back to them to go look at the data later. It may it may be something like that. So you've given a bunch of different examples of of projects that you've been involved with or discussed with with other organizations.\nWho typically comes to you? Like, who comes to you and says, look. We've we've got all this unstructured data. Can you structure it for us? Yeah. I mean, we're we're still early. I mean, we just launched about a month ago. So we're we're kind of more the opposite. We're going out trying to find people. But at trade shows have been really useful. We've got done a couple conferences where people come up to the booth, And we've gotten some really interesting use cases. I mean, one in the geospatial area was an aerial survey company, and\nthey\nTypically, I mean, they they fly over. They're actually very savvy around, like, photogrammetry\nand and the data capture they're doing,\nbut they're poor at data management. I mean, they're keeping their data on SharePoint.\nThey're not cloud native yet. They don't really have a search angle to what they're doing.",
"startTime": "PT22M",
"endTime": "PT23M"
},
{
"content": {
"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"
},
"index": 2,
"text": "Yeah. For sure. Yes. Kirk Marple. I mean, I obviously had I founded Unstruct Data. I've been a long time software developer and actually\nJust remembered yesterday that I've been dealing with geospatial data back even from my first job, dealing with maps on laserdiscs. It goes back that far. So I've been more in the media space. So media software space, I guess, I consider, but I dabbled time to time in geospatial and now a bit more focused on it. Well, I I think we'll end up coming back to that Later on to your experience with the media space.\nBut but let's start here. What tell tell me what unstructured data is for you? For us, it's really I mean, everything. I mean, from imagery, audio,\nbut also 3 d, I mean, geometry point clouds, as well as documents and email. So it's a broad set of data. Back in I came from the video space and media space, and we would just call them files. I mean, file based workflows.",
"startTime": "PT1M",
"endTime": "PT2M"
},
{
"content": {
"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"
},
"index": 3,
"text": "or, say, a document getting getting terms that are found. So we gotta call that 2nd order metadata. But then 3rd order metadata is really more inferences\nof, okay, I'm looking at, let's say, a conveyor belt in a picture. So someone's walking around on their maintenance route. They took an iPhone image,\nthat has excess metadata in it. They run it through a computer vision algorithm. It can see the conveyor belt. But then 3rd order metadata would be that conveyor belt is actually linked in an SAP database somewhere. And so there's that contextualization,",
"startTime": "PT5M",
"endTime": "PT6M"
},
{
"content": {
"id": "2f4e224b-d19b-4455-b062-864cbcf4f194"
},
"index": 4,
"text": "more commercial building data?\nAll that kind of stuff is It's so so non obvious, just when there's you're just looking it up at a folder on s 3. Could you imagine a world where, like, I came to you with some data, lots lots of sort of different kinds of data. So please, you know, run this through your system, create metadata around it, make it searchable, make it discoverable.\nLet let me know What's going on with my data right here now and and what's happened in the past? And then expose that as some kind of, may maybe a web catalog service or some something like that, something That was searchable on the web where I can",
"startTime": "PT25M",
"endTime": "PT26M"
}
],
"tokens": 204,
"completionTime": "PT1.8572716S",
"timestamp": "2024-01-22T23:25:13.849Z"
},
"messageCount": 4
}
Prompt with Tool Calling
When you configure a tool webhook callback URI in the specification and then prompt a conversation, your webhook's response will be integrated into the LLM prompt completion response.
Example webhook payload:
{
"data": {
"conversation": {
"id": "3a606b2d-7751-43fb-9b9b-8f239ae771fd",
"specification_id": "ca16ff5d-7ccf-4490-a32f-6e1c8d4c379e"
},
"tool": {
"id": "call_72SseVMAK7DkDEZszFEcXvkE",
"name": "get_weather",
"arguments": {
"location": "Seattle, WA",
"format": "celsius"
}
},
"scope": {
"owner_id": "5a9d0a48-e8f3-47e6-b006-3377472bac47",
"project_id": "5a9d0a48-e8f3-47e6-b006-3377472bac47"
}
},
"created_at": 1705986758,
"object": "event",
"type": "tool.callback"
}
Example webhook response:
{
"temperature": "47F"
}
Mutation:
mutation PromptConversation($prompt: String!, $id: ID) {
promptConversation(prompt: $prompt, id: $id) {
conversation {
id
}
message {
role
author
message
citations {
content {
id
}
index
text
startTime
endTime
pageNumber
}
tokens
completionTime
timestamp
}
messageCount
}
}
Variables:
{
"prompt": "What is the weather in Seattle, WA?",
"id": "d681b62f-eb4e-491c-88e6-9d73ecc89d2b"
}
Response:
{
"conversation": {
"id": "d681b62f-eb4e-491c-88e6-9d73ecc89d2b"
},
"message": {
"role": "ASSISTANT",
"message": "The current temperature in Seattle, WA is 47°F.",
"tokens": 51,
"completionTime": "PT5.2072934S",
"timestamp": "2024-01-23T05:25:26.399Z"
},
"messageCount": 2
}
Create Conversation
The createConversation mutation enables the creation of a conversation by accepting the conversation name. It returns essential details, including the ID, name, state, and type of the newly created conversation.
Mutation:
mutation CreateConversation($conversation: ConversationInput!) {
createConversation(conversation: $conversation) {
id
name
state
type
}
}
Variables:
{
"conversation": {
"name": "Ask A Question"
}
}
Response:
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED"
}
Create Conversation With Specification
The createConversation mutation also accepts a desired specification, in addition to the conversation name, and it returns essential details, including the ID, name, state, and type of the newly created conversation.
Specifications contain the configuration for the LLM, which is used to complete the user prompts provided to the conversation. For example, the LLM system prompt, model temperature, and LLM service to use (i.e. Azure OpenAI).
For more details on specifications, see here.
Mutation:
mutation CreateConversation($conversation: ConversationInput!) {
createConversation(conversation: $conversation) {
id
name
state
type
}
}
Variables:
{
"conversation": {
"specification": {
"id": "08746105-5c2a-4f59-8170-eedddd58faf7"
},
"name": "Ask A Question"
}
}
Response:
{
"type": "CONTENT",
"id": "59e6411d-e0a3-4ef5-ac77-d7014a889bcc",
"name": "Ask A Question",
"state": "OPENED"
}
Create Conversation About Podcasts
The createConversation mutation also accepts a desired content filter, in addition to the conversation name, and it returns essential details, including the ID, name, state, and type of the newly created conversation.
The content filter provides a way to constrain the set of content which acts as the context for the conversation.
For example, here we are asking Graphlit to have a conversation about FILE content types which are AUDIO files.
Content filters accept a variety of filtering options, such as by feed, by collection, by date and more. See here for more detailed information on what filter options are supported.
Mutation:
mutation CreateConversation($conversation: ConversationInput!) {
createConversation(conversation: $conversation) {
id
name
state
type
}
}
Variables:
{
"conversation": {
"filter": {
"types": [
"FILE"
],
"fileTypes": [
"AUDIO"
]
},
"name": "Ask A Question About Podcasts"
}
}
Response:
{
"type": "CONTENT",
"id": "1f4bc1d5-00ce-41c5-8a0c-3ce1374742d3",
"name": "Ask A Question About Podcasts",
"state": "OPENED"
}
Publish Conversation
The publishConversation mutation gives the ability to publish an existing conversation into a document or audio content format.
The publishing operation creates a new content object, which will be ingested into Graphlit. You can provide a workflow with the publishing operation to control how the published content is prepared, extracted, etc.
Example:
Here is a conversation about GPT-4, published as Markdown content. By default, the heading is assigned as the conversation name.
# Can you explain the benefits of GPT-4?
**User**: Can you explain the benefits of GPT-4?
**Assistant**: GPT-4 is a large multimodal model that exhibits human-level performance on various professional and academic benchmarks, such as passing a simulated bar exam with a score around the top 10% of test takers.
GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than its predecessor, GPT-3.5, making it suitable for handling complex tasks and providing accurate and nuanced responses.
It has a context length of 8,192 tokens and can accept visual inputs, generating text outputs given inputs consisting of interspersed text and images, which expands its capabilities to handle vision and language tasks.
GPT-4 has shown improved performance in multiple languages, outperforming its predecessor and other language models in various languages, including low-resource languages like Latvian, Welsh, and Swahili.
The model incorporates additional safety measures, such as a safety reward signal during training to reduce harmful outputs, and a zero-shot classifier judging safety boundaries and completion style on safety-related prompts, enhancing its ability to refuse requests for dangerous content.
GPT-4 has significantly reduced hallucinations relative to previous models and has made progress on external benchmarks like TruthfulQA, demonstrating its improved ability to separate fact from incorrect statements.
It has been used internally with great impact on functions like support, sales, content moderation, and programming, showcasing its versatility and potential to improve various aspects of human interaction and productivity.
The model is available for use through ChatGPT Plus subscriptions and API access, with plans to scale up and optimize its usage over time, making it accessible for a wide range of applications and users.
**User**: How can the model accept image inputs?
**Assistant**: GPT-4 can accept image inputs by generating text outputs given inputs consisting of interspersed text and images, expanding its capabilities to handle vision and language tasks.
**User**: Can you explain in more detail?
**Assistant**: GPT-4 can accept image inputs by generating text outputs given inputs consisting of interspersed text and images, expanding its capabilities to handle vision and language tasks.
To publish a conversation, assign the id parameter for the conversation, and fill in the publishing connector, which specifies the output when published.
Start by selecting a type of DOCUMENT or AUDIO, and then assign the appropriate format for that type. For example, here we are assigning the MARKDOWN document format.
Currently, document publishing supports the TEXT, MARKDOWN, and HTML formats, and audio publishing supports the MP3 format. More published formats will be offered in the future.
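Based on the supported combinations listed above, a client could validate a connector before calling the mutation. This is a sketch under the assumption that the format table stays as documented; the helper name is hypothetical:

```python
# Sketch: validate a publishing connector's type/format pair against the
# combinations documented above (DOCUMENT: TEXT/MARKDOWN/HTML, AUDIO: MP3).
SUPPORTED_FORMATS = {
    "DOCUMENT": {"TEXT", "MARKDOWN", "HTML"},
    "AUDIO": {"MP3"},
}

def validate_connector(connector: dict) -> dict:
    """Raise ValueError if the type/format pair is not a supported combination."""
    formats = SUPPORTED_FORMATS.get(connector["type"])
    if formats is None or connector["format"] not in formats:
        raise ValueError(f"unsupported connector: {connector}")
    return connector

print(validate_connector({"type": "DOCUMENT", "format": "MARKDOWN"}))
```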
Mutation:
mutation PublishConversation($id: ID!, $connector: ContentPublishingConnectorInput!, $name: String, $workflow: EntityReferenceInput) {
publishConversation(id: $id, connector: $connector, name: $name, workflow: $workflow) {
id
name
creationDate
owner {
id
}
state
originalDate
finishedDate
workflowDuration
uri
text
type
fileType
mimeType
fileName
fileSize
}
}
Variables:
{
"id": "cf8ca8f8-2e5a-4dd5-86ed-7c572d6fafe2",
"connector": {
"type": "DOCUMENT",
"format": "MARKDOWN"
}
}
Response:
{
"type": "FILE",
"mimeType": "text/markdown",
"fileType": "DOCUMENT",
"fileSize": 2364,
"uri": "https://redacted.blob.core.windows.net/files/3260a302-ee0a-4e18-b44f-e69a83d42831/Conversation.md",
"id": "3260a302-ee0a-4e18-b44f-e69a83d42831",
"name": "Conversation.md",
"state": "INGESTED",
"creationDate": "2024-01-22T05:21:04Z",
"owner": {
"id": "5a9d0a48-e8f3-47e6-b006-3377472bac47"
}
}
Undo Conversation
The undoConversation mutation gives the ability to undo the most recent prompting of a conversation by utilizing the id parameter, and it returns the ID and state of the conversation.
For use cases where the most recent user prompt was incorrect, mistyped, or otherwise undesired, this provides a way to remove the most recent USER role message and its corresponding ASSISTANT role message. This resets the conversation to the state right before the most recent call to promptConversation.
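The undo semantics can be modeled locally as dropping the trailing USER/ASSISTANT pair from the message history. This sketch simulates that behavior on a plain list (the helper is illustrative, not part of the API):

```python
# Sketch: simulate undoConversation semantics on a local message history -
# remove the most recent USER message and its corresponding ASSISTANT reply.
def undo_last_turn(messages: list) -> list:
    if (len(messages) >= 2
            and messages[-1]["role"] == "ASSISTANT"
            and messages[-2]["role"] == "USER"):
        return messages[:-2]
    return messages  # nothing to undo

history = [
    {"role": "USER", "message": "Define unstructured data."},
    {"role": "ASSISTANT", "message": "Data without a predefined model..."},
    {"role": "USER", "message": "Now in 50 words."},
    {"role": "ASSISTANT", "message": "A shorter summary..."},
]
print(len(undo_last_turn(history)))  # 2
```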
Mutation:
mutation UndoConversation($id: ID!) {
undoConversation(id: $id) {
id
name
state
type
}
}
Variables:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295"
}
Response:
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED"
}
Clear Conversation
The clearConversation mutation allows the clearing of a conversation by utilizing the id parameter, and it returns the ID and state of the conversation.
This will clear the list of saved messages in the conversation, from both the USER and ASSISTANT roles, resulting in a message count of zero.
Mutation:
mutation ClearConversation($id: ID!) {
clearConversation(id: $id) {
id
name
state
type
}
}
Variables:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295"
}
Response:
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED"
}
Close Conversation
The closeConversation mutation allows the closing of an opened conversation by utilizing the id parameter, and it returns the ID and state of the closed conversation.
Once a conversation has been closed, any calls to the promptConversation or clearConversation mutations, passing the closed conversation, will return an error.
Mutation:
mutation CloseConversation($id: ID!) {
closeConversation(id: $id) {
id
name
state
type
}
}
Variables:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295"
}
Response:
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "CLOSED"
}
Open Conversation
The openConversation mutation allows the opening of a closed conversation by utilizing the id parameter, and it returns the ID and state of the opened conversation.
Mutation:
mutation OpenConversation($id: ID!) {
openConversation(id: $id) {
id
name
state
type
}
}
Variables:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295"
}
Response:
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED"
}
Update Conversation
The updateConversation mutation enables the renaming of a conversation by accepting the updated conversation name.
Mutation:
mutation UpdateConversation($conversation: ConversationUpdateInput!) {
updateConversation(conversation: $conversation) {
id
name
state
type
}
}
Variables:
{
"conversation": {
"name": "Another Conversation"
}
}
Response:
{
"type": "CONTENT",
"id": "1f4bc1d5-00ce-41c5-8a0c-3ce1374742d3",
"name": "Another Conversation",
"state": "OPENED"
}
Delete Conversation
The deleteConversation mutation allows the deletion of a conversation by utilizing the id parameter, and it returns the ID and state of the deleted conversation.
Mutation:
mutation DeleteConversation($id: ID!) {
deleteConversation(id: $id) {
id
state
}
}
Variables:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295"
}
Response:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"state": "DELETED"
}
Delete Conversations
The deleteConversations mutation allows the deletion of multiple conversations, as specified by the ids array parameter, and it returns the IDs and states of the deleted conversations.
Mutation:
mutation DeleteConversations($ids: [ID!]!) {
deleteConversations(ids: $ids) {
id
state
}
}
Variables:
{
"ids": [
"39fcf408-15ca-4cc2-9476-622d64aa38f3",
"93476a0c-d567-4624-9d5e-df43dfff92ea"
]
}
Response:
[
{
"id": "93476a0c-d567-4624-9d5e-df43dfff92ea",
"state": "DELETED"
},
{
"id": "39fcf408-15ca-4cc2-9476-622d64aa38f3",
"state": "DELETED"
}
]
Delete All Conversations
The deleteAllConversations mutation allows the deletion of all conversations in the current project, or tenant, depending on whether you are using a multi-tenant JWT.
Mutation:
mutation DeleteAllConversations {
deleteAllConversations {
id
state
}
}
Response:
[
{
"id": "93476a0c-d567-4624-9d5e-df43dfff92ea",
"state": "DELETED"
},
{
"id": "39fcf408-15ca-4cc2-9476-622d64aa38f3",
"state": "DELETED"
}
]
Get Conversation
The conversation query allows you to retrieve specific details of a conversation by providing the id parameter, including the ID, name, creation date, state, owner ID, and type associated with the conversation.
Query:
query GetConversation($id: ID!) {
conversation(id: $id) {
id
name
creationDate
state
owner {
id
}
type
}
}
Variables:
{
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295"
}
Response:
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED",
"creationDate": "2023-07-04T19:17:52Z",
"owner": {
"id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
}
}
Query Conversations
The conversations query allows you to retrieve all conversations. It returns a list of conversation results, including the ID, name, creation date, state, owner ID, and type for each conversation.
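The filter's offset and limit fields page through results, so retrieving everything means issuing the query repeatedly until a page comes back empty. This sketch simulates that loop locally; fetch_page stands in for the actual API call (an assumption for illustration), which would POST the query with these filter variables:

```python
# Sketch: page through conversations using offset/limit filter variables.
# fetch_page simulates the conversations query against a local dataset.
def fetch_page(offset: int, limit: int, dataset: list) -> list:
    return dataset[offset:offset + limit]

def all_conversations(dataset: list, page_size: int = 2) -> list:
    """Accumulate every result by advancing offset until a page is empty."""
    results, offset = [], 0
    while True:
        page = fetch_page(offset, page_size, dataset)
        if not page:
            break
        results.extend(page)
        offset += page_size
    return results

dataset = [{"id": str(n), "name": f"Conversation {n}"} for n in range(5)]
print(len(all_conversations(dataset)))  # 5
```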
Query:
query QueryConversations($filter: ConversationFilter!) {
conversations(filter: $filter) {
results {
id
name
creationDate
state
owner {
id
}
type
}
}
}
Variables:
{
"filter": {
"offset": 0,
"limit": 100
}
}
Response:
{
"results": [
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED",
"creationDate": "2023-07-04T19:17:52Z",
"owner": {
"id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
}
}
]
}
Query Conversations By Name
The conversations query also allows you to retrieve conversations based on specific filter criteria, via the name parameter. In this example, the name is set to "Question". It returns a list of conversation results containing the ID, name, creation date, state, owner ID, and type for each matching conversation.
Query:
query QueryConversations($filter: ConversationFilter!) {
conversations(filter: $filter) {
results {
id
name
creationDate
state
owner {
id
}
type
}
}
}
Variables:
{
"filter": {
"name": "Question",
"offset": 0,
"limit": 100
}
}
Response:
{
"results": [
{
"type": "CONTENT",
"id": "373cdb14-a0df-4c0d-83c9-da64793d6295",
"name": "Ask A Question",
"state": "OPENED",
"creationDate": "2023-07-04T19:17:52Z",
"owner": {
"id": "9422b73d-f8d6-4faf-b7a9-152250c862a4"
}
}
]
}