POST /ai/chat/completions
Create a chat completion with your configured AI agent. This endpoint is fully compatible with OpenAI’s API, allowing you to use existing OpenAI SDKs and tools.

Request Body

  • chatId (string, required): Chat session identifier for conversation continuity
  • messages (array, required): Array of message objects representing the conversation history
  • model (string, default: "gpt-4o"): Model identifier to use for the completion
  • provider (string, optional): Provider name (e.g., "openai", "anthropic", "ollama")
  • temperature (number, optional): Sampling temperature between 0 and 2; higher values make the output more random
  • max_tokens (number, optional): Maximum number of tokens to generate
  • stream (boolean, default: false): Whether to stream the response
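
As a sketch of how the defaults above behave, a client could normalize a request body before sending it. The `buildChatRequest` helper and its types here are illustrative, not part of the Gaia API or any SDK:

```typescript
// Illustrative helper (not part of the API): applies the documented
// defaults (model "gpt-4o", stream false) to a chat completion request body.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

interface ChatCompletionRequest {
  chatId: string;          // required: chat session identifier
  messages: ChatMessage[]; // required: conversation history
  model?: string;          // default: "gpt-4o"
  provider?: string;       // e.g. "openai", "anthropic", "ollama"
  temperature?: number;    // 0-2; higher values are more random
  max_tokens?: number;     // maximum tokens to generate
  stream?: boolean;        // default: false
}

function buildChatRequest(req: ChatCompletionRequest): ChatCompletionRequest {
  // Explicit values in `req` override the documented defaults.
  return { model: 'gpt-4o', stream: false, ...req };
}
```

Fields you pass explicitly always win over the defaults, matching the parameter table above.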

Response

  • success (boolean): Indicates whether the request was successful
  • id (string): Unique identifier for the completion
  • choices (array): Array of completion choices
  • usage (object): Token usage information
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'http://localhost:3030/api/v1/ai',
  apiKey: process.env.GAIA_API_KEY
});

const response = await client.chat.completions.create({
  chatId: 'chat-123',
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is RAG?' }
  ],
  temperature: 0.7
});

console.log(response.choices[0].message.content);
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "RAG stands for Retrieval-Augmented Generation..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 100,
    "total_tokens": 120
  }
}
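
If you consume the endpoint without an SDK, a lightweight runtime check of the response shape documented above can catch surprises early. This type guard is an illustrative sketch, not part of any SDK:

```typescript
// Illustrative type guard (a sketch, not part of any SDK): checks that a
// parsed JSON payload matches the chat.completion shape shown above.
interface ChatCompletionResponse {
  id: string;
  object: 'chat.completion';
  created: number;
  model: string;
  choices: {
    index: number;
    message: { role: string; content: string };
    finish_reason: string;
  }[];
  usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}

function isChatCompletion(value: unknown): value is ChatCompletionResponse {
  const v = value as ChatCompletionResponse;
  return (
    typeof v === 'object' && v !== null &&
    typeof v.id === 'string' &&
    v.object === 'chat.completion' &&
    Array.isArray(v.choices) &&
    typeof v.usage?.total_tokens === 'number'
  );
}
```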

Streaming

Set stream: true to receive Server-Sent Events (SSE) instead of a single response:
const stream = await client.chat.completions.create({
  chatId: 'chat-123',
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
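
On the wire, each streamed event is a `data:` line carrying a JSON chunk, terminated by an OpenAI-style `[DONE]` sentinel. A minimal parser for that format, roughly what the SDK's async iterator does for you, might look like this (a sketch of the standard SSE framing, not the SDK's actual implementation):

```typescript
// Illustrative SSE parser (a sketch): extracts assistant text deltas from
// OpenAI-style "data:" event lines in a text/event-stream body.
function extractDeltas(sseText: string): string[] {
  const deltas: string[] = [];
  for (const line of sseText.split('\n')) {
    if (!line.startsWith('data:')) continue;  // skip blanks and comments
    const payload = line.slice('data:'.length).trim();
    if (payload === '[DONE]') break;          // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    const content = chunk.choices?.[0]?.delta?.content;
    if (typeof content === 'string') deltas.push(content);
  }
  return deltas;
}
```

Concatenating the returned deltas reproduces the full assistant message, which is exactly what the `for await` loop above prints incrementally.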

Notes

  • Your agent will automatically use configured knowledge bases, tools, and MCP servers
  • The chatId maintains conversation context and history
  • Compatible with all OpenAI client libraries and tools