OpenAI Responses Compatibility

Use this page when you want strict OpenAI Responses-style behavior from Hatz.

Which endpoint should I use?

Endpoint: /v1/openai/responses
  Best for: OpenAI-compatible clients (OpenCode, AI SDK OpenAI provider)
  Tool behavior: Client-managed tools. The API mirrors OpenAI Responses semantics and does not run the Hatz harness.

Endpoint: /v1/chat/completions
  Best for: Hatz-native assistant workflows
  Tool behavior: Hatz harness enabled: recursive tool calling, server-side tools, and Hatz-specific orchestration.

If your client expects OpenAI Responses wire format, use /v1/openai/responses.

OpenCode Setup

Use OpenCode with the OpenAI provider and point baseURL at the Hatz OpenAI-compatible base URL.

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "hatz": {
      "npm": "@ai-sdk/openai",
      "name": "Hatz",
      "options": {
        "baseURL": "https://ai.hatz.ai/v1/openai",
        "apiKey": "${HATZ_API_KEY}"
      },
      "models": {
        "gpt-5.2": { "name": "GPT-5.2" },
        "gpt-4o": { "name": "GPT-4o" },
        "anthropic.claude-haiku-4-5": { "name": "Claude Haiku 4.5" }
      }
    }
  }
}

Notes:

  • Use normal model IDs (for example gpt-5.2), not agent_* IDs.
  • Model IDs in this page are examples. Query /v1/chat/models and use a model from your tenant's returned list (see the sketch after this list).
  • For local testing, set baseURL to http://localhost:8000/v1/openai.
  • Unknown top-level request fields are ignored for forward compatibility with evolving OpenAI SDK payloads.
  • store, include, and reasoning are accepted for OpenAI/OpenCode compatibility.
  • previous_response_id and metadata are currently unsupported.
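
To see which model IDs your tenant can actually use, call the models endpoint directly. The following is a minimal TypeScript sketch: the /v1/chat/models path comes from the note above, but the shape of the returned JSON is not documented here, so inspect it before depending on specific field names.

// List the models available to your tenant (sketch; response shape may vary).
const res = await fetch("https://ai.hatz.ai/v1/chat/models", {
  headers: { Authorization: `Bearer ${process.env.HATZ_API_KEY}` },
});
const models = await res.json();
console.log(models); // choose a model ID from this list for the OpenCode config above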

Responses API Example

curl 'https://ai.hatz.ai/v1/openai/responses' \
  -H 'Authorization: Bearer '"$HATZ_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "gpt-5.2",
    "input": [
      {
        "role": "user",
        "content": [
          {
            "type": "input_text",
            "text": "Write a 2 sentence summary of SOC 2."
          }
        ]
      }
    ]
  }'
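
If you are calling the compat route from code rather than curl, an OpenAI-compatible SDK only needs its base URL overridden. The sketch below uses the official openai Node package (Responses support is assumed to be present in your installed version); gpt-5.2 is an example ID, so substitute a model from your tenant's /v1/chat/models list.

import OpenAI from "openai";

// Point the official OpenAI SDK at the Hatz OpenAI-compat base.
const client = new OpenAI({
  baseURL: "https://ai.hatz.ai/v1/openai",
  apiKey: process.env.HATZ_API_KEY,
});

const response = await client.responses.create({
  model: "gpt-5.2", // example ID; use one returned by /v1/chat/models
  input: [
    {
      role: "user",
      content: [{ type: "input_text", text: "Write a 2 sentence summary of SOC 2." }],
    },
  ],
});

console.log(response.output_text); // SDK convenience field that concatenates output text items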

Tool Calling Behavior in Responses

The Responses route is client-managed for tools:

  1. The model returns one or more function_call items in the response output.
  2. Your client executes the tool.
  3. Your client sends the matching function_call_output items back in a follow-up /v1/openai/responses request.

The server does not run the Hatz recursive tool harness on this route.
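
Concretely, the loop follows the standard OpenAI Responses function-calling format. The sketch below is illustrative: the get_weather tool and its stand-in result are hypothetical, gpt-5.2 is an example model ID, and because previous_response_id is unsupported on this route, the follow-up request re-sends the earlier output items alongside the tool results.

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ai.hatz.ai/v1/openai",
  apiKey: process.env.HATZ_API_KEY,
});

// 1. Declare a function tool (hypothetical example) and send the initial request.
const tools = [
  {
    type: "function" as const,
    name: "get_weather",
    description: "Look up current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
    strict: true,
  },
];

const question = { role: "user" as const, content: "What's the weather in Oslo?" };

const first = await client.responses.create({
  model: "gpt-5.2",
  input: [question],
  tools,
});

// 2. Execute each function_call item yourself; the server will not run it.
const toolOutputs = [];
for (const item of first.output) {
  if (item.type === "function_call" && item.name === "get_weather") {
    const args = JSON.parse(item.arguments);
    const result = { city: args.city, tempC: 7 }; // stand-in for a real lookup
    toolOutputs.push({
      type: "function_call_output" as const,
      call_id: item.call_id,
      output: JSON.stringify(result),
    });
  }
}

// 3. Send the results back in a follow-up request, echoing the prior output items
//    so the model can see its own function calls.
const second = await client.responses.create({
  model: "gpt-5.2",
  input: [question, ...first.output, ...toolOutputs],
  tools,
});

console.log(second.output_text);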