GET /api/v1/llm/responses/{response_id}
Python (SDK)
from mka1 import SDK


with SDK(
    bearer_auth="<YOUR_BEARER_TOKEN_HERE>",
) as sdk:

    res = sdk.llm.responses.get(response_id="resp_get123", include=[
        "file_search_call.results",
        "message.output_text.logprobs",
    ], include_obfuscation=False, starting_after=42, stream=False)

    # With stream=False (or omitted), the call returns the complete
    # response object rather than an event stream.
    print(res)
{
  "id": "resp_get123",
  "object": "response",
  "created_at": 1735689600,
  "completed_at": 1735689601,
  "status": "completed",
  "error": null,
  "incomplete_details": null,
  "background": false,
  "instructions": null,
  "max_output_tokens": null,
  "max_tool_calls": 30,
  "metadata": {},
  "model": "meetkai:functionary-urdu-mini-pak",
  "output": [
    {
      "type": "message",
      "id": "msg_abc123",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "The capital of France is Paris.",
          "annotations": []
        }
      ],
      "status": "completed"
    }
  ],
  "output_text": "The capital of France is Paris.",
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": null,
    "summary": null
  },
  "service_tier": "auto",
  "store": true,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "truncation": "auto",
  "usage": {
    "input_tokens": 8,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 7,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 15
  },
  "user": null
}
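For reference, the top-level output_text field is a convenience concatenation of the output_text parts inside assistant message items. A minimal sketch of deriving it from output, assuming that aggregation rule:

```python
def collect_output_text(response: dict) -> str:
    """Concatenate all output_text parts from message items in `output`."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)


# The `output` array from the example response body above:
example = {
    "output": [
        {
            "type": "message",
            "id": "msg_abc123",
            "role": "assistant",
            "content": [
                {
                    "type": "output_text",
                    "text": "The capital of France is Paris.",
                    "annotations": [],
                }
            ],
            "status": "completed",
        }
    ]
}
print(collect_output_text(example))  # The capital of France is Paris.
```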

Authorizations

Authorization
string
header
required

Gateway auth: send Authorization: Bearer <mka1-api-key>. For multi-user server-side integrations, you can also send X-On-Behalf-Of: <external-user-id>.

Path Parameters

response_id
string
required

The unique identifier of the response, formatted as 'resp_' or 'resp-' followed by alphanumeric characters.

Pattern: ^resp[-_][a-zA-Z0-9]+$
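A client can pre-validate identifiers against the documented pattern before making the call; a minimal sketch:

```python
import re

# The pattern documented for response_id
RESPONSE_ID_PATTERN = re.compile(r"^resp[-_][a-zA-Z0-9]+$")


def is_valid_response_id(response_id: str) -> bool:
    """Check the 'resp_' or 'resp-' + alphanumerics format."""
    return RESPONSE_ID_PATTERN.match(response_id) is not None
```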

Query Parameters

include
enum<string>[]

Additional fields to include in the response. Allows requesting specific nested data like web search sources, code interpreter outputs, computer screenshots, file search results, input images, output logprobs, or reasoning content. These fields may have performance or cost implications.

Available options:
web_search_call.action.sources,
code_interpreter_call.outputs,
computer_call_output.output.image_url,
file_search_call.results,
message.input_image.image_url,
message.output_text.logprobs,
reasoning.encrypted_content
Example:
[
"file_search_call.results",
"message.output_text.logprobs"
]
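Assuming the gateway accepts repeated include parameters (the common encoding for an enum<string>[] query field; verify against your gateway), the query string can be built with the standard library:

```python
from urllib.parse import urlencode

# Repeated ("include", value) pairs encode an array-valued query field
params = [
    ("include", "file_search_call.results"),
    ("include", "message.output_text.logprobs"),
    ("include_obfuscation", "false"),
    ("starting_after", "42"),
]
query = urlencode(params)
print(query)
```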
include_obfuscation
boolean

When true, stream obfuscation will be enabled for privacy and security purposes.

Example:

false

starting_after
integer

The sequence number of the event after which to start streaming. Used for resuming streaming from a specific point.

Required range: -9007199254740991 <= x <= 9007199254740991
Example:

42

stream
boolean

If set to true, the model response data will be streamed using Server-Sent Events (SSE) for real-time updates as the response is generated.

Example:

false

Response

Successful response - returns either streaming events (SSE) when stream=true or a complete response object (JSON) when stream=false or omitted

id
string
required
object
any
required
created_at
number
required
completed_at
number | null
required
status
enum<string>
required

The overall status of the response generation. 'completed' means it finished successfully, 'failed' means an error occurred, 'in_progress' means it is currently processing, 'cancelled' means it was cancelled by the user, 'queued' means it is waiting to start, and 'incomplete' means it finished only partially.

Available options:
completed,
failed,
in_progress,
cancelled,
queued,
incomplete
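When polling this endpoint for a background response, a client typically stops once the status is terminal. A minimal sketch of that classification, based on the status descriptions above:

```python
# 'queued' and 'in_progress' mean generation is still pending;
# every other documented status is final.
TERMINAL_STATUSES = {"completed", "failed", "cancelled", "incomplete"}


def is_terminal(status: str) -> bool:
    """True once the response object will no longer change."""
    return status in TERMINAL_STATUSES
```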
error
object
required
incomplete_details
object
required
background
boolean
required
instructions
required
max_output_tokens
integer | null
required
Required range: -9007199254740991 <= x <= 9007199254740991
Example:

null

max_tool_calls
integer | null
required
Required range: -9007199254740991 <= x <= 9007199254740991
metadata
object
required
model
string
required
output
(Input message item · object | Output message item · object | Output audio item · object | File search call item · object | Computer call item · object | Computer call output item · object | Web search call item · object | Function call item · object | Function call output item · object | Reasoning item · object | Image generation call item · object | Code interpreter call item · object | Local shell call item · object | Local shell call output item · object | Shell call item · object | Shell call output item · object | MCP list tools item · object | MCP approval request item · object | MCP approval response item · object | MCP call item · object | Custom tool call item · object | Custom tool call output item · object | Item reference item · object | Compaction item · object)[]
required

Input message item: A message with role and content. Use this for user, assistant, system, or developer turns in structured inputs.

parallel_tool_calls
boolean
required
previous_response_id
string | null
required
store
boolean
required
text
object
required
tool_choice
required

Tool choice mode: Selects how the model decides tool usage. Use none, auto, or required.

Available options:
none,
auto,
required
tools
(Function tool definition · object | File search tool definition · object | Computer use tool definition · object | Web search tool definition · object | MCP tool definition · object | Code interpreter tool definition · object | Image generation tool definition · object | Local shell tool definition · object | Shell tool definition · object | Custom tool definition · object | Web search preview tool definition · object | Hosted tool definition · object | History tool definition · object)[]
required

Function tool definition: Defines a callable function tool. Provide a tool name and parameters schema, with optional description and deferred loading.

truncation
enum<string>
required
Available options:
auto,
disabled
usage
object
required
user
string | null
required
conversation
object
output_text
string
prompt
object
prompt_cache_key
string
reasoning
object
safety_identifier
string
service_tier
enum<string>
Available options:
auto,
default,
flex,
priority
temperature
number
Required range: 0 <= x <= 2
presence_penalty
number
frequency_penalty
number
top_logprobs
integer
Required range: 0 <= x <= 20
top_p
number
Required range: 0 <= x <= 1
context_management
object[]