Use the Responses resource when you want the MKA1 API to return text. Start with a plain string for simple prompts. Use message items when you need explicit roles or conversation state.

Send a simple prompt

Pass a string in input for a single-turn request. The response includes generated text in output_text.
import { SDK } from '@meetkai/mka1';

const mka1 = new SDK({
  bearerAuth: `Bearer ${YOUR_API_KEY}`,
});

const result = await mka1.llm.responses.create({
  model: 'meetkai:functionary-urdu-mini-pak',
  input: 'Write a one-sentence summary of the MKA1 API.',
}, { headers: { 'X-On-Behalf-Of': '<end-user-id>' } });
If you are not acting on behalf of an end user, omit the X-On-Behalf-Of header.
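To read the generated text back out, you can walk the response output. This is a sketch only: it assumes the response carries an output array of message items whose content parts include output_text entries — check the Responses page in the API Reference for the exact shape.

```typescript
// Minimal shapes assumed for illustration; the real SDK types may differ.
interface ContentPart { type: string; text?: string }
interface OutputItem { type: string; content?: ContentPart[] }

// Concatenate every output_text part from every message item.
function getOutputText(output: OutputItem[]): string {
  return output
    .filter((item) => item.type === 'message')
    .flatMap((item) => item.content ?? [])
    .filter((part) => part.type === 'output_text')
    .map((part) => part.text ?? '')
    .join('');
}
```

With a result from responses.create, this would be called as getOutputText(result.output) under the assumed shape.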

Add instructions

Use instructions to define behavior before the model sees the user input. Keep instructions short and specific.
const result = await mka1.llm.responses.create({
  model: 'meetkai:functionary-urdu-mini-pak',
  instructions: 'You are a support assistant. Reply in plain English. Keep answers under 80 words.',
  input: 'Explain what embeddings are used for.',
});

Send structured messages

Use an array of message items in input when you want explicit roles. Each message item uses type, role, and content.
const result = await mka1.llm.responses.create({
  model: 'meetkai:functionary-urdu-mini-pak',
  input: [
    { type: 'message', role: 'developer', content: 'Answer as a technical writer. Keep the reply concise.' },
    { type: 'message', role: 'user', content: 'Draft a short product update about faster response times.' },
  ],
});
This pattern is useful when you want the request body to carry the message history directly.
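If you keep chat history in your own store, a small mapper can turn it into message items of the shape shown above (type, role, content). The Turn shape here is hypothetical — it stands in for however your application records turns.

```typescript
// Hypothetical application-side record of one chat turn.
interface Turn { role: 'developer' | 'user' | 'assistant'; text: string }

// Message item shape used by the Responses input array.
interface MessageItem { type: 'message'; role: Turn['role']; content: string }

// Map stored turns to message items, preserving order.
function toMessageItems(turns: Turn[]): MessageItem[] {
  return turns.map((t) => ({ type: 'message', role: t.role, content: t.text }));
}
```

The resulting array can be passed directly as input in responses.create.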

Continue a multi-turn exchange

Use previous_response_id to continue from an earlier response without resending the full history.
const second = await mka1.llm.responses.create({
  model: 'meetkai:functionary-urdu-mini-pak',
  previousResponseId: 'resp_123',
  input: 'Now turn that into an email subject line.',
});
If you need a reusable conversation container, create one with the Conversations resource and then pass the conversation ID in conversation.
const conv = await mka1.llm.conversations.create({
  metadata: { session_id: 'web-42' },
});

const result = await mka1.llm.responses.create({
  model: 'meetkai:functionary-urdu-mini-pak',
  conversation: conv.id,
  input: 'What should I ask next to refine this draft?',
});
See the Conversations and Responses pages in the API Reference for the full resource workflow.

Stream text as it is generated

Set stream to true to receive server-sent events instead of waiting for the full response.
import { CreateAcceptEnum } from '@meetkai/mka1/sdk/responses';

const result = await mka1.llm.responses.create({
  model: 'meetkai:functionary-urdu-mini-pak',
  input: 'Write three release notes bullets for our docs update.',
  stream: true,
}, { acceptHeaderOverride: CreateAcceptEnum.TextEventStream });
Use streaming when you want to render partial output as it arrives.
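If you handle the event stream yourself rather than through an SDK iterator, each server-sent event arrives as lines prefixed with data:. The parser below is a generic sketch, not the SDK's own stream type, and it assumes the conventional [DONE] sentinel ends the stream.

```typescript
// Collect the data: payloads from a chunk of server-sent events,
// stopping at the conventional [DONE] sentinel.
function parseSseChunk(chunk: string): string[] {
  const payloads: string[] = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data:')) continue;
    const payload = line.slice(5).trim();
    if (payload === '[DONE]') break;
    payloads.push(payload);
  }
  return payloads;
}
```

Each payload is typically a JSON-encoded event; parse it and append any text deltas to your UI as they arrive.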

Next steps