Use fine-tuning when you want to adapt a base model to your own training examples and operational style. The fine-tuning endpoints create an asynchronous job from uploaded JSONL files and return a model ID when training succeeds. Fine-tuning endpoints are marked ADMIN ONLY in the generated API reference. Use an API key with the required permissions.

Before you start

Prepare:
Training file: A JSONL file with your training examples. Upload it with purpose: "fine-tune".
Validation file: Optional JSONL validation data. Upload it with purpose: "fine-tune".
Base model: The model ID you want to fine-tune, for example meetkai:functionary-medium.
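The Files API expects JSON Lines: one complete JSON object per line, with no wrapping array. The exact record schema depends on the base model, so the chat-style `messages` layout below is an illustrative assumption only; check your model's documentation before preparing data.

```typescript
// Illustrative only: build chat-style training records and serialize them
// as JSONL. The record schema your base model expects may differ.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };
type TrainingExample = { messages: ChatMessage[] };

const examples: TrainingExample[] = [
  {
    messages: [
      { role: 'system', content: 'You are a concise support agent.' },
      { role: 'user', content: 'Where is my order?' },
      { role: 'assistant', content: 'Your order shipped yesterday and arrives Friday.' },
    ],
  },
];

// JSONL: one JSON object per line, newline-separated.
const jsonl = examples.map((example) => JSON.stringify(example)).join('\n');
console.log(jsonl);
```

Each line must parse as standalone JSON; a trailing comma or a surrounding `[...]` makes the file invalid JSONL.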
A fine-tuning job moves through these statuses:
validating_files -> queued -> running -> succeeded
                          \-> failed
                          \-> cancelled

Step 1 - Upload your training files

Upload each JSONL file with the Files API and purpose: "fine-tune".
import { SDK } from '@meetkai/mka1';

const mka1 = new SDK({
  bearerAuth: 'Bearer <mka1-api-key>',
});

const requestOptions = {
  headers: {
    'X-On-Behalf-Of': '<end-user-id>',
  },
};

const trainingFile = await mka1.llm.files.upload(
  {
    file: Bun.file('./fine-tuning-train.jsonl'),
    purpose: 'fine-tune',
  },
  requestOptions
);

const validationFile = await mka1.llm.files.upload(
  {
    file: Bun.file('./fine-tuning-validation.jsonl'),
    purpose: 'fine-tune',
  },
  requestOptions
);

console.log(trainingFile.id);
console.log(validationFile.id);
Store the returned file IDs. You pass them to the Fine-Tuning API in the next step.

Step 2 - Create a fine-tuning job

Call mka1.llm.fineTuning.create with the base model and your uploaded training file ID. Add a validation file, suffix, metadata, and method settings when you need them.
const job = await mka1.llm.fineTuning.create(
  {
    model: 'meetkai:functionary-medium',
    trainingFile: trainingFile.id,
    validationFile: validationFile.id,
    suffix: 'support-bot',
    seed: 42,
    method: {
      type: 'supervised',
      supervised: {
        hyperparameters: {
          nEpochs: 3,
        },
      },
    },
    metadata: {
      experiment: 'support-bot-v1',
    },
  },
  requestOptions
);

console.log(job.id);             // "ftjob_aa87e2b1112a455b8deabed784372198"
console.log(job.status);         // "validating_files" | "queued" | "running" | ...
console.log(job.fineTunedModel); // null until the job succeeds

Step 3 - Poll job status

Retrieve the job until it reaches succeeded, failed, or cancelled.
async function waitForFineTuningJob(
  fineTuningJobId: string,
  timeoutMs = 30 * 60_000
) {
  const terminalStatuses = new Set(['succeeded', 'failed', 'cancelled']);
  const start = Date.now();

  while (Date.now() - start < timeoutMs) {
    const current = await mka1.llm.fineTuning.retrieve(
      { fineTuningJobId },
      requestOptions
    );

    if (terminalStatuses.has(current.status)) {
      return current;
    }

    await new Promise((resolve) => setTimeout(resolve, 10_000));
  }

  throw new Error(`Fine-tuning job ${fineTuningJobId} did not finish in time`);
}

const completedJob = await waitForFineTuningJob(job.id);

if (completedJob.status === 'succeeded') {
  console.log(completedJob.fineTunedModel);
} else {
  console.log(completedJob.error);
}
You can also page through all jobs with mka1.llm.fineTuning.list({ limit, after }).
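Cursor pagination like this usually loops until a page comes back empty or not full. The helper below is a generic sketch, not the SDK's API: it assumes each page has the shape `{ data: T[], hasMore: boolean }` and that `after` takes the last item's ID, so verify both against the generated API reference before relying on it.

```typescript
// Generic cursor-pagination sketch. Assumed page shape: { data, hasMore },
// with `after` set to the last item's id on each subsequent request.
type Page<T extends { id: string }> = { data: T[]; hasMore: boolean };

async function listAll<T extends { id: string }>(
  fetchPage: (params: { limit: number; after?: string }) => Promise<Page<T>>,
  limit = 20
): Promise<T[]> {
  const all: T[] = [];
  let after: string | undefined;

  while (true) {
    const page = await fetchPage({ limit, after });
    all.push(...page.data);
    // Stop when the server signals the end or returns an empty page.
    if (!page.hasMore || page.data.length === 0) break;
    after = page.data[page.data.length - 1].id;
  }
  return all;
}

// With the SDK, usage would look roughly like:
// const jobs = await listAll((p) => mka1.llm.fineTuning.list(p, requestOptions));
```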

Step 4 - Inspect training events and checkpoints

Use events for training logs and metrics updates. Use checkpoints to inspect intermediate model checkpoints and their metrics.
const events = await mka1.llm.fineTuning.listEvents(
  {
    fineTuningJobId: job.id,
    limit: 20,
  },
  requestOptions
);

for (const event of events.data) {
  console.log(event.createdAt, event.level, event.message, event.data);
}

const checkpoints = await mka1.llm.fineTuning.listCheckpoints(
  {
    fineTuningJobId: job.id,
    limit: 10,
  },
  requestOptions
);

for (const checkpoint of checkpoints.data) {
  console.log(
    checkpoint.stepNumber,
    checkpoint.fineTunedModelCheckpoint,
    checkpoint.metrics
  );
}
Checkpoint metrics can include train_loss, train_mean_token_accuracy, valid_loss, valid_mean_token_accuracy, full_valid_loss, and full_valid_mean_token_accuracy.
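One common use for these metrics is choosing the checkpoint with the lowest validation loss instead of the final one. The helper below is a sketch: the field names follow the listing example above, and the assumption that missing metrics should be treated as worst-case is mine, not the API's.

```typescript
// Sketch: pick the checkpoint with the lowest validation loss.
// Metric keys may be absent on some checkpoints, so missing values
// are treated as Infinity (never selected over a measured loss).
type Checkpoint = {
  stepNumber: number;
  fineTunedModelCheckpoint: string;
  metrics: { [key: string]: number | undefined };
};

function bestCheckpoint(checkpoints: Checkpoint[]): Checkpoint | undefined {
  let best: Checkpoint | undefined;
  let bestLoss = Infinity;

  for (const checkpoint of checkpoints) {
    // Prefer full_valid_loss when present, fall back to valid_loss.
    const loss =
      checkpoint.metrics.full_valid_loss ??
      checkpoint.metrics.valid_loss ??
      Infinity;
    if (loss < bestLoss) {
      bestLoss = loss;
      best = checkpoint;
    }
  }
  return best;
}
```

The selected checkpoint's fineTunedModelCheckpoint ID can then be used in place of the final model ID.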

Step 5 - Pause, resume, or cancel a job

Use pause when you need to temporarily stop a running job. Use resume to continue it. Use cancel to stop it permanently.
await mka1.llm.fineTuning.pause(
  { fineTuningJobId: job.id },
  requestOptions
);

await mka1.llm.fineTuning.resume(
  { fineTuningJobId: job.id },
  requestOptions
);

await mka1.llm.fineTuning.cancel(
  { fineTuningJobId: job.id },
  requestOptions
);

Step 6 - Use the fine-tuned model

When the job reaches succeeded, job.fineTunedModel contains the new model ID. Pass that model ID to a Responses request.
const response = await mka1.llm.responses.create(
  {
    model: completedJob.fineTunedModel!,
    input: 'Write a support reply for a delayed shipment.',
  },
  requestOptions
);

console.log(response.outputText);

API reference

For the full request and response schema, open the Fine-Tuning group in the API Reference.