Creation Paths

Pick the fastest interface path without losing model engineering control.

BrewSLM supports three entry modes. All of them compile into the same SLM pipeline, so you can start simple and grow into deeper control without rewriting your process.

Path Snapshot

At-a-glance fit by team shape

CLI Path

Best when your team already runs shell-first workflows and CI automation.

First milestone: scripted repeatable train + export job.

Python API Path

Best when SLM lifecycle must integrate directly into backend services.

First milestone: service-triggered one-click run + export API flow.

Wizard UI Path

Best when cross-functional teams need guided onboarding with visual checks.

First milestone: first successful run with shared project view.

Interactive Breakdown

A deep dive into each interface mode

Terminal-first flow with strong reproducibility

$ ./brewslm project create --name support-assistant --template support
$ ./brewslm dataset import --project 1 --sample support-chat-v1
$ ./brewslm preflight --project 1 --task causal_lm
$ ./brewslm train --project 1 --autopilot --one-click
$ ./brewslm export --project 1 --format huggingface --target vllm

Team fit: platform, infra, MLOps-heavy engineering teams.

Risk reduced: drift from manual one-off notebook logic.

Handoff style: commit command profiles into repo templates.
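One way to commit a command profile into a repo template is to wrap the post-creation steps above in a small script your CI can call. This is a minimal sketch, assuming the flags shown in the example and a project id of 1; the `run_pipeline` wrapper and its dry-run mode are hypothetical conveniences, not part of the BrewSLM CLI.

```python
import shlex
import subprocess

PROJECT = "1"

# Replays the CLI sequence above as one CI job; adjust flags to your setup.
COMMANDS = [
    f"./brewslm dataset import --project {PROJECT} --sample support-chat-v1",
    f"./brewslm preflight --project {PROJECT} --task causal_lm",
    f"./brewslm train --project {PROJECT} --autopilot --one-click",
    f"./brewslm export --project {PROJECT} --format huggingface --target vllm",
]

def run_pipeline(dry_run: bool = False) -> list[list[str]]:
    """Run each step in order, stopping on the first failing command."""
    executed = []
    for cmd in COMMANDS:
        argv = shlex.split(cmd)
        executed.append(argv)
        if not dry_run:
            subprocess.run(argv, check=True)  # raises on nonzero exit code
    return executed

if __name__ == "__main__":
    for argv in run_pipeline(dry_run=True):
        print(" ".join(argv))
```

Checking this script into the repo gives every run the same pinned flags, which is the drift protection this path is for.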

Programmatic control inside product codebases

import httpx

api = "http://127.0.0.1:8000/api"
token = "sk-mock-admin-key"
headers = {"Authorization": f"Bearer {token}"}

# Create a project and capture its id from the JSON response.
project = httpx.post(f"{api}/projects", json={"name": "Support SLM"}, headers=headers).json()
pid = project["id"]

# Kick off an autopilot one-click training run.
httpx.post(
    f"{api}/projects/{pid}/training/autopilot/one-click-run",
    json={"intent": "Draft support replies"},
    headers=headers,
)

# List experiments to monitor run status.
experiments = httpx.get(f"{api}/projects/{pid}/training/experiments", headers=headers).json()

Team fit: backend teams integrating training into services.

Risk reduced: brittle shell glue around app workflows.

Handoff style: expose internal training APIs to product teams.
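The one-click call above returns before training finishes, so a service typically polls the experiments endpoint until the run reaches a terminal state. A minimal sketch of that loop, with the HTTP call injected as a callable so any client (httpx, requests, a test stub) works; the `status` field and its `"completed"`/`"failed"` values are assumptions about the response schema, so check them against your deployment.

```python
import time
from typing import Callable

def wait_for_run(
    fetch_experiments: Callable[[], list[dict]],
    poll_seconds: float = 5.0,
    timeout_seconds: float = 3600.0,
    sleep: Callable[[float], None] = time.sleep,
) -> dict:
    """Poll until the newest experiment reaches a terminal status.

    `fetch_experiments` wraps the GET .../training/experiments call shown
    above; the "status" field and its values are schema assumptions.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        experiments = fetch_experiments()
        if experiments:
            latest = experiments[-1]
            if latest.get("status") in {"completed", "failed"}:
                return latest
        sleep(poll_seconds)
    raise TimeoutError("training run did not finish in time")
```

Injecting the fetch and sleep functions keeps the polling logic unit-testable without a live BrewSLM server.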

Guided UI for fast onboarding and reviewable runs

  1. Select deployment target from the target catalog.
  2. Describe your goal in plain language (plus optional VRAM and run name).
  3. Review safe/balanced/max-quality plans and dataset readiness blockers.
  4. Launch one-click run and monitor training status in the wizard.
  5. Continue to Training, Playground, or Export when complete.

Team fit: mixed AI + product teams onboarding together.

Risk reduced: hidden setup errors during early project stages.

Handoff style: move repeatable runs to CLI or Python API when mature.

Decision Matrix

Compare execution tradeoffs before choosing your default mode

| Dimension | CLI | Python API | Wizard UI |
| --- | --- | --- | --- |
| Onboarding speed | Fast for shell-native teams | Medium, requires code integration | Fastest for new mixed teams |
| Automation depth | High via scripts and CI | High via direct REST orchestration from services | Medium, guided steps with exports |
| Reviewability | High if config snapshots are committed | High with explicit payloads and API logs | High with visual run trace |
| Best starting point | Infra-oriented engineering orgs | Product/backend engineering orgs | Cross-functional pilot teams |
| Typical migration path | Stay CLI or add Python API helpers | Remain API-driven with selective CLI ops | Wizard first, then CLI or Python API |

Adoption Sequence

A pragmatic rollout model teams actually use

Phase 1

Discover

Use Wizard UI for initial project setup and quick validation runs.

Phase 2

Stabilize

Move repeated operations to CLI scripts with pinned configs.

Phase 3

Integrate

Use Python API calls for app-triggered training, evaluation, and export pipelines.
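The Phase 3 train-evaluate-export sequence can be sketched as a small orchestration function with the API calls injected, so an application event can trigger it and tests can stub it. This is a hypothetical composition of the endpoints shown earlier, not a BrewSLM API; the callable names and the `"completed"` status value are assumptions.

```python
from typing import Callable

def trigger_pipeline(
    start_training: Callable[[str], None],
    wait_until_done: Callable[[], str],
    export_model: Callable[[str], str],
    intent: str,
    export_format: str = "huggingface",
) -> str:
    """App-triggered flow: start a run, wait for it, then export.

    Each callable wraps one REST call from the Python API example:
    POST .../one-click-run, polling GET .../experiments, and export.
    """
    start_training(intent)
    status = wait_until_done()
    if status != "completed":
        raise RuntimeError(f"run ended with status {status!r}")
    return export_model(export_format)
```

Keeping the HTTP details behind callables is what makes the same flow reusable from a web handler, a queue worker, or a scheduled job.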