FAQs
Answers for teams rolling BrewSLM into real delivery workflows.
This page focuses on practical engineering concerns: setup speed, control depth, risk, and deployment behavior.
How fast can a new team run its first SLM job?
Most teams can create a project, import a sample dataset, and start a first run in under 10 minutes.
Where should we ask support questions?
Use the BrewSLM GitHub Issues page for support and troubleshooting.
Do we have to use the Wizard UI?
No. CLI and Python API flows are first-class and can own your full production path.
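For illustration, here is what an API-first flow might look like end to end. The `brewslm` package, the `Project` class, and every method name below are assumptions made for the sketch, not documented API; check the Python API reference for the real calls.

```python
# Illustrative end-to-end flow via the Python API.
# The brewslm package and every name below are hypothetical placeholders.
from brewslm import Project

project = Project.create("support-summarizer")
dataset = project.import_dataset("data/tickets.jsonl")      # sample dataset
run = project.train(dataset=dataset, task="summarization")  # launch a run
print(run.status())
```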
What does preflight actually validate?
Task-model compatibility, runtime dependencies, memory fit, and data-contract readiness.
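As a sketch only, assuming hypothetical `brewslm` names rather than documented API, a preflight report covering those four checks might be consumed like this:

```python
# Hypothetical preflight sketch; run it before committing compute.
from brewslm import Project

plan = Project.load("support-summarizer").plan(
    task="summarization", model="small-8b"  # illustrative arguments
)
report = plan.preflight()

# The four checks listed above, as they might surface on a report object:
for check in ("task_model_compat", "runtime_deps", "memory_fit", "data_contract"):
    print(check, getattr(report, check, "unknown"))
```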
Can we benchmark before committing to full training?
Yes. The Training UI and model-selection APIs support sampled benchmark sweeps before committing to long jobs.
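A minimal sketch of such a sweep, assuming a hypothetical `brewslm` Python API (none of these names are confirmed):

```python
# Hypothetical sampled sweep: score candidates on a slice before a long job.
from brewslm import Project

sweep = Project.load("support-summarizer").benchmark(
    candidates=["small-3b", "small-8b"],  # illustrative model ids
    sample_size=500,                      # score on 500 rows, not the full set
)
for result in sweep.results():
    print(result.model, result.score)
```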
How do we prevent invalid runs from wasting cloud budget?
Run preflight before launch, then enforce evaluation/registry gates before promotion.
Can we pin exact models instead of relying on recommendations?
Yes. Autopilot suggestions are optional. You can pin exact models and constraints.
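For example, a pinned run might look like the following sketch; the parameter names (`autopilot`, `constraints`) and the model-id format are assumptions, not the published interface:

```python
# Hypothetical pinned run: skip Autopilot, fix the exact model and constraints.
from brewslm import Project

run = Project.load("support-summarizer").train(
    model="small-8b@v2.1",             # exact pinned model id (illustrative)
    autopilot=False,                   # assumed switch to disable recommendations
    constraints={"max_params": "8b"},  # assumed constraint syntax
)
```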
Can we mix local development with cloud burst production jobs?
Yes. BrewSLM preserves one run model while letting you choose local or cloud execution.
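One way this could look in code, assuming a hypothetical `target` parameter on the same run call (an illustration of the single-run-model idea, not confirmed API):

```python
# Hypothetical target switch: identical run model, different execution backend.
from brewslm import Project

project = Project.load("support-summarizer")
project.train(dataset="data/sample.jsonl", target="local")  # iterate locally
project.train(dataset="data/full.jsonl", target="cloud")    # burst to cloud
```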
What gets stored in exported artifacts?
Weights, selected config, benchmark trace, and metadata required for reproducible serving.
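Purely as an illustration of those four pieces, an export might be inspected like this; the `Artifact` class and its attribute names are invented for the sketch:

```python
# Hypothetical export inspection: the bundle carries everything listed above.
from brewslm import Artifact  # assumed class name

artifact = Artifact.load("exports/run-42")  # illustrative export path
print(artifact.weights_path)                # model weights
print(artifact.config)                      # the selected config
print(artifact.benchmark_trace)             # benchmark trace
print(artifact.metadata)                    # metadata for reproducible serving
```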
How do we compare new runs to current production quality?
Use evaluation scorecards and gate policies to compare candidate runs against your baseline.
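A minimal gate sketch, assuming hypothetical `evaluate`/`promote` calls and scorecard fields (your actual policy syntax will differ):

```python
# Hypothetical promotion gate: the candidate must meet or beat the baseline.
from brewslm import Project

project = Project.load("support-summarizer")
scorecard = project.evaluate(run_id="run-42", baseline="production")
if scorecard.candidate_score >= scorecard.baseline_score:
    project.promote("run-42")
else:
    raise SystemExit("gate failed: candidate scored below production baseline")
```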
Can we integrate BrewSLM into existing CI/CD workflows?
Yes. CLI commands and Python API calls are designed for scriptable automation pipelines.
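As one sketch of a CI step, again assuming hypothetical `brewslm` names: the script exits nonzero on any failed gate so the pipeline stops.

```python
# Hypothetical CI step: exit nonzero so the pipeline halts on a failed gate.
import sys

from brewslm import Project

project = Project.load("support-summarizer")
plan = project.plan(task="summarization", model="small-8b")
if not plan.preflight().ok:  # assumed boolean flag on the preflight report
    sys.exit("preflight failed")

run = plan.execute()  # assumed call to launch the planned run
if not project.evaluate(run_id=run.id, baseline="production").passed:
    sys.exit("evaluation gate failed")
```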
What if our dataset schema changes frequently?
Mapping profiles and contract checks make schema drift visible and manageable.
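For instance, a mapping profile might pin column names so drift fails loudly at the contract check; the helper names below are hypothetical:

```python
# Hypothetical mapping profile: name the contract so schema drift fails loudly.
from brewslm import Project

project = Project.load("support-summarizer")
project.set_mapping_profile({        # assumed helper mapping source -> contract
    "input_text": "ticket_body",     # dataset column feeding the model input
    "target_text": "agent_summary",  # dataset column used as the label
})
project.check_contract("data/tickets.jsonl")  # assumed contract/drift check
```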
Does BrewSLM support non-text data workflows?
Text is the default; multimodal paths are available when configured in your runtime profile.
Can we audit why a model was selected for training?
Yes. Benchmark summaries and selection metadata are persisted for review and auditability.
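Reading that metadata back might look like this sketch (the accessor and field names are assumptions):

```python
# Hypothetical audit read: reconstruct why run-42 got its model.
from brewslm import Project

selection = Project.load("support-summarizer").get_selection_metadata("run-42")
print(selection.chosen_model)       # the model that was selected
print(selection.benchmark_summary)  # the scores that drove the choice
print(selection.constraints)        # constraints in force at selection time
```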
How do we hand off from prototyping to platform teams?
Promote from the Wizard to the CLI/Python API and commit the generated plans plus gating rules to version control.
What is the recommended first production policy?
Enforce strict preflight, a benchmark floor, and a release comparison against the production baseline.
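Expressed as configuration, such a policy might look like this; the `set_policy` call and its keys are hypothetical, and 0.80 is just a placeholder floor:

```python
# Hypothetical policy sketch combining the three recommended gates.
from brewslm import Project

Project.load("support-summarizer").set_policy({
    "preflight": "strict",               # refuse launch on any failed check
    "benchmark_floor": 0.80,             # illustrative minimum benchmark score
    "release_comparison": "production",  # compare candidates to prod baseline
})
```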
Where should we start if we are not sure which path to use?
Start with Creation Paths, pick one mode, then lock workflow and capability policies.
Need a rollout path?
Recommended read order for new teams: Creation Paths -> Workflow -> Capabilities -> Quickstart run.