Quick Start
Prerequisites
- Python 3.11+
- uv, a fast Python package manager
- Docker (for Local and Tinker Docker deployments)
- GPU with >= 24 GB VRAM (Local backend only)
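As a convenience, the prerequisites above can be checked with a short script. This is an illustrative sketch, not part of the project: the check names mirror the list, `check_prereqs` is a hypothetical helper, and the GPU/VRAM check is left out (on NVIDIA systems, `nvidia-smi --query-gpu=memory.total --format=csv` reports total VRAM).

```python
import shutil
import sys

def check_prereqs() -> dict:
    """Report which Quick Start prerequisites are present on this machine."""
    return {
        "python>=3.11": sys.version_info >= (3, 11),
        "uv": shutil.which("uv") is not None,          # fast Python package manager
        "docker": shutil.which("docker") is not None,  # Local / Tinker Docker backends
    }

if __name__ == "__main__":
    for name, ok in check_prereqs().items():
        print(f"{'OK     ' if ok else 'MISSING'}  {name}")
```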
Choose your backend
- Local GPU
- Tinker SDK
- Modal Cloud (Coming Soon)
Runs SDPO training and vLLM inference on your own GPU. Requires >= 24 GB VRAM. The first run downloads Qwen3-8B (~16 GB), so expect the vLLM health check to take 10–20 minutes on first start.
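Because the first vLLM start can take 10–20 minutes, it can help to poll the server's health endpoint rather than guess when it is ready. The sketch below is an assumption-laden example: vLLM's OpenAI-compatible server exposes a `/health` endpoint, but the host and port shown are placeholders you should adjust to your deployment.

```python
import time
import urllib.error
import urllib.request

def wait_for_health(url: str, timeout_s: float = 1800, interval_s: float = 15) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; keep waiting
        time.sleep(interval_s)
    return False

# Example (host/port are assumptions; adjust to your vLLM config):
# wait_for_health("http://localhost:8000/health", timeout_s=1800)
```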
Full Local backend reference: requirements, all config variables, services, and manual setup.

