Developed by the Physical Intelligence team, OpenPI hosts open-source vision-language-action (VLA) models pre-trained on 10k+ hours of robot data, enabling researchers and practitioners to fine-tune and deploy robotic manipulation policies across diverse platforms.
OpenPI offers three core VLA model families (π₀, π₀-FAST, and π₀.₅), each optimized for robotic manipulation tasks.
All models require an NVIDIA GPU, and only Ubuntu 22.04 is supported as the host OS:
| Mode | Minimum GPU Memory | Example GPUs |
|---|---|---|
| Inference | 8 GB+ | RTX 4090 |
| Fine-Tuning (LoRA) | 22.5 GB+ | RTX 4090 |
| Fine-Tuning (Full) | 70 GB+ | A100 (80GB) / H100 |
Multi-GPU model parallelism (via the `fsdp_devices` config option) reduces per-GPU memory requirements; multi-node training is not yet supported.
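A minimal sketch of enabling it, assuming `fsdp_devices` is an ordinary dataclass field on the training config (verify against your config class):

```python
# Minimal sketch: shard model state across 2 GPUs to reduce per-GPU memory.
# Assumes fsdp_devices is a dataclass field on the config returned by
# get_config; the field name comes from the note above.
import dataclasses

from openpi.training import config as _config

base_config = _config.get_config("pi05_libero")
sharded_config = dataclasses.replace(base_config, fsdp_devices=2)
```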
Clone the repository with submodules included:

```bash
git clone --recurse-submodules git@github.com:Physical-Intelligence/openpi.git

# For existing clones:
git submodule update --init --recursive
```
We use `uv` to manage Python dependencies (install `uv` first):

```bash
GIT_LFS_SKIP_SMUDGE=1 uv sync
GIT_LFS_SKIP_SMUDGE=1 uv pip install -e .
```

`GIT_LFS_SKIP_SMUDGE=1` is required so the LeRobot dependency installs correctly.
For simplified setup, use our Docker instructions: Docker Setup.
These checkpoints are pre-trained on 10k+ hours of robot data:
| Model | Description | Checkpoint Path |
|---|---|---|
| π₀ | Base flow-based π₀ model for fine-tuning | gs://openpi-assets/checkpoints/pi0_base |
| π₀-FAST | Base autoregressive π₀-FAST model for fine-tuning | gs://openpi-assets/checkpoints/pi0_fast_base |
| π₀.₅ | Base upgraded π₀.₅ model (knowledge insulation) for fine-tuning | gs://openpi-assets/checkpoints/pi05_base |
The following checkpoints are fine-tuned for specific robot platforms and tasks:
| Model | Use Case | Description | Checkpoint Path |
|---|---|---|---|
| π₀-FAST-DROID | Inference | π₀-FAST fine-tuned on DROID dataset (0-shot table-top manipulation) | gs://openpi-assets/checkpoints/pi0_fast_droid |
| π₀-DROID | Fine-Tuning | π₀ fine-tuned on DROID (faster inference, weaker language following) | gs://openpi-assets/checkpoints/pi0_droid |
| π₀-ALOHA-towel | Inference | π₀ fine-tuned on ALOHA (0-shot towel folding) | gs://openpi-assets/checkpoints/pi0_aloha_towel |
| π₀-ALOHA-tupperware | Inference | π₀ fine-tuned on ALOHA (tupperware unpacking) | gs://openpi-assets/checkpoints/pi0_aloha_tupperware |
| π₀-ALOHA-pen-uncap | Inference | π₀ fine-tuned on public ALOHA data (pen uncapping) | gs://openpi-assets/checkpoints/pi0_aloha_pen_uncap |
| π₀.₅-LIBERO | Inference | π₀.₅ fine-tuned for LIBERO benchmark (state-of-the-art performance) | gs://openpi-assets/checkpoints/pi05_libero |
| π₀.₅-DROID | Inference/Fine-Tuning | π₀.₅ fine-tuned on DROID (fast inference + strong language following) | gs://openpi-assets/checkpoints/pi05_droid |
Checkpoints auto-download to `~/.cache/openpi` (override with the `OPENPI_DATA_HOME` environment variable).
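For example, to keep the cache on a larger drive, set the variable before downloading (a minimal sketch; the path is a placeholder):

```python
# Minimal sketch: redirect the checkpoint cache, then pre-fetch a base
# checkpoint from the table above. /mnt/data/openpi is a placeholder path.
import os

# Set before importing/calling the download helper so it picks up the override.
os.environ["OPENPI_DATA_HOME"] = "/mnt/data/openpi"

from openpi.shared import download

checkpoint_dir = download.maybe_download("gs://openpi-assets/checkpoints/pi05_base")
print(checkpoint_dir)  # local directory under OPENPI_DATA_HOME
```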
To run inference with a pre-trained checkpoint:

```python
from openpi.training import config as _config
from openpi.policies import policy_config
from openpi.shared import download

# Load config and checkpoint
config = _config.get_config("pi05_droid")
checkpoint_dir = download.maybe_download("gs://openpi-assets/checkpoints/pi05_droid")

# Initialize policy
policy = policy_config.create_trained_policy(config, checkpoint_dir)

# Run inference on a sample observation
example = {
    "observation/exterior_image_1_left": ...,  # Replace with real sensor data
    "observation/wrist_image_left": ...,
    "prompt": "pick up the fork",
}
action_chunk = policy.infer(example)["actions"]
```
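`infer` returns a chunk of upcoming actions rather than a single step. Below is a hypothetical control loop continuing from the snippet above; `RobotInterface` is a stub standing in for your platform's driver (not part of OpenPI), and the 224x224 image size is an arbitrary placeholder:

```python
# Hypothetical control loop: execute each action chunk, then re-query the policy.
import numpy as np

class RobotInterface:
    """Stub standing in for your robot's driver."""

    def get_observation(self) -> dict:
        # Keys must match what the policy's data transforms expect.
        return {
            "observation/exterior_image_1_left": np.zeros((224, 224, 3), dtype=np.uint8),
            "observation/wrist_image_left": np.zeros((224, 224, 3), dtype=np.uint8),
            "prompt": "pick up the fork",
        }

    def apply_action(self, action: np.ndarray) -> None:
        pass  # send the command to the robot

robot = RobotInterface()
for _ in range(10):  # outer loop length depends on your task
    obs = robot.get_observation()
    action_chunk = policy.infer(obs)["actions"]  # (chunk_length, action_dim)
    for action in action_chunk:
        robot.apply_action(action)
```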
Example fine-tuning pipeline for the LIBERO dataset (adapt the steps for custom data):

```bash
# 1. Convert the raw data to LeRobot format
uv run examples/libero/convert_libero_data_to_lerobot.py --data_dir /path/to/libero/data

# 2. Compute normalization statistics for the training config
uv run scripts/compute_norm_stats.py --config-name pi05_libero

# 3. Train (the env var lets JAX pre-allocate 90% of GPU memory)
XLA_PYTHON_CLIENT_MEM_FRACTION=0.9 uv run scripts/train.py pi05_libero --exp-name=my_experiment --overwrite

# 4. Serve the trained policy from a saved checkpoint (step 20000 here)
uv run scripts/serve_policy.py policy:checkpoint --policy.config=pi05_libero --policy.dir=checkpoints/pi05_libero/my_experiment/20000
```
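With the server running, a remote process can query it over websocket. A minimal client sketch using the bundled `openpi_client` package; the host/port and the DROID-style observation keys are assumptions to adapt to your deployment:

```python
# Minimal client sketch: query the policy server started above.
# Assumes the server listens on localhost:8000; the observation keys must
# match the policy's expected inputs (DROID-style keys shown for illustration).
from openpi_client import websocket_client_policy

client = websocket_client_policy.WebsocketClientPolicy(host="localhost", port=8000)

example = {
    "observation/exterior_image_1_left": ...,  # replace with real sensor data
    "observation/wrist_image_left": ...,
    "prompt": "pick up the fork",
}
action_chunk = client.infer(example)["actions"]
```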
Platform-specific fine-tuning guides: ALOHA Simulator, ALOHA Real, UR5.
OpenPI now supports PyTorch for π₀ and π₀.₅ (π₀-FAST is not yet supported; mixed-precision, FSDP, LoRA, and EMA training are unavailable in PyTorch).
After the standard setup, patch the installed `transformers` package with OpenPI's modified model files:

```bash
uv sync
uv pip show transformers  # locate the installed package
cp -r ./src/openpi/models_pytorch/transformers_replace/* .venv/lib/python3.11/site-packages/transformers/
```
Undo the patch with `uv cache clean transformers`.
Convert a JAX checkpoint to PyTorch with:

```bash
uv run examples/convert_jax_model_to_pytorch.py \
    --checkpoint_dir /path/to/jax/checkpoint \
    --config_name <config_name> \
    --output_path /path/to/pytorch/checkpoint
```
```bash
# Single GPU
uv run scripts/train_pytorch.py <config_name> --exp_name <experiment_name>

# Multi-GPU (single node)
uv run torchrun --standalone --nnodes=1 --nproc_per_node=<num_gpus> scripts/train_pytorch.py <config_name> --exp_name <experiment_name>
```
Training precision is controlled by the `pytorch_training_precision` config field.
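A minimal sketch of overriding it, assuming it is a plain dataclass field like `fsdp_devices` above and that `"float32"` is an accepted value (both are assumptions to verify):

```python
# Minimal sketch: override PyTorch training precision on a config.
# pytorch_training_precision as a dataclass field and "float32" as a valid
# value are assumptions; check the config class for the exact options.
import dataclasses

from openpi.training import config as _config

cfg = dataclasses.replace(_config.get_config("pi05_libero"), pytorch_training_precision="float32")
```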
Common issues and their resolutions:

| Issue | Resolution |
|---|---|
| `uv sync` dependency conflicts | Delete `.venv` → re-run `uv sync`; update uv (`uv self update`). |
| GPU OOM during training | Set `XLA_PYTHON_CLIENT_MEM_FRACTION=0.9`; enable FSDP; disable EMA. |
| Policy server connection errors | Verify server port/network/firewall; confirm server is running. |
| Missing norm stats | Run `scripts/compute_norm_stats.py` with your config name. |
| Dataset download failures | Check internet/HuggingFace login (`huggingface-cli login`). |
| CUDA/GPU errors | Verify NVIDIA drivers/docker toolkit; uninstall system CUDA libs if conflicting. |
| Import errors | Run `uv sync`; check example-specific requirements. |
| Action dimension mismatch | Validate data transforms/robot action space definitions. |
| Diverging training loss | Adjust `norm_stats.json` (fix small `q01`/`q99`/`std` values for rare dimensions; see the sketch below). |
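For the last item, a minimal sketch of widening degenerate normalization ranges; the JSON layout assumed here (a top-level `norm_stats` mapping feature names to per-dimension `q01`/`q99`/`std` lists) must be checked against the file your `compute_norm_stats.py` run produced:

```python
# Hypothetical norm-stats fix-up: rare action dimensions can have near-zero
# q01/q99 ranges or std, which makes normalized values explode.
import json

PATH = "norm_stats.json"  # adjust to where compute_norm_stats.py wrote it
EPS = 1e-2                # assumed floor; tune for your action scale

with open(PATH) as f:
    stats = json.load(f)

for feature_stats in stats["norm_stats"].values():
    q01, q99 = feature_stats["q01"], feature_stats["q99"]
    for i, (lo, hi) in enumerate(zip(q01, q99)):
        if hi - lo < EPS:  # degenerate range for a rarely-moving dimension
            q01[i], q99[i] = lo - EPS / 2, hi + EPS / 2
    feature_stats["std"] = [max(v, EPS) for v in feature_stats["std"]]

with open(PATH, "w") as f:
    json.dump(stats, f, indent=2)
```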