Local Deployment#
Try OSMO on your local workstation — no cloud account, no infrastructure costs, no enterprise approval needed.
This guide walks you through deploying the complete OSMO platform locally using KIND (Kubernetes in Docker) in about 10 minutes.
Tip
Perfect for evaluation – Test your workflows, explore the platform, and assess fit for your robotics development needs before deploying OSMO to the cloud.
Warning
Local deployment is not recommended for production use as it lacks authentication and has limited features.
Why Deploy Locally?#
Local deployment provides the complete OSMO experience on your workstation:
✓ Full workflow orchestration – Task dependencies, parallel execution, state management
✓ Real containerized execution – Your Docker images running in local Kubernetes
✓ Complete data management – Local object storage for datasets and artifacts
✓ The same YAML workflows that scale to cloud environments
✓ Zero cloud costs – Everything runs on your workstation
Seamless Scale to Cloud
If OSMO works for your use case locally, it will scale to hundreds of GPUs in the cloud. You can use the exact same workflows; no code changes required.
Prerequisites#
Install the following tools on your workstation:
Docker – container runtime that KIND runs Kubernetes inside
kubectl – Kubernetes command-line tool
Helm – package manager used to install the KAI scheduler and OSMO charts
KIND – Kubernetes in Docker (GPU workstations use nvkind instead; see Step 1)
Step 1: Create KIND Cluster#
Choose the appropriate setup based on whether your workstation has a GPU.
Option A: GPU Workstations (with nvkind)#
If your workstation has a GPU, follow these steps to create a cluster with GPU support.
Prerequisites
Install nvkind by following the prerequisites, setup, and installation guides.
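Optionally, confirm that Docker can reach your GPUs through the NVIDIA Container Toolkit before creating the cluster. This check is not part of the original setup, and the CUDA image tag below is only an example:
$ nvidia-smi
$ docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
Both commands should list the same GPUs; if the second one fails, revisit the nvkind prerequisites before proceeding.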
Create Cluster Configuration
kind-osmo-cluster-config.yaml (GPU version)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: osmo
nodes:
- role: control-plane
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=ingress,nvidia.com/gpu.deploy.operands=false"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=kai-scheduler,nvidia.com/gpu.deploy.operands=false"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=data,nvidia.com/gpu.deploy.operands=false"
  extraMounts:
  - hostPath: /tmp/localstack-s3
    containerPath: /var/lib/localstack
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=service,nvidia.com/gpu.deploy.operands=false"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=service,nvidia.com/gpu.deploy.operands=false"
- role: worker
  extraMounts:
  - hostPath: /dev/null
    containerPath: /var/run/nvidia-container-devices/all
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=compute"
Create the Cluster
$ nvkind cluster create --config-template=kind-osmo-cluster-config.yaml
Note
You can safely ignore any umount errors as long as nvkind cluster print-gpus shows your GPUs.
Install GPU Operator
After creating the cluster, install the GPU Operator to manage GPU resources:
$ helm fetch https://helm.ngc.nvidia.com/nvidia/charts/gpu-operator-v25.10.0.tgz
$ helm upgrade --install gpu-operator gpu-operator-v25.10.0.tgz \
--namespace gpu-operator \
--create-namespace \
--set driver.enabled=false \
--set toolkit.enabled=false \
--set nfd.enabled=true \
--wait
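To confirm the GPU Operator came up and the compute node advertises its GPUs, one way to check (not part of the guide itself) is:
$ kubectl get pods --namespace gpu-operator
$ kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
The GPU column should show a non-zero count on the compute worker once the operator has finished setting up the node.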
Option B: CPU Workstations (with KIND)#
If your workstation does not have a GPU, follow these steps for a standard CPU-only cluster.
Create Cluster Configuration
kind-osmo-cluster-config.yaml (CPU-only version)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: osmo
nodes:
- role: control-plane
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=ingress"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=kai-scheduler"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=data"
  extraMounts:
  - hostPath: /tmp/localstack-s3
    containerPath: /var/lib/localstack
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=service"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=service"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node_group=compute"
Create the Cluster
$ kind create cluster --config kind-osmo-cluster-config.yaml
Cluster Architecture#
Whichever option you chose, the command creates a Kubernetes cluster on your workstation with one control-plane node and six worker nodes. The core OSMO components are installed on those worker nodes:
Control Plane
2 worker nodes labeled node_group=service for the API server and workflow engine
1 worker node labeled node_group=ingress for NGINX ingress
1 worker node labeled node_group=kai-scheduler for the KAI scheduler
Compute Layer
1 worker node labeled node_group=compute
Data Layer
1 worker node labeled node_group=data for PostgreSQL, Redis, and LocalStack S3
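Once the cluster is up, you can verify this layout by listing the nodes with their node_group label, for example:
$ kubectl get nodes --label-columns node_group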
Step 2: Install KAI Scheduler#
KAI scheduler provides co-scheduling, priority, and preemption for workflows:
$ helm upgrade --install kai-scheduler \
oci://ghcr.io/nvidia/kai-scheduler/kai-scheduler \
--version v0.8.1 \
--create-namespace -n kai-scheduler \
--set global.nodeSelector.node_group=kai-scheduler \
--set "scheduler.additionalArgs[0]=--default-staleness-grace-period=-1s" \
--set "scheduler.additionalArgs[1]=--update-pod-eviction-condition=true" \
--wait
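As an optional sanity check, confirm the scheduler pods are running before moving on:
$ kubectl get pods --namespace kai-scheduler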
Step 3: Install OSMO#
Deploy the complete OSMO platform with a single Helm command:
$ helm fetch https://helm.ngc.nvidia.com/nvidia/osmo/charts/quick-start-1.0.0.tgz
$ helm upgrade --install osmo quick-start-1.0.0.tgz \
--namespace osmo \
--create-namespace \
--wait
Tip
Installation takes about 5 minutes. Monitor progress with:
$ kubectl get pods --namespace osmo
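If you prefer a single command that blocks until every OSMO pod is ready instead of polling, a standard kubectl wait also works (the 10-minute timeout is just a suggestion):
$ kubectl wait pods --all --for=condition=Ready --namespace osmo --timeout=10m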
Step 4: Configure Access#
Add a host entry to access OSMO from your browser:
$ echo "127.0.0.1 quick-start.osmo" | sudo tee -a /etc/hosts
This allows you to visit http://quick-start.osmo in your web browser.
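To confirm the ingress is reachable before opening a browser, send a quick request against the new host entry (the exact HTTP status returned may vary depending on how the OSMO UI handles unauthenticated requests):
$ curl -I http://quick-start.osmo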
Step 5: Install OSMO CLI#
Download and install the OSMO command-line interface:
$ curl -fsSL https://raw.githubusercontent.com/NVIDIA/OSMO/refs/heads/main/install.sh | bash
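To confirm the CLI landed on your PATH, check where it was installed. The --help flag below is assumed rather than documented here; most CLIs accept it:
$ which osmo
$ osmo --help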
Step 6: Log In to OSMO#
Authenticate with your local OSMO instance:
$ osmo login http://quick-start.osmo --method=dev --username=testuser
Success!
You now have OSMO configured and running on your workstation. You’re ready to start running robotics workflows!
Next Steps#
Now that you have OSMO running locally, explore the platform:
Run Your First Workflow: Visit the User Guide for tutorials on submitting workflows, interactive development, distributed training, and more.
Explore the Web UI: Visit http://quick-start.osmo to access the OSMO dashboard.
Test Your Own Workflows: Use your own Docker images and datasets to validate OSMO for your use case.
Tip
Ready to Scale?
Once you have validated OSMO locally, you can scale to cloud environments (EKS, AKS, GKE) or on-premise clusters without rewriting your workflows. Contact your cloud administrator to discuss options, and see the Deploy Service guide for full production deployment instructions.
Cleanup#
Delete the local cluster and all associated resources:
$ kind delete cluster --name osmo
This removes the entire Kubernetes cluster, including all persistent volumes and the PostgreSQL database.
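Note that the LocalStack data directory is bind-mounted from the host (/tmp/localstack-s3 in the cluster configuration above), so it survives cluster deletion. If you want a completely clean slate, remove it as well:
$ rm -rf /tmp/localstack-s3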
Troubleshooting#
Too Many Open Files#
If you encounter “too many open files” errors or pods fail to start, increase the inotify limits:
$ echo "fs.inotify.max_user_watches=1048576" | sudo tee -a /etc/sysctl.conf
$ echo "fs.inotify.max_user_instances=512" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
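You can confirm the new limits are active with:
$ sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances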
For more details, see: Pod errors due to “too many open files”
Docker Permission Denied#
If you see “permission denied” errors when running Docker or KIND commands, add your user to the docker group:
$ sudo usermod -aG docker $USER && newgrp docker
Note
If permission errors persist, log out and log back in for the group membership changes to take effect.
For more details, see: Docker permission denied
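To confirm your user can now talk to the Docker daemon without sudo, run a minimal test container (the hello-world image is pulled automatically if it is not already present):
$ docker run --rm hello-world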
Pods Not Starting#
Check resource availability and logs:
$ kubectl get pods --namespace osmo
$ kubectl describe pod <pod-name> --namespace osmo
$ kubectl logs <pod-name> --namespace osmo
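If the pod output alone is not conclusive, recent events in the namespace, sorted by time, often point at the failing component (for example, failed image pulls or unschedulable pods):
$ kubectl get events --namespace osmo --sort-by=.lastTimestamp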