Quickstart: Local (minikube)
Get Quanton running on your laptop with minikube in under 15 minutes.
Prerequisites
- minikube installed
- Helm >= 3.x installed
- kubectl installed
- Docker running, with at least 2 CPUs and 2GB memory available
- onehouse-values.yaml downloaded from the Onehouse console — see Project Creation
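Before starting, you can sanity-check the tooling with a small script (a sketch: it only confirms each CLI is on PATH, not version constraints like Helm >= 3 or available Docker resources):

```shell
# Report which required tools are installed. Prints one line per tool;
# anything marked MISSING must be installed before continuing.
check_tools() {
  for tool in minikube helm kubectl docker; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: MISSING"
    fi
  done
}
check_tools
```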
Step 1: Start a local Kubernetes cluster
brew install minikube # macOS — see minikube docs for Linux/Windows
minikube start
Verify the cluster is running:
kubectl get nodes
Step 2: Install the Spark Operator
helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update
helm install spark-operator spark-operator/spark-operator \
--namespace spark-operator \
--create-namespace \
--set "spark.jobNamespaces={default}"
Verify it's running:
kubectl get pods -n spark-operator
Step 3: (Optional) Validate the Spark Operator
Submit the sample OSS job to confirm the Spark Operator is working before adding Quanton:
kubectl apply -f https://raw.githubusercontent.com/onehouseinc/quanton-operator/main/examples/oss-spark-application.yaml
kubectl get sparkapplications
# NAME STATUS AGE
# spark-pi-java-example COMPLETED 30s
Step 4: Install the Quanton Operator
helm upgrade --install quanton-operator oci://registry-1.docker.io/onehouseai/quanton-operator \
--namespace quanton-operator \
--create-namespace \
--set "quantonOperator.jobNamespaces={default}" \
-f onehouse-values.yaml
Verify the operator pod is running:
kubectl get pods -n quanton-operator
Step 5: Submit your first Quanton job
kubectl apply -f https://raw.githubusercontent.com/onehouseinc/quanton-operator/main/examples/quanton-application.yaml
Monitor the driver pod (first run may take 2–3 minutes while the Quanton image is pulled):
kubectl get pods -A | grep driver
Check the output once the driver is running:
kubectl logs -f quanton-spark-pi-java-example-driver | grep -i "pi is"
# Pi is roughly 3.1416568
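Beyond grepping pods, you can query the job resource itself. The resource name below is inferred from the CRDs removed in the Cleanup step, so adjust it if your install differs:

```shell
kubectl get quantonsparkapplications
```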
Step 6: Access the Spark UI
While a job is running, port-forward the driver pod to view the Spark UI:
kubectl port-forward <driver-pod-name> 4040:4040
Then open http://localhost:4040.
note
The Spark UI is only available while the driver pod is alive; the pod terminates when the job completes.
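Looking up the driver pod name by hand gets tedious. A small sketch that forwards the UI of the first driver pod in the default namespace (where jobs run, per the jobNamespaces setting above); it assumes driver pod names end in -driver, as with the example jobs:

```shell
# Pick the first pod whose name ends in "-driver" from a pod listing.
driver_pod() {
  awk '$1 ~ /-driver$/ {print $1; exit}'
}
# Forward port 4040 of that pod; does nothing if no driver pod is running.
kubectl get pods -n default --no-headers | driver_pod \
  | xargs -I{} kubectl port-forward -n default {} 4040:4040
```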
Step 7: Resubmit a job
Option A — Delete and re-apply:
kubectl delete -f examples/quanton-application.yaml
kubectl apply -f examples/quanton-application.yaml
Option B — Change metadata.name to a new value each time and submit with kubectl create, which fails if a resource with the same name already exists:
kubectl create -f examples/quanton-application.yaml
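One way to script Option B is to rewrite metadata.name on the fly (a sketch; it assumes the manifest indents the name: field with two spaces, as is standard for metadata):

```shell
# Append a unique suffix to the manifest's metadata.name so each submission
# via kubectl create gets a fresh resource name.
uniquify_name() {
  sed "s/^  name: \(.*\)$/  name: \1-$1/"
}
```

Then submit with, for example: uniquify_name "$(date +%s)" < examples/quanton-application.yaml | kubectl create -f -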
Cleanup
helm uninstall quanton-operator -n quanton-operator
kubectl delete crd quantonsparkapplications.onehouse.ai
kubectl delete crd quantonsparkapplications.quantonsparkoperator.onehouse.ai
Next steps
- Cloud Quickstart — deploy on EKS, GKE, or AKS
- Running Jobs — submit your own Spark jobs
- Project YAML Configuration — customize operator settings