AWS

Deploy the Quanton Operator on Amazon Web Services using EKS.

tip

New to EKS? Follow the EKS deployment guide for a step-by-step walkthrough from cluster creation to your first Spark job.

EKS

Amazon Elastic Kubernetes Service (EKS) is the recommended deployment target for Quanton on AWS. The Quanton Operator runs on your EKS cluster and manages the full Spark job lifecycle via Kubernetes.

Prerequisites

  • EKS cluster running Kubernetes >= 1.28
  • Helm >= 3.x and kubectl configured for your cluster
  • onehouse-values.yaml downloaded from the Onehouse console
  • Outbound network access from your cluster to *.onehouse.ai and *.docker.io
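The tooling prerequisites above can be checked quickly from a shell before starting; a minimal sketch (the exact context name is specific to your setup):

```shell
# Confirm Helm >= 3.x and kubectl are installed
helm version --short
kubectl version --client

# Confirm kubectl points at the intended EKS cluster, and check
# the server version is >= 1.28 via the node report
kubectl config current-context
kubectl get nodes -o wide
```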

Step 1: Install the Spark Operator

The Quanton Operator builds on top of the Kubeflow Spark Operator. Install it first:

helm repo add spark-operator https://kubeflow.github.io/spark-operator
helm repo update

helm install spark-operator spark-operator/spark-operator \
  --namespace spark-operator \
  --create-namespace \
  --set "spark.jobNamespaces={default}"

Verify it's running:

kubectl get pods -n spark-operator
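Rather than polling `kubectl get pods`, you can block until the operator deployment reports ready; a sketch using `kubectl wait`:

```shell
# Wait up to 2 minutes for all deployments in the spark-operator
# namespace to become Available
kubectl wait deployment --all \
  -n spark-operator \
  --for=condition=Available \
  --timeout=120s
```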

Step 2: Install the Quanton Operator

helm upgrade --install quanton-operator oci://registry-1.docker.io/onehouseai/quanton-operator \
  --namespace quanton-operator \
  --create-namespace \
  --set "quantonOperator.jobNamespaces={default}" \
  -f onehouse-values.yaml

Verify the operator pod is running:

kubectl get pods -n quanton-operator

Step 3: Submit a Spark job

Create a manifest named my-spark-job.yaml:

apiVersion: quantonsparkoperator.onehouse.ai/v1beta2
kind: QuantonSparkApplication
metadata:
  name: my-spark-job
  namespace: default
spec:
  sparkApplicationSpec:
    type: Python
    mode: cluster
    image: "dist.onehouse.ai/onehouseai/quanton-spark:release-v1.29.0-al2023"
    mainApplicationFile: "s3://my-bucket/jobs/my_job.py"
    sparkVersion: "3.5.0"
    sparkConf:
      "spark.hadoop.fs.s3a.aws.credentials.provider": "com.amazonaws.auth.WebIdentityTokenFileCredentialsProvider"
    driver:
      cores: 4
      memory: "8192m"
      serviceAccount: spark-operator-spark
    executor:
      cores: 4
      instances: 4
      memory: "8192m"

Apply it:

kubectl apply -f my-spark-job.yaml
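After applying, the job can be followed through the custom resource and the driver pod. A sketch, assuming the driver pod follows the common `<name>-driver` naming convention:

```shell
# Check the application status reported by the operator
kubectl get quantonsparkapplication my-spark-job -n default

# Tail the Spark driver logs (driver pod name assumed to be <name>-driver)
kubectl logs -f my-spark-job-driver -n default
```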

S3 access via IRSA

Use IRSA (IAM Roles for Service Accounts) to give driver and executor pods access to S3 without static credentials:

kubectl annotate serviceaccount spark-operator-spark \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account>:role/SparkS3Role \
  -n default

The IAM role needs s3:GetObject, s3:PutObject, and s3:ListBucket on your data buckets.
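As a sketch, an IAM policy granting those three actions might look like the following (the bucket name is a placeholder for your data bucket):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```

Note that object-level actions apply to the `/*` resource while `s3:ListBucket` applies to the bucket ARN itself.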

Dedicated node group (optional)

For best performance, run Spark pods on a dedicated node group:

eksctl create nodegroup \
  --cluster my-cluster \
  --name spark-workers \
  --node-type m5.2xlarge \
  --nodes 4 \
  --node-labels workload=spark

Set a matching node selector in onehouse-values.yaml:

quantonOperator:
  nodeSelector:
    workload: spark

Then re-apply the Helm install with the updated values file.
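Re-applying reuses the same command from Step 2; since `helm upgrade --install` is idempotent, running it again picks up the edited values file:

```shell
helm upgrade --install quanton-operator oci://registry-1.docker.io/onehouseai/quanton-operator \
  --namespace quanton-operator \
  --set "quantonOperator.jobNamespaces={default}" \
  -f onehouse-values.yaml
```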