Network Configuration

The Quanton Operator requires outbound network access from your Kubernetes cluster to the Onehouse control plane. No inbound ports need to be opened.

Required outbound access

All control plane communication uses port 443 (gRPC/TLS):

Endpoint                 | Purpose
-------------------------|-------------------------------------------------------
gwc.onehouse.ai:443      | Control plane (job orchestration, image token refresh)
metrics.onehouse.ai:443  | Metrics forwarding
registry-1.docker.io     | Quanton Operator image
dist.onehouse.ai         | Quanton Spark runtime image

Domain allowlist

If your environment uses an egress firewall, allowlist these domains:

  • .onehouse.ai
  • .docker.io
  • .amazonaws.com
  • .ecr.aws
  • .gcr.io
  • .k8s.io
  • .pkg.dev
  • production.cloudflare.docker.com
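As a sketch of how such an allowlist is typically interpreted — assuming the leading-dot entries denote suffix matches covering any subdomain, while bare hostnames match exactly — a hypothetical matcher might look like:

```python
# Hypothetical egress-allowlist matcher (illustration only, not part of
# the Onehouse product). Entries starting with "." match any subdomain
# as well as the bare domain; other entries must match exactly.
ALLOWLIST = [
    ".onehouse.ai",
    ".docker.io",
    ".amazonaws.com",
    ".ecr.aws",
    ".gcr.io",
    ".k8s.io",
    ".pkg.dev",
    "production.cloudflare.docker.com",
]

def is_allowed(hostname: str) -> bool:
    """Return True if hostname is covered by the allowlist."""
    for entry in ALLOWLIST:
        if entry.startswith("."):
            # Suffix match: ".docker.io" covers "registry-1.docker.io"
            # and also the bare domain "docker.io".
            if hostname.endswith(entry) or hostname == entry[1:]:
                return True
        elif hostname == entry:
            return True
    return False

print(is_allowed("gwc.onehouse.ai"))  # True
print(is_allowed("example.com"))      # False
```

Note that the suffix check only matches at a label boundary, so a look-alike domain such as evil-onehouse.ai is not covered.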

AWS-specific networking

VPC

Create a VPC with a /16 CIDR block (e.g. 10.0.0.0/16). This gives 65,536 IP addresses — important because Kubernetes pods consume IPs directly from the subnet.
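The address math can be double-checked with Python's standard ipaddress module — a quick local sanity check, not an AWS API call:

```python
import ipaddress

# A /16 block such as 10.0.0.0/16 provides 2**(32 - 16) = 65,536 addresses.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)  # 65536
```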

Subnets

Type    | Count      | CIDR size       | Purpose
--------|------------|-----------------|------------------------------------------
Private | At least 2 | /20 per subnet  | EKS nodes and pods — spread across 2 AZs
Public  | At least 2 | /24 per subnet  | NAT Gateway and load balancers only

Caution: EKS subnets must span at least two Availability Zones. You cannot add subnets in a new AZ after the cluster is provisioned.
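One way to sanity-check a subnet plan before creating anything — purely illustrative, using Python's ipaddress module and the example 10.0.0.0/16 VPC — is to carve the CIDRs locally and confirm sizes and non-overlap:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")

# Take the first two /20s as private subnets (one per AZ), then pick
# two /24s from the space after them as public subnets. The two /20s
# occupy 10.0.0.0 - 10.0.31.255, so /24s from 10.0.32.0 onward are free.
private_subnets = list(vpc.subnets(new_prefix=20))[:2]
public_subnets = list(vpc.subnets(new_prefix=24))[32:34]

for net in private_subnets:
    print("private:", net, f"({net.num_addresses} addresses)")
for net in public_subnets:
    print("public: ", net, f"({net.num_addresses} addresses)")
```

Each /20 yields 4,096 addresses for nodes and pods; each /24 yields 256, which is ample for a NAT Gateway and load balancers.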

NAT Gateway

Deploy a NAT Gateway in a public subnet. All outbound traffic from EKS nodes routes through it to reach the Onehouse control plane. A single NAT Gateway is sufficient for most deployments.

S3 VPC Gateway Endpoint

Create an S3 VPC Gateway Endpoint so EKS-to-S3 traffic stays inside the AWS network and doesn't route through the NAT Gateway. This is required to avoid significant NAT data transfer costs at scale.

S3 Gateway Endpoint policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowContainerImageRegistries",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::docker-images-prod/*",
        "arn:aws:s3:::prod-<AWS_REGION>-starport-layer-bucket/*"
      ]
    },
    {
      "Sid": "AllowOnehouseAndLakehouseBuckets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::onehouse-customer-bucket-XXXX",
        "arn:aws:s3:::onehouse-customer-bucket-XXXX/*",
        "arn:aws:s3:::<your-lake-bucket>",
        "arn:aws:s3:::<your-lake-bucket>/*"
      ]
    }
  ]
}

EKS cluster endpoint

The Onehouse control plane connects to the EKS cluster API endpoint from the NAT IPs below. If you restrict API endpoint access by CIDR, ensure these addresses are not blocked:

  • 54.153.81.1/32
  • 184.169.135.156/32
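To confirm a firewall or endpoint-access rule actually covers these addresses — a hypothetical helper for local verification, not part of any Onehouse tooling — you can test CIDR membership with the stdlib:

```python
import ipaddress

# Onehouse control-plane NAT egress IPs from the list above.
ONEHOUSE_NAT_CIDRS = [
    ipaddress.ip_network("54.153.81.1/32"),
    ipaddress.ip_network("184.169.135.156/32"),
]

def is_onehouse_nat_ip(ip: str) -> bool:
    """Return True if ip falls inside one of the allowlisted NAT CIDRs."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ONEHOUSE_NAT_CIDRS)

print(is_onehouse_nat_ip("54.153.81.1"))  # True
print(is_onehouse_nat_ip("54.153.81.2"))  # False: a /32 covers one address
```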

For environments where no public internet traversal is allowed, Onehouse supports AWS PrivateLink. All control plane traffic stays within the AWS network. Contact Onehouse support to enable PrivateLink for your project.

GCP-specific networking

For GKE clusters:

  • Ensure the cluster has outbound internet access or configure Cloud NAT.
  • Allow egress to gwc.onehouse.ai:443 and dist.onehouse.ai.
  • For Artifact Registry / GCR image access, ensure the node service account has roles/artifactregistry.reader.

Validate your network setup

Before installing the operator, verify your cluster can reach the required endpoints:

# Run from a pod in your cluster
kubectl run net-test --rm -it --image=curlimages/curl -- \
curl -v https://gwc.onehouse.ai

A TLS handshake (even if rejected) confirms network reachability.