Introduction
If you’re running n8n in production and need horizontal scaling for better performance and reliability, queue mode is essential. In this guide, we’ll walk through deploying n8n on Kubernetes with the community-maintained 8gears Helm chart, configuring Redis as the central queue, and separating main and worker pods for distributed execution.
This setup:
- Uses Helm to deploy n8n in queue mode
- Runs main and worker pods separately
- Integrates with PostgreSQL and Redis
- Supports horizontal scalability with multiple workers
Architecture Overview
Browser UI --> Main Pod (Web UI + Scheduler)
                      |
                      v
               Redis Queue (Bull)
                      |
                      v
            Worker Pods (process jobs)
Prerequisites
- Kubernetes cluster (tested with Minikube)
- Helm v3+
- Redis and PostgreSQL deployed (e.g., Bitnami Helm charts, AWS RDS, ElastiCache)
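If you’re testing locally, a Minikube cluster with a bit of extra headroom works well. The resource flags below are a suggestion, not a hard requirement:

minikube start --cpus=4 --memory=8192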
Helm Values Breakdown
Here is the values.yaml you can use to deploy a queue-enabled n8n cluster:
Global Image Config
image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: "1.102.4"
Scaling Config
scaling:
  enabled: true
  redis:
    host: redis-master.default.svc.cluster.local
    port: 6379
This enables queue mode and points n8n to the Redis instance handling the Bull queue.
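For context, queue mode in n8n itself is driven by environment variables. A scaling block like the one above should end up rendering the equivalent of the following into the pods (the exact mapping depends on the chart version, so verify with kubectl describe pod if in doubt):

EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis-master.default.svc.cluster.local
QUEUE_BULL_REDIS_PORT=6379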
Main Pod
The main pod handles the Web UI and workflow triggering (e.g., cron/schedule triggers):
main:
  config:
    n8n:
      basicAuth:
        active: true
        user: admin
        password: admin # change this before going to production
      encryption_key: "your-random-secret-key"
      port: 5678
      protocol: http
    db:
      type: postgresdb
      postgresdb:
        host: postgres-postgresql.default.svc.cluster.local
        port: 5432
        database: n8n
        user: n8n
    queue:
      bull:
        redis:
          host: redis-master.default.svc.cluster.local
          port: 6379
  secret:
    db:
      postgresdb:
        password: secret
  extraEnv:
    EXECUTIONS_MODE:
      value: "queue" # Ensures executions go through the Redis queue
    N8N_LOG_LEVEL:
      value: "debug" # Enables debug logs
    N8N_RUNNERS_ENABLED:
      value: "true" # Enables task runners
    N8N_REDIS_TIMEOUT_THRESHOLD:
      value: "30000"
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS:
      value: "false"
    QUEUE_HEALTH_CHECK_ACTIVE:
      value: "true"
    N8N_METRICS:
      value: "true"
    N8N_DIAGNOSTICS_ENABLED:
      value: "false"
Worker Pods
Workers are responsible for consuming and processing jobs from Redis:
worker:
  enabled: true
  replicaCount: 4
  config:
    db:
      type: postgresdb
      postgresdb:
        host: postgres-postgresql.default.svc.cluster.local
        port: 5432
        database: n8n
        user: n8n
    queue:
      bull:
        redis:
          host: redis-master.default.svc.cluster.local
          port: 6379
  secret:
    db:
      postgresdb:
        password: secret
  extraEnv:
    EXECUTIONS_MODE:
      value: "queue" # Must match the main pod
    N8N_LOG_LEVEL:
      value: "debug"
    N8N_RUNNERS_ENABLED:
      value: "true"
    N8N_REDIS_TIMEOUT_THRESHOLD:
      value: "30000"
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS:
      value: "false"
    QUEUE_HEALTH_CHECK_ACTIVE:
      value: "true"
    N8N_METRICS:
      value: "true"
    N8N_DIAGNOSTICS_ENABLED:
      value: "false"
  concurrency: 5
concurrency defines how many jobs each worker can process simultaneously. With replicaCount: 4 and concurrency: 5, the cluster can handle up to 20 executions in parallel.
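The chart should translate this setting into the worker’s command-line flag; running a worker by hand with the same setting looks like this (the worker subcommand is part of the standard n8n CLI):

n8n worker --concurrency=5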
Webhook Pod (Optional)
If you want to scale webhook handling independently:
webhook:
  enabled: false # Enable if needed
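If you do enable it, n8n also needs an externally reachable base URL to register webhooks against. A minimal sketch, assuming workflow.example.com points at your ingress or load balancer:

webhook:
  enabled: true
  replicaCount: 2
  extraEnv:
    WEBHOOK_URL:
      value: "https://workflow.example.com/"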
Deployment
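The Redis and PostgreSQL commands below use the Bitnami charts; if you haven’t added that repository to Helm yet, do so first:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update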
- Save the above config to values.yaml
- Deploy Redis and PostgreSQL (skip this if you already have instances running):
helm install redis bitnami/redis
helm install postgres bitnami/postgresql
- Deploy n8n:
helm upgrade --install n8n oci://8gears.container-registry.com/library/n8n \
  --namespace n8n --create-namespace -f values.yaml
Verifying Worker Pods Are Running
After deployment, ensure that your worker pods are up and running. You can verify this by running:
kubectl get pods -n n8n
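Illustrative output (pod names, hashes, and ages will differ in your cluster):

NAME                         READY   STATUS    RESTARTS   AGE
n8n-5f7c9d8b6-xkq2p          1/1     Running   0          2m10s
n8n-worker-6b9d7c54f-8fjtn   1/1     Running   0          2m10s
n8n-worker-6b9d7c54f-lq4zv   1/1     Running   0          2m10s
n8n-worker-6b9d7c54f-pw9ms   1/1     Running   0          2m10s
n8n-worker-6b9d7c54f-ttb6d   1/1     Running   0          2m10s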
You should see multiple n8n-worker pods in the Running state. Each of these workers is connected to the Redis queue and ready to pick up jobs. The number of workers is controlled by the worker.replicaCount value in your values.yaml; we set it to 4, so Kubernetes has spawned four workers accordingly.
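To open the UI locally, port-forward the main service (the service name follows your Helm release name, so confirm it with kubectl get svc -n n8n):

kubectl port-forward -n n8n svc/n8n 5678:80

Then browse to http://localhost:5678.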
You’ll be greeted with n8n’s owner account setup screen.
Here, you need to provide:
- A valid email
- First and last name
- A strong password (at least 8 characters, 1 number, and 1 capital letter)
This setup screen only appears on first run. The credentials you set here will be stored in the database (the users table in Postgres).
This step finalizes your n8n instance and gives you admin access to the system.
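Since we enabled N8N_METRICS, you can also sanity-check the Prometheus metrics endpoint through the same port-forward (it lives at /metrics on the main instance):

curl -s http://localhost:5678/metrics | head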

Closing Thoughts
Deploying n8n on Kubernetes using queue mode unlocks true scalability for production-grade workflows. By decoupling execution from the UI and enabling horizontal scaling with worker pods, you gain better control over performance, reliability, and system health.
Whether you’re automating internal ops or building external automation services, this Helm-based setup gives you a powerful, cloud-native foundation.
From Minikube for local testing to large production workloads, this architecture holds up with minimal configuration drift between environments.
Now that your n8n setup is:
✅ Running in queue mode
✅ Using Redis-backed job queues
✅ Horizontally scalable via worker replicas
✅ Secured with an owner account (n8n’s built-in user management)
✅ Observable via logs and metrics
You’re ready to build faster and scale smarter.
Full values.yaml Reference
Below is the complete values.yaml used for this setup. Copy it and adjust the Redis/Postgres service names, resource requirements, and any other preferences.
image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: "1.102.4"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
hostAliases: []

ingress:
  enabled: false
  annotations: {}
  className: ""
  hosts:
    - host: workflow.example.com
      paths: []
  tls: []

scaling:
  enabled: true
  redis:
    host: redis-master.default.svc.cluster.local
    port: 6379

main:
  config:
    n8n:
      basicAuth:
        active: true
        user: admin
        password: admin
      encryption_key: "random-generated-secret-key"
      port: 5678
      protocol: http
    db:
      type: postgresdb
      postgresdb:
        host: postgres-postgresql.default.svc.cluster.local
        port: 5432
        database: n8n
        user: n8n
    queue:
      bull:
        redis:
          host: redis-master.default.svc.cluster.local
          port: 6379
  secret:
    db:
      postgresdb:
        password: secret
  extraEnv:
    EXECUTIONS_MODE:
      value: "queue"
    N8N_LOG_LEVEL:
      value: "debug"
    N8N_RUNNERS_ENABLED:
      value: "true"
    N8N_REDIS_TIMEOUT_THRESHOLD:
      value: "30000"
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS:
      value: "false"
    QUEUE_HEALTH_CHECK_ACTIVE:
      value: "true"
    N8N_METRICS:
      value: "true"
    N8N_DIAGNOSTICS_ENABLED:
      value: "false"
  persistence:
    enabled: false
    type: emptyDir
    accessModes:
      - ReadWriteOnce
    size: 1Gi
  extraVolumes: []
  extraVolumeMounts: []
  replicaCount: 1
  deploymentStrategy:
    type: "Recreate"
    # maxSurge: "50%"
    # maxUnavailable: "50%"
  serviceAccount:
    create: true
    annotations: {}
    name: ""
  deploymentAnnotations: {}
  deploymentLabels: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  securityContext: {}
  lifecycle: {}
  command: []
  livenessProbe:
    httpGet:
      path: /healthz
      port: 5678
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  readinessProbe:
    httpGet:
      path: /healthz
      port: 5678
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  initContainers: []
  service:
    enabled: true
    annotations: {}
    type: ClusterIP
    port: 80
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 512Mi
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  nodeSelector: {}
  tolerations: []
  affinity: {}

worker:
  enabled: true
  config:
    db:
      type: postgresdb
      postgresdb:
        host: postgres-postgresql.default.svc.cluster.local
        port: 5432
        database: n8n
        user: n8n
    queue:
      bull:
        redis:
          host: redis-master.default.svc.cluster.local
          port: 6379
  secret:
    db:
      postgresdb:
        password: secret
  extraEnv:
    EXECUTIONS_MODE:
      value: "queue"
    N8N_LOG_LEVEL:
      value: "debug"
    N8N_RUNNERS_ENABLED:
      value: "true"
    N8N_REDIS_TIMEOUT_THRESHOLD:
      value: "30000"
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS:
      value: "false"
    QUEUE_HEALTH_CHECK_ACTIVE:
      value: "true"
    N8N_METRICS:
      value: "true"
    N8N_DIAGNOSTICS_ENABLED:
      value: "false"
  concurrency: 5
  persistence:
    enabled: false
    type: emptyDir
    accessModes:
      - ReadWriteOnce
    size: 1Gi
  replicaCount: 4
  deploymentStrategy:
    type: "Recreate"
    # maxSurge: "50%"
    # maxUnavailable: "50%"
  serviceAccount:
    create: true
    annotations: {}
    name: ""
  deploymentAnnotations: {}
  deploymentLabels: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  securityContext: {}
  lifecycle: {}
  command: []
  commandArgs: []
  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  initContainers: []
  service:
    annotations: {}
    type: ClusterIP
    port: 80
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 200m
      memory: 512Mi
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  nodeSelector: {}
  tolerations: []
  affinity: {}

webhook:
  enabled: false
  config: {}
  secret: {}
  extraEnv:
    N8N_RUNNERS_ENABLED:
      value: "true"
    N8N_REDIS_TIMEOUT_THRESHOLD:
      value: "30000"
    N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS:
      value: "false"
    # WEBHOOK_URL:
    #   value: "http://webhook.domain.tld"
  persistence:
    enabled: false
    type: emptyDir
    accessModes:
      - ReadWriteOnce
    size: 1Gi
  replicaCount: 1
  deploymentStrategy:
    type: "Recreate"
  nameOverride: ""
  fullnameOverride: ""
  serviceAccount:
    create: true
    annotations: {}
    name: ""
  deploymentAnnotations: {}
  deploymentLabels: {}
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  securityContext: {}
  lifecycle: {}
  command: []
  commandArgs: []
  livenessProbe:
    httpGet:
      path: /healthz
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  readinessProbe:
    httpGet:
      path: /healthz
      port: http
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1
  initContainers: []
  service:
    annotations: {}
    type: ClusterIP
    port: 80
  resources: {}
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 100
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  nodeSelector: {}
  tolerations: []
  affinity: {}

extraManifests: []
extraTemplateManifests: []

valkey:
  enabled: false