
How to Set Up an Nginx Ingress on DigitalOcean Kubernetes Using Helm
Setting up an Nginx Ingress Controller on DigitalOcean Kubernetes using Helm is a crucial skill for managing external access to your cluster services. This configuration acts as a reverse proxy and load balancer, handling HTTP and HTTPS traffic routing to your applications based on defined rules. You’ll learn how to deploy the ingress controller, configure SSL certificates, set up routing rules, and troubleshoot common issues that arise during implementation.
How Nginx Ingress Controller Works
The Nginx Ingress Controller operates as a specialized load balancer that runs inside your Kubernetes cluster. Unlike traditional external load balancers, it understands Kubernetes resources and can dynamically update its configuration based on Ingress resource definitions.
When you create an Ingress resource, the controller automatically generates the corresponding Nginx configuration and reloads the server. This process happens seamlessly without dropping existing connections. The controller watches for changes to Services, Endpoints, Secrets, and ConfigMaps, ensuring your routing rules stay synchronized with your cluster state.
| Component | Function | Resource Type |
|---|---|---|
| Ingress Controller | Nginx process that handles traffic | Deployment |
| Ingress Resource | Configuration rules for routing | Ingress (built-in networking.k8s.io/v1 resource) |
| Load Balancer Service | External IP allocation | Service |
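To see this in practice, you can dump the configuration the controller has generated from your Ingress resources once it is running — a quick sanity check, using the deployment name and namespace from the installation steps below:

```bash
# Inspect the nginx.conf the controller rendered from your Ingress resources
kubectl exec -n ingress-nginx deployment/ingress-nginx-controller -- \
  cat /etc/nginx/nginx.conf | grep -A 3 "server_name"
```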
Prerequisites and Initial Setup
Before diving into the installation, ensure you have these components ready:
- A running DigitalOcean Kubernetes cluster with kubectl access
- Helm 3.x installed and configured
- Basic understanding of Kubernetes services and deployments
- Domain name pointing to your cluster (for SSL configuration)
First, verify your cluster connection and install Helm if you haven’t already:
```bash
kubectl cluster-info
kubectl get nodes

# Install Helm (if not already installed)
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
```
Step-by-Step Installation Guide
Start by adding the official Nginx Ingress Helm repository:
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
```
Create a namespace for the ingress controller to keep things organized:
```bash
kubectl create namespace ingress-nginx
```
Now install the Nginx Ingress Controller with DigitalOcean-specific configurations:
```bash
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.type=LoadBalancer \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"=true \
  --set-string controller.config.use-proxy-protocol=true \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-hostname"=your-domain.com
```

Note the use of `--set-string` for the proxy protocol values: annotation and ConfigMap values must be strings, and a plain `--set ...=true` would be treated as a boolean.
The installation process takes a few minutes as DigitalOcean provisions a load balancer. Monitor the progress:
```bash
kubectl get services -n ingress-nginx -w
```
Once you see an external IP assigned, your ingress controller is ready. The output should look similar to:
```
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)
ingress-nginx-controller   LoadBalancer   10.245.xxx.xxx   139.59.xxx.xxx   80:32080/TCP,443:32443/TCP
```
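Nothing is routed yet, so a plain request to the new load balancer should return the controller's default 404 response — a quick way to confirm traffic is reaching Nginx (substitute your own EXTERNAL-IP):

```bash
curl -i http://139.59.xxx.xxx/
# Expect "HTTP/1.1 404 Not Found" served by nginx: the default backend answered,
# which means the DigitalOcean load balancer and the controller are wired up
```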
Configuring Your First Ingress Resource
Create a sample application to test your ingress setup. Here’s a simple nginx deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-app-service
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 80
```
Save this as `test-app.yaml` and apply it:
```bash
kubectl apply -f test-app.yaml
```
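Before wiring up the Ingress, confirm the pods are running and the Service has picked them up as endpoints:

```bash
kubectl get pods -l app=test-app
kubectl get endpoints test-app-service
```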
Now create an Ingress resource to route traffic to your application:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app-service
            port:
              number: 80
```
Save this as `ingress.yaml` and apply it:

```bash
kubectl apply -f ingress.yaml
```
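Once DNS for your-domain.com points at the load balancer you can test in a browser; until then you can hit the external IP directly and pass the expected Host header (substitute your own IP):

```bash
kubectl get ingress test-ingress
curl -i http://139.59.xxx.xxx/ -H "Host: your-domain.com"
```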
SSL Certificate Management with Cert-Manager
For production environments, SSL certificates are essential. A common and well-supported approach is cert-manager with Let’s Encrypt:
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update

kubectl create namespace cert-manager

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.13.0 \
  --set installCRDs=true
```
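Give the release a minute, then confirm the cert-manager components are running before creating any issuers:

```bash
kubectl get pods -n cert-manager
# Expect cert-manager, cert-manager-cainjector and cert-manager-webhook in Running state
```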
Create a ClusterIssuer for Let’s Encrypt:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@domain.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
```
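Save this as `cluster-issuer.yaml` (an assumed filename), apply it, and check that the issuer successfully registers an ACME account:

```bash
kubectl apply -f cluster-issuer.yaml
kubectl get clusterissuer letsencrypt-prod
# The READY column should show True once registration completes
```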
Update your Ingress resource to include TLS configuration:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - your-domain.com
    secretName: your-domain-tls
  rules:
  - host: your-domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app-service
            port:
              number: 80
```
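After re-applying the Ingress, cert-manager creates a Certificate resource and solves an HTTP-01 challenge in the background; you can follow the issuance and then verify HTTPS end to end:

```bash
kubectl apply -f ingress.yaml
kubectl get certificate your-domain-tls -w
curl -I https://your-domain.com
```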
Advanced Configuration Options
The Nginx Ingress Controller offers extensive customization through annotations and ConfigMaps. Here are some commonly used configurations:
| Feature | Annotation | Example Value |
|---|---|---|
| Rate Limiting | nginx.ingress.kubernetes.io/limit-rps | "100" |
| CORS Headers | nginx.ingress.kubernetes.io/enable-cors | "true" |
| Upload Size | nginx.ingress.kubernetes.io/proxy-body-size | 50m |
| Basic Auth | nginx.ingress.kubernetes.io/auth-type | basic |
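As an example, a couple of these annotations could be attached to the test Ingress from earlier without editing the manifest — a sketch, with the rate and body-size values picked arbitrarily:

```bash
kubectl annotate ingress test-ingress \
  nginx.ingress.kubernetes.io/limit-rps="100" \
  nginx.ingress.kubernetes.io/proxy-body-size="50m" \
  --overwrite
```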
For global configuration, the controller reads a ConfigMap. With the Helm chart, that ConfigMap is named after the release (here `ingress-nginx-controller`) and its keys are normally managed through the `controller.config` Helm value; edited directly, it looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
  server-tokens: "false"
```
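The controller reloads automatically when this ConfigMap changes; a quick way to confirm the reload happened and that a directive landed in the generated config (log wording can vary between versions):

```bash
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller | grep -i "reload"
kubectl exec -n ingress-nginx deployment/ingress-nginx-controller -- \
  grep "server_tokens" /etc/nginx/nginx.conf
```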
Real-World Use Cases and Examples
Multi-Service Application Routing
A common scenario involves routing different paths to different services:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-service-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /frontend(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: frontend-service
            port:
              number: 80
```

Regex paths should use `pathType: ImplementationSpecific`, and the second capture group becomes the rewritten upstream path.
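The effect is that the routing prefix is stripped before the request reaches the backend; for example (api-service and frontend-service are assumed to exist):

```bash
# myapp.com/api/users      -> api-service:8080 receives GET /users
curl -i http://myapp.com/api/users
# myapp.com/frontend/index -> frontend-service:80 receives GET /index
curl -i http://myapp.com/frontend/index
```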
Canary Deployments
Use the canary annotations to split traffic for gradual rollouts; here 10% of requests for myapp.com are routed to the new version while the existing Ingress keeps serving the rest:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-v2-service
            port:
              number: 80
```
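You can roughly verify the 90/10 split by sending a batch of requests and tallying which version answers — a sketch that assumes a hypothetical /version endpoint reporting the running app version:

```bash
for i in $(seq 1 100); do
  curl -s -H "Host: myapp.com" http://139.59.xxx.xxx/version
  echo
done | sort | uniq -c
```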
Performance Tuning and Best Practices
Monitor the ingress controller's resource usage, and scrape its Prometheus metrics if you enabled them in the chart (`controller.metrics.enabled=true`):

```bash
kubectl top pods -n ingress-nginx

# Port-forward the controller's metrics port and inspect the exposed metrics
kubectl port-forward -n ingress-nginx deployment/ingress-nginx-controller 10254:10254 &
curl -s http://localhost:10254/metrics | grep nginx_ingress
```
For high-traffic environments, consider these optimizations:
- Increase replica count for the ingress controller
- Configure resource limits and requests appropriately
- Enable HTTP/2 and gzip compression
- Use connection pooling and keep-alive settings
- Implement proper caching strategies
Update your Helm values for production workloads:
```bash
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.replicaCount=3 \
  --set controller.resources.requests.cpu=100m \
  --set controller.resources.requests.memory=90Mi \
  --set controller.resources.limits.cpu=500m \
  --set controller.resources.limits.memory=500Mi
```

The `--reuse-values` flag keeps the proxy-protocol settings from the original install in place while layering on the new values.
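The protocol-level items from the list above (HTTP/2, gzip, keep-alive) go through the same `controller.config` mechanism — a sketch with commonly used starting values, to be tuned for your workload:

```bash
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set-string controller.config.use-http2=true \
  --set-string controller.config.use-gzip=true \
  --set-string controller.config.keep-alive-requests=1000 \
  --set-string controller.config.upstream-keepalive-connections=64
```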
Troubleshooting Common Issues
External IP Not Assigned
If your LoadBalancer service shows `<pending>` under EXTERNAL-IP for more than a few minutes, inspect the service and the recent events in the namespace:

```bash
kubectl describe service ingress-nginx-controller -n ingress-nginx
kubectl get events -n ingress-nginx --sort-by='.lastTimestamp'
```
This usually indicates DigitalOcean load balancer provisioning issues or quota limits.
SSL Certificate Issues
Check cert-manager logs and certificate status:
```bash
kubectl logs -n cert-manager deployment/cert-manager
kubectl describe certificate your-domain-tls
kubectl get challenges --all-namespaces
```
503 Service Temporarily Unavailable
This error typically means your backend service isn’t responding:
```bash
kubectl get endpoints
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
```
Verify your service selectors match your pod labels exactly.
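A quick way to compare the Service selector against the live pod labels:

```bash
kubectl get service test-app-service -o jsonpath='{.spec.selector}'; echo
kubectl get pods -l app=test-app --show-labels
```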
Comparison with Alternative Solutions
| Solution | Pros | Cons | Best For |
|---|---|---|---|
| Nginx Ingress | Feature-rich, battle-tested, extensive customization | Resource intensive, complex configuration | Production environments |
| Traefik | Automatic service discovery, modern architecture | Smaller community, fewer examples | Microservices, Docker environments |
| DigitalOcean LB | Managed service, minimal setup | Limited customization, vendor lock-in | Simple applications |
For high-availability setups, run multiple ingress controller replicas spread across different nodes using pod anti-affinity rules.
Security Considerations
Implement these security measures for production deployments:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-nginx-policy
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
```

This policy restricts inbound traffic to the controller's HTTP and HTTPS ports while leaving its outbound traffic (to backends and the API server) untouched.
Additional security configurations:
- Enable ModSecurity WAF with OWASP rules
- Configure rate limiting and DDoS protection
- Use authentication middlewares for sensitive endpoints
- Implement IP whitelisting for admin interfaces
- Regular security updates and monitoring
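For instance, basic authentication and source-IP restrictions from the list above can be attached to an Ingress with a Secret and a couple of annotations — a sketch, assuming htpasswd is installed locally and 203.0.113.0/24 stands in for your admin network:

```bash
# Create an htpasswd file named "auth" and store it as a Secret
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth

# Protect the test Ingress with basic auth and an IP allow list
kubectl annotate ingress test-ingress \
  nginx.ingress.kubernetes.io/auth-type="basic" \
  nginx.ingress.kubernetes.io/auth-secret="basic-auth" \
  nginx.ingress.kubernetes.io/auth-realm="Authentication Required" \
  nginx.ingress.kubernetes.io/whitelist-source-range="203.0.113.0/24" \
  --overwrite
```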
The combination of DigitalOcean Kubernetes and Nginx Ingress provides a robust foundation for hosting production applications. For smaller deployments, consider VPS solutions as a cost-effective alternative.
Remember to regularly update your ingress controller and monitor its performance metrics. The official Nginx Ingress documentation provides comprehensive guidance for advanced configurations and troubleshooting scenarios.
