
How to Set Up a K3s Kubernetes Cluster on Ubuntu
Setting up a Kubernetes cluster can feel like trying to solve a Rubik’s cube blindfolded, but K3s changes the game completely. This lightweight Kubernetes distribution strips away the complexity while keeping all the power you need for production workloads. Whether you’re running a homelab, managing edge deployments, or just want to spin up a cluster without dedicating your entire weekend to YAML debugging, this guide will walk you through getting K3s running on Ubuntu from zero to hero. We’ll cover everything from the initial installation to real-world deployment scenarios, including the gotchas that’ll save you hours of head-scratching later.
How K3s Works Under the Hood
K3s is essentially Kubernetes with all the enterprise bloat surgically removed. Rancher Labs designed it to be a single binary that weighs in at less than 70MB – compare that to a full Kubernetes installation that can easily consume several gigabytes. The magic happens through some clever architectural decisions:
- SQLite by default: Instead of requiring etcd, K3s uses SQLite for single-node setups and can scale to external datastores when needed
- Embedded components: Critical components like CoreDNS, Traefik ingress controller, and local storage provisioner come baked in
- Simplified networking: Flannel handles CNI out of the box, though you can swap it for Calico or Cilium if you’re feeling adventurous
- ARM-friendly: Unlike vanilla Kubernetes, K3s runs beautifully on ARM processors, making it perfect for Raspberry Pi clusters
The architecture is surprisingly elegant. The server process combines the Kubernetes API server, scheduler, controller manager, and cloud controller manager into a single binary. Agent nodes run kubelet and kube-proxy, plus the container runtime (containerd by default). This consolidation reduces memory overhead by roughly 50% compared to full Kubernetes.
Here’s what makes K3s particularly appealing for ops folks: it starts in under 30 seconds on decent hardware, uses about 512MB of RAM for a basic setup, and the entire cluster configuration lives in a single kubeconfig file. No more juggling multiple configuration files or debugging complex networking overlays.
Step-by-Step K3s Installation and Setup
Let’s get our hands dirty. I’m assuming you’ve got Ubuntu 20.04 or 22.04 instances ready to go. If you need VPS instances, grab them from https://mangohost.net/vps – their Ubuntu images come pre-configured and save you the initial OS setup headaches.
Prerequisites and System Preparation:
# Update your system (obviously)
sudo apt update && sudo apt upgrade -y
# Install essential tools
sudo apt install -y curl wget apt-transport-https
# Disable swap (Kubernetes hates swap)
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# Configure firewall (if ufw is active)
sudo ufw allow 6443/tcp # Kubernetes API server
sudo ufw allow 10250/tcp # Kubelet API
sudo ufw allow 8472/udp # Flannel VXLAN
sudo ufw allow 51820/udp # Flannel Wireguard (if used)
Installing K3s Server (Master Node):
# The magic one-liner installation
curl -sfL https://get.k3s.io | sh -
# Check if K3s is running
sudo systemctl status k3s
# Get your kubeconfig
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(whoami):$(whoami) ~/.kube/config
# Verify the installation
kubectl get nodes
kubectl get pods -A
If you want more control over the installation (and you probably should), here’s a more configurable approach:
# Install with custom options
# (--cluster-init switches the datastore from SQLite to embedded etcd)
curl -sfL https://get.k3s.io | K3S_TOKEN="your-super-secret-token" \
  sh -s - server \
  --cluster-init \
  --write-kubeconfig-mode=644 \
  --disable=traefik \
  --node-name=master-01
Adding Worker Nodes:
First, grab the node token from your master:
# On the master node
sudo cat /var/lib/rancher/k3s/server/node-token
Then on each worker node:
# Replace MASTER_IP and TOKEN with actual values
curl -sfL https://get.k3s.io | K3S_URL=https://MASTER_IP:6443 \
K3S_TOKEN=TOKEN \
sh -s - --node-name=worker-01
Setting Up High Availability (HA) Cluster:
For production workloads, you’ll want multiple master nodes. Here’s how to set up a proper HA cluster:
# First master node
curl -sfL https://get.k3s.io | K3S_TOKEN=your-cluster-secret \
sh -s - server --cluster-init
# Additional master nodes (run on each)
curl -sfL https://get.k3s.io | K3S_TOKEN=your-cluster-secret \
sh -s - server --server https://first-master-ip:6443
For serious production deployments, consider using an external database like PostgreSQL or MySQL instead of the embedded etcd:
# Using external PostgreSQL
curl -sfL https://get.k3s.io | sh -s - server \
--datastore-endpoint="postgres://username:password@hostname:port/database"
Real-World Examples and Use Cases
Scenario 1: Development Environment
Perfect for developers who want a local Kubernetes environment without Docker Desktop’s resource hunger:
# Single-node development cluster
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--write-kubeconfig-mode 644 --disable=traefik" sh -
# Deploy a sample application
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx
Scenario 2: Edge Computing Deployment
K3s shines in edge scenarios where resources are constrained:
# Lightweight edge deployment
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik --disable=servicelb" sh -
# Deploy an IoT data collector (iot-collector.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iot-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: iot-collector
  template:
    metadata:
      labels:
        app: iot-collector
    spec:
      containers:
      - name: collector
        image: eclipse-mosquitto:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "100m"
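To let other pods reach the collector, a matching Service can expose the broker. A minimal sketch, assuming the stock eclipse-mosquitto image listening on MQTT's default port 1883:

```yaml
# iot-collector-svc.yaml — ClusterIP Service for the MQTT broker
apiVersion: v1
kind: Service
metadata:
  name: iot-collector
spec:
  selector:
    app: iot-collector   # must match the Deployment's pod labels
  ports:
  - name: mqtt
    port: 1883
    targetPort: 1883
```

Other workloads in the cluster can then connect to `iot-collector:1883` via cluster DNS.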
Scenario 3: Multi-Arch Raspberry Pi Cluster
This is where K3s really shows off. Traditional Kubernetes on ARM is painful, but K3s just works:
# On Raspberry Pi (ARM64)
curl -sfL https://get.k3s.io | sh -
# Verify ARM support
kubectl get nodes -o wide
# You'll see arm64 architecture listed
Performance Comparison Table:

| Metric | Full Kubernetes | K3s | K0s | MicroK8s |
|---|---|---|---|---|
| Binary Size | ~2GB | ~70MB | ~200MB | ~200MB |
| Memory Usage (idle) | ~1.5GB | ~512MB | ~800MB | ~900MB |
| Startup Time | 5-10 minutes | 30-60 seconds | 2-3 minutes | 2-3 minutes |
| ARM Support | Complex | Native | Native | Limited |
Common Gotchas and Solutions:
Problem: Nodes showing as “NotReady”
# Check CNI plugin status
kubectl get pods -n kube-system | grep flannel
# Restart K3s if needed
sudo systemctl restart k3s
Problem: Ingress not working after disabling Traefik
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
Problem: Certificate and image-pull issues in air-gapped environments
# Point the embedded containerd at a private registry mirror
curl -sfL https://get.k3s.io | sh -s - --private-registry /etc/rancher/k3s/registries.yaml
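The registries.yaml referenced above tells K3s's embedded containerd where to pull images from and which CA to trust. A minimal sketch — the mirror hostname and certificate path are placeholders for your environment:

```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - "https://registry.internal.example.com:5000"
configs:
  "registry.internal.example.com:5000":
    tls:
      ca_file: /etc/rancher/k3s/certs/registry-ca.crt
```

Restart K3s after editing this file so containerd picks up the new mirror configuration.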
Advanced Configuration Examples:
# Custom K3s configuration file (/etc/rancher/k3s/config.yaml)
write-kubeconfig-mode: "0644"
tls-san:
  - "my-cluster.example.com"
  - "another-alias.example.com"
disable:
  - servicelb
  - traefik
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
kubelet-arg:
  - "max-pods=150"
kube-apiserver-arg:
  - "audit-log-path=/var/log/audit.log"
  - "audit-log-maxage=30"
Integration with Other Tools and Automation
K3s plays exceptionally well with modern DevOps toolchains. Here are some killer combinations:
Terraform Integration:
resource "null_resource" "k3s_install" {
  connection {
    host        = var.server_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = [
      "curl -sfL https://get.k3s.io | sh -",
      "sudo chmod 644 /etc/rancher/k3s/k3s.yaml"
    ]
  }
}
GitOps with ArgoCD:
# Install ArgoCD on K3s
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Expose ArgoCD UI
kubectl patch svc argocd-server -n argocd -p '{"spec":{"type":"NodePort"}}'
Monitoring Stack (Prometheus + Grafana):
# Using Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Add Prometheus community charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install monitoring stack
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring --create-namespace \
--set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=10Gi
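To reach Grafana without setting up an ingress, the chart's service type can be overridden. A sketch assuming the kube-prometheus-stack chart's `grafana.service.type` value (pass it with `-f values.yaml` on install or upgrade):

```yaml
# values.yaml — expose Grafana on a NodePort (sketch)
grafana:
  service:
    type: NodePort
```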
Backup and Disaster Recovery:
K3s makes backups stupidly simple compared to full Kubernetes:
# Backup K3s cluster state (SQLite datastore only —
# etcd-backed HA clusters should use `k3s etcd-snapshot save` instead)
sudo cp /var/lib/rancher/k3s/server/db/state.db /backup/k3s-backup-$(date +%Y%m%d).db
# Automated backup script (e.g. /usr/local/bin/k3s-backup.sh)
#!/bin/bash
BACKUP_DIR="/backup/k3s"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"
sudo cp /var/lib/rancher/k3s/server/db/state.db "$BACKUP_DIR/k3s-$DATE.db"
kubectl get all --all-namespaces -o yaml > "$BACKUP_DIR/k3s-resources-$DATE.yaml"
# Clean up old backups (keep last 7 days)
find "$BACKUP_DIR" -name "k3s-*" -type f -mtime +7 -delete
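To run this nightly, a cron entry works fine — assuming the script above is saved as /usr/local/bin/k3s-backup.sh and made executable:

```
# /etc/cron.d/k3s-backup — nightly backup at 02:00 as root
0 2 * * * root /usr/local/bin/k3s-backup.sh
```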
Scaling and Production Considerations
While K3s is lightweight, it's no toy — it's a CNCF-certified Kubernetes distribution that runs real production workloads, from homelabs to large edge fleets. Here's how to scale it properly:
Horizontal Scaling:
# Add nodes dynamically with cloud-init
#cloud-config
runcmd:
  - curl -sfL https://get.k3s.io | K3S_URL=https://master-lb:6443 K3S_TOKEN=mytoken sh -
Load Balancer Configuration:
For serious production workloads, put a load balancer in front of your K3s masters. If you need dedicated servers for high-performance workloads, check out https://mangohost.net/dedicated for bare metal options that give you complete control over the hardware.
# HAProxy configuration for K3s masters (TCP passthrough so TLS terminates at the API server)
backend k3s-masters
    mode tcp
    balance roundrobin
    server master1 10.0.1.10:6443 check
    server master2 10.0.1.11:6443 check
    server master3 10.0.1.12:6443 check
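A backend alone does nothing without a frontend listening on 6443 in TCP mode. A sketch — adjust the bind address to your load balancer's interface:

```
frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-masters
```

Remember to add the load balancer's hostname or IP to the cluster's `tls-san` list so the API server certificate is valid for it.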
Resource Management:
# Set resource quotas per namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    persistentvolumeclaims: "4"
Security Hardening
K3s comes reasonably secure out of the box, but production environments need additional hardening:
# Enforce the "baseline" Pod Security Standard on a namespace
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
EOF
Certificate Management:
# Install cert-manager for automatic SSL
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml
# Configure Let's Encrypt issuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
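With the issuer in place, an Ingress can request a certificate automatically via the cert-manager.io/cluster-issuer annotation. The hostname and backend service here are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-tls
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: demo-tls-cert   # cert-manager creates and renews this Secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-app
            port:
              number: 80
```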
Troubleshooting and Maintenance
K3s is remarkably stable, but when things go sideways, here’s your debugging toolkit:
# Check K3s logs
sudo journalctl -u k3s -f
# Inspect node conditions
kubectl describe node your-node-name
# Check system resources
kubectl top nodes
kubectl top pods --all-namespaces
# Network debugging
kubectl run debug --image=nicolaka/netshoot -it --rm -- /bin/bash
# Complete cluster reset (nuclear option)
sudo /usr/local/bin/k3s-uninstall.sh # on server
sudo /usr/local/bin/k3s-agent-uninstall.sh # on agents
Performance Tuning:
# Optimize for high-density workloads
echo 'net.netfilter.nf_conntrack_max = 131072' | sudo tee -a /etc/sysctl.conf
echo 'fs.inotify.max_user_watches = 1048576' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# K3s server optimizations
curl -sfL https://get.k3s.io | sh -s - server \
--kube-apiserver-arg='max-requests-inflight=400' \
--kube-apiserver-arg='max-mutating-requests-inflight=200' \
--kubelet-arg='max-pods=200'
Conclusion and Recommendations
K3s has fundamentally changed how I think about Kubernetes deployments. It removes the complexity barrier that kept many teams from adopting container orchestration while maintaining full Kubernetes compatibility. The sharp reduction in resource usage and near-instant startup times make it perfect for development environments, edge computing, and even production workloads where you don’t need enterprise-grade multi-tenancy.
Use K3s when:
- You need Kubernetes but have resource constraints
- Setting up development or testing environments
- Deploying to edge locations or ARM devices
- Running simple production workloads without complex networking requirements
- Learning Kubernetes without the operational overhead
Stick with full Kubernetes when:
- You need advanced networking features like multiple CNI plugins
- Running highly regulated workloads requiring specific compliance certifications
- Managing massive multi-tenant environments with complex RBAC requirements
- Using Kubernetes distributions with vendor-specific extensions
The installation process takes literally minutes, and the operational overhead is minimal compared to managing a full Kubernetes cluster. For most use cases, K3s provides 95% of Kubernetes functionality with 20% of the complexity. That’s a trade-off worth making in most scenarios.
Start with a single-node setup for experimentation, then graduate to a proper HA cluster when you’re ready for production. The migration path is straightforward, and the knowledge transfers directly to full Kubernetes environments. Your future self will thank you for choosing the simpler path that actually works.
