
Setting Up Highly Available HAProxy Servers with Keepalived and Reserved IPs on Ubuntu 24
Setting up highly available HAProxy servers with Keepalived and Reserved IPs on Ubuntu 24 creates a robust load balancing solution that ensures your applications remain accessible even when one server fails. This configuration eliminates single points of failure by maintaining automatic failover between primary and secondary load balancers, while Reserved IPs provide seamless traffic switching without DNS propagation delays. You’ll learn how to implement a production-ready high availability setup using HAProxy for load balancing, Keepalived for automatic failover management, and Reserved IPs for consistent external access points.
How High Availability Load Balancing Works
The architecture combines three critical components that work together to maintain service availability. HAProxy handles the actual load balancing between your backend servers, distributing incoming requests based on configured algorithms and health checks. Keepalived manages the Virtual Router Redundancy Protocol (VRRP) implementation, which monitors the health of your load balancer instances and automatically promotes a backup server to master when failures occur.
Reserved IPs act as floating IP addresses that can move between servers instantly. When Keepalived detects a failure on the primary HAProxy server, it triggers the Reserved IP to automatically reassign to the secondary server, typically completing this process in under 10 seconds. This eliminates the need for DNS changes and provides near-instantaneous failover.
VRRP works by having the master send advertisement (heartbeat) packets at a regular interval, one second in the configuration below. If the backup stops receiving them, it waits roughly three advertisement intervals plus a small priority-based skew (three to four seconds in practice) before assuming the master role and claiming the Reserved IP. This creates an active-passive configuration where only one server handles traffic at a time, though you can configure active-active setups for higher throughput.
Infrastructure Prerequisites and Planning
Before implementation, you’ll need at least two Ubuntu 24 servers in the same data center or cloud region. Most cloud providers require Reserved IPs to remain within the same availability zone for automatic reassignment. Your servers should have sufficient resources – typically 2 CPU cores and 4GB RAM minimum for production load balancing.
Reserve a dedicated IP address through your hosting provider’s control panel. This IP will serve as your primary entry point for all traffic. Document your backend server IPs, ports, and any specific health check requirements. Plan your network topology to ensure the load balancer servers can communicate with each other and reach all backend servers.
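For reference, these are the example addresses the configuration snippets in this guide use; substitute your own values as you follow along:
# Example addresses used throughout this guide (replace with your own)
# Reserved IP (public entry point):  203.0.113.50
# VRRP virtual IP (Keepalived):      192.168.1.100
# Backend web servers:               10.0.1.10, 10.0.1.11, 10.0.1.12 (port 80)
# HAProxy statistics listener:       port 8404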
Security group or firewall rules need configuration for several ports:
- Port 80/443 for web traffic
- Port 8404 for HAProxy statistics (optional)
- VRRP multicast traffic (protocol 112)
- Backend server ports you’re load balancing
Installing and Configuring HAProxy
Install HAProxy on both servers using Ubuntu’s package manager. The Ubuntu 24.04 repositories include HAProxy 2.8, which provides excellent performance and stability for production environments.
sudo apt update
sudo apt install haproxy -y
sudo systemctl enable haproxy
Create a backup of the default configuration before making changes:
sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.backup
Configure HAProxy with a comprehensive setup that includes health checks, statistics, and proper backend management. Replace the default configuration with this production-ready example:
global
    daemon
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    log stdout local0
    # SSL Configuration
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    mode http
    log global
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option httplog
    option dontlognull
    option redispatch
    retries 3
    maxconn 2000

frontend web_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/your-cert.pem
    redirect scheme https if !{ ssl_fc }
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server web1 10.0.1.10:80 check inter 3000ms rise 2 fall 3
    server web2 10.0.1.11:80 check inter 3000ms rise 2 fall 3
    server web3 10.0.1.12:80 check inter 3000ms rise 2 fall 3

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 30s
    stats admin if TRUE
The configuration includes several production-ready features. Health checks run every 3 seconds (inter 3000ms), requiring 2 consecutive successes (rise 2) before a server is marked healthy and 3 consecutive failures (fall 3) before it is removed from rotation, so an unresponsive backend is pulled roughly nine seconds after it stops answering. The statistics interface provides real-time monitoring of backend server status and traffic distribution.
Test the configuration syntax before starting the service. If you have not yet created the certificate referenced on the bind *:443 line, comment that line out for now so the check passes:
sudo haproxy -f /etc/haproxy/haproxy.cfg -c
sudo systemctl restart haproxy
sudo systemctl status haproxy
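If the service starts cleanly, a quick request to the statistics listener configured above confirms HAProxy is answering:
curl -sI http://127.0.0.1:8404/stats | head -n 1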
Setting Up Keepalived for Failover Management
Install Keepalived on both servers to manage the VRRP failover process:
sudo apt install keepalived -y
sudo systemctl enable keepalived
Create the Keepalived configuration file for the primary server. This server will normally hold the Reserved IP and handle all traffic:
sudo nano /etc/keepalived/keepalived.conf
Primary server configuration:
global_defs {
    router_id LB_PRIMARY
    enable_script_security
    script_user keepalived_script
}

vrrp_script chk_haproxy {
    # pgrep works for an unprivileged script user; the negative weight must exceed
    # the priority gap (110 vs 100) so the backup actually takes over when HAProxy dies
    script "/usr/bin/pgrep -x haproxy"
    interval 2
    weight -20
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    # adjust to your interface name (e.g. ens3)
    interface eth0
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass your_secure_password_here
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/scripts/master.sh"
    notify_backup "/etc/keepalived/scripts/backup.sh"
    notify_fault "/etc/keepalived/scripts/fault.sh"
}
Secondary server configuration differs primarily in state and priority:
global_defs {
    router_id LB_BACKUP
    enable_script_security
    script_user keepalived_script
}

vrrp_script chk_haproxy {
    script "/usr/bin/pgrep -x haproxy"
    interval 2
    weight -20
    fall 3
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    # adjust to your interface name (e.g. ens3)
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass your_secure_password_here
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
    notify_master "/etc/keepalived/scripts/master.sh"
    notify_backup "/etc/keepalived/scripts/backup.sh"
    notify_fault "/etc/keepalived/scripts/fault.sh"
}
Create the notification scripts directory and add monitoring scripts:
sudo mkdir -p /etc/keepalived/scripts
sudo nano /etc/keepalived/scripts/master.sh
Master script content:
#!/bin/bash
echo "$(date): Became MASTER" >> /var/log/keepalived-state.log
# Add cloud provider API call to assign Reserved IP
# curl -X POST "https://api.provider.com/reserved-ip/assign" \
# -H "Authorization: Bearer $API_TOKEN" \
# -d '{"server_id": "'$(curl -s http://169.254.169.254/metadata/v1/id)'"}'
Create the unprivileged user the scripts run as, make the scripts executable, and pre-create the state log so that user can write to it:
sudo useradd -r -s /bin/false keepalived_script
sudo chmod +x /etc/keepalived/scripts/*.sh
sudo touch /var/log/keepalived-state.log
sudo chown keepalived_script /var/log/keepalived-state.log
Integrating Reserved IPs for Seamless Failover
Reserved IPs require integration with your cloud provider’s API for automatic reassignment. Most providers offer webhook endpoints or API calls that Keepalived can trigger during state changes. This automation ensures the Reserved IP always points to the currently active load balancer.
Create an API integration script that handles the IP reassignment:
sudo nano /etc/keepalived/scripts/assign-reserved-ip.sh
#!/bin/bash
API_TOKEN="your_api_token_here"
RESERVED_IP="203.0.113.50"
# Droplet ID from the DigitalOcean metadata service
CURRENT_SERVER_ID=$(curl -s http://169.254.169.254/metadata/v1/id)

# Log the assignment attempt
echo "$(date): Attempting to assign Reserved IP $RESERVED_IP to server $CURRENT_SERVER_ID" >> /var/log/reserved-ip.log

# Make API call to assign the Reserved IP (the DigitalOcean v2 API expects "droplet_id")
RESPONSE=$(curl -s -w "%{http_code}" -X POST \
    "https://api.digitalocean.com/v2/reserved_ips/$RESERVED_IP/actions" \
    -H "Authorization: Bearer $API_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"type\":\"assign\",\"droplet_id\":$CURRENT_SERVER_ID}")

HTTP_CODE="${RESPONSE: -3}"
RESPONSE_BODY="${RESPONSE%???}"

if [ "$HTTP_CODE" -eq 201 ]; then
    echo "$(date): Successfully assigned Reserved IP" >> /var/log/reserved-ip.log
else
    echo "$(date): Failed to assign Reserved IP. HTTP Code: $HTTP_CODE, Response: $RESPONSE_BODY" >> /var/log/reserved-ip.log
fi
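Because this script embeds an API token and writes its own log file, restrict its permissions and pre-create the log for the unprivileged script user:
sudo chown keepalived_script /etc/keepalived/scripts/assign-reserved-ip.sh
sudo chmod 700 /etc/keepalived/scripts/assign-reserved-ip.sh
sudo touch /var/log/reserved-ip.log
sudo chown keepalived_script /var/log/reserved-ip.log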
Update the master script to call the IP assignment:
#!/bin/bash
echo "$(date): Became MASTER" >> /var/log/keepalived-state.log
/etc/keepalived/scripts/assign-reserved-ip.sh
Test the configuration and start Keepalived on both servers:
sudo systemctl restart keepalived
sudo systemctl status keepalived
sudo tail -f /var/log/syslog | grep keepalived
Testing Failover and Monitoring
Comprehensive testing ensures your high availability setup works correctly under various failure scenarios. Start by verifying normal operation – check which server currently holds the Reserved IP and confirm traffic flows correctly.
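For example, with the sample addresses used in this guide (adjust the interface name and IPs to your environment):
# On each load balancer, the active node should hold the virtual IP
ip addr show eth0 | grep 192.168.1.100
# From outside the cluster, requests to the Reserved IP should reach a backend
curl -I http://203.0.113.50/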
Test automatic failover by stopping HAProxy on the primary server:
sudo systemctl stop haproxy
# Watch logs on both servers
sudo tail -f /var/log/syslog | grep keepalived
The backup server should detect the failure within 6-10 seconds and promote itself to master, claiming the Reserved IP. Monitor the timing and verify traffic continues flowing without interruption.
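One simple way to measure the gap is to poll the Reserved IP from an external host while you trigger the failover (203.0.113.50 is the example Reserved IP used throughout):
# Prints a timestamp and HTTP status code every second; 000 means the request failed
while true; do
    echo "$(date +%T) $(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://203.0.113.50/)"
    sleep 1
done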
Test service-level failures by blocking HAProxy processes:
sudo systemctl start haproxy # Restart HAProxy first
sudo kill -STOP $(pgrep haproxy) # Suspend HAProxy without killing it
The health check script should detect the unresponsive HAProxy process and trigger failover. Resume the process to test failback:
sudo kill -CONT $(pgrep haproxy)
Track failover events and basic system health with a simple monitoring script that records the status every minute:
#!/bin/bash
# /usr/local/bin/ha-monitor.sh
RESERVED_IP="203.0.113.50"
LOG_FILE="/var/log/ha-monitor.log"

while true; do
    # -q: rely on grep's exit status rather than capturing its output
    CURRENT_MASTER=$(ip addr show | grep -q "$RESERVED_IP" && echo "LOCAL" || echo "REMOTE")
    HAPROXY_STATUS=$(systemctl is-active haproxy)
    KEEPALIVED_STATUS=$(systemctl is-active keepalived)
    echo "$(date): Master=$CURRENT_MASTER HAProxy=$HAPROXY_STATUS Keepalived=$KEEPALIVED_STATUS" >> "$LOG_FILE"
    sleep 60
done
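To keep this script running across reboots, one option is a small systemd unit; the unit name and path here are illustrative:
# /etc/systemd/system/ha-monitor.service
[Unit]
Description=HA status monitor

[Service]
ExecStart=/usr/local/bin/ha-monitor.sh
Restart=always

[Install]
WantedBy=multi-user.target
Enable it with:
sudo systemctl daemon-reload
sudo systemctl enable --now ha-monitor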
Performance Optimization and Tuning
Fine-tune your setup for optimal performance based on your traffic patterns and requirements. HAProxy offers numerous tuning parameters that significantly impact performance under high load conditions.
Optimize kernel parameters for high connection volumes:
sudo nano /etc/sysctl.conf
Add these performance optimizations:
# Network performance tuning
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 5000
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.netfilter.nf_conntrack_max = 1048576
Apply the changes:
sudo sysctl -p
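You can spot-check that the new values are active:
sudo sysctl net.core.somaxconn net.ipv4.tcp_rmem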
Configure HAProxy for higher connection limits by raising the global maxconn value and the per-proxy default. HAProxy 2.8 is multithreaded (the older nbproc directive was removed), so pin threads to CPUs rather than spawning extra processes:
global
    maxconn 10000
    # Use 4 threads pinned to CPUs 0-3; HAProxy 2.8 sizes nbthread automatically if omitted
    nbthread 4
    cpu-map auto:1/1-4 0-3

defaults
    maxconn 5000
Monitor performance using HAProxy’s built-in statistics and system monitoring tools. The statistics page provides real-time metrics including request rates, response times, and queue lengths.
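The same data is available as CSV for scripts and dashboards; appending ;csv to the stats URI returns every metric the web page shows (assuming the stats listener configured earlier):
# List the rows for the web_servers backend; the first line of the full output names each column
curl -s "http://127.0.0.1:8404/stats;csv" | grep "^web_servers"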
Security Considerations and Hardening
Secure your high availability setup against common attack vectors and unauthorized access. Start by restricting access to the HAProxy statistics interface and Keepalived configuration files.
Configure firewall rules using UFW to limit access:
sudo ufw allow 22/tcp # SSH access (add rules before enabling so you are not locked out)
sudo ufw allow 80/tcp # HTTP traffic
sudo ufw allow 443/tcp # HTTPS traffic
sudo ufw allow from 10.0.1.0/24 to any port 8404 proto tcp # Statistics access from internal network only
sudo ufw enable
VRRP is IP protocol 112, not a TCP or UDP port, so a plain port rule will not match it. Allow it from the peer load balancer by adding a rule to /etc/ufw/before.rules (above the final COMMIT line) on each server, using the other server’s address:
-A ufw-before-input -p 112 -s [secondary_server_ip] -j ACCEPT
Reload the firewall afterwards:
sudo ufw reload
Implement SSL/TLS termination at the load balancer level with strong cipher suites. Generate or obtain SSL certificates for your domain:
# Using Let's Encrypt (certbot's standalone mode needs port 80, so stop HAProxy briefly or use the DNS challenge)
sudo apt install certbot -y
sudo systemctl stop haproxy
sudo certbot certonly --standalone -d yourdomain.com
sudo systemctl start haproxy
# Combine certificate and key into the single PEM file HAProxy expects
# (run the whole pipeline as root so the output redirection has permission to write)
sudo bash -c 'cat /etc/letsencrypt/live/yourdomain.com/fullchain.pem \
  /etc/letsencrypt/live/yourdomain.com/privkey.pem > /etc/ssl/certs/yourdomain.pem'
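Certificates renew automatically, but the combined PEM must be rebuilt after each renewal; one way to automate this is a certbot deploy hook (the script name below is illustrative):
sudo nano /etc/letsencrypt/renewal-hooks/deploy/rebuild-haproxy-pem.sh
#!/bin/bash
# Runs as root after every successful renewal: rebuild the PEM and reload HAProxy
cat /etc/letsencrypt/live/yourdomain.com/fullchain.pem \
    /etc/letsencrypt/live/yourdomain.com/privkey.pem > /etc/ssl/certs/yourdomain.pem
systemctl reload haproxy
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/rebuild-haproxy-pem.sh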
Secure the Keepalived peering with a strong shared password on both servers. Note that VRRP PASS authentication only uses the first eight characters, and keepalived.conf does not expand shell commands, so generate a value and paste it in literally (keepalived also supports auth_type AH for IPsec-AH-based authentication):
# Generate a short random secret, then copy it into both keepalived.conf files
openssl rand -base64 6
authentication {
    auth_type PASS
    # identical literal value on both servers; only the first 8 characters are used
    auth_pass <generated-8-char-secret>
}
Implement log monitoring and alerting for security events. The global section above logs to stdout (the systemd journal); to write HAProxy logs to a dedicated file instead, change the global directive to log 127.0.0.1:514 local0 and configure rsyslog to listen on that address:
sudo nano /etc/rsyslog.d/49-haproxy.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.* /var/log/haproxy.log
& stop
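Restart rsyslog (and reload HAProxy if you changed its log target) so the new rule takes effect:
sudo systemctl restart rsyslog
sudo systemctl reload haproxy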
Troubleshooting Common Issues
Several common issues can prevent proper high availability operation. Split-brain scenarios occur when both servers think they’re the master, typically caused by network connectivity issues between the servers. Monitor for this condition by checking if both servers have the virtual IP assigned:
# Check virtual IP assignment
ip addr show | grep -A2 -B2 "192.168.1.100"
# Check VRRP state
sudo journalctl -u keepalived -f
If split-brain occurs, verify network connectivity between servers and check firewall rules for VRRP traffic. Temporarily stop Keepalived on one server to resolve the immediate issue:
sudo systemctl stop keepalived # On one server only
# Wait 30 seconds
sudo systemctl start keepalived
Reserved IP assignment failures often result from API authentication issues or rate limiting. Check the API logs and verify your authentication tokens:
sudo tail -f /var/log/reserved-ip.log
# Test API access manually
curl -H "Authorization: Bearer $API_TOKEN" https://api.provider.com/account
HAProxy backend server failures can cause service degradation if not properly handled. Monitor backend health and configure appropriate timeouts:
# Check backend server status over the admin socket (install socat first: sudo apt install socat -y)
echo "show stat" | sudo socat stdio /run/haproxy/admin.sock
# Test backend connectivity
curl -I http://backend-server:80/health
Performance issues often stem from insufficient resources or suboptimal configuration. Monitor system resources during peak load:
# Monitor system performance
htop
iotop
ip -s link # Network interface statistics (net-tools/netstat is not installed by default)
ss -tuln # Listening TCP/UDP sockets
Alternative Solutions and Comparisons
Several alternatives exist for implementing high availability load balancing, each with distinct advantages and trade-offs. Understanding these options helps you choose the most appropriate solution for your specific requirements.
| Solution | Complexity | Cost | Failover Time | Best Use Case |
|---|---|---|---|---|
| HAProxy + Keepalived | Medium | Low | 5-10 seconds | Traditional web applications |
| Nginx + Keepalived | Medium | Low | 5-10 seconds | Static content and reverse proxy |
| Cloud Load Balancers | Low | Medium-High | Instant | Cloud-native applications |
| Kubernetes Ingress | High | Medium | Instant | Containerized applications |
| DNS Round Robin | Low | Low | TTL dependent | Simple setups with DNS control |
Cloud-managed load balancers like AWS Application Load Balancer or Google Cloud Load Balancing offer similar functionality with less operational overhead but higher ongoing costs. These solutions provide automatic scaling, managed SSL certificates, and integrated monitoring at $20-50+ monthly per load balancer.
Kubernetes ingress controllers offer sophisticated routing capabilities and automatic service discovery but require containerized applications and cluster management expertise. They excel in microservices architectures where dynamic scaling and service mesh integration are priorities.
For smaller applications or budget-conscious deployments, DNS-based load balancing using services like Cloudflare or Route 53 health checks provides basic failover capabilities. However, DNS propagation delays can result in 60+ second failover times depending on TTL settings.
Production Deployment Best Practices
Deploy your high availability setup using infrastructure as code tools like Ansible or Terraform to ensure consistency and repeatability. This approach reduces configuration drift between servers and simplifies disaster recovery procedures.
Implement comprehensive monitoring using tools like Prometheus and Grafana to track system metrics, failover events, and performance trends. Set up alerting for critical events:
- Keepalived state changes
- HAProxy backend server failures
- High connection counts or response times
- SSL certificate expiration warnings
- Reserved IP assignment failures
Establish regular testing procedures to verify failover functionality. Schedule monthly failover tests during maintenance windows to catch configuration issues before they impact production traffic. Document all procedures and maintain an incident response playbook.
Consider implementing blue-green deployment strategies that leverage your load balancer setup. You can gradually shift traffic between different application versions by adjusting backend server weights in HAProxy:
# Gradual traffic shifting during deployment
echo "set weight web_servers/web1 25" | sudo socat stdio /run/haproxy/admin.sock
echo "set weight web_servers/web2 75" | sudo socat stdio /run/haproxy/admin.sock
Plan for capacity growth by monitoring resource utilization trends and establishing clear scaling thresholds. HAProxy can handle thousands of concurrent connections on modest hardware, but plan for horizontal scaling when approaching 80% capacity utilization.
The high availability setup you’ve implemented provides a robust foundation for serving production traffic with minimal downtime. Regular monitoring, testing, and maintenance ensure your load balancing infrastructure continues meeting availability requirements as your applications grow and evolve. For additional technical details and advanced configuration options, consult the HAProxy documentation and Keepalived documentation.
