
How to Perform Redis Benchmark Tests
Redis benchmark testing is crucial for understanding your cache and data store performance under various loads and configurations. Whether you’re optimizing an existing Redis deployment or planning a new one, proper benchmarking helps you make informed decisions about server specifications, configuration tweaks, and scaling strategies. In this guide, you’ll learn how to use Redis’s built-in benchmarking tools, interpret the results, and apply best practices to get meaningful performance data that translates to real-world scenarios.
How Redis Benchmarking Works
Redis includes a built-in benchmarking tool called redis-benchmark that simulates multiple clients sending commands to your Redis instance. The tool measures throughput (operations per second) and latency percentiles, and it can exercise a range of Redis commands under different conditions.
The benchmark works by spawning multiple client connections that send commands in parallel or pipeline mode. It tracks response times and calculates statistics like average latency, 95th percentile, 99th percentile, and maximum response times. Understanding these metrics helps you identify performance bottlenecks and capacity limits.
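For a quick first look at these metrics, a short quiet-mode run against a local instance is enough; -q prints only the one-line throughput summary per test (adjust client and request counts to your hardware):
# Minimal smoke test: 20 clients, 10,000 requests, SET and GET only
redis-benchmark -q -c 20 -n 10000 -t set,get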
Setting Up Your Benchmarking Environment
Before running benchmarks, ensure your test environment closely matches your production setup. Install Redis on your target system – whether that’s a VPS for smaller deployments or dedicated servers for high-performance applications.
First, verify your Redis installation and check the benchmark tool availability:
redis-server --version
redis-benchmark --version
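It's also worth confirming the server is actually reachable before you benchmark it:
# A healthy instance answers with PONG
redis-cli ping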
Configure your Redis instance with appropriate settings. Here’s a basic redis.conf for benchmarking:
# redis.conf for benchmarking
maxmemory 2gb
maxmemory-policy allkeys-lru
save ""
appendonly no
tcp-backlog 511
timeout 0
tcp-keepalive 300
Start Redis with your configuration:
redis-server /path/to/redis.conf
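To confirm the running instance picked up the benchmark-oriented settings, query them back (a quick sanity check, assuming the default local connection):
redis-cli config get maxmemory
redis-cli config get appendonly
redis-cli config get save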
Basic Benchmark Commands and Examples
The simplest benchmark test uses default parameters:
redis-benchmark
This runs the standard suite of tests with 50 parallel clients and 100,000 requests per test. For more control, use specific parameters:
# Test with 100 clients, 1 million requests
redis-benchmark -c 100 -n 1000000
# Test only SET and GET commands
redis-benchmark -t set,get -c 50 -n 100000
# Test with specific data size (1KB values)
redis-benchmark -d 1024 -t set,get -c 50 -n 50000
# Test with pipelining (16 commands per pipeline)
redis-benchmark -P 16 -t set,get -c 50 -n 100000
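When you plan to compare runs over time, capture the results in machine-readable form; redis-benchmark can emit CSV (baseline.csv is just an example filename):
# One line per test, suitable for spreadsheets or scripts
redis-benchmark --csv -t set,get -c 50 -n 100000 > baseline.csv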
For testing specific scenarios, you can use custom key patterns:
# Random keys in specified range
redis-benchmark -t set -r 100000 -n 1000000
# Test specific host and port
redis-benchmark -h 192.168.1.100 -p 6379 -c 100 -n 500000
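Since -r controls how many distinct keys a test draws from, a quick check after such a run shows how large the resulting keyspace actually is:
# Number of keys in the current database after the benchmark
redis-cli dbsize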
Advanced Benchmarking Techniques
Real-world applications often require more sophisticated testing scenarios. Here are some advanced techniques:
Testing with Authentication
# Benchmark with password authentication
redis-benchmark -a yourpassword -t set,get -c 50 -n 100000
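If your server uses Redis 6 ACL users instead of a single requirepass password, redis-benchmark also accepts a username (benchuser below is a placeholder; the --user option requires Redis 6 or newer):
# Benchmark as a specific ACL user
redis-benchmark --user benchuser -a yourpassword -t set,get -c 50 -n 100000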
Custom Command Testing
redis-benchmark cannot read a list of commands from a file, but it can benchmark one arbitrary command that you append after the options, which is enough to test the data structures your application actually uses. When combined with -r, the placeholder __rand_int__ in any argument is replaced with a random number on each request:
# Hash write and read against randomized profile keys
redis-benchmark -c 50 -n 10000 -r 10000 HSET profile:__rand_int__ name "John Doe"
redis-benchmark -c 50 -n 10000 -r 10000 HGET profile:__rand_int__ name
# Simple notification queue
redis-benchmark -c 50 -n 10000 LPUSH notifications:1001 "New message"
redis-benchmark -c 50 -n 10000 LPOP notifications:1001
# Small JSON payload stored as a string value
redis-benchmark -c 50 -n 10000 SET user:1001 '{"name":"John","email":"john@example.com"}'
Only one command can be measured per invocation, so run redis-benchmark once for each operation you care about.
Memory Usage Testing
Monitor memory usage during benchmarks:
# In another terminal, monitor memory usage
watch -n 1 'redis-cli info memory | grep used_memory_human'
# Run memory-intensive benchmark
redis-benchmark -t set -r 1000000 -d 1024 -c 100 -n 1000000
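To see what the benchmark data set actually costs in memory, a small sketch like the following records used_memory before and after the run (values are in bytes; it assumes a local instance and a bash shell):
# Snapshot memory, load data, snapshot again, print the difference
BEFORE=$(redis-cli info memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')
redis-benchmark -q -t set -r 1000000 -d 1024 -c 100 -n 1000000
AFTER=$(redis-cli info memory | grep '^used_memory:' | cut -d: -f2 | tr -d '\r')
echo "Memory delta: $((AFTER - BEFORE)) bytes"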
Interpreting Benchmark Results
Understanding benchmark output is crucial for making informed decisions. In quiet mode (-q) you only get a one-line summary per test:
SET: 89285.71 requests per second
GET: 90909.09 requests per second
The default output adds a full breakdown for each test:
====== SET ======
100000 requests completed in 1.12 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.90% <= 1 milliseconds
99.95% <= 2 milliseconds
99.99% <= 3 milliseconds
100.00% <= 3 milliseconds
89285.71 requests per second
Key metrics to focus on:
- Throughput: Requests per second indicates raw performance capacity
- Latency percentiles: The 95th and 99th percentiles show tail latency, the delay your slowest requests see; the reported maximum is the true worst case
- Memory usage: Monitor memory consumption patterns during tests
- CPU utilization: Check whether Redis is CPU-bound during high load (see the monitoring sketch below)
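A simple way to watch the last two items while a benchmark runs in another terminal is to sample Redis's own counters and the process itself (pidstat ships with the sysstat package; adjust if it isn't installed):
# Ops/sec and CPU time as reported by Redis, refreshed every second
watch -n 1 "redis-cli info stats | grep instantaneous_ops_per_sec; redis-cli info cpu | grep used_cpu"
# Per-process CPU usage of redis-server at 1-second intervals (assumes a single redis-server process)
pidstat -p "$(pidof redis-server)" 1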
Performance Comparison Table
Here's how different configurations typically perform:
| Configuration | SET ops/sec | GET ops/sec | 99% latency | Memory efficiency |
|---|---|---|---|---|
| Default (no pipelining) | 85,000 | 90,000 | 1.2 ms | Good |
| Pipelining (16) | 450,000 | 500,000 | 2.1 ms | Good |
| Large values (10 KB) | 25,000 | 28,000 | 3.5 ms | Fair |
| Memory optimized | 70,000 | 80,000 | 1.4 ms | Excellent |
Real-World Use Cases and Scenarios
E-commerce Session Store Testing
For session management, test with realistic session data:
# Simulate session reads and writes with 512-byte values
redis-benchmark -t set,get -r 50000 -d 512 -c 100 -n 500000
# EXPIRE and DEL are not part of the built-in -t suite, so benchmark session
# cleanup as custom commands against randomized session keys
redis-benchmark -r 10000 -c 50 -n 100000 EXPIRE session:__rand_int__ 300
redis-benchmark -r 10000 -c 50 -n 100000 DEL session:__rand_int__
Cache Layer Performance
Test cache hit/miss patterns:
# High read ratio (typical for caching): a get-or-set pattern via EVAL
redis-benchmark -r 100000 -n 1000000 -c 50 -P 16 eval \
"if redis.call('exists', KEYS[1]) == 1 then return redis.call('get', KEYS[1]) else return redis.call('set', KEYS[1], ARGV[1]) end" \
1 key:__rand_int__ value
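To judge whether a cache-style workload is actually hitting, compare the hit and miss counters that Redis accumulates in INFO stats (CONFIG RESETSTAT clears the counters first so the numbers reflect only this run):
# Reset counters, run a read-heavy test, then inspect hits vs. misses
redis-cli config resetstat
redis-benchmark -q -r 100000 -n 1000000 -c 50 -t get
redis-cli info stats | grep -E 'keyspace_hits|keyspace_misses'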
Message Queue Benchmarking
Test list operations for queue implementations:
# Producer benchmark
redis-benchmark -t lpush -r 10000 -c 20 -n 100000
# Consumer benchmark
redis-benchmark -t rpop -r 10000 -c 20 -n 100000
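The built-in LPUSH and RPOP tests operate on a single list key, which recent versions name mylist; checking its length after the producer and consumer runs shows whether the queue is draining (use redis-cli --scan to find the key if your version names it differently):
# Remaining queue depth after the benchmark
redis-cli llen mylist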
Common Issues and Troubleshooting
Network Bottlenecks
If you see low throughput despite good latency, check network limits:
# Test raw network throughput separately (run iperf3 -s on the Redis host first)
iperf3 -c your-redis-server
# Monitor network usage during benchmark
iftop -i eth0
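It also helps to measure the raw request round-trip time from the client host, since this includes both network and server latency (Ctrl-C stops the sampling; the host below matches the earlier example):
# Continuous round-trip latency sampling from the client side
redis-cli -h 192.168.1.100 -p 6379 --latency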
Memory Fragmentation
High memory fragmentation can impact performance:
# Check fragmentation ratio
redis-cli info memory | grep mem_fragmentation_ratio
# If the ratio stays above 1.5, consider enabling active defragmentation (Redis 4.0+)
redis-cli config set activedefrag yes
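Active defragmentation requires a Redis build with jemalloc; if the config set command above returns an error, your build does not support it. You can also check whether the defragmenter is currently working:
# 1 while the defragmenter is actively running, 0 otherwise
redis-cli info memory | grep active_defrag_running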
File Descriptor Limits
High client counts may hit system limits:
# Check current limits
ulimit -n
# Increase if needed (add to /etc/security/limits.conf)
redis soft nofile 65535
redis hard nofile 65535
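Also cross-check the limit Redis itself enforces and how many connections the benchmark actually opens:
# Maximum clients Redis will accept, and the current connection count
redis-cli config get maxclients
redis-cli info clients | grep connected_clients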
Best Practices and Optimization Tips
- Run multiple test iterations: Results can vary between runs, so average several runs for accuracy (a sketch follows this list)
- Test realistic data sizes: Use data sizes that match your actual application
- Monitor system resources: Watch CPU, memory, and I/O during tests
- Test from application servers: Network latency affects real-world performance
- Use pipelining wisely: While it increases throughput, it also increases latency
- Test failure scenarios: Include tests with high memory usage and eviction policies
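As a rough sketch of the first point, the loop below runs the GET test several times and averages the reported throughput (it assumes a local instance and that the second CSV column is requests per second, which may shift slightly between Redis versions, so check one line of output first):
# Run the GET test five times and average the requests-per-second column
RUNS=5
for i in $(seq 1 $RUNS); do
  redis-benchmark --csv -t get -c 50 -n 100000 | grep '"GET"'
done | awk -F'","' '{ gsub(/"/, "", $2); sum += $2 } END { printf "average GET rps over %d runs: %.0f\n", NR, sum/NR }'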
Comprehensive Benchmark Script
#!/bin/bash
# comprehensive-redis-benchmark.sh
echo "Starting Redis benchmark suite..."
# Basic performance test
echo "=== Basic Performance ==="
redis-benchmark -c 50 -n 100000 -t set,get,incr,lpush,rpop,sadd,hset
# Pipeline performance
echo "=== Pipeline Performance ==="
redis-benchmark -c 50 -n 100000 -t set,get -P 16
# Large data test
echo "=== Large Data Test ==="
redis-benchmark -c 20 -n 10000 -t set,get -d 10240
# High concurrency test
echo "=== High Concurrency ==="
redis-benchmark -c 200 -n 500000 -t set,get
# Memory info after tests
echo "=== Memory Usage ==="
redis-cli info memory | grep -E "(used_memory_human|mem_fragmentation_ratio)"
echo "Benchmark complete!"
Alternative Benchmarking Tools
While redis-benchmark is excellent for basic testing, consider these alternatives for specific needs:
- memtier_benchmark: More advanced Redis/Memcached benchmarking with better statistics (example invocation below)
- redis-rdb-tools: Analyze memory usage patterns in RDB files
- Apache Bench (ab): Test HTTP front ends placed in front of Redis, such as a REST proxy
- Custom scripts: Python/Go scripts for application-specific testing patterns
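As a starting point for memtier_benchmark, the invocation below exercises a 1:10 SET-to-GET ratio with 4 threads and 50 connections per thread (flag names follow memtier_benchmark's common options, but versions differ, so confirm with memtier_benchmark --help):
# 10,000 requests per connection, 512-byte values, against a local Redis
memtier_benchmark -s 127.0.0.1 -p 6379 --ratio=1:10 -t 4 -c 50 -n 10000 -d 512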
For more detailed information about Redis performance optimization, check the official Redis optimization guide and the Redis benchmarking documentation.
Remember that benchmark results are only meaningful when they reflect your actual usage patterns. Start with realistic scenarios, gradually increase complexity, and always validate benchmark findings against production performance metrics.
