
How to Monitor MongoDB Performance
MongoDB performance monitoring is a critical aspect of database administration that often separates the pros from the amateurs. Without proper monitoring, you’re essentially flying blind – your app might be running fine today, but tomorrow you could be dealing with slow queries, memory leaks, or connection pool exhaustion. This post will walk you through the essential tools, techniques, and strategies for monitoring MongoDB performance, from built-in MongoDB tools to third-party solutions, plus real-world troubleshooting scenarios that’ll save you from 3 AM production fires.
Understanding MongoDB Performance Metrics
Before diving into monitoring tools, you need to understand what metrics actually matter. MongoDB exposes dozens of performance indicators, but focusing on the wrong ones is like optimizing your car’s cup holders while ignoring the engine.
The core metrics you should track include:
- Operations per second (ops/sec): Read, write, update, and delete operations (a short sketch after the thresholds table shows how to derive this from opcounters)
- Query execution time: Average and 95th percentile response times
- Connection metrics: Active connections, connection pool utilization
- Memory usage: Resident memory, virtual memory, and page faults
- Disk I/O: Read/write throughput and queue depths
- Replication lag: For replica sets, the delay between primary and secondary nodes
- Index efficiency: Index hit ratios and index usage statistics
Metric Category | Critical Threshold | Warning Signs |
---|---|---|
Query Response Time | > 100ms average | Slow queries, index misses |
Connection Count | > 80% of max connections | Connection pool exhaustion |
Memory Usage | > 85% of available RAM | Excessive paging, performance degradation |
Replication Lag | > 10 seconds | Network issues, heavy write load |
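Note that MongoDB does not expose ops/sec directly: the opcounters in db.serverStatus() are cumulative totals, so you derive a rate by sampling twice and dividing by the interval. A minimal mongosh sketch (the five-second window is an arbitrary choice):
// Rough ops/sec estimate from two samples of the cumulative opcounters
const intervalMs = 5000;
const before = db.serverStatus().opcounters;
sleep(intervalMs); // mongosh helper; pauses for the given number of milliseconds
const after = db.serverStatus().opcounters;
const opsPerSec = {};
for (const op of ["insert", "query", "update", "delete"]) {
  opsPerSec[op] = (after[op] - before[op]) / (intervalMs / 1000);
}
printjson(opsPerSec);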
Built-in MongoDB Monitoring Tools
MongoDB ships with several built-in monitoring capabilities that don’t require additional software. These tools should be your first line of defense for performance monitoring.
MongoDB Database Profiler
The database profiler is MongoDB’s equivalent of a query log on steroids. It captures detailed information about database operations that exceed specified thresholds.
// Enable profiling for operations slower than 100ms
db.setProfilingLevel(1, { slowms: 100 })
// Enable profiling for all operations (use with caution in production)
db.setProfilingLevel(2)
// Check current profiling status
db.getProfilingStatus()
// Query the profiler collection
db.system.profile.find().limit(5).sort({ ts: -1 }).pretty()
The profiler data includes execution times, index usage, and query plans. Here’s how to analyze slow operations:
// Find queries taking longer than 1 second
db.system.profile.find({ "millis": { $gt: 1000 } })
// Find operations with high examined-to-returned document ratios
// (the profiler stores the returned count as "nreturned"; $max guards against dividing by zero)
db.system.profile.find({
  $expr: {
    $gt: [
      { $divide: ["$docsExamined", { $max: ["$nreturned", 1] }] },
      10
    ]
  }
})
mongostat and mongotop
These command-line utilities provide real-time performance statistics. mongostat gives you a high-level view, similar to iostat but for databases:
# Monitor every 5 seconds
mongostat --host localhost:27017 5
# Focus on specific columns (field list passed with -o)
mongostat --host localhost:27017 -o "host,insert,query,update,delete,conn,time"
mongotop shows where MongoDB is spending time at the collection level:
# Update every 10 seconds
mongotop --host localhost:27017 10
db.serverStatus() and db.stats()
These commands provide comprehensive server metrics programmatically:
// Get detailed server statistics
db.serverStatus()
// Focus on specific areas
db.serverStatus().connections
db.serverStatus().opcounters
db.serverStatus().wiredTiger.cache
// Database-level statistics
db.stats()
// Collection-level statistics
db.collection.stats()
Third-Party Monitoring Solutions
While built-in tools are great for troubleshooting, production environments typically require more sophisticated monitoring solutions with alerting, historical data, and dashboards.
Tool | Type | Best For | Cost |
---|---|---|---|
MongoDB Compass | Official GUI | Development, ad-hoc analysis | Free |
Percona Monitoring and Management | Open source | Comprehensive database monitoring | Free |
Datadog | SaaS | Enterprise monitoring with APM integration | Paid |
New Relic | SaaS | Application performance monitoring | Paid |
Prometheus + Grafana | Open source | Custom monitoring stacks | Free |
Setting Up Prometheus and Grafana
For teams wanting full control over their monitoring stack, Prometheus with Grafana provides excellent MongoDB monitoring capabilities. Here’s a basic setup:
# Install MongoDB exporter
wget https://github.com/percona/mongodb_exporter/releases/download/v0.20.1/mongodb_exporter-0.20.1.linux-amd64.tar.gz
tar xvf mongodb_exporter-0.20.1.linux-amd64.tar.gz
# Copy the binary to the path referenced by the service unit below
# (the extracted directory name is assumed to match the release archive)
cp mongodb_exporter-0.20.1.linux-amd64/mongodb_exporter /usr/local/bin/
# Create systemd service
cat > /etc/systemd/system/mongodb-exporter.service << EOF
[Unit]
Description=MongoDB Exporter
After=network.target
[Service]
Type=simple
User=mongodb-exporter
ExecStart=/usr/local/bin/mongodb_exporter --mongodb.uri=mongodb://localhost:27017
Restart=always
[Install]
WantedBy=multi-user.target
EOF
# Start the service
systemctl daemon-reload
systemctl enable mongodb-exporter
systemctl start mongodb-exporter
Configure Prometheus to scrape MongoDB metrics:
# Add under scrape_configs in prometheus.yml
  - job_name: 'mongodb'
    static_configs:
      - targets: ['localhost:9216']
    scrape_interval: 30s
Real-World Monitoring Scenarios
Scenario 1: Identifying Connection Pool Issues
A common production issue is connection pool exhaustion. Here's how to detect and resolve it:
// Check current connections
db.serverStatus().connections
// Monitor connection creation/destruction rates (run this one from the OS shell):
watch -n 5 'mongo --quiet --eval "db.serverStatus().connections"'
// Find long-running operations that might be holding connections
db.currentOp({"secs_running": {"$gt": 300}})
If you're seeing connection spikes, check your application's connection pooling configuration:
// Example Node.js MongoDB driver configuration
const { MongoClient } = require("mongodb");
const uri = "mongodb://localhost:27017"; // adjust to your deployment

const client = new MongoClient(uri, {
maxPoolSize: 10, // Limit connection pool size
serverSelectionTimeoutMS: 5000,
socketTimeoutMS: 45000,
maxIdleTimeMS: 300000, // Close connections after 5 minutes of inactivity
});
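For more direct visibility into the pool itself, the Node.js driver also emits connection pool (CMAP) monitoring events on the client; a small sketch that logs the events which typically precede exhaustion errors:
// Log pool events that usually precede connection checkout failures
client.on("connectionCheckOutFailed", (event) => {
  console.warn("Connection checkout failed:", event.reason);
});
client.on("connectionPoolCleared", (event) => {
  console.warn("Connection pool cleared for", event.address);
});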
Scenario 2: Memory Usage Analysis
MongoDB's memory usage patterns can be tricky to understand. WiredTiger cache sizing is particularly important:
// Check WiredTiger cache statistics
db.serverStatus().wiredTiger.cache
// Key metrics to monitor:
// - "bytes currently in the cache"
// - "tracked dirty bytes in the cache"
// - "pages evicted by application threads"
// Check for memory pressure indicators
db.serverStatus().extra_info.page_faults
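Because these counters are raw byte values, ratios are easier to reason about. A small mongosh sketch that turns the cache statistics above (plus the "maximum bytes configured" field, which holds the configured cache size) into fill and dirty percentages:
// Express WiredTiger cache pressure as fill and dirty percentages
const cache = db.serverStatus().wiredTiger.cache;
const configured = cache["maximum bytes configured"];
const used = cache["bytes currently in the cache"];
const dirty = cache["tracked dirty bytes in the cache"];
print("cache fill:  " + (used / configured * 100).toFixed(1) + "%");
print("cache dirty: " + (dirty / configured * 100).toFixed(1) + "%");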
Configure WiredTiger cache size appropriately:
# In mongod.conf
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 8   # Set to ~50-60% of available RAM
Scenario 3: Query Performance Optimization
Slow queries are often the root cause of performance issues. Here's a systematic approach to identifying and fixing them:
// Find slow queries in the profiler
db.system.profile.aggregate([
  { $match: { "millis": { $gt: 1000 } } },
  { $group: {
      _id: "$command.find",
      avgTime: { $avg: "$millis" },
      count: { $sum: 1 }
  }},
  { $sort: { avgTime: -1 } }
])
// Analyze query execution plans
db.collection.find(query).explain("executionStats")
// Check index usage efficiency
db.collection.aggregate([
{ $indexStats: {} }
])
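Two signals from the explain output are worth checking systematically: a COLLSCAN stage (no usable index) and a large gap between documents examined and documents returned. A sketch of that check, using a hypothetical orders collection and filter purely as an example:
// Flag collection scans and poor examined-to-returned ratios for one sample query
// ("orders" and the { status: "pending" } filter are placeholders for your own query)
const stats = db.orders.find({ status: "pending" }).explain("executionStats").executionStats;
const ratio = stats.totalDocsExamined / Math.max(stats.nReturned, 1);
if (stats.executionStages.stage === "COLLSCAN") {
  print("Warning: query performed a full collection scan");
}
if (ratio > 10) {
  print("Warning: examined " + stats.totalDocsExamined + " docs to return " + stats.nReturned);
}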
Setting Up Automated Alerts
Reactive monitoring isn't enough – you need proactive alerts. Here are some essential alert conditions:
# Example alerting rules for Prometheus/Alertmanager
groups:
  - name: mongodb
    rules:
      - alert: MongoDBDown
        expr: mongodb_up == 0
        for: 5m
      - alert: MongoDBReplicationLag
        expr: mongodb_replset_member_replication_lag > 10
        for: 2m
      - alert: MongoDBConnectionsHigh
        expr: mongodb_connections{state="current"} / mongodb_connections{state="available"} > 0.8
        for: 5m
      # Overall operation rate is a proxy signal; the exporter has no direct slow-query counter
      - alert: MongoDBHighOperationRate
        expr: rate(mongodb_op_counters_total[5m]) > 1000
        for: 10m
Performance Tuning Based on Monitoring Data
Monitoring data is only valuable if you act on it. Here are common optimizations based on monitoring insights:
- Index optimization: Create indexes for frequently filtered fields, remove unused indexes (see the $indexStats sketch after this list)
- Schema design: Denormalize frequently joined data, use appropriate data types
- Connection pooling: Right-size connection pools based on actual usage patterns
- Hardware scaling: Scale up memory for working set, scale out for read-heavy workloads
- Sharding strategy: Implement sharding for large datasets with proper shard key selection
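For the index-optimization point above, the $indexStats stage shown earlier reports per-index access counters since the last restart, so indexes with zero accesses over a representative window are drop candidates. A sketch against a hypothetical orders collection:
// List indexes that have not been used since the server last restarted
// ("orders" is a placeholder; the mandatory _id_ index is excluded)
db.orders.aggregate([{ $indexStats: {} }])
  .toArray()
  .filter((idx) => idx.name !== "_id_" && Number(idx.accesses.ops) === 0)
  .forEach((idx) => print("Unused index candidate: " + idx.name));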
Best Practices and Common Pitfalls
Do:
- Monitor trends over time, not just current values
- Set up monitoring before you need it – never during an outage
- Test your monitoring setup with simulated failures
- Document your alerting thresholds and adjust them based on historical data
- Monitor both database and application-level metrics
Don't:
- Enable level 2 profiling in production without careful consideration
- Ignore replication lag in replica set environments
- Set alert thresholds too aggressively – avoid alert fatigue
- Monitor everything – focus on metrics that correlate with user experience
- Forget to monitor disk space and I/O wait times
Integration with Infrastructure Monitoring
MongoDB doesn't exist in isolation. For comprehensive monitoring, integrate database metrics with your broader infrastructure monitoring. If you're running MongoDB on VPS or dedicated servers, ensure you're monitoring system-level metrics alongside database performance.
Key integration points include:
- CPU and memory utilization correlation with database operations
- Network latency impact on replica set performance
- Disk I/O patterns and their effect on query response times
- Load balancer metrics for sharded clusters
Effective MongoDB performance monitoring requires a multi-layered approach combining built-in tools, third-party solutions, and proactive alerting. Start with MongoDB's native tools for immediate insights, then build out comprehensive monitoring as your application scales. Remember that monitoring is not a set-it-and-forget-it task – regularly review and adjust your monitoring strategy as your application evolves.
The key to successful MongoDB monitoring is focusing on metrics that directly impact user experience and having the tools in place to quickly diagnose and resolve issues. With proper monitoring, you'll catch performance problems before they impact users and have the data needed to make informed scaling and optimization decisions.
For additional information, refer to the official MongoDB monitoring documentation and the MongoDB Database Tools documentation.
