
How to Back Up, Restore, and Migrate a MongoDB Database on Ubuntu 24
MongoDB has become one of the most popular NoSQL databases, and managing its data through backups, restores, and migrations is crucial for any production environment. Whether you’re dealing with data corruption, server upgrades, or scaling to new infrastructure, knowing how to properly handle MongoDB data operations can save you from catastrophic data loss. This comprehensive guide will walk you through the complete process of backing up, restoring, and migrating MongoDB databases on Ubuntu 24, covering both the traditional mongodump/mongorestore tools and modern approaches using MongoDB Atlas migration tools.
Understanding MongoDB Backup Methods
MongoDB offers several backup strategies, each with distinct advantages and use cases. The choice depends on your database size, downtime tolerance, and infrastructure setup.
| Method | Best For | Downtime Required | Storage Space | Compression |
|---|---|---|---|---|
| mongodump | Small to medium databases | Minimal | Moderate | Optional |
| File System Snapshots | Large databases with consistent storage | Brief lock period | Full database size | Depends on filesystem |
| Replica Set Backups | Production environments | None on primary | Full database size | Optional |
| MongoDB Atlas Live Migration | Cloud migrations | Minimal | N/A | Built-in optimization |
The mongodump utility creates BSON exports of your data, while file system snapshots capture the entire database files. For most development and small production environments, mongodump provides the right balance of simplicity and reliability.
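Filesystem snapshots are listed in the table but deserve a concrete sketch. The following is a minimal, hedged example assuming an LVM-backed dbPath; the volume group `vg0`, logical volume `mongodb-lv`, and 10G snapshot size are illustrative placeholders, not values from any particular setup. It pairs the snapshot with `db.fsyncLock()`/`db.fsyncUnlock()` so the copied files are consistent:

```shell
# Quiesce writes, snapshot the LVM volume holding dbPath, then resume writes.
# vg0/mongodb-lv and the 10G snapshot size are placeholders.
snapshot_backup() {
  # Flush pending writes to disk and block new ones
  mongosh --quiet --eval 'db.fsyncLock()' || return 1
  # Create a copy-on-write snapshot of the data volume
  lvcreate --size 10G --snapshot --name mdb-snap /dev/vg0/mongodb-lv
  local rc=$?
  # Unlock as soon as the snapshot exists, whether or not it succeeded
  mongosh --quiet --eval 'db.fsyncUnlock()'
  return $rc
}
```

After the function returns, mount the snapshot read-only, copy or tar it to the backup destination, and remove it with `lvremove`.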
Setting Up Your Ubuntu 24 Environment
Before diving into backup operations, ensure your Ubuntu 24 system has the necessary MongoDB tools installed. The MongoDB Database Tools package includes mongodump, mongorestore, and other essential utilities.
sudo apt update
sudo apt install wget curl gnupg2 software-properties-common apt-transport-https ca-certificates lsb-release
# Import MongoDB public GPG key
curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg
# Add MongoDB APT repository (noble is the Ubuntu 24.04 codename)
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
# Install MongoDB tools
sudo apt update
sudo apt install mongodb-database-tools
# Verify installation
mongodump --version
mongorestore --version
If you’re working with authentication enabled (which you should be in production), create a dedicated backup user with appropriate privileges:
mongosh
use admin
db.createUser({
  user: "backupuser",
  pwd: "securepassword123",
  roles: [
    { role: "backup", db: "admin" },
    { role: "restore", db: "admin" }
  ]
})
Creating MongoDB Backups
The mongodump command offers various options for different backup scenarios. Here’s how to handle the most common situations:
Basic Database Backup
# Backup entire MongoDB instance
mongodump --host localhost --port 27017 --out /backup/mongodb/$(date +%Y%m%d_%H%M%S)
# Backup specific database
mongodump --host localhost --port 27017 --db myapplication --out /backup/mongodb/myapp_$(date +%Y%m%d_%H%M%S)
# Backup specific collection
mongodump --host localhost --port 27017 --db myapplication --collection users --out /backup/mongodb/users_$(date +%Y%m%d_%H%M%S)
Authenticated Backup
# Using authentication
mongodump --host localhost --port 27017 --username backupuser --password securepassword123 --authenticationDatabase admin --out /backup/mongodb/$(date +%Y%m%d_%H%M%S)
# Using URI string (recommended for complex configurations)
mongodump --uri "mongodb://backupuser:securepassword123@localhost:27017/myapplication?authSource=admin" --out /backup/mongodb/myapp_$(date +%Y%m%d_%H%M%S)
Compressed Backup with Query Filtering
# Compressed backup with query filter
mongodump --host localhost --port 27017 --db myapplication --collection orders --query '{"status": "completed", "date": {"$gte": {"$date": "2024-01-01T00:00:00.000Z"}}}' --gzip --out /backup/mongodb/orders_completed_$(date +%Y%m%d_%H%M%S)
Performance tip: For large databases, use the --numParallelCollections flag to speed up the backup process:
mongodump --host localhost --port 27017 --db myapplication --numParallelCollections 4 --gzip --out /backup/mongodb/parallel_$(date +%Y%m%d_%H%M%S)
Restoring MongoDB Databases
Restoration is where things can get tricky, especially when dealing with existing data, indexes, and schema changes. The mongorestore tool provides several options to handle different scenarios.
Basic Restoration
# Restore entire backup
mongorestore --host localhost --port 27017 /backup/mongodb/20241201_143022/
# Restore specific database
mongorestore --host localhost --port 27017 --db myapplication /backup/mongodb/20241201_143022/myapplication/
# Restore to different database name
mongorestore --host localhost --port 27017 --db myapplication_restored /backup/mongodb/20241201_143022/myapplication/
Handling Existing Data
By default, mongorestore only inserts documents: existing documents with the same _id are left untouched and reported as duplicate key errors. Use these flags and tools for different behaviors:
# Drop existing collections before restore
mongorestore --host localhost --port 27017 --db myapplication --drop /backup/mongodb/20241201_143022/myapplication/
# mongorestore has no upsert mode; to merge into existing data,
# export to JSON and use mongoimport with --mode upsert instead
mongoexport --host localhost --port 27017 --db myapplication --collection users --out users.json
mongoimport --host localhost --port 27017 --db myapplication --collection users --mode upsert --file users.json
# Also restore users and roles stored in the dump
mongorestore --host localhost --port 27017 --db myapplication --restoreDbUsersAndRoles /backup/mongodb/20241201_143022/myapplication/
Performance Optimization During Restore
# Use multiple parallel connections for faster restore
mongorestore --host localhost --port 27017 --numParallelCollections 4 --numInsertionWorkersPerCollection 2 /backup/mongodb/20241201_143022/
# Disable index creation during restore for speed
mongorestore --host localhost --port 27017 --noIndexRestore /backup/mongodb/20241201_143022/
# Then create the indexes separately (reIndex only rebuilds indexes that
# already exist; after --noIndexRestore they must be created explicitly)
mongosh myapplication --eval "db.mycollection.createIndex({field1: 1})"
Database Migration Strategies
Migration involves moving data between different MongoDB instances, which could be on different servers, versions, or even cloud providers. Here are the most effective approaches:
Direct Migration Using mongodump/mongorestore
# One-step migration with pipe
mongodump --host source.mongodb.com --port 27017 --username sourceuser --password sourcepass --authenticationDatabase admin --archive | mongorestore --host destination.mongodb.com --port 27017 --username destuser --password destpass --authenticationDatabase admin --archive
# Migration with transformation
mongodump --host source.mongodb.com --port 27017 --db oldapp --archive | mongorestore --host destination.mongodb.com --port 27017 --nsFrom 'oldapp.*' --nsTo 'newapp.*' --archive
Cross-Version Migration
When migrating between different MongoDB versions, compatibility issues may arise:
# Do a trial dump to a scratch directory first to surface errors early;
# --forceTableScan reads documents directly instead of via the _id index
mongodump --host localhost --port 27017 --db myapp --forceTableScan --out /tmp/compatibility_check
# For MongoDB 4.x to 5.x migration
mongodump --host old.server.com --port 27017 --db myapp --excludeCollection system.users --out /migration/v4_to_v5/
# Restore with version-specific options
mongorestore --host new.server.com --port 27017 --db myapp --convertLegacyIndexes /migration/v4_to_v5/myapp/
Replica Set Migration
For zero-downtime migrations in production environments:
# Add new replica set member (hidden from elections initially)
mongosh --host primary.old.com --eval "
rs.add({
  host: 'new.server.com:27017',
  priority: 0,
  votes: 0
})
"
# Wait for initial sync, then gradually promote the new member
# (adjust the members[] index to match its position in rs.conf())
mongosh --host primary.old.com --eval "
cfg = rs.conf()
cfg.members[3].priority = 1
cfg.members[3].votes = 1
rs.reconfig(cfg)
"
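The "wait for initial sync" step can be scripted rather than eyeballed. A hedged sketch, reusing the illustrative host names above, polls rs.status() until the new member reports SECONDARY:

```shell
# Poll replica set status until the given member finishes its initial sync
# (stateStr becomes SECONDARY). Host names are placeholders.
wait_for_secondary() {
  local member="$1"
  until mongosh --quiet --host primary.old.com --eval "
          rs.status().members.filter(m => m.name === '$member')[0].stateStr
        " | grep -q SECONDARY; do
    echo "waiting for $member to finish initial sync..."
    sleep 10
  done
}
```

Run `wait_for_secondary 'new.server.com:27017'` before executing the reconfig step that raises the member's priority and votes.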
Real-World Use Cases and Examples
E-commerce Platform Migration
A typical e-commerce platform migration might involve multiple databases with different requirements:
#!/bin/bash
# E-commerce migration script
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
SOURCE_HOST="old-ecommerce.internal"
DEST_HOST="new-ecommerce.cloud"
BACKUP_DIR="/backup/ecommerce_migration_$BACKUP_DATE"
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup critical collections with different strategies
echo "Backing up user data..."
mongodump --host $SOURCE_HOST --db ecommerce --collection users --out $BACKUP_DIR
echo "Backing up orders (last 90 days)..."
mongodump --host $SOURCE_HOST --db ecommerce --collection orders \
--query '{"created_at": {"$gte": {"$date": "'$(date -d '90 days ago' -I)'T00:00:00.000Z"}}}' \
--out $BACKUP_DIR
echo "Backing up product catalog..."
mongodump --host $SOURCE_HOST --db ecommerce --collection products --gzip --out $BACKUP_DIR
# Restore with error handling
echo "Restoring to new environment..."
mongorestore --host $DEST_HOST --db ecommerce_new --drop $BACKUP_DIR/ecommerce/ || {
  echo "Restore failed, check logs"
  exit 1
}
echo "Migration completed successfully"
Development to Production Promotion
# Sanitize development data for production
mongodump --host dev.mongodb.com --db myapp --excludeCollection debug_logs --excludeCollection temp_data --out /promotion/$(date +%Y%m%d)
# Transform sensitive data during restore
mongorestore --host prod.mongodb.com --db myapp_staging --drop /promotion/$(date +%Y%m%d)/myapp/
# Run sanitization scripts
mongosh "mongodb://prod.mongodb.com/myapp_staging" --eval "
db.users.updateMany({}, {\$unset: {email: 1, phone: 1}});
db.orders.updateMany({}, {\$set: {customer_email: 'test@example.com'}});
"
Best Practices and Common Pitfalls
Backup Best Practices
- Always test your backup and restore procedures in a non-production environment
- Implement automated backup schedules using cron jobs with proper error handling
- Store backups in multiple locations (local, remote, cloud) for redundancy
- Monitor backup file sizes and completion times to detect issues early
- Use consistent naming conventions that include timestamps and database identifiers
- Regularly validate backup integrity by performing test restores
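The scheduling bullet above can be wired up with a crontab entry. This is an illustrative fragment only; the path to the backup script (a script like the automated one shown later in this guide) is an assumption:

```shell
# Edit with `crontab -e`; the script path and log file are placeholders.
# m   h  dom mon dow  command
30    2   *   *   *   /usr/local/bin/mongodb_backup.sh >> /var/log/mongodb_backup.log 2>&1
```

Running at 02:30 keeps the backup window outside peak traffic; redirecting both stdout and stderr preserves error output for troubleshooting.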
Common Pitfalls to Avoid
- Authentication Issues: Always specify the correct authenticationDatabase, usually ‘admin’ for user accounts
- Network Timeouts: For large databases, increase the socket timeout, for example by adding the socketTimeoutMS option to the connection string passed with --uri
- Disk Space: Ensure sufficient disk space for both backup files and MongoDB’s working space during operations
- Index Recreation: Large collections may take significant time to rebuild indexes after restore
- Version Compatibility: Always check MongoDB version compatibility between source and destination
- Oplog Size: For replica sets, ensure oplog is large enough to cover the backup duration
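The disk-space pitfall can be guarded against with a pre-flight check. A rough sketch, with placeholder paths, that refuses to start when the backup filesystem has less free space than the data directory currently uses (conservative, since a dump is usually smaller than dbPath):

```shell
# Compare the data directory's size (KB) against free space (KB) on the
# filesystem holding the backup directory. Paths are illustrative.
enough_space() {
  local data_dir="$1" backup_dir="$2"
  local need free
  need=$(du -sk "$data_dir" | cut -f1)
  free=$(df -Pk "$backup_dir" | awk 'NR==2 {print $4}')
  [ "$free" -gt "$need" ]
}
```

Typical use before invoking mongodump: `enough_space /var/lib/mongodb /backup || { echo "insufficient space"; exit 1; }`.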
Automated Backup Script
#!/bin/bash
# Production-ready backup script with logging and error handling
BACKUP_DIR="/backup/mongodb"
LOG_FILE="/var/log/mongodb_backup.log"
RETENTION_DAYS=30
DB_HOST="localhost"
DB_PORT="27017"
DB_USER="backupuser"
DB_PASS="securepassword123"
# Create backup directory if it doesn't exist
mkdir -p $BACKUP_DIR
# Function to log messages
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}
# Function to cleanup old backups
cleanup_old_backups() {
find $BACKUP_DIR -maxdepth 1 -type d -name "backup_*" -mtime +$RETENTION_DAYS -exec rm -rf {} +
log_message "Cleaned up backups older than $RETENTION_DAYS days"
}
# Main backup function
perform_backup() {
local backup_path="$BACKUP_DIR/backup_$(date +%Y%m%d_%H%M%S)"
log_message "Starting backup to $backup_path"
if mongodump --host $DB_HOST --port $DB_PORT --username $DB_USER --password $DB_PASS --authenticationDatabase admin --gzip --out $backup_path; then
log_message "Backup completed successfully"
# Calculate backup size
local backup_size=$(du -sh $backup_path | cut -f1)
log_message "Backup size: $backup_size"
# Verify backup integrity
if [ $(find $backup_path -name "*.bson.gz" | wc -l) -gt 0 ]; then
log_message "Backup integrity check passed"
else
log_message "ERROR: Backup integrity check failed"
return 1
fi
else
log_message "ERROR: Backup failed"
return 1
fi
}
# Main execution
log_message "Starting automated backup process"
if perform_backup; then
cleanup_old_backups
log_message "Backup process completed successfully"
else
log_message "Backup process failed"
exit 1
fi
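The integrity check in the script above only counts files. A stricter (still heuristic) sketch test-decompresses every .bson.gz in the dump directory; it assumes the layout produced by `mongodump --gzip`:

```shell
# Test-decompress each compressed BSON file; returns non-zero if any is corrupt.
verify_gzip_backup() {
  local dir="$1" bad=0 f
  while IFS= read -r f; do
    if ! gzip -t "$f" 2>/dev/null; then
      echo "corrupt archive: $f"
      bad=1
    fi
  done < <(find "$dir" -type f -name '*.bson.gz')
  return $bad
}
```

This catches truncated or corrupted archives, but the only definitive validation remains a test restore into a scratch database.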
Performance Considerations and Optimization
Understanding the performance implications of backup and restore operations helps in planning maintenance windows and optimizing database operations.
| Database Size | Backup Time (mongodump) | Restore Time (mongorestore) | Recommended Approach |
|---|---|---|---|
| < 1GB | 1-5 minutes | 2-10 minutes | Standard mongodump |
| 1-10GB | 5-30 minutes | 10-60 minutes | Compressed with parallel collections |
| 10-100GB | 30-180 minutes | 60-300 minutes | Filesystem snapshots or replica backup |
| > 100GB | 3+ hours | 5+ hours | Incremental backups or continuous backup service |
For optimal performance during large migrations:
# Optimize for large database migration
mongodump --host source.server.com \
--numParallelCollections 8 \
--gzip \
--excludeCollection large_log_collection \
--out /backup/optimized_migration/
# Restore with performance tuning
mongorestore --host destination.server.com \
--numParallelCollections 8 \
--numInsertionWorkersPerCollection 4 \
--bypassDocumentValidation \
--noIndexRestore \
/backup/optimized_migration/
# Create the skipped indexes after restore (run against the myapp database;
# since MongoDB 4.2 the background option is ignored, so it is omitted)
mongosh destination.server.com/myapp --eval "
db.runCommand({
  createIndexes: 'mycollection',
  indexes: [
    {key: {field1: 1}, name: 'field1_1'}
  ]
})
"
Integration with Cloud Services and Modern Tools
Modern MongoDB deployments often involve cloud services and containerized environments. Here’s how to adapt backup strategies for these scenarios:
MongoDB Atlas Integration
# Connect to Atlas cluster for backup
mongodump --uri "mongodb+srv://username:password@cluster.mongodb.net/mydb?retryWrites=true&w=majority" --out /backup/atlas_backup_$(date +%Y%m%d)
# Migrate from self-hosted to Atlas
mongodump --host localhost --port 27017 --db myapp --archive | mongorestore --uri "mongodb+srv://username:password@cluster.mongodb.net/myapp?retryWrites=true&w=majority" --archive
Docker Container Backups
# Backup from MongoDB running in Docker
docker exec mongodb-container mongodump --host localhost --port 27017 --out /backup/docker_backup_$(date +%Y%m%d)
# Copy backup from container to host
docker cp mongodb-container:/backup/docker_backup_$(date +%Y%m%d) ./backups/
# Restore to a remote target from the host using a throwaway tools container
docker run --rm -v $(pwd)/backups:/backup mongo:7.0 mongorestore --host target.server.com --port 27017 /backup/docker_backup_$(date +%Y%m%d)
The combination of proper backup strategies, performance optimization, and integration with modern infrastructure ensures robust data management for MongoDB deployments on Ubuntu 24. Regular testing and monitoring of these processes will help maintain data integrity and availability in production environments.
For more detailed information about MongoDB backup strategies, refer to the official MongoDB backup documentation and the MongoDB Database Tools documentation.
