
How to Import and Export Databases in MySQL or MariaDB
Database import and export operations are fundamental tasks every developer and sysadmin eventually faces, whether you’re migrating between servers, creating backups, or setting up development environments. MySQL and MariaDB offer multiple methods to move data around, from simple command-line utilities to advanced replication techniques. This guide walks through the practical approaches, gotchas you’ll encounter, and strategies that actually work in production environments.
How Database Import/Export Works
At its core, database export creates a SQL dump containing CREATE statements, INSERT statements, and metadata needed to reconstruct your database elsewhere. MySQL’s mysqldump utility generates these dumps by connecting to the server and systematically extracting schema and data information.
The process involves several layers:
- Schema extraction: Table structures, indexes, constraints, and database-level settings
- Data extraction: Row data converted to INSERT statements or alternative formats
- Metadata handling: User permissions, stored procedures, triggers, and views
- Transaction management: Ensuring data consistency during large exports
Import operations reverse this process, executing the SQL statements against a target database. The complexity comes from handling character sets, storage engines, and version differences between source and destination systems.
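Version and character-set mismatches are easiest to catch before the dump, not after. A minimal preflight sketch, assuming shell access to both servers; the host names, user, and the `snapshot_vars` helper are placeholders, not part of any tool:

```shell
# Snapshot the variables that most often break migrations, one file per host.
# snapshot_vars is an illustrative helper; host names and user are placeholders.
snapshot_vars() {
    host=$1
    mysql -h "$host" -u username -p -N -B -e \
        "SHOW VARIABLES WHERE Variable_name IN
         ('version','character_set_server','collation_server','default_storage_engine');" \
        > "vars_$host.txt"
}
# Usage:
#   snapshot_vars source_host
#   snapshot_vars dest_host
#   diff vars_source_host.txt vars_dest_host.txt   # any output is a gap to resolve
```

Any line that diff prints is a difference worth resolving (or at least planning for) before the import runs.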
Step-by-Step Export Implementation
Let’s start with basic export scenarios that cover most real-world situations.
Single Database Export
The standard approach uses mysqldump with specific options for reliability:
mysqldump -u username -p --single-transaction --routines --triggers database_name > backup.sql
Breaking down these critical options:
- --single-transaction: Ensures a consistent backup for InnoDB tables without locking them
- --routines: Includes stored procedures and functions
- --triggers: Exports trigger definitions
For production databases with heavy write activity, add these options:
mysqldump -u username -p \
--single-transaction \
--routines \
--triggers \
--quick \
--add-drop-table \
--disable-keys \
database_name > backup.sql
Multiple Database Export
Export specific databases:
mysqldump -u username -p --databases db1 db2 db3 > multiple_databases.sql
Export entire MySQL instance:
mysqldump -u username -p --all-databases --single-transaction > full_backup.sql
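Going the other way, pulling a single database back out of an --all-databases dump is possible because mysqldump delimits each schema with a `-- Current Database:` comment line. A sketch; the `extract_db` helper is illustrative and assumes the default dump layout:

```shell
# Carve one schema out of an --all-databases dump by slicing between the
# "-- Current Database: `name`" comment markers mysqldump writes.
extract_db() {
    db=$1; dump=$2
    sed -n "/^-- Current Database: \`$db\`/,/^-- Current Database: \`/p" "$dump"
}
# Usage: extract_db db2 full_backup.sql > db2_only.sql
```

The trailing line of the output is the next schema's header comment, which the server ignores on import.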
Table-Specific Exports
Sometimes you need granular control:
# Export specific tables
mysqldump -u username -p database_name table1 table2 > specific_tables.sql
# Export table structure only
mysqldump -u username -p --no-data database_name > schema_only.sql
# Export data only (no CREATE statements)
mysqldump -u username -p --no-create-info database_name > data_only.sql
Import Implementation Guide
Import operations require more careful handling, especially with large datasets or production systems.
Basic Import Process
Standard import using mysql client:
mysql -u username -p database_name < backup.sql
To keep character sets consistent between dump and import, set the charset explicitly:
mysql -u username -p --default-character-set=utf8mb4 database_name < backup.sql
Large File Import Optimization
When dealing with multi-gigabyte dumps, a standard import can time out or exhaust server resources. Use these techniques:
# Disable foreign key checks and autocommit
mysql -u username -p -e "
SET foreign_key_checks = 0;
SET unique_checks = 0;
SET autocommit = 0;
SOURCE /path/to/backup.sql;
SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;
" database_name
Alternative approach using mysql with specific options:
mysql -u username -p \
--init-command="SET foreign_key_checks=0;SET unique_checks=0;SET autocommit=0;" \
database_name < backup.sql
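Another option for very large data-only dumps is chunked, resumable import: split the file so a crash midway does not force a restart from zero. A sketch assuming one statement per line, which holds for the INSERT lines mysqldump emits but not for multi-line CREATE TABLE statements, so use it on --no-create-info dumps; it also assumes credentials in ~/.my.cnf, and `chunked_import` is an illustrative helper:

```shell
# Import a data-only dump in resumable chunks. Each successfully imported
# chunk is deleted, so a failed run can simply be re-executed to resume.
chunked_import() {
    dump=$1; db=$2
    split -l 500000 -d "$dump" chunk_ || return 1   # GNU split: numeric suffixes
    for f in chunk_*; do
        mysql "$db" < "$f" && rm "$f" || return 1
    done
}
# Usage: chunked_import data_only.sql database_name
```

Because `split` names the chunks in lexical order, iterating over `chunk_*` replays the statements in their original sequence.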
Monitoring Import Progress
For large imports, monitor progress using pv (pipe viewer):
# Install pv first: apt-get install pv
pv backup.sql | mysql -u username -p database_name
Real-World Examples and Use Cases
Here are scenarios you'll encounter in production environments.
Development Environment Setup
Automated script for refreshing development databases:
#!/bin/bash
# PROD_PASS and DEV_PASS are expected in the environment; avoid hard-coding passwords
PROD_HOST="production.server.com"
DEV_HOST="localhost"
DB_NAME="myapp"
# Export from production
mysqldump -h $PROD_HOST -u prod_user -p$PROD_PASS \
--single-transaction \
--routines \
--triggers \
$DB_NAME | gzip > prod_backup.sql.gz
# Drop and recreate dev database
mysql -h $DEV_HOST -u dev_user -p$DEV_PASS -e "DROP DATABASE IF EXISTS $DB_NAME; CREATE DATABASE $DB_NAME;"
# Import to development
gunzip < prod_backup.sql.gz | mysql -h $DEV_HOST -u dev_user -p$DEV_PASS $DB_NAME
echo "Development database refreshed successfully"
Cross-Server Migration
Direct database transfer between servers (each -p flag prompts interactively, so for unattended runs supply credentials through option files):
mysqldump -h source_host -u username -p --single-transaction database_name | \
mysql -h destination_host -u username -p destination_database
Selective Data Migration
Export specific records based on conditions (note that the --where clause is applied to every table being dumped):
mysqldump -u username -p \
--where="created_date >= '2023-01-01'" \
database_name users > recent_users.sql
Alternative Methods and Tools
While mysqldump is standard, other approaches offer advantages for specific scenarios.
| Method | Best For | Pros | Cons |
|---|---|---|---|
| mysqldump | General purpose, small-medium databases | Universal compatibility, includes metadata | Slow for large datasets, single-threaded |
| MySQL Shell Dump Utility | Large databases, parallel processing | Multi-threaded, compression, progress tracking | Requires MySQL Shell 8.0+ |
| Percona XtraBackup | Hot backups, minimal downtime | Online backup, faster recovery | InnoDB only, complex setup |
| Binary log replication | Real-time synchronization | Continuous sync, minimal lag | Complex configuration, version sensitive |
MySQL Shell Dump Utility
For modern MySQL installations, the shell dump utility offers significant performance improvements:
# Export using MySQL Shell (the util.* functions live in the JS/Python modes, not --sql)
mysqlsh -u username -p --js -e "util.dumpSchemas(['database_name'], '/backup/path/')"
# Import using MySQL Shell (the target server must have local_infile enabled)
mysqlsh -u username -p --js -e "util.loadDump('/backup/path/')"
Physical File Copy Method
For same-version transfers with downtime tolerance. Note that for InnoDB, copying a single database subdirectory is not sufficient, because table metadata and undo data live outside it; copy the entire data directory (or use transportable tablespaces):
# Stop MySQL service
systemctl stop mysql
# Copy the entire data directory, not just one database subdirectory
cp -R /var/lib/mysql /backup/location/
# Start MySQL service
systemctl start mysql
Performance Optimization and Best Practices
Database operations can impact production systems significantly. Here's how to minimize disruption.
Export Performance Tuning
Configure MySQL settings before large exports:
# Add to my.cnf; mysqldump reads options from the [mysqldump] group
[mysqldump]
max_allowed_packet = 1024M
net_buffer_length = 32K
Use compression for network transfers (the unqualified mysql call on the remote end assumes credentials in a ~/.my.cnf on that host):
mysqldump -u username -p --single-transaction database_name | gzip | ssh user@remote "gunzip | mysql database_name"
Import Performance Settings
Temporary configuration changes for faster imports:
SET session sql_log_bin = 0;
SET session foreign_key_checks = 0;
SET session unique_checks = 0;
SET session autocommit = 0;
-- Import operations here --
SET session foreign_key_checks = 1;
SET session unique_checks = 1;
SET session autocommit = 1;
COMMIT;
Memory and Storage Considerations
Calculate storage requirements before operations:
# Check database size
SELECT
table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables
WHERE table_schema = 'your_database'
GROUP BY table_schema;
Monitor available space during operations:
df -h /var/lib/mysql
df -h /tmp
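These checks can be combined into a small preflight that refuses to start an import when space looks too tight. The 2x headroom factor below is a rule of thumb rather than a measured requirement, and `space_check` is an illustrative helper:

```shell
# Refuse to start an import when free space on the target filesystem is
# less than twice the dump size (a rough headroom rule of thumb).
space_check() {
    dump=$1; target=$2
    need_kb=$(( $(du -k "$dump" | cut -f1) * 2 ))
    free_kb=$(df -Pk "$target" | awk 'NR==2 {print $4}')
    if [ "$free_kb" -lt "$need_kb" ]; then
        echo "Not enough space: need ${need_kb}K, have ${free_kb}K" >&2
        return 1
    fi
    echo "Space check passed: need ${need_kb}K, have ${free_kb}K"
}
# Usage: space_check backup.sql /var/lib/mysql && mysql database_name < backup.sql
```

Failing fast here is much cheaper than cleaning up a half-finished restore on a full disk.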
Common Issues and Troubleshooting
Real-world database operations rarely go smoothly. Here are issues you'll encounter and their solutions.
Character Set Problems
Character encoding mismatches cause data corruption:
# Export with explicit charset
mysqldump -u username -p --default-character-set=utf8mb4 database_name > backup.sql
# Import with matching charset
mysql -u username -p --default-character-set=utf8mb4 database_name < backup.sql
Check current character sets:
SHOW VARIABLES LIKE 'character_set%';
Version Compatibility Issues
MySQL 8.0 introduced authentication changes affecting older clients:
# For compatibility with older servers; note that MySQL 5.7+ accepts only
# "ansi" for --compatible (values like mysql40 were removed in 5.7.5)
mysqldump -u username -p --compatible=ansi database_name > backup.sql
# Handle authentication plugin issues
mysql -u username -p -e "ALTER USER 'username'@'host' IDENTIFIED WITH mysql_native_password BY 'password';"
Large File Handling
Import failures with large files often relate to timeout or packet size limits:
# Increase session timeouts; max_allowed_packet cannot be raised with
# SET SESSION, so pass it as a client option instead
mysql -u username -p \
--max-allowed-packet=1073741824 \
--init-command="SET SESSION net_read_timeout=600;SET SESSION net_write_timeout=600;" \
database_name < large_backup.sql
Locked Tables and Deadlocks
Export from busy production systems can encounter locks:
# Skip table locks; --master-data=2 records the binlog position as a comment
# (the option was renamed --source-data in MySQL 8.0.26)
mysqldump -u username -p \
--single-transaction \
--master-data=2 \
--flush-logs \
--lock-tables=false \
database_name > backup.sql
Security Considerations
Database operations involve sensitive data and require proper security measures.
Credential Management
Avoid passwords in command history:
# Create .my.cnf file
cat > ~/.my.cnf << EOF
[client]
user=username
password=your_password
host=localhost
EOF
chmod 600 ~/.my.cnf
# Use without -p flag
mysqldump database_name > backup.sql
Network Security
Encrypt transfers between servers:
# Use SSL connections (MariaDB clients use --ssl rather than --ssl-mode)
mysqldump -u username -p --ssl-mode=REQUIRED database_name > backup.sql
# SSH tunneling for remote operations
ssh -L 3307:localhost:3306 user@remote_server
mysqldump -h localhost -P 3307 -u username -p database_name > backup.sql
Backup File Security
Protect backup files with encryption:
# Encrypt during export
mysqldump -u username -p database_name | gpg --cipher-algo AES256 --compress-algo 1 --symmetric > backup.sql.gpg
# Decrypt during import
gpg --decrypt backup.sql.gpg | mysql -u username -p database_name
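Encrypted or not, a backup you cannot restore is worthless, so it is worth recording a checksum at backup time and verifying it before any restore. A sketch with illustrative helper names:

```shell
# Record a checksum when the backup is written; verify it before any restore
# so a truncated or corrupted dump fails fast instead of half-importing.
make_checksum() { sha256sum "$1" > "$1.sha256"; }
check_backup()  { sha256sum -c "$1.sha256"; }
# Usage:
#   mysqldump ... | gpg ... > backup.sql.gpg && make_checksum backup.sql.gpg
#   check_backup backup.sql.gpg && gpg --decrypt backup.sql.gpg | mysql ...
```

Store the .sha256 file alongside the backup; it costs a few bytes and turns silent corruption into an immediate, visible failure.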
Integration with Server Infrastructure
Database operations integrate with broader server management practices. Whether you're running on VPS environments or managing dedicated server infrastructure, automated backup and migration processes become critical operational components.
Automated backup scripts should include error handling, logging, and notification systems:
#!/bin/bash
BACKUP_DIR="/var/backups/mysql"
LOG_FILE="/var/log/mysql_backup.log"
DATABASE="production_db"
# BACKUP_PASS is expected in the environment; avoid hard-coding it here
# Create backup with error handling
if mysqldump -u backup_user -p$BACKUP_PASS \
--single-transaction \
--routines \
--triggers \
$DATABASE > "$BACKUP_DIR/${DATABASE}_$(date +%Y%m%d_%H%M%S).sql"; then
echo "$(date): Backup successful" >> $LOG_FILE
# Clean old backups
find $BACKUP_DIR -name "${DATABASE}_*.sql" -mtime +7 -delete
else
echo "$(date): Backup failed" >> $LOG_FILE
# Send notification
echo "MySQL backup failed for $DATABASE" | mail -s "Backup Alert" admin@company.com
fi
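A script like this is typically driven by cron; a sample crontab entry (the script path is a placeholder) that runs nightly at 02:30 and appends output to the log:

```
# m h dom mon dow  command
30 2 * * * /usr/local/bin/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1
```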
For additional reference, consult the official MySQL mysqldump documentation and MariaDB mysqldump guide for comprehensive option details and version-specific features.
