How to Create and Manage Databases in MySQL and MariaDB on a Cloud Server

Managing databases is one of the most critical skills for anyone running applications on cloud servers. MySQL and MariaDB dominate the relational database landscape, powering everything from simple web apps to massive enterprise systems. This guide walks through creating, configuring, and managing these databases on cloud infrastructure, from initial setup and configuration tuning through backups, monitoring, troubleshooting common issues, and security hardening.

Understanding MySQL vs MariaDB Architecture

MySQL and MariaDB share common roots, but they’ve diverged significantly since MariaDB forked from MySQL in 2009. Both use similar storage engines, SQL syntax, and connection protocols, making them largely interchangeable for most applications.

Feature | MySQL | MariaDB
Default Storage Engine | InnoDB | InnoDB (with additional engines)
JSON Support | Native JSON data type | JSON functions (no native type)
Thread Pool | Enterprise only | Built-in
Performance | Optimized for read-heavy workloads | Better write performance
Licensing | Dual (GPL/Commercial) | GPL v2

The core difference lies in MariaDB’s additional storage engines such as Aria, Spider, and MyRocks (plus TokuDB in older releases, since deprecated), together with built-in features that MySQL reserves for its enterprise edition. For cloud deployments, MariaDB often provides better out-of-the-box concurrency handling thanks to its built-in thread pool.
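
Before tuning anything, it helps to confirm which of the two servers, and which version, a given cloud instance is actually running, since package names and defaults differ between distributions. A quick check from the shell:

# Check the client/server flavor and version
mysql --version

# Or ask the running server directly
mysql -u root -p -e "SELECT VERSION(), @@version_comment;"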

Step-by-Step Installation and Initial Setup

Installing MySQL or MariaDB on a cloud server requires proper repository configuration and security hardening. Here’s the complete process for both Ubuntu and CentOS systems.

Installing MySQL 8.0 on Ubuntu

# Update package index
sudo apt update

# Install MySQL server
sudo apt install mysql-server

# Start and enable MySQL service
sudo systemctl start mysql
sudo systemctl enable mysql

# Run security script
sudo mysql_secure_installation

Installing MariaDB on Ubuntu

# Install MariaDB
sudo apt install mariadb-server mariadb-client

# Start and enable service
sudo systemctl start mariadb
sudo systemctl enable mariadb

# Secure installation
sudo mysql_secure_installation

CentOS/RHEL Installation

# For MySQL 8.0
sudo dnf install mysql-server
sudo systemctl start mysqld
sudo systemctl enable mysqld

# For MariaDB
sudo dnf install mariadb-server mariadb
sudo systemctl start mariadb
sudo systemctl enable mariadb

The security script handles several critical configurations: removing anonymous users, disabling remote root login, removing test databases, and setting root password policies. Always run this on production servers.
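
If you prefer to script this hardening rather than answer the interactive prompts, the key steps can be approximated with a few SQL statements. This is a minimal sketch, not a drop-in replacement for mysql_secure_installation: the exact statements vary slightly between MySQL and MariaDB versions, and on Ubuntu sudo mysql works because root authenticates through the socket-based auth plugin.

# Rough non-interactive equivalent of the main hardening steps
sudo mysql <<'SQL'
-- Remove the anonymous local account if present
DROP USER IF EXISTS ''@'localhost';
-- Drop the default test database if it exists
DROP DATABASE IF EXISTS test;
-- Keep root access local only (mirrors what the script does)
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');
FLUSH PRIVILEGES;
SQL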

Database Creation and User Management

Proper database and user management forms the foundation of secure database operations. Here’s how to create databases, users, and manage permissions effectively.

Creating Databases and Users

# Connect to MySQL/MariaDB
mysql -u root -p

# Create a new database
CREATE DATABASE ecommerce_app CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

# Create application user with limited privileges
CREATE USER 'app_user'@'localhost' IDENTIFIED BY 'SecurePassword123!';

# Grant specific privileges
GRANT SELECT, INSERT, UPDATE, DELETE ON ecommerce_app.* TO 'app_user'@'localhost';

# Create read-only user for reporting
CREATE USER 'readonly_user'@'%' IDENTIFIED BY 'ReadOnlyPass456!';
GRANT SELECT ON ecommerce_app.* TO 'readonly_user'@'%';

# Apply changes
FLUSH PRIVILEGES;

Advanced User Management

# Create user with limited connection time
CREATE USER 'temp_user'@'192.168.1.%' IDENTIFIED BY 'TempPass789!'
  WITH MAX_QUERIES_PER_HOUR 100
  MAX_CONNECTIONS_PER_HOUR 20
  MAX_USER_CONNECTIONS 5;

# View user privileges
SHOW GRANTS FOR 'app_user'@'localhost';

# Modify existing user
ALTER USER 'app_user'@'localhost' IDENTIFIED BY 'NewSecurePassword123!';

# Remove user
DROP USER 'temp_user'@'192.168.1.%';
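
It is worth sanity-checking a new account from the shell before wiring it into an application. A quick test using the example app_user credentials created above (substitute your own names and passwords):

# Confirm the user can connect and review its effective grants
mysql -u app_user -p'SecurePassword123!' -e "SHOW GRANTS FOR CURRENT_USER(); SHOW DATABASES;"

# This should fail with an access-denied error, proving the account is properly restricted
mysql -u app_user -p'SecurePassword123!' -e "SELECT COUNT(*) FROM mysql.user;"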

Configuration Optimization for Cloud Environments

Cloud servers require specific configuration adjustments to handle variable workloads and resource constraints effectively. The default MySQL/MariaDB configurations are conservative and need tuning for production workloads.

Essential Configuration Parameters

# Edit MySQL configuration file
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

# For MariaDB
sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

# Key parameters to adjust:
[mysqld]
# Memory allocation (set to 70-80% of available RAM)
innodb_buffer_pool_size = 2G

# Connection handling
max_connections = 200
thread_cache_size = 16

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 256M

# Query cache (MySQL 5.7 and earlier)
query_cache_type = 1
query_cache_size = 128M

# Slow query logging
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2

# Binary logging for replication
log-bin = mysql-bin
server-id = 1

Performance Tuning Based on Server Specs

Server RAM | innodb_buffer_pool_size | max_connections | thread_cache_size
2GB | 1.4G | 100 | 8
4GB | 2.8G | 150 | 12
8GB | 5.6G | 200 | 16
16GB | 11.2G | 300 | 24
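
These buffer pool figures follow the common rule of thumb of roughly 70% of total RAM on a host dedicated to the database; leave more headroom if the same server also runs an application or web tier. A quick way to derive a starting value on a given instance:

# Suggest an innodb_buffer_pool_size of ~70% of total RAM
TOTAL_MB=$(free -m | awk '/^Mem:/{print $2}')
echo "Suggested innodb_buffer_pool_size: $((TOTAL_MB * 70 / 100))M"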

After configuration changes, restart the service and monitor performance:

# Restart MySQL/MariaDB
sudo systemctl restart mysql
# or
sudo systemctl restart mariadb

# Check configuration values
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# Monitor performance
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"

Real-World Database Management Scenarios

Understanding practical applications helps cement database management concepts. Here are common scenarios you’ll encounter in production environments.

E-commerce Application Setup

# Create database structure for online store
CREATE DATABASE online_store CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

USE online_store;

# Products table
CREATE TABLE products (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    description TEXT,
    price DECIMAL(10,2) NOT NULL,
    stock_quantity INT DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_name (name),
    INDEX idx_price (price)
);

# Orders table with status tracking and supporting indexes
# (add a FOREIGN KEY on user_id once a users table exists in this schema)
CREATE TABLE orders (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    total_amount DECIMAL(10,2) NOT NULL,
    status ENUM('pending', 'processing', 'shipped', 'delivered', 'cancelled') DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_user_id (user_id),
    INDEX idx_status (status),
    INDEX idx_created_at (created_at)
);

# Create dedicated users for different application components
CREATE USER 'webapp'@'%' IDENTIFIED BY 'WebApp_SecurePass123!';
GRANT SELECT, INSERT, UPDATE ON online_store.products TO 'webapp'@'%';
GRANT SELECT, INSERT, UPDATE ON online_store.orders TO 'webapp'@'%';

CREATE USER 'analytics'@'%' IDENTIFIED BY 'Analytics_ReadOnly456!';
GRANT SELECT ON online_store.* TO 'analytics'@'%';

FLUSH PRIVILEGES;
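
Before pointing the application at these accounts, a quick smoke test confirms the grants behave as intended. The sketch below uses the example webapp credentials from above and connects over TCP so the '%' host pattern applies; the test product is purely illustrative.

# Insert and read back a test product as the webapp user
mysql -h 127.0.0.1 -u webapp -p'WebApp_SecurePass123!' online_store <<'SQL'
INSERT INTO products (name, description, price, stock_quantity)
VALUES ('Test Widget', 'Placeholder product for testing', 9.99, 100);
SELECT id, name, price, stock_quantity FROM products WHERE name = 'Test Widget';
SQL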

Multi-Tenant SaaS Application

# Create tenant-specific databases
CREATE DATABASE tenant_001 CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE DATABASE tenant_002 CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

# Or use single database with tenant_id approach
CREATE DATABASE saas_platform CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

USE saas_platform;

CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    tenant_id INT NOT NULL,
    username VARCHAR(50) NOT NULL,
    email VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    UNIQUE KEY unique_user_per_tenant (tenant_id, username),
    INDEX idx_tenant_id (tenant_id)
);
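
The trade-off with the shared-database model is that every query must be scoped by tenant_id; forgetting that filter leaks data across tenants. A small illustration (tenant 42 and the view name are hypothetical):

# Tenant-scoped queries against the shared schema
mysql -u root -p saas_platform <<'SQL'
-- All users belonging to tenant 42
SELECT id, username, email FROM users WHERE tenant_id = 42;

-- Optional: a per-tenant view makes accidental cross-tenant reads harder
CREATE OR REPLACE VIEW tenant_042_users AS
SELECT id, username, email, created_at FROM users WHERE tenant_id = 42;
SQL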

Backup and Recovery Strategies

Robust backup strategies are non-negotiable for production databases. Cloud environments offer unique advantages and challenges for database backups.

Automated Backup Script

#!/bin/bash
# Database backup script for cloud servers

DB_USER="backup_user"
DB_PASS="BackupPassword123!"
BACKUP_DIR="/opt/db_backups"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

# Create backup directory
mkdir -p $BACKUP_DIR

# Function to backup single database
backup_database() {
    local db_name=$1
    echo "Starting backup of $db_name..."
    
    mysqldump -u$DB_USER -p$DB_PASS \
              --single-transaction \
              --routines \
              --triggers \
              --events \
              --add-drop-database \
              --databases $db_name > $BACKUP_DIR/${db_name}_$DATE.sql
    
    # Compress backup
    gzip $BACKUP_DIR/${db_name}_$DATE.sql
    
    echo "Backup completed: ${db_name}_$DATE.sql.gz"
}

# Backup specific databases
backup_database "ecommerce_app"
backup_database "user_management"

# Clean old backups
find $BACKUP_DIR -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete

# Upload to cloud storage (example with AWS S3)
# aws s3 sync $BACKUP_DIR s3://your-backup-bucket/mysql-backups/
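
To run this unattended, schedule it with cron; the script path below is just an example, so point it at wherever you saved the script.

# Run the backup daily at 02:30 and keep a log of its output
sudo crontab -e

# Add a line like this to the crontab:
30 2 * * * /opt/scripts/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1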

Point-in-Time Recovery Setup

# Enable binary logging in MySQL configuration
[mysqld]
log-bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 7
max_binlog_size = 100M
sync_binlog = 1

# Create backup user with necessary privileges
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'BackupPassword123!';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER, RELOAD, REPLICATION CLIENT ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;

# Perform consistent backup with binary log position
mysqldump -u backup_user -p \
          --single-transaction \
          --master-data=2 \
          --all-databases > full_backup_$(date +%Y%m%d).sql
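
Because the dump was taken with --master-data=2, it records the binary log file and position as a comment near the top of the file, which marks where replay should begin. A point-in-time restore is then a two-step process; the file names and timestamps below are placeholders.

# 1. Restore the most recent full dump
mysql -u root -p < full_backup_20240101.sql

# 2. Replay binary logs up to just before the incident
#    (in practice, start from the position recorded in the dump; datetime filters shown for simplicity)
mysqlbinlog --start-datetime="2024-01-01 02:00:00" \
            --stop-datetime="2024-01-01 09:59:00" \
            /var/log/mysql/mysql-bin.000001 /var/log/mysql/mysql-bin.000002 \
            | mysql -u root -p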

Monitoring and Performance Troubleshooting

Proactive monitoring prevents most database issues before they impact users. Here’s how to implement comprehensive monitoring for cloud-based MySQL/MariaDB instances.

Performance Monitoring Queries

# Check current connections and processes
SHOW PROCESSLIST;

# View slow queries (requires log_output=TABLE; by default the slow log goes to the file configured earlier)
SELECT * FROM mysql.slow_log ORDER BY start_time DESC LIMIT 10;

# Check InnoDB status
SHOW ENGINE INNODB STATUS\G

# Monitor buffer pool usage
# (works as-is on MariaDB; on MySQL 5.7+ query performance_schema.global_status instead of INFORMATION_SCHEMA.GLOBAL_STATUS)
SELECT 
    ROUND((PagesData*PageSize)/POWER(1024,3),2) AS DataGB,
    ROUND((PagesFree*PageSize)/POWER(1024,3),2) AS FreeGB,
    ROUND(PagesData/(PagesData+PagesFree)*100,2) AS PercentUsed
FROM (
    SELECT variable_value AS PagesData
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME='Innodb_buffer_pool_pages_data'
) AS A,
(
    SELECT variable_value AS PagesFree  
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME='Innodb_buffer_pool_pages_free'
) AS B,
(
    SELECT variable_value AS PageSize
    FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    WHERE VARIABLE_NAME='Innodb_page_size'
) AS C;

# Check table sizes
SELECT 
    table_schema AS 'Database',
    table_name AS 'Table',
    ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Size (MB)'
FROM information_schema.TABLES 
ORDER BY (data_length + index_length) DESC;

Automated Monitoring Script

#!/bin/bash
# MySQL/MariaDB monitoring script

DB_USER="monitor_user"
DB_PASS="MonitorPassword123!"
ALERT_EMAIL="admin@yourcompany.com"
LOG_FILE="/var/log/mysql_monitor.log"

# Function to log messages
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE
}

# Check if MySQL/MariaDB is running
if ! systemctl is-active --quiet mysql && ! systemctl is-active --quiet mariadb; then
    log_message "ERROR: Database service is not running"
    echo "Database service down on $(hostname)" | mail -s "Database Alert" $ALERT_EMAIL
    exit 1
fi

# Check connection count
CONNECTIONS=$(mysql -u$DB_USER -p$DB_PASS -e "SHOW STATUS LIKE 'Threads_connected';" | grep Threads_connected | awk '{print $2}')
MAX_CONNECTIONS=$(mysql -u$DB_USER -p$DB_PASS -e "SHOW VARIABLES LIKE 'max_connections';" | grep max_connections | awk '{print $2}')

CONNECTION_PERCENT=$((CONNECTIONS * 100 / MAX_CONNECTIONS))

if [ $CONNECTION_PERCENT -gt 80 ]; then
    log_message "WARNING: High connection usage: ${CONNECTION_PERCENT}%"
    echo "High database connection usage: ${CONNECTION_PERCENT}%" | mail -s "Database Warning" $ALERT_EMAIL
fi

# Check slow queries
SLOW_QUERIES=$(mysql -u$DB_USER -p$DB_PASS -e "SHOW STATUS LIKE 'Slow_queries';" | grep Slow_queries | awk '{print $2}')
if [ $SLOW_QUERIES -gt 100 ]; then
    log_message "WARNING: High number of slow queries: $SLOW_QUERIES"
fi

log_message "Monitoring check completed - Connections: $CONNECTIONS/$MAX_CONNECTIONS, Slow queries: $SLOW_QUERIES"

Security Best Practices and Common Pitfalls

Database security in cloud environments requires multiple layers of protection. Many security breaches stem from basic configuration oversights that are easily preventable.

  • Never use default ports in production – Change from 3306 to a custom port
  • Implement SSL/TLS encryption – Essential for cloud deployments
  • Use strong authentication plugins – MySQL 8.0’s caching_sha2_password or MariaDB’s ed25519
  • Regular security updates – Automate patching where possible
  • Network isolation – Use VPCs and security groups to limit access

SSL Configuration

# Generate SSL certificates
sudo mysql_ssl_rsa_setup --uid=mysql

# Add to MySQL configuration
[mysqld]
ssl-ca=/var/lib/mysql/ca.pem
ssl-cert=/var/lib/mysql/server-cert.pem
ssl-key=/var/lib/mysql/server-key.pem
require_secure_transport=ON

# Create user requiring SSL
CREATE USER 'secure_user'@'%' IDENTIFIED BY 'SecurePassword123!' REQUIRE SSL;
GRANT ALL PRIVILEGES ON app_database.* TO 'secure_user'@'%';

# Test SSL connection
mysql -u secure_user -p --ssl-mode=REQUIRED -h your_server_ip
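
To confirm connections are genuinely encrypted rather than silently falling back to plaintext, check the session cipher. Note that --ssl-mode is the MySQL client flag; the MariaDB client uses --ssl (and --ssl-verify-server-cert) instead.

# An empty Ssl_cipher value means the connection is not encrypted
mysql -u secure_user -p --ssl-mode=REQUIRED -h your_server_ip \
      -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"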

Firewall and Network Security

# Configure UFW firewall (Ubuntu)
sudo ufw allow from 192.168.1.0/24 to any port 3306
sudo ufw deny 3306

# For application servers only
sudo ufw allow from 10.0.1.100 to any port 3306

# Configure iptables (CentOS/RHEL)
iptables -A INPUT -p tcp --dport 3306 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 3306 -j DROP

Scaling and High Availability Considerations

Cloud environments excel at horizontal scaling, but database scaling requires careful planning. Both MySQL and MariaDB offer several approaches to handle growing workloads.

Master-Slave Replication Setup

# Master server configuration
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog-do-db = production_db

# Create replication user on master
CREATE USER 'replication_user'@'%' IDENTIFIED BY 'ReplicationPass123!';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';
FLUSH PRIVILEGES;

# Get master status
SHOW MASTER STATUS;

# Slave server configuration
[mysqld]
server-id = 2
relay-log = mysql-relay-bin
log-bin = mysql-bin
read-only = 1

# Configure slave (on slave server)
CHANGE MASTER TO
MASTER_HOST='master_server_ip',
MASTER_USER='replication_user',
MASTER_PASSWORD='ReplicationPass123!',
MASTER_LOG_FILE='mysql-bin.000001',
MASTER_LOG_POS=107;

START SLAVE;
SHOW SLAVE STATUS\G
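
Once replication is running, the values worth watching day to day are whether both replication threads are running and how far the replica lags behind. A quick check on the replica (newer MySQL releases also accept SHOW REPLICA STATUS, with REPLICA-flavored field names):

# Summarize replication health on the replica
mysql -u root -p -e "SHOW SLAVE STATUS\G" | \
    grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error"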

For comprehensive database management on cloud infrastructure, consider upgrading to VPS hosting for better resource control or dedicated servers for maximum performance and security.

Advanced Management Tools and Integration

Several tools can significantly improve database management efficiency in cloud environments. These range from command-line utilities to comprehensive monitoring solutions.

Useful Management Tools

  • MySQL Workbench – GUI administration tool with cloud connectivity
  • phpMyAdmin – Web-based management interface
  • Percona Toolkit – Advanced command-line utilities for MySQL/MariaDB
  • pt-query-digest – Analyze slow query logs effectively (see the example after this list)
  • mydumper/myloader – Parallel backup and restore utilities
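
As a concrete example, pt-query-digest can roll the slow query log configured earlier into a ranked report of the worst-performing query patterns. The package name below applies to Debian/Ubuntu; on RHEL-based systems install it from Percona's repositories.

# Install Percona Toolkit and summarize the slow log
sudo apt install percona-toolkit

pt-query-digest /var/log/mysql/slow.log > /tmp/slow_report.txt
less /tmp/slow_report.txt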

Integration with Cloud Storage

#!/bin/bash
# Automated backup with cloud storage integration

# Variables
BACKUP_DIR="/tmp/mysql_backup"
S3_BUCKET="your-backup-bucket"
DB_NAME="production_db"
BACKUP_USER="backup_user"
BACKUP_PASSWORD="BackupPassword123!"   # backup user created earlier in this guide
DATE=$(date +%Y%m%d_%H%M%S)

# Make sure the local staging directory exists
mkdir -p $BACKUP_DIR

# Create compressed backup
mysqldump -u $BACKUP_USER -p$BACKUP_PASSWORD \
          --single-transaction \
          --routines \
          --triggers \
          $DB_NAME | gzip > $BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz

# Upload to AWS S3
aws s3 cp $BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz s3://$S3_BUCKET/mysql-backups/

# Or upload to Google Cloud Storage
# gsutil cp $BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz gs://$GCS_BUCKET/mysql-backups/

# Clean local backup
rm $BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz

echo "Backup completed and uploaded: ${DB_NAME}_${DATE}.sql.gz"

Managing MySQL and MariaDB databases on cloud servers requires balancing performance, security, and maintainability. The key lies in proper initial configuration, regular monitoring, automated backups, and staying current with security updates. For additional resources, consult the official MySQL documentation and MariaDB knowledge base for the latest best practices and feature updates.


