Lightweight cloud server data suddenly stopped working? Five-step diagnosis and recovery guide.
Time : 2025-12-09 15:51:08
Edit : Jtti

Why would a database on a lightweight cloud server suddenly refuse to start? Unlike a fully configured dedicated database server, a lightweight cloud server has limited resources and is more susceptible to configuration issues, resource contention, and unexpected changes. The five checks below cover the most common causes and how to recover from each.

The most common database startup problem on lightweight cloud servers stems from insufficient memory. Services like MySQL and PostgreSQL pre-allocate a portion of memory for buffers and caches during startup; if the server's available memory is less than that requirement, startup fails. The first step is to confirm how much memory is actually available on the server:

free -h
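
If the database was running and then died before refusing to restart, it is also worth checking whether the kernel's OOM killer terminated it and whether any swap is configured; a quick check on a typical Linux system:

# Evidence of the OOM killer terminating processes
sudo dmesg | grep -i -E 'out of memory|killed process'

# Check whether any swap is configured at all
swapon --show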

If available memory is indeed tight, you have several options. For MySQL, you can edit the configuration file (usually located in `/etc/mysql/my.cnf` or `/etc/my.cnf`) to adjust key memory parameters:

[mysqld]
innodb_buffer_pool_size = 64M
key_buffer_size = 16M
# query_cache_size was removed in MySQL 8.0; keep this line only on MySQL 5.7 and earlier
query_cache_size = 8M
thread_cache_size = 4
max_connections = 30

These settings reduce MySQL's memory usage significantly, from the default of several hundred MB to a level suitable for lightweight servers. For PostgreSQL, you need to adjust the `shared_buffers` (usually 15-25% of system memory) and `work_mem` parameters in `postgresql.conf`.
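
As a rough illustration only (the values below are assumptions for a small instance, not tuned recommendations), the relevant lines in `postgresql.conf` might look like this:

shared_buffers = 128MB        # roughly 15-25% of system memory on a small server
work_mem = 4MB                # per-sort/per-hash memory, multiplied across connections
max_connections = 30          # fewer connections also means less memory reserved

Note that changing `shared_buffers` requires a full restart of PostgreSQL; a configuration reload is not enough.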

After modifying the configuration, try restarting the database service. For systems using systemd:

sudo systemctl restart mysql

or

sudo systemctl restart postgresql
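
If the restart fails again, the service status and logs usually contain the specific reason. For example, on a systemd-based system, using MySQL as the example service (substitute `postgresql` as appropriate; the error log path is a common Debian/Ubuntu default and may differ on your distribution):

sudo systemctl status mysql --no-pager
sudo journalctl -xeu mysql
sudo tail -n 50 /var/log/mysql/error.log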

If the service starts successfully but is unstable, further configuration tuning may be needed, or you may need to consider upgrading the server specifications. In extreme cases, a temporary swap file can be created as an emergency measure, but swap degrades database performance and should only be used as a stopgap.
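
A minimal sketch of that emergency measure, assuming roughly 1 GB of free disk space and a filesystem that supports `fallocate`; the swap file is not added to `/etc/fstab`, so it disappears after a reboot:

# Create, secure, format, and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile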

The second common reason for database startup failure is insufficient disk space. During startup, a database needs to write log files and temporary files, and sometimes to extend its data files; without enough free disk space, these operations fail. Use the following command to quickly check disk usage:

df -h
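
If a partition is close to full, the following commands show which directories are responsible; the paths are common defaults and may differ on your system:

# Largest top-level directories on the root filesystem
sudo du -xh --max-depth=1 / | sort -rh | head -n 15

# Size of the database data and log directories
sudo du -sh /var/lib/mysql /var/log/mysql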

If insufficient disk space is found, you need to identify and clean up the files consuming space. Database-related space consumption usually comes from several sources: first, binary log files (MySQL) or WAL files (PostgreSQL); second, error logs and slow query logs; and third, the growth of the data files themselves. For MySQL, you can clean up old binary logs after logging into the database:

PURGE BINARY LOGS BEFORE '2024-01-01 00:00:00';

However, the tricky part is that when the database fails to start, you cannot purge logs with SQL commands. In this case, you need to locate and delete the old log files manually. MySQL's binary logs are typically found in the `/var/lib/mysql` or `/var/log/mysql` directory, with filenames of the form `mysql-bin.000001`. Before deleting these files, make sure you do not need them for data recovery, and remove the corresponding entries from the `mysql-bin.index` file as well, since MySQL may refuse to start if the index lists files that no longer exist:

sudo rm /var/log/mysql/mysql-bin.000001
sudo rm /var/log/mysql/mysql-bin.000002

For long-term management, it's recommended to configure a log rotation policy to prevent the disk from filling up again. Additionally, check if non-database files are consuming excessive space, such as application logs, temporary files, or backup files.
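
For the binary logs specifically, MySQL can expire them automatically. A hedged example, assuming MySQL 8.0 or later and socket authentication for root (older versions use the `expire_logs_days` variable in `my.cnf` instead):

# Keep binary logs for 7 days; SET PERSIST also survives a restart
sudo mysql -e "SET PERSIST binlog_expire_logs_seconds = 604800;"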

If the database fails to start after you modify the configuration file, this usually indicates a syntax error or an incompatible parameter setting. The database service reads the configuration file during startup, and any syntax error can cause startup to fail, so checking the file's syntax is the first troubleshooting step. For MySQL 8.0.16 and later, you can test the configuration with the following command:

mysqld --defaults-file=/etc/mysql/my.cnf --validate-config

If a syntax error is found, the command will output the error message and its location. Another common problem is that parameters are placed in the wrong configuration section. For example, MySQL server parameters should be placed in the `[mysqld]` section. Incorrectly placing them in the `[mysql]` or `[client]` section can lead to unpredictable behavior.

Sometimes the problem isn't a syntax error, but an inappropriate parameter value. For example, setting `innodb_buffer_pool_size` to a value greater than the system's available memory, or setting an excessively high `max_connections` value for a lightweight server. In this case, you need to consult the database documentation to choose appropriate parameter values for the lightweight environment.

If you can't determine which parameter is causing the problem, you can try starting the database with a minimal configuration and then gradually add parameters until you find the issue. Create a minimal configuration file:

[mysqld]
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock

Start the MySQL service with this minimal configuration. If it starts successfully, the problem is indeed with other parameters. You can then gradually add parameters from the original configuration file to the minimal configuration, adding a few parameters at a time and restarting the service until you find the problematic parameter.

The database service requires access to its data files, log files, and socket file. If the permissions on these files are incorrect, the database will fail to start. Permission issues typically arise after data files have been moved manually, the user the database runs as has been changed, or a backup has been restored without preserving ownership.

Check which user the database process is attempting to run as. For MySQL, this is usually the `mysql` user; for PostgreSQL, it's the `postgres` user. Then check the permissions of the data directory:

ls -la /var/lib/mysql/

Correct permissions mean the data directory and its contents are owned by the database user, and only that user has write permissions. If permissions are incorrect, you can fix them using the following commands:

sudo chown -R mysql:mysql /var/lib/mysql
sudo chmod -R 750 /var/lib/mysql

In addition to data files, you also need to check the permissions of the error log file, socket files, and other database-related files. Sometimes security modules such as SELinux or AppArmor can also prevent the database service from accessing necessary files. You can try temporarily disabling these security modules to determine if they are causing the problem:

sudo setenforce 0 # Temporarily disable SELinux

If the database can start after disabling SELinux, the problem is indeed related to the security policy. In this case, you need to adjust the policy settings, rather than permanently disabling the security modules. For SELinux, you can use the `audit2allow` tool to analyze the audit logs and generate the correct policy modules.
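
A sketch of that workflow on an SELinux system with the audit and policy tools installed; the module name `mysqld_local` is arbitrary:

# Build and install a local policy module from recent denials
sudo ausearch -m avc -ts recent | audit2allow -M mysqld_local
sudo semodule -i mysqld_local.pp

# Re-enable enforcing mode once the database starts cleanly
sudo setenforce 1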

When the database shuts down abnormally (such as a sudden power outage), the data files may be corrupted, causing the database to fail to start. This is the most serious situation and needs to be handled with care to avoid data loss. Most database systems have built-in recovery mechanisms, but sometimes manual intervention is required.

For MySQL's InnoDB storage engine, you can try adding recovery settings in the configuration file:

[mysqld]
innodb_force_recovery = 1

The `innodb_force_recovery` parameter can be set to a value from 1 to 6. A higher number indicates a more aggressive recovery method. It is recommended to start with 1. If the database starts, immediately back up the data and then rebuild the database. Important Note: In recovery mode, InnoDB is read-only; only SELECT queries can be executed, not INSERT, UPDATE, or DELETE operations.
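
Once the instance is up in forced-recovery mode, dump everything before touching the data files; a minimal example, assuming root credentials are available:

# Logical backup of all databases while InnoDB is in forced-recovery mode
mysqldump -u root -p --all-databases --routines --events > all_databases.sql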

If the InnoDB recovery parameter does not help, keep in mind that tables created with the `innodb_file_per_table` option are stored in separate `.ibd` files. It is sometimes possible to extract data from these files, but doing so requires specialized knowledge and tools.

For PostgreSQL, the situation is similar, but the specific operations differ. PostgreSQL automatically attempts recovery at startup, but if recovery fails, you may need to intervene manually. First, check the specific error messages in the PostgreSQL log files, and then take appropriate measures based on the errors. A common method is to use the `pg_resetwal` tool to reset the write-ahead log, but this will result in data loss and should be used as a last resort.
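
Before resorting to `pg_resetwal`, read the log from the failed startup and use the tool's dry-run mode; the paths below assume a Debian-style PostgreSQL 14 installation and will differ elsewhere:

# Inspect the most recent startup attempt
sudo tail -n 50 /var/log/postgresql/postgresql-14-main.log

# Show what pg_resetwal would do without changing anything
sudo -u postgres /usr/lib/postgresql/14/bin/pg_resetwal -n /var/lib/postgresql/14/main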

In all cases, regular backups are the best defense against data corruption. If you have a recent backup, restoring data is generally safer and more reliable than repairing corrupted files. Implementing automatic backup strategies and regularly testing the recovery process can significantly reduce the risk of data loss.
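
A minimal sketch of such an automated backup, assuming cron and a local MySQL instance whose credentials are stored in a client option file; the cron file path and backup directory are placeholders to adapt:

# /etc/cron.d/mysql-backup: nightly dump at 02:30, keeping 7 days of archives
30 2 * * * root mysqldump --all-databases | gzip > /var/backups/mysql-$(date +\%F).sql.gz && find /var/backups -name 'mysql-*.sql.gz' -mtime +7 -delete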

When faced with a database suddenly failing to start on a lightweight cloud server, a systematic diagnostic approach is crucial: start by checking resource limitations, then gradually investigate configuration and permission issues, and finally address data corruption recovery. Each successful resolution of such a problem deepens your understanding of the infrastructure. The best solution is always prevention: monitoring resource usage, implementing regular backups, and logging configuration changes. These practices ensure that your lightweight server maintains stable and reliable data service capabilities even under resource-constrained conditions.
