Practical methods for managing nohup logs on US VPS cloud servers
Time: 2025-12-22 16:32:10
Edit: Jtti

When you start a background service on a Linux US VPS cloud server with `nohup command &`, all output is redirected by default to the `nohup.out` file in the current directory. If the service runs for weeks or months, `nohup.out` can grow to several gigabytes or more, causing disk space shortages, making the log nearly impossible to inspect, and even interfering with the service's own writes. Regularly splitting and cleaning up these logs is therefore not optional; it is a necessary task for production operations and maintenance. Manual processing is inefficient and unreliable, so we need an automated solution for scheduled rolling archiving and expired-log deletion.

The core idea of log splitting is to move the current log file to an archive without interrupting the original process, then let the service write to a fresh file. The key question is how to safely move a file that a process is continuously writing to. Running `mv nohup.out nohup.out.old` directly looks simple, but it causes log loss: after the rename, the original process still holds an open write file descriptor to the original file (now named `nohup.out.old`), so newly generated logs keep going to `nohup.out.old`, and the process never creates a new `nohup.out` on its own. The correct approach is to copy the file's contents first and then empty the original file, so the process's write target never changes.
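The rename pitfall is easy to reproduce in a scratch directory with a short-lived stand-in writer (a minimal sketch; the writer loop and filenames are illustrative):

```shell
# Demonstration of the rename pitfall in a throwaway directory
cd "$(mktemp -d)"

# A stand-in "service" that writes one line per second for five seconds
( for i in 1 2 3 4 5; do echo "line $i"; sleep 1; done ) > nohup.out &
WRITER=$!

sleep 1
mv nohup.out nohup.out.old   # rename while the writer is still running
wait "$WRITER"

wc -l nohup.out.old          # all five lines ended up in the renamed file
[ -e nohup.out ] || echo "no new nohup.out was created"
```

The rename changes only the directory entry; the writer's open file descriptor follows the inode, so every line lands in `nohup.out.old`.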

A common and reliable method is to use `cp` in conjunction with the `truncate` command. The following script `split_nohup.sh` demonstrates this process:

#!/bin/bash
# split_nohup.sh - Split the nohup log file

LOG_FILE="nohup.out"
BACKUP_DIR="/var/log/myapp_backup"

# Exit if the log file does not exist or is empty
if [ ! -f "$LOG_FILE" ] || [ ! -s "$LOG_FILE" ]; then
    exit 0
fi

# Create the backup directory (if it does not exist)
mkdir -p "$BACKUP_DIR"

# Generate a backup filename with a timestamp
BACKUP_NAME="nohup_$(date +%Y%m%d_%H%M%S).out"
BACKUP_PATH="$BACKUP_DIR/$BACKUP_NAME"

# Core step: copy the current log content to the backup file
cp "$LOG_FILE" "$BACKUP_PATH"

# Clear the original log file (instead of deleting it)
truncate -s 0 "$LOG_FILE"

echo "$(date): Log file split to $BACKUP_PATH"

This script first checks that `nohup.out` exists and is not empty. It then creates the backup directory (e.g., `/var/log/myapp_backup`), copies the current log to a timestamped file (e.g., `nohup_20231027_143022.out`), and finally uses `truncate -s 0` to shrink the original file to zero bytes. The advantage of `truncate` is that it empties the file's data blocks in place while the file keeps the same inode, so the process's open file descriptor still points at the same file. Subsequent log lines are written seamlessly to the emptied file without causing program errors or log interruptions.
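The inode-preservation behaviour can be checked directly (a quick sketch; GNU coreutils `stat -c` and `truncate` are assumed):

```shell
# Verify that truncate keeps the same inode (GNU coreutils assumed)
cd "$(mktemp -d)"
echo "some log data" > nohup.out

INODE_BEFORE=$(stat -c %i nohup.out)   # inode number before truncation
truncate -s 0 nohup.out                # empty the file in place
INODE_AFTER=$(stat -c %i nohup.out)

[ "$INODE_BEFORE" = "$INODE_AFTER" ] && echo "same inode, file is now $(stat -c %s nohup.out) bytes"
```

Because `nohup` opens its output file in append mode, the writer's next line lands at the new end of the emptied file rather than leaving a sparse gap at the old offset.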

The split log files will accumulate in the backup directory over time, so we need to periodically clean up old files to free up disk space. The common strategy is to "delete by time," for example, keeping only the logs from the most recent 30 days. This can be easily achieved using the `find` command. Save the following cleanup script as `cleanup_old_logs.sh`:

#!/bin/bash
# cleanup_old_logs.sh - Clean up expired log files

BACKUP_DIR="/var/log/myapp_backup"
RETENTION_DAYS=30   # Number of days to retain

# Delete .log and .out backup files older than the retention window
# (parentheses group the -name tests so -mtime applies to both patterns)
find "$BACKUP_DIR" \( -name "nohup_*.out" -o -name "*.log" \) -mtime +$RETENTION_DAYS -delete

# Optional: record cleanup operations
echo "$(date): Log files older than $RETENTION_DAYS days in $BACKUP_DIR have been cleaned" >> /var/log/log_cleanup.log

In the `find` command, `-mtime +30` matches files last modified more than 30 full 24-hour periods ago, and the `-delete` action removes each match. Always double-check the path before enabling deletion; you can substitute `-ls` for `-delete` to preview which files would be removed.
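The retention filter can be rehearsed safely on throwaway files before pointing it at real backups (a sketch; the filenames are illustrative and GNU `touch -d` is assumed):

```shell
# Rehearse the retention filter on throwaway files
DIR=$(mktemp -d)
touch -d "40 days ago" "$DIR/nohup_20240101_020000.out"   # stale backup
touch "$DIR/nohup_recent.out"                             # fresh backup

# The \( ... \) grouping makes -mtime apply to both -name patterns
find "$DIR" \( -name "nohup_*.out" -o -name "*.log" \) -mtime +30 -delete

ls "$DIR"   # only the fresh file remains
```

Without the parentheses, `-mtime` and `-delete` would bind only to the second `-name` pattern, so files matching the first pattern would be deleted regardless of age.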

Automation is the core of system administration. We schedule both scripts with `cron`. Suppose we want to split logs daily at 2 AM and clean up old logs every Sunday at 3 AM. Edit the crontab as root or a suitably privileged user with `crontab -e`, then add the following lines:

# Execute log splitting daily at 2 AM
0 2 * * * /bin/bash /path/to/split_nohup.sh

# Execute log cleanup every Sunday at 3 AM
0 3 * * 0 /bin/bash /path/to/cleanup_old_logs.sh

Please be sure to replace `/path/to/` with the actual path to the script and ensure the script has executable permissions (`chmod +x /path/to/*.sh`).

For more complex or enterprise-level environments, using the built-in `logrotate` tool in Linux is a more standard choice. `logrotate` is powerful, supporting compression, email notifications, size-based rotation, and more. Create a `logrotate` configuration for nohup logs (e.g., `/etc/logrotate.d/myapp-nohup`):

/opt/myapp/nohup.out {
    daily
    rotate 30
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}

This configuration rotates the log once a day, retains 30 old archives, does not report an error if the file is missing (`missingok`), skips rotation when the file is empty (`notifempty`), and gzip-compresses old archives; `delaycompress` postpones compression of the most recent archive until the next rotation cycle, so the latest archive stays readable as plain text. Most importantly, the `copytruncate` directive performs the same "copy then truncate" operation as the manual script, so no application restart is required. `logrotate` itself is normally invoked by a system scheduled task (a daily cron job or systemd timer), so no extra cron configuration is needed.
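Before relying on the configuration, `logrotate` can be dry-run against it with `-d`, which parses the rules and prints the planned actions without touching any files (a sketch; the guard skips the call if logrotate is not installed):

```shell
# Write the example configuration to a temporary file and dry-run it
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
/opt/myapp/nohup.out {
    daily
    rotate 30
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}
EOF

# -d (debug) parses the config and reports what would happen, changing nothing;
# skip gracefully if logrotate is not available on this host
if command -v logrotate >/dev/null 2>&1; then
    logrotate -d "$CONF"
fi
```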

When implementing this on a US VPS cloud server, several points deserve attention. **Permissions:** ensure the user running the scripts can read and write the log files and write to the backup directory. **Disk monitoring:** even with regular cleanup, include the backup directory in disk usage monitoring to catch unexpected growth. **Service restart scenarios:** if your service restarts periodically and recreates `nohup.out` each time, the splitting script's logic may need adjusting, or consider separating logs per service instance. **Multi-application management:** if multiple services on one server use nohup, give each its own log path and backup directory to avoid confusion.

A more robust, integrated script can merge, split, and perform simple monitoring, issuing alerts when disk space is low:

#!/bin/bash
# Integrated split + disk-space check for the nohup log

LOG_FILE="/opt/myapp/nohup.out"
BACKUP_DIR="/var/log/myapp_backup"
MAX_USAGE=90   # Alert threshold (percent)

mkdir -p "$BACKUP_DIR"

# Check disk usage of the filesystem holding the backup directory
CURRENT_USAGE=$(df -P "$BACKUP_DIR" | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$CURRENT_USAGE" -gt "$MAX_USAGE" ]; then
    echo "Warning: $BACKUP_DIR disk usage ${CURRENT_USAGE}%, higher than the threshold ${MAX_USAGE}%" | mail -s "Disk space alert" admin@example.com
fi

# Perform log splitting (reusing the earlier logic)
if [ -f "$LOG_FILE" ] && [ -s "$LOG_FILE" ]; then
    BACKUP_PATH="$BACKUP_DIR/nohup_$(date +%Y%m%d_%H%M%S).out.gz"   # Compress directly while backing up
    gzip -c "$LOG_FILE" > "$BACKUP_PATH"
    truncate -s 0 "$LOG_FILE"
fi

This script compresses with `gzip` during backup to save disk space, and sends an email alert when disk usage of the filesystem holding the backup directory exceeds the threshold.
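When parsing `df` output in scripts, the `-P` flag is worth a note: it forces POSIX single-line output per filesystem, so the `awk 'NR==2'` field positions stay stable even when a long device name would otherwise wrap the line (a sketch; `/` is used here as an example mount point):

```shell
# Extract the usage percentage of the filesystem holding / as a bare integer;
# -P forces POSIX single-line output so the awk field positions stay stable
USAGE=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')
echo "root filesystem usage: ${USAGE}%"
```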

In short, the goal of managing nohup logs is to achieve automatic log maintenance while ensuring service continuity. Whether using custom script combinations or logrotate, the key is to understand the core mechanism of "truncation after copying" and establish a complete process that includes periodic splitting, compressed archiving, expired cleanup, and space monitoring.
