Linux Web Hosting, DevOps, and Cloud Solutions

Empowering you with the knowledge to master Linux web hosting, DevOps and Cloud

Installing Ubuntu 22.04 with Btrfs RAID 1 on Hetzner Dedicated Server (NVMe Disks)

Objective

In this guide, I’ll walk you through installing Ubuntu 22.04 on a Hetzner dedicated server using Btrfs with RAID 1. We’ll be using Hetzner’s installimage tool with two NVMe drives. The initial provisioning will be done with software RAID disabled (SWRAID 0) — allowing Btrfs to manage the redundancy layer itself.

At the end of this process, we’ll have:

  • A running Ubuntu 22.04 system on Btrfs
  • Metadata and data converted to RAID1
  • Dual GRUB installations for boot redundancy
  • Clean separation of /boot/efi, swap, and Btrfs root.

Step 1: Boot into Rescue System

From the Hetzner Robot or Cloud Console, reboot the server into Rescue Mode and SSH into it.

Before starting the installation, make sure to clean up any existing software RAID and partitions:

mdadm --stop /dev/md/*
wipefs -fa /dev/sd* # for SATA
wipefs -fa /dev/nvme*n1   # for NVMe

Check the disk layout:

lsblk

You should see something like:

root@rescue ~ # lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0      7:0    0  3.4G  1 loop
nvme1n1  259:0    0  1.7T  0 disk
nvme0n1  259:1    0  1.7T  0 disk
root@rescue ~ #

Step 2: Prepare a Custom installimage.conf

Create a minimal config to set up EFI, swap, and a single Btrfs volume with one subvolume mounted as root (/):

cat > installimage.conf <<EOF
DRIVE1  /dev/nvme0n1
DRIVE2  /dev/nvme1n1
SWRAID  0
BOOTLOADER grub
PART /boot/efi esp 256M
PART swap swap 32G
PART btrfs.1 btrfs all
SUBVOL btrfs.1 @ /
IMAGE /root/.oldroot/nfs/images/Ubuntu-2204-jammy-amd64-base.tar.gz
HOSTNAME Ubuntu-2204-jammy-amd64-base
EOF

Ensure your SSH public key is available at /tmp/authorized_keys so you can log in post-installation.

Start installation:

installimage -c installimage.conf -K /tmp/authorized_keys

Step 3: Confirm Initial Installation

After installation, validate disk partitioning:

lsblk

You should see:

root@rescue ~ # lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0         7:0    0  3.4G  1 loop
nvme0n1     259:0    0  1.7T  0 disk
├─nvme0n1p1 259:2    0  256M  0 part
├─nvme0n1p2 259:4    0   32G  0 part
└─nvme0n1p3 259:6    0  1.7T  0 part
nvme1n1     259:1    0  1.7T  0 disk

Step 4: Clone Partition Table to Second Disk

To mirror partition layout:

sfdisk -d /dev/nvme0n1 | sfdisk --force /dev/nvme1n1

Verify:

lsblk

Now both disks should be identically partitioned.

root@rescue ~ # lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0         7:0    0  3.4G  1 loop
nvme0n1     259:0    0  1.7T  0 disk
├─nvme0n1p1 259:2    0  256M  0 part
├─nvme0n1p2 259:4    0   32G  0 part
└─nvme0n1p3 259:6    0  1.7T  0 part
nvme1n1     259:1    0  1.7T  0 disk
├─nvme1n1p1 259:3    0  256M  0 part
├─nvme1n1p2 259:5    0   32G  0 part
└─nvme1n1p3 259:7    0  1.7T  0 part
root@rescue ~ #

Step 5: Add Second Disk to Btrfs

Mount the root Btrfs partition:

mount /dev/nvme0n1p3 /mnt

Add the second disk:

btrfs device add /dev/nvme1n1p3 /mnt
btrfs filesystem balance /mnt
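
At this point the filesystem spans both partitions, but data and metadata still use the original non-RAID1 profiles until the conversion in Step 9. You can confirm that both devices are attached with:

btrfs filesystem show /mnt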

Step 6: Chroot and Install GRUB on Both Drives

Bind system directories and chroot:

for fs in proc sys dev dev/pts; do mount --bind /$fs /mnt/$fs; done
chroot /mnt
mount /boot/efi
mount -t efivarfs none /sys/firmware/efi/efivars

Check UUIDs:

blkid /dev/nvme0n1p3
blkid /dev/nvme1n1p3

Ensure /etc/fstab uses the UUID from the Btrfs volume and points to the @ subvolume.

root@Ubuntu-2204-jammy-amd64-base / # blkid /dev/nvme0n1p3
/dev/nvme0n1p3: UUID="4e6ec869-1ba4-44f8-abb7-91fd693b8a09" UUID_SUB="cf7a33d7-cddd-49d9-b62d-28e3dcef354d" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="a2ce8444-85eb-4c81-aa9a-3f2d822d047a"
root@Ubuntu-2204-jammy-amd64-base / # blkid /dev/nvme1n1p3
/dev/nvme1n1p3: UUID="4e6ec869-1ba4-44f8-abb7-91fd693b8a09" UUID_SUB="617abcc7-faf8-43fc-8fc3-224a688433e8" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="a2ce8444-85eb-4c81-aa9a-3f2d822d047a"
root@Ubuntu-2204-jammy-amd64-base / # cat /etc/fstab
proc /proc proc defaults 0 0
# efi-boot-partiton
UUID=B416-C1F6 /boot/efi vfat umask=0077 0 1
# /dev/nvme0n1p2
UUID=c86e8a3e-ec66-4190-b585-237568b87c7a none swap sw 0 0
# /dev/nvme0n1p3 belongs to btrfs volume 'btrfs.1'
# /dev/nvme0n1p3
UUID=4e6ec869-1ba4-44f8-abb7-91fd693b8a09 / btrfs defaults,subvol=@ 0 0

Install and update GRUB on both disks:

grub-install --target=x86_64-efi --efi-directory=/boot/efi /dev/nvme0n1
grub-install --target=x86_64-efi --efi-directory=/boot/efi /dev/nvme1n1
update-grub

Note: with the x86_64-efi target, GRUB writes to the mounted ESP regardless of the device argument, so for full boot redundancy you may also want to copy the contents of /boot/efi onto the second disk's EFI partition.

Step 7: Verify GRUB Configuration

Check that GRUB entries exist:

grep menuentry /boot/grub/grub.cfg

Also confirm that UUIDs in /etc/fstab and GRUB config match the Btrfs UUID.

root@Ubuntu-2204-jammy-amd64-base ~ # cat /boot/grub/grub.cfg | grep menuentry
if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
  menuentry_id_option=""
export menuentry_id_option
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-4e6ec869-1ba4-44f8-abb7-91fd693b8a09' {
submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-4e6ec869-1ba4-44f8-abb7-91fd693b8a09' {
        menuentry 'Ubuntu, with Linux 5.15.0-131-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.15.0-131-generic-advanced-4e6ec869-1ba4-44f8-abb7-91fd693b8a09' {
        menuentry 'Ubuntu, with Linux 5.15.0-131-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.15.0-131-generic-recovery-4e6ec869-1ba4-44f8-abb7-91fd693b8a09' {
menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {

root@Ubuntu-2204-jammy-amd64-base ~ # cat /etc/fstab
proc /proc proc defaults 0 0
# efi-boot-partiton
UUID=B416-C1F6 /boot/efi vfat umask=0077 0 1
# /dev/nvme0n1p2
UUID=c86e8a3e-ec66-4190-b585-237568b87c7a none swap sw 0 0
# /dev/nvme0n1p3 belongs to btrfs volume 'btrfs.1'
# /dev/nvme0n1p3
UUID=4e6ec869-1ba4-44f8-abb7-91fd693b8a09 / btrfs defaults,subvol=@ 0 0

Step 8: Reboot and SSH Login

Exit chroot and unmount all binds, then reboot:

exit
reboot
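
If you prefer to release the bind mounts manually before rebooting, a minimal sketch (run from the rescue system, between exit and reboot) could look like this:

umount /mnt/boot/efi /mnt/sys/firmware/efi/efivars 2>/dev/null   # mounted earlier inside the chroot
for fs in dev/pts dev sys proc; do umount /mnt/$fs; done          # release the bind mounts
umount /mnt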

Log in using your private SSH key; you added the public key during installimage.

Step 9: Convert Btrfs to RAID 1

Convert data and metadata to RAID 1:

btrfs balance start -dconvert=raid1 -mconvert=raid1 /
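
The balance rewrites every block, so it can take a while on a filesystem holding more data. You can watch progress from a second shell with:

btrfs balance status /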

After completion:

btrfs filesystem df /
btrfs filesystem show /

You should now see:

Data, RAID1: ...
Metadata, RAID1: ...
root@Ubuntu-2204-jammy-amd64-base ~ #  btrfs filesystem df /
Data, RAID1: total=4.00GiB, used=2.32GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=2.00GiB, used=47.39MiB
GlobalReserve, single: total=5.92MiB, used=0.00B
root@Ubuntu-2204-jammy-amd64-base ~ #

✅ Final Verification

The btrfs filesystem usage / command gives an overview of the space utilization:

root@Ubuntu-2204-jammy-amd64-base ~ # btrfs filesystem usage /
Overall:
    Device size:                   3.43TiB
    Device allocated:             12.06GiB
    Device unallocated:            3.42TiB
    Device missing:                  0.00B
    Used:                          4.73GiB
    Free (estimated):              1.71TiB      (min: 1.71TiB)
    Free (statfs, df):             1.71TiB
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:                5.92MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,RAID1: Size:4.00GiB, Used:2.32GiB (57.91%)
   /dev/nvme0n1p3          4.00GiB
   /dev/nvme1n1p3          4.00GiB

Metadata,RAID1: Size:2.00GiB, Used:47.41MiB (2.31%)
   /dev/nvme0n1p3          2.00GiB
   /dev/nvme1n1p3          2.00GiB

System,RAID1: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/nvme0n1p3         32.00MiB
   /dev/nvme1n1p3         32.00MiB

Unallocated:
   /dev/nvme0n1p3          1.71TiB
   /dev/nvme1n1p3          1.71TiB

The btrfs device stats / command shows the status of the disks, and thankfully, there are no reported input/output errors at this time:

root@Ubuntu-2204-jammy-amd64-base ~ # btrfs device stats /
[/dev/nvme0n1p3].write_io_errs    0
[/dev/nvme0n1p3].read_io_errs     0
[/dev/nvme0n1p3].flush_io_errs    0
[/dev/nvme0n1p3].corruption_errs  0
[/dev/nvme0n1p3].generation_errs  0
[/dev/nvme1n1p3].write_io_errs    0
[/dev/nvme1n1p3].read_io_errs     0
[/dev/nvme1n1p3].flush_io_errs    0
[/dev/nvme1n1p3].corruption_errs  0
[/dev/nvme1n1p3].generation_errs  0
root@Ubuntu-2204-jammy-amd64-base ~ #

root@Ubuntu-2204-jammy-amd64-base ~ # btrfs subvolume list /
ID 256 gen 147 top level 5 path @
root@Ubuntu-2204-jammy-amd64-base ~ #

Conclusion

You now have a robust Ubuntu 22.04 installation using Btrfs with RAID 1, fully bootable from either NVMe drive, and configured with GRUB on both.

This setup provides:

  • Redundancy on both bootloader and filesystem layers
  • Simple expandability for additional Btrfs devices or subvolumes
  • Clean EFI and swap layout


Deploy WordPress with Docker & SSL - No Headaches!


Setting up WordPress with SSL can be a real headache. I’ve simplified the process using Docker, making it much easier to deploy a secure and scalable WordPress site. My latest Medium article walks you through the steps, eliminating the usual frustrations. Read the full guide here: Deploy WordPress with Docker & SSL (No Headaches!)

Adding Multiple Database Servers in Plesk Using Docker

By default, you can only install a single version of MySQL/MariaDB on your server. However, if some of your customers require a different MySQL version, you can use Docker. Plesk allows you to add multiple database servers using its Docker extension. This is particularly useful when hosting multiple applications that require different database versions. In this guide, we will walk through installing and configuring a MariaDB 10.11 Docker container, mapping its ports, and adding it to Plesk as an external database server.

Step 1: Prepare the Server

  1. Log into your server via SSH.
  2. Create a directory for the Docker container’s data storage to ensure persistence:
    mkdir -p /var/docker/mysql/

Step 2: Install and Configure Docker in Plesk

  1. Log in to Plesk.
  2. Navigate to Extensions and ensure the Docker extension is installed. If not, install it.
  3. Go to Docker in Plesk.
  4. In the search box, type mariadb and press Enter.
  5. Select the MariaDB image and choose version 10.11.
  6. Click Next.
  7. Configure the container:
    • Check Automatic start after system reboot.
    • Uncheck Automatic port mapping and manually map:
      • Internal port 3306 to external port 3307.
    • Ensure firewall rules allow traffic on ports 3307 and 33070.
    • Set Volume mapping:
      • Container: /var/lib/mysql
      • Host: /var/docker/mysql
      • Warning: Not mapping this can result in data loss when the container is recreated.
    • Add an environment variable MYSQL_ROOT_PASSWORD and specify a secure root password.
    • Click Run to start the container.
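
For reference, a roughly equivalent docker run command if you prefer the CLI (a sketch; the container name and password below are placeholders):

docker run -d \
  --name mariadb-10.11 \
  --restart unless-stopped \
  -p 3307:3306 \
  -v /var/docker/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD='change-me' \
  mariadb:10.11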

Step 3: Add the MySQL Docker Container as an External Database in Plesk

Once the container is running, we need to add it to Plesk as an external database server:

  1. Log in to Plesk.
  2. Navigate to Tools & Settings > Database Servers (under Applications & Databases).
  3. Click Add Database Server.
  4. Configure the database server:
    • Database server type: Select MariaDB.
    • Hostname or IP Address: Use 127.0.0.1.
    • Port: Enter 3307 (as mapped earlier).
    • Set as Default (Optional): Check Use this server as default for MySQL if you want clients to use this server by default.
    • Credentials: Enter the root user and password specified during container setup.
    • Click OK to save the configuration.

Step 4: Using the New Database Server

Now that the new database server is added, clients can choose it when creating a new database in Plesk:

  1. Go to Databases in Plesk.
  2. Click Add Database.
  3. Under Database server, select the newly added MariaDB 10.11 instance.

This setup allows clients to choose different database servers for their applications while ensuring database persistence and security.

Conclusion

By leveraging Docker, you can efficiently manage multiple database versions on a Plesk server. This method ensures better isolation, easier upgrades, and avoids conflicts between database versions, providing a flexible and robust hosting environment.


How to Install an SSL Certificate


Introduction to SSL Certificates

An SSL (Secure Sockets Layer) certificate is a crucial security feature for websites, ensuring encrypted communication between the browser and the server. SSL protects sensitive information like passwords, payment details, and personal data from being intercepted. Additionally, it boosts user trust by displaying a padlock icon in the browser and improves search engine rankings as search engines prioritize HTTPS-enabled websites.

Installing an SSL certificate is essential to secure your website and provide a safe experience for your users. Below are the high-level steps for installing an SSL certificate on your server.

Steps to Install an SSL Certificate

Step 1: Generate a Certificate Signing Request (CSR)

To get an SSL certificate, you first need to generate a Certificate Signing Request (CSR), which includes your website’s details:

  • Generate a Private Key:

    Use a tool like OpenSSL to create a private key:

    openssl genrsa -out private.key 2048

    Store the private key securely, as it is required during SSL installation.

    Important: Never share the private key.

  • Generate the CSR:

    Use the private key to generate a CSR:

    openssl req -new -key private.key -out csr.pem

    Provide the requested details, including:

    • Common Name (the domain name to be secured)
    • Organization Name (for business validation)
    • Country, State, and City
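
Alternatively, both steps can be combined into a single non-interactive command (a sketch; replace the subject fields with your own details):

openssl req -new -newkey rsa:2048 -nodes \
  -keyout private.key -out csr.pem \
  -subj "/C=US/ST=California/L=Los Angeles/O=Example Inc/CN=example.com"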

Step 2: Purchase or Obtain an SSL Certificate

  • Choose a Certificate Authority (CA) or hosting provider for your SSL certificate.
  • Submit the CSR to the CA for verification.
  • Validate your domain ownership through one of the following methods:
    • Email Validation: Respond to an email sent to your domain’s administrative address.
    • DNS Validation: Add a specific DNS record to your domain.
    • HTTP Validation: Upload a verification file to your website.
  • For Extended Validation (EV) or Organization Validation (OV) certificates, additional steps like verifying your business details with the CA may be required.
  • Once validated, download the issued SSL certificate and intermediate certificate bundle (CA bundle).

Step 3: Install the SSL Certificate on the Server

  • If Using a Control Panel:

    Log in to the hosting control panel (e.g., cPanel, Plesk).

    Navigate to the SSL/TLS or security settings.

    Upload the SSL certificate, CA bundle, and private key.

    Follow the instructions to install the certificate.

  • If No Control Panel:

    Log in to the server via SSH.

    Configure the web server (e.g., Apache, Nginx) to include the certificate details:

    • SSL certificate file (.crt or .pem)
    • Private key file
    • Intermediate certificate file (CA bundle)

    Restart the web server to apply the changes.
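
Before restarting, it is worth checking that the certificate and private key actually match; a common check (assuming the file names used here) is to compare their modulus hashes:

openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5
# Both commands should print the same hash.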

Step 4: Test the SSL Installation

  • Use online tools like SSL Labs SSL Test to verify your SSL setup.
  • Confirm that the certificate is valid and properly installed.
  • Ensure no SSL errors or warnings are displayed.

Step 5: Update Website Links

Update all internal links and references from http:// to https:// to avoid mixed content errors. Update your CMS settings (e.g., WordPress URL settings) to use HTTPS.

Step 6: Set Up HTTPS Redirects

Redirect all HTTP traffic to HTTPS by default to ensure all users access the secure version of your site.
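
On Apache, for example, this is commonly done with a rewrite rule in the site's .htaccess file (a sketch; Nginx and hosting control panels have their own equivalents):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]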

Step 7: Monitor and Renew the SSL Certificate

  • Keep track of the certificate’s expiration date and renew it on time.
  • For free SSL certificates like Let’s Encrypt, automate the renewal process using tools like Certbot.
  • Periodically test your website’s SSL configuration for potential issues or updates.
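
With Certbot, for example, automated renewal is often just a cron entry (a sketch, assuming certbot is installed and configured):

0 3 * * * certbot renew --quiet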


How to Migrate a WordPress Site from One Host to Another: A Step-by-Step Guide








WordPress is one of the most popular content management systems (CMS) in the world, powering over 40% of websites globally. Its flexibility, ease of use, and vast ecosystem of plugins and themes make it a favorite among bloggers, businesses, and developers alike.

At its core, WordPress has a simple structure:

  • Files: These include the WordPress core, themes, plugins, and the wp-content folder where your media files are stored.
  • Database: This stores all the critical information such as posts, pages, user data, and site configurations.

When migrating a WordPress site, it is essential to back up both the files and the database, as they work together to run your WordPress site seamlessly. Missing either part can cause errors or data loss. In this guide, we’ll walk you through the high-level steps of migrating your WordPress site to a new hosting provider.

Step 1: Backup Your Website Files

Backing up your WordPress files ensures that your themes, plugins, and media are safe. You can do this using the following methods:

  • Using an FTP Client:

    Connect to your existing hosting account using an FTP client like FileZilla. Download all the WordPress files, especially the wp-content folder, which contains your themes, plugins, and uploads.

  • Using SCP for Secure Transfers:

    If you have SSH access, use the scp command to securely copy files from your server to your local machine or another server:

    scp -r username@oldhost:/path/to/wordpress /path/to/local/backup
  • Using the File Manager provided by the hosting control panel:

    If your web host or control panel provides a file manager, you can compress the files there and download the resulting archive.

Step 2: Export the WordPress Database

The database is the heart of your WordPress site, storing all the content and settings. It’s crucial to back it up properly:

  • Using phpMyAdmin:

    Log in to your hosting control panel and open phpMyAdmin. Select your WordPress database, click the “Export” tab, and download it as a .sql file.

  • Using mysqldump via SSH:

    If you have SSH access, create a backup of your database using the mysqldump command:

    mysqldump -u username -p database_name > backup.sql

Step 3: Set Up the New Hosting Environment

Before importing the database, create a new database and user on the new hosting account:

  • Log in to your new hosting control panel or use SSH to access the server.
  • Create a new database and database user, assigning the necessary privileges.
  • Take note of the database name, username, and password for the next steps.
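
Over SSH, this step might look like the following sketch (the database name, user, and password are placeholders):

mysql -u root -p <<'SQL'
CREATE DATABASE wordpress_db;
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'strong_password';
GRANT ALL PRIVILEGES ON wordpress_db.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
SQL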

Step 4: Upload Website Files to the New Host

Use an FTP client, SCP, or File Manager to upload your WordPress files to the new hosting environment. Double-check that all files, particularly those in the wp-content folder, are uploaded correctly.

Step 5: Import the WordPress Database

  • Using phpMyAdmin:

    Open phpMyAdmin on the new host, select the newly created database, and import the .sql file you exported earlier.

  • Using mysql via SSH:

    If you have SSH access, import the database using the following command:

    mysql -u username -p database_name < backup.sql

Step 6: Update the wp-config.php File

Open the wp-config.php file in the root directory of your WordPress site on the new host. Update the database details to match the new database:


define('DB_NAME', 'your_new_database_name');
define('DB_USER', 'your_new_database_user');
define('DB_PASSWORD', 'your_new_database_password');
define('DB_HOST', 'localhost'); // Or the database host provided by your new host
    

Step 7: Test the Website

Update your local hosts file or use a temporary URL provided by your new host to test the site. Verify that all pages, posts, media, plugins, and themes are working correctly.
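
For example, to preview the site before changing DNS, you can point the domain at the new server in your local hosts file (the IP and domain below are placeholders):

# /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows)
203.0.113.10  example.com www.example.com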

Step 8: Update DNS Records

  • Log in to your domain registrar and update the DNS settings to point to your new hosting server.
  • Typically, you will update the A record (IP address) or nameservers.
  • Allow up to 48 hours for DNS propagation.

Step 9: Monitor the Website Post-Migration

  • After the DNS propagation, thoroughly test your website again to ensure everything is functioning as expected.
  • Monitor for broken links, missing media, or issues with plugins or themes.

Bonus Tips for a Smooth Migration

  • Use plugins like All-in-One WP Migration or UpdraftPlus if you're not comfortable with manual methods.
  • Always check for PHP and MySQL compatibility between the old and new hosts.
  • Keep backups until you're certain the migration is successful.

By following these steps, you can confidently migrate your WordPress site to a new hosting provider. With proper planning and attention to detail, the transition can be smooth and hassle-free.


Protecting Critical Packages in YUM to Prevent Unintended Removal

Managing RPM-based systems with tools like YUM (Yellowdog Updater, Modified) is an integral part of provisioning and maintaining Linux servers. While YUM simplifies the process of managing package dependencies, it can sometimes lead to unintended consequences, especially when developers remove a package that has critical dependencies. In this blog, we'll explore a common use case and demonstrate how to safeguard important packages using YUM's package protection features.

The Problem: Accidental Removal of Critical Packages

Let's consider a scenario: you have a custom package called dep-web that automates server provisioning by installing essential components like httpd, mod_ssl, and ingest, along with scripts and cron jobs critical to your environment. When a developer installs dep-web, everything works seamlessly. However, issues arise when they attempt to test a specific version of ingest.

A typical action might be:

yum remove ingest

This operation not only removes ingest but also uninstalls dep-web, since dep-web depends on ingest. Consequently, all the additional configurations, scripts, and cron jobs set up by dep-web are also removed. Even if the developer reinstalls ingest, dep-web and its functionality are not restored, leading to potential operational disruptions.

Developers may not always notice these cascading effects, causing long-term inconsistencies and errors in the environment. Clearly, there is a need to prevent the accidental removal of critical packages like dep-web.

The Solution: Protecting Packages in YUM

YUM includes functionality to prevent the removal of certain packages using the /etc/yum/protected.d directory and the yum-plugin-protect-packages plugin. By default, YUM protects itself and its dependencies (e.g., rpm, python, glibc) from being uninstalled. However, administrators can extend this protection to other packages.

Steps to Protect Critical Packages

1. Install the YUM Plugin

Ensure the yum-plugin-protect-packages plugin is installed on your system:

yum install yum-plugin-protect-packages

2. Create a Configuration File

Add your critical package to the protected list by creating a .conf file under /etc/yum/protected.d/. For example, to protect the dep-web package:

vi /etc/yum/protected.d/dep-web.conf

Add the following content:

dep-web

Save and close the file.

3. Verify the Protection

Attempt to remove the protected package to test the configuration:

yum remove dep-web

YUM will block the operation and display an error message, ensuring the package remains intact:

Error: Trying to remove "dep-web", which is protected

4. Add Additional Packages (Optional)

If there are other critical packages that need protection, create or edit their respective .conf files under the same directory.

Benefits of Package Protection

By implementing package protection, you can:

  • Prevent the accidental removal of critical packages and their dependencies.
  • Ensure that operational scripts, configurations, and cron jobs tied to these packages are preserved.
  • Enhance the reliability of your environment, especially in shared development and production systems.

Conclusion

Managing dependencies with YUM requires careful oversight, particularly in environments where multiple developers and administrators interact with the system. Protecting critical packages using YUM's protected.d directory and plugins like yum-plugin-protect-packages provides a robust safeguard against unintended package removal.

In the example of dep-web, protecting the package ensures that its functionality, including the custom scripts and cron jobs, remains intact. This small configuration step can save countless hours of troubleshooting and recovery in large-scale deployments.

Proactively implementing such measures demonstrates a commitment to best practices in system administration, reducing downtime and fostering a more stable infrastructure.

Automating Email Cleanup with doveadm expunge

Managing email storage is a crucial part of maintaining efficient mail servers, especially for administrators using Dovecot. Over time, mailboxes can accumulate a massive number of emails, leading to performance issues and potential storage costs. One effective way to manage this is by automatically deleting emails older than a specific period. In this blog, we’ll discuss how to use doveadm expunge to delete old emails.

Understanding the Basics

Dovecot's doveadm expunge command is a powerful utility for deleting emails based on specified criteria. Here's a quick overview of the command syntax:

doveadm expunge -u <user> mailbox <mailbox> <search_query>

-u <user>: Specifies the user mailbox.
mailbox <mailbox>: Specifies the folder, such as INBOX, INBOX.Spam, etc.
<search_query>: Defines the filter for emails to be deleted, e.g., before 1w (one week) or before 2w (two weeks).

Use Cases

1. List Existing Mailboxes

Before deleting emails, identify the folders within a specific mailbox. Use the following command:

doveadm mailbox list -u user@example.com

Sample output:

INBOX
INBOX.Spam
INBOX.Drafts
INBOX.Trash
INBOX.Sent

2. Delete Emails Older Than 2 Weeks in All Folders

To remove all emails older than two weeks in all folders for a specific mailbox:

doveadm expunge -u user@example.com mailbox '*' before 2w

3. Exclude the INBOX Folder While Deleting

If you want to delete old emails from all folders except INBOX, use:

doveadm expunge -u user@example.com mailbox INBOX.'*' before 2w

4. Delete All Emails in a Mailbox

To delete all emails from all folders within a specific mailbox:

doveadm expunge -u user@example.com mailbox '*' all
Bulk Removal of Old Emails

When managing multiple accounts, you may need to automate the process for all mailboxes on a server. Here's how to approach this on Plesk and cPanel.

Step 1: Generate a List of Mailboxes

For Plesk, run the following command to get a list of all active mailboxes:

plesk db -Ne "select concat(m.mail_name,'@',d.name) as mailbox, m.postbox from domains d, mail m, accounts a where m.dom_id=d.id and m.account_id=a.id and m.postbox='true'" | awk '{print $1}' >mbox.txt

For cPanel, generate a list of all mailboxes with:

for i in $(awk '{print $2}' /etc/trueuserdomains); do uapi --user=$i Email list_pops | egrep "\s+email:" ; done | awk '{print $2}' >mbox.txt
Step 2: Automate Deletion with a Script

Create a shell script (mailbox-doveadm-expunge.sh) to process the mailboxes:

#!/bin/bash
# Script to delete emails older than 2 weeks from all mailboxes

MAILBOX_FILE="mbox.txt"

if [ ! -f "$MAILBOX_FILE" ]; then
    echo "Mailbox list file $MAILBOX_FILE not found!"
    exit 1
fi

for mailbox in $(cat $MAILBOX_FILE); do
    echo "Processing mailbox: $mailbox"
    doveadm expunge -u $mailbox mailbox 'INBOX' before 2w
    doveadm expunge -u $mailbox mailbox 'INBOX.*' before 2w
    doveadm expunge -u $mailbox mailbox 'Sent' before 2w
    doveadm expunge -u $mailbox mailbox 'Trash' before 2w
    doveadm expunge -u $mailbox mailbox 'Drafts' before 2w
    doveadm expunge -u $mailbox mailbox 'Spam' before 2w
done

Save the script and ensure it has executable permissions:

chmod +x mailbox-doveadm-expunge.sh

Run the script:

./mailbox-doveadm-expunge.sh

Best Practices

1. Backup Emails: Before performing a mass deletion, create a backup of your mail directories.
2. Test on a Single Mailbox: Verify your deletion criteria by testing on a single mailbox before applying changes in bulk, as shown below.
3. Monitor Logs: After running doveadm expunge, check Dovecot logs for errors or warnings.
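
One safe way to test is to preview which messages match your criteria before deleting anything; doveadm search accepts the same query syntax (a sketch, using the example mailbox from above):

doveadm search -u user@example.com mailbox INBOX before 2w

Each output line identifies a matching message (mailbox GUID and UID) without touching it.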

Conclusion

Using doveadm expunge simplifies email management and helps prevent mail server overload by automatically removing old emails. Whether you're working with individual accounts or hundreds of mailboxes, this approach can save significant time and effort. Integrate this cleanup process into your routine server maintenance to keep your mail system optimized.

Configuring Multiple IP Addresses with Netplan on Ubuntu 24.04

Netplan has simplified network configuration management in Ubuntu. This tutorial will guide you through setting up multiple IP addresses on a single network interface using Netplan.

Prerequisites

  • An Ubuntu 24.04 system with Netplan installed (default in Ubuntu installations).
  • Administrative (root) privileges or sudo access.
  • A network interface name (e.g., ens192).

Step-by-Step Configuration

1. Edit the Netplan Configuration File

Create or modify a Netplan configuration file, typically located in `/etc/netplan/`. Here, we’ll use `00-Public_network.yaml`.

Run:


  sudo nano /etc/netplan/00-Public_network.yaml
  

Add the following configuration:


  network:
    version: 2
    renderer: networkd
    ethernets:
      ens192:
        addresses:
          - 77.68.48.229/32
          - 77.68.13.96/32
          - 77.68.115.25/32
        nameservers:
          addresses: [212.227.123.16, 212.227.123.17]
        routes:
          - to: default
            via: 10.255.255.1
            on-link: true
  

2. Apply the Configuration

After saving the file, apply the changes using the following command:


  sudo netplan --debug apply
  

The `--debug` flag provides detailed logs for troubleshooting.

3. Verify the Configuration

Check the IP addresses and routing table to ensure the configuration is applied correctly.

Verify IP addresses:


  ip a
  

Expected output:


  inet 77.68.48.229/32 scope global ens192
  inet 77.68.13.96/32 scope global ens192
  inet 77.68.115.25/32 scope global ens192
  

Verify routing table:


  ip route show
  

Expected output:


  default via 10.255.255.1 dev ens192 proto static onlink
  

Understanding Key Network Configuration Components

Nameservers

These are DNS (Domain Name System) servers responsible for translating domain names (e.g., example.com) into IP addresses.

In the configuration:


  nameservers:
    addresses: [212.227.123.16, 212.227.123.17]
  

These are the DNS servers provided by your hosting provider.

If the provider does not specify DNS servers, you can use public options such as:

  • Google: 8.8.8.8, 8.8.4.4
  • Cloudflare: 1.1.1.1, 1.0.0.1
  • OpenDNS: 208.67.222.222, 208.67.220.220

Default Gateway

The default gateway routes traffic from your system to other networks, such as the internet.

In the configuration:


  routes:
    - to: default
      via: 10.255.255.1
      on-link: true
  

`via 10.255.255.1`: The gateway IP provided by the server/hosting provider.

`on-link: true`: Indicates the gateway is directly reachable on the local link.

Always use the gateway provided by your hosting provider.

Troubleshooting

If changes are not applied, check the configuration syntax:


  sudo netplan generate
  

Look for error messages in `/var/log/syslog` for additional details.
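
You can also apply changes interactively with netplan try, which rolls the configuration back automatically unless you confirm it within a timeout, a useful safety net when working on a remote server:

  sudo netplan try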

Conclusion

Using Netplan, you can easily assign multiple IP addresses to a single network interface in Ubuntu 24.04. This setup is ideal for scenarios like hosting multiple websites or services requiring distinct public IPs. Ensure you use the nameservers and default gateway provided by your hosting provider for proper network connectivity. Public DNS servers can be used as an alternative if needed.


Tracking File Activity (Deletion) with auditd and Process Accounting in Linux

Maintaining a secure system involves monitoring file system activity, especially tracking file deletions, creations, and other modifications. This blog post explores how to leverage two powerful tools, auditd and process accounting with /usr/sbin/accton (provided by the psacct package), to gain a more comprehensive understanding of these events in Linux.

Introduction

Tracking file deletions in a Linux environment can be challenging. Traditional file monitoring tools often lack the capability to provide detailed information about who performed the deletion, when it occurred, and which process was responsible. This gap in visibility can be problematic for system administrators and security professionals who need to maintain a secure and compliant system.

To address this challenge, we can combine auditd, which provides detailed auditing capabilities, with process accounting (psacct), which tracks process activity. By integrating these tools, we can gain a more comprehensive view of file deletions and the processes that cause them.

What We’ll Cover:

1. Understanding auditd and Process Accounting
2. Installing and Configuring psacct
3. Enabling Audit Tracking and Process Accounting
4. Setting Up Audit Rules with auditctl
5. Simulating File Deletion
6. Analyzing Audit Logs with ausearch
7. Linking Process ID to Process Name using psacct
8. Understanding Limitations and Best Practices

Prerequisites:

1. Basic understanding of Linux commands
2. Root or sudo privileges
3. Auditd package installed (installed by default on most of the distros)

1. Understanding the Tools

auditd: The Linux audit daemon logs security-relevant events, including file system modifications. It allows you to track who is accessing the system, what they are doing, and the outcome of their actions.

Process Accounting: Linux keeps track of resource usage for processes. By analyzing process IDs (PIDs) obtained from auditd logs and utilizing tools like /usr/sbin/accton and dump-acct (provided by psacct), we can potentially identify the process responsible for file system activity. However, it’s important to understand that process accounting data itself doesn’t directly track file deletions.

2. Installing and Configuring psacct

First, install the psacct package using your distribution’s package manager if it’s not already present:

# For Debian/Ubuntu based systems
sudo apt install acct

# For Red Hat/CentOS based systems
sudo yum install psacct

3. Enabling Audit Tracking and Process Accounting

Ensure auditd is running by checking its service status:

sudo systemctl status auditd

If not running, enable and start it:

sudo systemctl enable auditd
sudo systemctl start auditd


Next, initiate recording process accounting data:

sudo /usr/sbin/accton /var/log/account/pacct

This will start saving the process information in the log file /var/log/account/pacct.

4. Setting Up Audit Rules with auditctl

To ensure audit rules persist across reboots, add the rule to the audit configuration file. The location of this file may vary based on the distribution:

For Debian/Ubuntu, use /etc/audit/rules.d/audit.rules
For Red Hat/CentOS, use /etc/audit/audit.rules

Open the appropriate file in a text editor with root privileges and add the following line to monitor deletions within a sample directory:

-w /var/tmp -p wa -k sample_file_deletion

Explanation:

-w: Specifies the directory to watch (here, /var/tmp)
-p wa: Monitors write (w) and attribute (a) changes (deleting a file modifies its directory entry and attributes)
-k sample_file_deletion: Assigns a unique key for easy identification in logs


After adding the rule, restart the auditd service to apply the changes:

sudo systemctl restart auditd

5. Simulating File Deletion

Create a test file in the sample directory and delete it:

touch /var/tmp/test_file
rm /var/tmp/test_file

6. Analyzing Audit Logs with ausearch

Use ausearch to search audit logs for the deletion event:


sudo ausearch -k sample_file_deletion

This command will display audit records related to the deletion you simulated. Look for entries indicating a "delete" operation within your sample directory and note down the process ID for the action.

# ausearch -k sample_file_deletion
...
----
time->Sat Jun 16 04:02:25 2018
type=PROCTITLE msg=audit(1529121745.550:323): proctitle=726D002D69002F7661722F746D702F746573745F66696C65
type=PATH msg=audit(1529121745.550:323): item=1 name="/var/tmp/test_file" inode=16934921 dev=ca:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:user_tmp_t:s0 objtype=DELETE cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PATH msg=audit(1529121745.550:323): item=0 name="/var/tmp/" inode=16819564 dev=ca:01 mode=041777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tmp_t:s0 objtype=PARENT cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=CWD msg=audit(1529121745.550:323):  cwd="/root"
type=SYSCALL msg=audit(1529121745.550:323): arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=9930c0 a2=0 a3=7ffe9f8f2b20 items=2 ppid=2358 pid=2606 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=2 comm="rm" exe="/usr/bin/rm" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="sample_file_deletion"

As you can see in the above log, the user root (uid=0) deleted (exe="/usr/bin/rm") the file /var/tmp/test_file. Note down the ppid=2358 and pid=2606 as well. If the file was deleted by a script or cron job, you would need these to track it down.

7. Linking Process ID to Process Name using psacct

The audit logs will contain a process ID (PID) associated with the deletion. Utilize this PID to identify the potentially responsible process:

Process Information from dump-acct

After stopping process accounting recording with sudo /usr/sbin/accton off, analyze the captured data:

sudo dump-acct /var/log/account/pacct

This output shows various process details, including PIDs, command names, and timestamps. However, due to the nature of process accounting, it might not directly pinpoint the culprit. Processes might have terminated after the deletion, making it challenging to definitively identify the responsible one. You can grep the PPID or PID we received from the audit log against the output of the dump-acct command.

sudo dump-acct /var/log/account/pacct | tail
grotty          |v3|     0.00|     0.00|     2.00|  1000|  1000| 12000.00|     0.00|  321103|  321101|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
groff           |v3|     0.00|     0.00|     2.00|  1000|  1000|  6096.00|     0.00|  321101|  321095|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
nroff           |v3|     0.00|     0.00|     4.00|  1000|  1000|  2608.00|     0.00|  321095|  321087|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
man             |v3|     0.00|     0.00|     4.00|  1000|  1000| 10160.00|     0.00|  321096|  321087| F   |       0|pts/1   |Fri Aug 14 13:26:07 2020
pager           |v3|     0.00|     0.00|  2018.00|  1000|  1000|  8440.00|     0.00|  321097|  321087|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
man             |v3|     2.00|     0.00|  2021.00|  1000|  1000| 10160.00|     0.00|  321087|  318116|     |       0|pts/1   |Fri Aug 14 13:26:07 2020
clear           |v3|     0.00|     0.00|     0.00|  1000|  1000|  2692.00|     0.00|  321104|  318116|     |       0|pts/1   |Fri Aug 14 13:26:30 2020
dump-acct       |v3|     2.00|     0.00|     2.00|  1000|  1000|  4252.00|     0.00|  321105|  318116|     |       0|pts/1   |Fri Aug 14 13:26:35 2020
tail            |v3|     0.00|     0.00|     2.00|  1000|  1000|  8116.00|     0.00|  321106|  318116|     |       0|pts/1   |Fri Aug 14 13:26:35 2020
clear           |v3|     0.00|     0.00|     0.00|  1000|  1000|  2692.00|     0.00|  321107|  318116|     |       0|pts/1   |Fri Aug 14 13:26:45 2020

To better understand what you’re looking at, you may want to add column headings as I have done with these commands:

echo "Command vers runtime systime elapsed UID GID mem_use chars PID PPID ? retcode term date/time" "
sudo dump-acct /var/log/account/pacct | tail -5

Command         vers  runtime   systime   elapsed    UID    GID   mem_use     chars      PID     PPID  ?   retcode   term     date/time
tail            |v3|     0.00|     0.00|     3.00|     0|     0|  8116.00|     0.00|  358190|  358188|     |       0|pts/1   |Sat Aug 15 11:30:05 2020
pacct           |v3|     0.00|     0.00|     3.00|     0|     0|  9624.00|     0.00|  358188|  358187|S    |       0|pts/1   |Sat Aug 15 11:30:05 2020
sudo            |v3|     0.00|     0.00|     4.00|     0|     0| 10984.00|     0.00|  358187|  354579|S    |       0|pts/1   |Sat Aug 15 11:30:05 2020
gmain           |v3|    14.00|     3.00|  1054.00|  1000|  1000|  1159680|     0.00|  358169|    3179|    X|       0|__      |Sat Aug 15 11:30:03 2020
vi              |v3|     0.00|     0.00|   456.00|  1000|  1000| 10976.00|     0.00|  358194|  354579|     |       0|pts/1   |Sat Aug 15 11:30:28 2020
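
To narrow the output down, you can filter the accounting dump for the PIDs captured in the audit log (2606 and its parent 2358 in the example above):

sudo dump-acct /var/log/account/pacct | grep -E '2606|2358'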

Alternative: lastcomm (Limited Effectiveness)

In some cases, you can try lastcomm to potentially retrieve the command associated with the PID, even if the process has ended. However, its effectiveness depends on system configuration and might not always be reliable.
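
For example, to list recent recorded invocations of rm (a sketch; lastcomm searches by command name, user, or terminal rather than by PID):

sudo lastcomm rm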

Important Note

While combining auditd with process accounting can provide insights, it’s crucial to understand the limitations. Process accounting data offers a broader picture of resource usage but doesn’t directly correlate to specific file deletions. Additionally, processes might terminate quickly, making it difficult to trace back to a specific action.

Best Practices

1. Regular Monitoring: Regularly monitor and analyze audit logs to stay ahead of potential security breaches.
2. Comprehensive Logging: Ensure comprehensive logging by setting appropriate audit rules and keeping process accounting enabled.
3. Timely Responses: Respond quickly to any suspicious activity by investigating audit logs and process accounting data promptly.

By combining the capabilities of auditd and process accounting, you can enhance your ability to track and understand file system activity, thereby strengthening your system’s security posture.
