Setting up WordPress with SSL can be a real headache. I’ve simplified the process using Docker, making it much easier to deploy a secure and scalable WordPress site. My latest Medium article walks you through the steps, eliminating the usual frustrations. Read the full guide here: Deploy WordPress with Docker & SSL (No Headaches!)
Adding Multiple Database Servers in Plesk Using Docker
By default, you can only install a single version of MySQL/MariaDB on your server. However, if some of your customers require a different MySQL version, you can use Docker. Plesk allows you to add multiple database servers using its Docker extension. This is particularly useful when hosting multiple applications that require different database versions. In this guide, we will walk through installing and configuring a MariaDB 10.11 Docker container, mapping its ports, and adding it to Plesk as an external database server.
Step 1: Prepare the Server
- Log into your server via SSH.
- Create a directory for the Docker container’s data storage to ensure persistence:
mkdir -p /var/docker/mysql/
Step 2: Install and Configure Docker in Plesk
- Log in to Plesk.
- Navigate to Extensions and ensure the Docker extension is installed. If not, install it.
- Go to Docker in Plesk.
- In the search box, type `mariadb` and press Enter.
- Select the MariaDB image and choose version 10.11.
- Click Next.
- Configure the container:
  - Check Automatic start after system reboot.
  - Uncheck Automatic port mapping and manually map internal port `3306` to external port `3307`.
  - Ensure firewall rules allow traffic on ports `3307` and `33070`.
  - Set Volume mapping:
    - Container: `/var/lib/mysql`
    - Host: `/var/docker/mysql`
    - Warning: Not mapping this can result in data loss when the container is recreated.
  - Add an environment variable `MYSQL_ROOT_PASSWORD` and specify a secure root password.
- Click Run to start the container.
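If you prefer the command line, the container Plesk builds corresponds roughly to the `docker run` invocation below. This is a sketch: the container name and the password value are placeholders, and the port, volume, and environment settings mirror the steps above.

```shell
# Sketch of the equivalent `docker run` for the container Plesk creates.
# The MYSQL_ROOT_PASSWORD value is a placeholder -- use your own secure password.
DOCKER_CMD="docker run -d \
  --name mariadb-10.11 \
  --restart unless-stopped \
  -p 3307:3306 \
  -v /var/docker/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=ChangeMe_S3cure \
  mariadb:10.11"

# Echo the command so it can be reviewed before running it on a Docker host.
echo "$DOCKER_CMD"
```

Echoing the command lets you review it first; on a host with Docker installed you would run it directly (Plesk's Docker extension performs the equivalent for you).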
Step 3: Add the MySQL Docker Container as an External Database in Plesk
Once the container is running, we need to add it to Plesk as an external database server:
- Log in to Plesk.
- Navigate to Tools & Settings > Database Servers (under Applications & Databases).
- Click Add Database Server.
- Configure the database server:
- Database server type: Select MariaDB.
- Hostname or IP Address: Use `127.0.0.1`.
- Port: Enter `3307` (as mapped earlier).
- Set as Default (Optional): Check Use this server as default for MySQL if you want clients to use this server by default.
- Credentials: Enter the root user and password specified during container setup.
- Click OK to save the configuration.
Step 4: Using the New Database Server
Now that the new database server is added, clients can choose it when creating a new database in Plesk:
- Go to Databases in Plesk.
- Click Add Database.
- Under Database server, select the newly added MariaDB 10.11 instance.
This setup allows clients to choose different database servers for their applications while ensuring database persistence and security.
Conclusion
By leveraging Docker, you can efficiently manage multiple database versions on a Plesk server. This method ensures better isolation, easier upgrades, and avoids conflicts between database versions, providing a flexible and robust hosting environment.
SSH (Secure Shell) relies on public-key cryptography for secure logins. But how can you be sure your public and private key pair are actually linked? This blog post will guide you through a simple method to verify their authenticity in Linux and macOS.
Understanding the Key Pair:
Imagine a lock and key. Your public key acts like the widely distributed lock – anyone can see it. The private key is the unique counterpart, kept secret, that unlocks the metaphorical door (your server) for SSH access.
Using ssh-keygen
This method leverages the ssh-keygen tool, already available on most Linux and macOS systems.
1. Locate the keys: Open a terminal and use cd to navigate to the directory where your private key resides (e.g., cd ~/.ssh).
2. Use the command `ls -al` to list all files in the directory, and locate the private/public key pair you wish to check.
Example:
ababwaha@ababwaha-mac .ssh % ls -al
total 32
drwx------   6 ababwaha staff  192 Jun 24 16:04 .
drwxr-x---+ 68 ababwaha staff 2176 Jun 24 16:04 ..
-rw-------   1 ababwaha staff  411 Jun 24 16:04 id_ed25519
-rw-r--r--   1 ababwaha staff  103 Jun 24 16:04 id_ed25519.pub
-rw-------   1 ababwaha staff 3389 Jun 24 16:04 id_rsa
-rw-r--r--   1 ababwaha staff  747 Jun 24 16:04 id_rsa.pub
3. Verify the Key Pair: Run the following command for both files, replacing `<keyfile>` with the name of the key file you want to check:
ssh-keygen -lf <keyfile>
This command displays fingerprint information about your key pair.
ababwaha@ababwaha-mac .ssh % ssh-keygen -l -f id_rsa
4096 SHA256:7qXL09ejiSkrKs8HfhEo8EXkUVFOsoPfv52QY/l/kzg ababwaha@ababwaha-mac (RSA)
ababwaha@ababwaha-mac .ssh % ssh-keygen -l -f id_rsa.pub
4096 SHA256:7qXL09ejiSkrKs8HfhEo8EXkUVFOsoPfv52QY/l/kzg ababwaha@ababwaha-mac (RSA)
ababwaha@ababwaha-mac .ssh % ssh-keygen -l -f id_ed25519
256 SHA256:4pWu5rdA1IvbbjD7/k4/k/7A4X6kft28MpKL1HMqmgQ ababwaha@ababwaha-mac (ED25519)
ababwaha@ababwaha-mac .ssh % ssh-keygen -l -f id_ed25519.pub
256 SHA256:4pWu5rdA1IvbbjD7/k4/k/7A4X6kft28MpKL1HMqmgQ ababwaha@ababwaha-mac (ED25519)
4. Match the Fingerprints: Compare the fingerprints reported for the private key and the public key. If they are identical, congratulations! Your public and private keys are a verified pair.
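The comparison can also be scripted. The sketch below compares the fingerprint field (the second column) of `ssh-keygen -lf` output for a private/public pair; the two sample lines are copied from the session above, and the commented commands show how you would capture them live.

```shell
# On a real system you would capture the two lines like this:
#   priv=$(ssh-keygen -lf id_rsa)
#   pub=$(ssh-keygen -lf id_rsa.pub)
# Here we use the sample output from the session above as stand-in data.
priv="4096 SHA256:7qXL09ejiSkrKs8HfhEo8EXkUVFOsoPfv52QY/l/kzg ababwaha@ababwaha-mac (RSA)"
pub="4096 SHA256:7qXL09ejiSkrKs8HfhEo8EXkUVFOsoPfv52QY/l/kzg ababwaha@ababwaha-mac (RSA)"

# The fingerprint is the second whitespace-separated field.
fp_priv=$(echo "$priv" | awk '{print $2}')
fp_pub=$(echo "$pub" | awk '{print $2}')

if [ "$fp_priv" = "$fp_pub" ]; then
  RESULT="MATCH"
else
  RESULT="MISMATCH"
fi
echo "$RESULT"
```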
By following this method, you can easily verify the authenticity of your public and private SSH key pair, ensuring a secure connection to your server.
Maintaining a secure system involves monitoring file system activity, especially tracking file deletions, creations, and other modifications. This blog post explores how to leverage two powerful tools, auditd and process accounting with /usr/sbin/accton (provided by the psacct package), to gain a more comprehensive understanding of these events in Linux.
Introduction
Tracking file deletions in a Linux environment can be challenging. Traditional file monitoring tools often lack the capability to provide detailed information about who performed the deletion, when it occurred, and which process was responsible. This gap in visibility can be problematic for system administrators and security professionals who need to maintain a secure and compliant system.
To address this challenge, we can combine auditd, which provides detailed auditing capabilities, with process accounting (psacct), which tracks process activity. By integrating these tools, we can gain a more comprehensive view of file deletions and the processes that cause them.
What We’ll Cover:
1. Understanding auditd and Process Accounting
2. Installing and Configuring psacct
3. Enabling Audit Tracking and Process Accounting
4. Setting Up Audit Rules with auditctl
5. Simulating File Deletion
6. Analyzing Audit Logs with ausearch
7. Linking Process ID to Process Name using psacct
8. Understanding Limitations and Best Practices
Prerequisites:
1. Basic understanding of Linux commands
2. Root or sudo privileges
3. Auditd package installed (it is installed by default on most distributions)
1. Understanding the Tools
auditd: The Linux audit daemon logs security-relevant events, including file system modifications. It allows you to track who is accessing the system, what they are doing, and the outcome of their actions.
Process Accounting: Linux keeps track of resource usage for processes. By analyzing process IDs (PIDs) obtained from auditd logs and utilizing tools like /usr/sbin/accton and dump-acct (provided by psacct), we can potentially identify the process responsible for file system activity. However, it’s important to understand that process accounting data itself doesn’t directly track file deletions.
2. Installing and Configuring psacct
First, install the psacct package using your distribution’s package manager if it’s not already present:
# For Debian/Ubuntu based systems
sudo apt install acct
# For Red Hat/CentOS based systems
sudo yum install psacct
3. Enabling Audit Tracking and Process Accounting
Ensure auditd is running by checking its service status:
sudo systemctl status auditd
If not running, enable and start it:
sudo systemctl enable auditd
sudo systemctl start auditd
Next, initiate recording process accounting data:
sudo /usr/sbin/accton /var/log/account/pacct
This will start saving the process information in the log file /var/log/account/pacct.
4. Setting Up Audit Rules with auditctl
To ensure audit rules persist across reboots, add the rule to the audit configuration file. The location of this file may vary based on the distribution:
For Debian/Ubuntu, use /etc/audit/rules.d/audit.rules
For Red Hat/CentOS, use /etc/audit/audit.rules
Open the appropriate file in a text editor with root privileges and add the following line to monitor deletions within a sample directory:
-w /var/tmp -p wa -k sample_file_deletion
Explanation:
-w: Specifies the directory to watch (here /var/tmp)
-p wa: Monitors both write (w) and attribute (a) changes (deletion modifies attributes)
-k sample_file_deletion: Assigns a unique key for easy identification in logs
After adding the rule, restart the auditd service to apply the changes:
sudo systemctl restart auditd
5. Simulating File Deletion
Create a test file in the sample directory and delete it:
touch /var/tmp/test_file
rm /var/tmp/test_file
6. Analyzing Audit Logs with ausearch
Use ausearch to search audit logs for the deletion event:
sudo ausearch -k sample_file_deletion
This command will display audit records related to the deletion you simulated. Look for entries indicating a “delete” operation within your sample directory, and note down the process ID for the action.
# ausearch -k sample_file_deletion
...
----
time->Sat Jun 16 04:02:25 2018
type=PROCTITLE msg=audit(1529121745.550:323): proctitle=726D002D69002F7661722F746D702F746573745F66696C65
type=PATH msg=audit(1529121745.550:323): item=1 name="/var/tmp/test_file" inode=16934921 dev=ca:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:user_tmp_t:s0 objtype=DELETE cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PATH msg=audit(1529121745.550:323): item=0 name="/var/tmp/" inode=16819564 dev=ca:01 mode=041777 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tmp_t:s0 objtype=PARENT cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=CWD msg=audit(1529121745.550:323): cwd="/root"
type=SYSCALL msg=audit(1529121745.550:323): arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=9930c0 a2=0 a3=7ffe9f8f2b20 items=2 ppid=2358 pid=2606 auid=1001 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=2 comm="rm" exe="/usr/bin/rm" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="sample_file_deletion"
As you can see in the above log, the root user (uid=0) deleted the file /var/tmp/test_file using /usr/bin/rm. Note down the ppid=2358 and pid=2606 as well. If the file was deleted by a script or cron job, you will need these to track down the script or cron entry.
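To pull those values out of an audit record programmatically, a bit of sed is enough. The record below is an abridged copy of the SYSCALL line above; the field names (`pid=`, `ppid=`, `exe=`) are exactly what auditd emits.

```shell
# Abridged SYSCALL record from the ausearch output above.
line='type=SYSCALL msg=audit(1529121745.550:323): arch=c000003e syscall=263 success=yes exit=0 ppid=2358 pid=2606 auid=1001 uid=0 comm="rm" exe="/usr/bin/rm" key="sample_file_deletion"'

# " pid=" (with a leading space) avoids accidentally matching "ppid=".
pid=$(echo "$line" | sed -n 's/.* pid=\([0-9]*\).*/\1/p')
ppid=$(echo "$line" | sed -n 's/.* ppid=\([0-9]*\).*/\1/p')
exe=$(echo "$line" | sed -n 's/.*exe="\([^"]*\)".*/\1/p')

echo "pid=$pid ppid=$ppid exe=$exe"
```

In practice you would feed `sudo ausearch -k sample_file_deletion` output through the same sed expressions instead of a hard-coded line.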
7. Linking Process ID to Process Name using psacct
The audit logs will contain a process ID (PID) associated with the deletion. Utilize this PID to identify the potentially responsible process:
Process Information from dump-acct
After stopping process accounting recording with sudo /usr/sbin/accton off, analyze the captured data:
sudo dump-acct /var/log/account/pacct
This output shows various process details, including PIDs, command names, and timestamps. However, due to the nature of process accounting, it might not directly pinpoint the culprit: processes might have terminated after the deletion, making it challenging to definitively identify the responsible one. You can grep for the ppid or pid obtained from the audit log in the output of the dump-acct command.
sudo dump-acct /var/log/account/pacct | tail
grotty |v3| 0.00| 0.00| 2.00| 1000| 1000| 12000.00| 0.00| 321103| 321101| | 0|pts/1 |Fri Aug 14 13:26:07 2020
groff |v3| 0.00| 0.00| 2.00| 1000| 1000| 6096.00| 0.00| 321101| 321095| | 0|pts/1 |Fri Aug 14 13:26:07 2020
nroff |v3| 0.00| 0.00| 4.00| 1000| 1000| 2608.00| 0.00| 321095| 321087| | 0|pts/1 |Fri Aug 14 13:26:07 2020
man |v3| 0.00| 0.00| 4.00| 1000| 1000| 10160.00| 0.00| 321096| 321087| F | 0|pts/1 |Fri Aug 14 13:26:07 2020
pager |v3| 0.00| 0.00| 2018.00| 1000| 1000| 8440.00| 0.00| 321097| 321087| | 0|pts/1 |Fri Aug 14 13:26:07 2020
man |v3| 2.00| 0.00| 2021.00| 1000| 1000| 10160.00| 0.00| 321087| 318116| | 0|pts/1 |Fri Aug 14 13:26:07 2020
clear |v3| 0.00| 0.00| 0.00| 1000| 1000| 2692.00| 0.00| 321104| 318116| | 0|pts/1 |Fri Aug 14 13:26:30 2020
dump-acct |v3| 2.00| 0.00| 2.00| 1000| 1000| 4252.00| 0.00| 321105| 318116| | 0|pts/1 |Fri Aug 14 13:26:35 2020
tail |v3| 0.00| 0.00| 2.00| 1000| 1000| 8116.00| 0.00| 321106| 318116| | 0|pts/1 |Fri Aug 14 13:26:35 2020
clear |v3| 0.00| 0.00| 0.00| 1000| 1000| 2692.00| 0.00| 321107| 318116| | 0|pts/1 |Fri Aug 14 13:26:45 2020
To better understand what you’re looking at, you may want to add column headings as I have done with these commands:
echo "Command vers runtime systime elapsed UID GID mem_use chars PID PPID ? retcode term date/time"
sudo dump-acct /var/log/account/pacct | tail -5
Command vers runtime systime elapsed UID GID mem_use chars PID PPID ? retcode term date/time
tail |v3| 0.00| 0.00| 3.00| 0| 0| 8116.00| 0.00| 358190| 358188| | 0|pts/1 |Sat Aug 15 11:30:05 2020
pacct |v3| 0.00| 0.00| 3.00| 0| 0| 9624.00| 0.00| 358188| 358187|S | 0|pts/1 |Sat Aug 15 11:30:05 2020
sudo |v3| 0.00| 0.00| 4.00| 0| 0| 10984.00| 0.00| 358187| 354579|S | 0|pts/1 |Sat Aug 15 11:30:05 2020
gmain |v3| 14.00| 3.00| 1054.00| 1000| 1000| 1159680| 0.00| 358169| 3179| X| 0|__ |Sat Aug 15 11:30:03 2020
vi |v3| 0.00| 0.00| 456.00| 1000| 1000| 10976.00| 0.00| 358194| 354579| | 0|pts/1 |Sat Aug 15 11:30:28 2020
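Rather than eyeballing the table, you can let awk match the PID and PPID columns (fields 10 and 11 of the pipe-separated records). The sketch below uses a few of the sample lines above as stand-in data; the commented command shows the real pipeline.

```shell
# Real usage against the accounting file would be:
#   sudo dump-acct /var/log/account/pacct | awk -F'|' -v p=2606 '$10+0==p || $11+0==p'
# Here we filter a few sample records from the output above.
TARGET_PID=358188
MATCHES=$(cat <<'EOF' | awk -F'|' -v p="$TARGET_PID" '$10+0==p || $11+0==p'
tail |v3| 0.00| 0.00| 3.00| 0| 0| 8116.00| 0.00| 358190| 358188| | 0|pts/1 |Sat Aug 15 11:30:05 2020
pacct |v3| 0.00| 0.00| 3.00| 0| 0| 9624.00| 0.00| 358188| 358187|S | 0|pts/1 |Sat Aug 15 11:30:05 2020
sudo |v3| 0.00| 0.00| 4.00| 0| 0| 10984.00| 0.00| 358187| 354579|S | 0|pts/1 |Sat Aug 15 11:30:05 2020
EOF
)
# Prints the records whose PID or PPID equals TARGET_PID.
echo "$MATCHES"
```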
Alternative: lastcomm (Limited Effectiveness)
In some cases, you can try lastcomm (also provided by the psacct package) to look up recorded commands by name, user, or terminal, for example lastcomm rm. Its effectiveness is limited for the same reasons: short-lived processes and rotated accounting files can hide the command you are looking for.
Important Note
While combining auditd with process accounting can provide insights, it’s crucial to understand the limitations. Process accounting data offers a broader picture of resource usage but doesn’t directly correlate to specific file deletions. Additionally, processes might terminate quickly, making it difficult to trace back to a specific action.
Best Practices
1. Regular Monitoring: Regularly monitor and analyze audit logs to stay ahead of potential security breaches.
2. Comprehensive Logging: Ensure comprehensive logging by setting appropriate audit rules and keeping process accounting enabled.
3. Timely Responses: Respond quickly to any suspicious activity by investigating audit logs and process accounting data promptly.
By combining the capabilities of auditd and process accounting, you can enhance your ability to track and understand file system activity, thereby strengthening your system’s security posture.
Securing Your Connections: A Guide to SSH Keys
SSH (Secure Shell) is a fundamental tool for securely connecting to remote servers. While traditional password authentication works, it can be vulnerable to brute-force attacks. SSH keys offer a more robust and convenient solution for secure access.
This blog post will guide you through the world of SSH keys, explaining their types, generation process, and how to manage them for secure remote connections and how to configure SSH key authentication.
Understanding SSH Keys: An Analogy
Imagine your home has two locks: a lock on the door that anyone can see, and a unique key that only you carry. Similarly, SSH keys work in pairs: a public key that you can distribute freely and place on servers, and a private key that stays on your machine and must be kept secret.
The server verifies the public key against your private key when you attempt to connect. This verification ensures only authorized users with the matching private key can access the server.
Types of SSH Keys
There are many types of SSH keys; we will discuss the two main ones:
RSA (Rivest–Shamir–Adleman): The traditional and widely supported option. It offers a good balance of security and performance.
Ed25519 (Edwards-curve Digital Signature Algorithm): A newer, faster, and potentially more secure option gaining popularity.
Choosing Between RSA and Ed25519:
For most users, Ed25519 is a great choice due to its speed and security. However, if compatibility with older servers is a critical concern, RSA remains a reliable option.
Generating SSH Keys with ssh-keygen
Here’s how to generate your SSH key pair using the ssh-keygen command:
Open your terminal.
Run the following command, replacing <key_type> with rsa or ed25519 and the email address with your own:
ssh-keygen -t <key_type> -b 4096 -C "<your_email@example.com>"
(The -b 4096 option sets the key length for RSA; Ed25519 keys have a fixed length, so the option is ignored for that type.)
You’ll be prompted to enter a secure passphrase for your private key. Choose a strong passphrase and remember it well (it’s not mandatory, but highly recommended for added security).
The command will generate two files:
<key_name>.pub: The public key file (you’ll add this to the server).
<key_name>: The private key file (keep this secure on your local machine).
Important Note: Never share your private key with anyone!
Adding Your Public Key to the Server’s authorized_keys File
- Access the remote server you want to connect to (through a different method if you haven’t set up key-based authentication yet).
- Locate the ~/.ssh/authorized_keys file on the server (the ~ represents your home directory). You might need to create the .ssh directory if it doesn’t exist.
- Open the authorized_keys file with a text editor.
- Paste the contents of your public key file (<key_name>.pub) into the authorized_keys file on the server.
- Save the authorized_keys file on the server.
Permissions:
Ensure the authorized_keys file has permissions set to 600 (read and write access only for the owner).
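A quick sketch of setting and verifying those permissions, run here against a throwaway directory so it is safe to try anywhere (assumes GNU `stat -c`, as on most Linux systems):

```shell
# Create a disposable stand-in for the server's home directory.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"

chmod 700 "$demo/.ssh"                 # only the owner may enter .ssh
chmod 600 "$demo/.ssh/authorized_keys" # only the owner may read/write the key list

# Verify the octal permissions.
PERM_DIR=$(stat -c %a "$demo/.ssh")
PERM_FILE=$(stat -c %a "$demo/.ssh/authorized_keys")
echo "$PERM_DIR $PERM_FILE"
rm -rf "$demo"
```

On the real server, run the same chmod commands against `~/.ssh` and `~/.ssh/authorized_keys`.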
Connecting with SSH Keys
Once you’ve added your public key to the server, you can connect using your private key:
ssh <username>@<server_address>
You’ll be prompted for your private key passphrase (if you set one) during the connection. That’s it! You’re now securely connected to the server without needing a password.
Benefits of SSH Keys: resistance to brute-force attacks, no passwords to type or transmit, and easy automation for scripts and tools.
By implementing SSH keys, you can significantly improve the security and convenience of your remote server connections. Remember to choose strong passwords and keep your private key secure for optimal protection.
nopCommerce is an open-source e-commerce platform that allows users to create and manage their online stores. It is built on the ASP.NET Core framework and supports multiple database systems, including MySQL, Microsoft SQL Server, and PostgreSQL, as its backend. The platform is highly customizable and offers a wide range of features, including product management, order processing, shipping, payment integration, and customer management. nopCommerce is a popular choice for businesses of all sizes because of its flexibility, scalability, and user-friendly interface.
In this tutorial, we will guide you through the process of installing nopCommerce on Ubuntu Linux with Nginx reverse proxy and SSL.
Register Microsoft key and feed
To register the Microsoft key and feed, launch the terminal and execute these commands:
1. Download the packages-microsoft-prod.deb file by running the command:
wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
2. Install the packages-microsoft-prod.deb package by running the command:
sudo dpkg -i packages-microsoft-prod.deb
Install the .NET Core Runtime
To install the .NET Core Runtime, perform the following steps:
1. Update the available product listings for installation by running the command:
sudo apt-get update
2. Install the .NET runtime by running the command:
sudo apt-get install -y apt-transport-https aspnetcore-runtime-7.0
To determine the appropriate .NET runtime version to install, refer to the nopCommerce documentation (the required version depends on both your nopCommerce release and your Ubuntu version) together with Microsoft's installation guide linked below:
https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu
https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#supported-distributions
3. Verify the installed .NET Core runtimes by running the command:
dotnet --list-runtimes
4. Install the libgdiplus library:
sudo apt-get install libgdiplus
libgdiplus is an open-source implementation of the GDI+ API that provides access to graphic-related functions in nopCommerce and is required for running nopCommerce on Linux.
Install MySql Server
The latest nopCommerce versions support recent MySQL and MariaDB releases. We will install MariaDB 10.6.
1. To install mariadb-server for nopCommerce, execute the following command in the terminal:
sudo apt-get install mariadb-server
2. After installing MariaDB Server, you need to set the root password. Execute the following command in the terminal to set the root password:
sudo /usr/bin/mysql_secure_installation
This will start a prompt to guide you through the process of securing your MySQL installation and setting the root password.
3. Create a database and User. We will use these details while installing nopCommerce. Replace the names of the database and the database user accordingly.
mysql -u root -p
create database nopCommerceDB;
grant all on nopCommerceDB.* to nopCommerceuser@localhost identified by 'P@ssW0rD';
Please replace the database name, username and password accordingly.
4. Reload privilege tables and exit the database.
flush privileges; quit;
Install nginx
1. To install Nginx, run the following command:
sudo apt-get install nginx
2. After installing Nginx, start the service by running:
sudo systemctl start nginx
3. You can verify the status of the service using the following command:
sudo systemctl status nginx
4. Nginx Reverse proxy configuration
To configure Nginx as a reverse proxy for your nopCommerce application, create an Nginx server block file at /etc/nginx/sites-available/nopcommerce.linuxwebhostingsupport.in. Open the file in a text editor and add the following:
server {
    server_name nopcommerce.linuxwebhostingsupport.in;
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
You need to replace nopcommerce.linuxwebhostingsupport.in with your domain name
5. Enable the virtual host configuration file:
Enable the server block by creating a symbolic link in the /etc/nginx/sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/nopcommerce.linuxwebhostingsupport.in /etc/nginx/sites-enabled/
6. Reload Nginx for the changes to take effect:
sudo systemctl reload nginx
Install NopCommerce
In this example, we’ll use /var/www/nopCommerce for storing the files.
1. Create a directory:
sudo mkdir /var/www/nopCommerce
2. Navigate to the directory where you want to store the nopCommerce files, Download and unpack nopCommerce:
cd /var/www/nopCommerce
sudo wget https://github.com/nopSolutions/nopCommerce/releases/download/release-4.60.2/nopCommerce_4.60.2_NoSource_linux_x64.zip
sudo apt-get install unzip
sudo unzip nopCommerce_4.60.2_NoSource_linux_x64.zip
3. Create two directories that nopCommerce needs to run properly:
sudo mkdir bin
sudo mkdir logs
4. Change the ownership of the nopCommerce directory and its contents to the www-data group:
sudo chown -R www-data.www-data /var/www/nopCommerce/
www-data is the user the Nginx web server runs as.
Create the nopCommerce service
1. Create a file named nopCommerce.service in the /etc/systemd/system directory with the following content:
[Unit]
Description=Example nopCommerce app running on Xubuntu

[Service]
WorkingDirectory=/var/www/nopCommerce
ExecStart=/usr/bin/dotnet /var/www/nopCommerce/Nop.Web.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=nopCommerce-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false

[Install]
WantedBy=multi-user.target
2. Start the nopCommerce service by running:
sudo systemctl start nopCommerce.service
3. To check the status of the nopCommerce service, use the following command:
sudo systemctl status nopCommerce.service
Also, check if the service is running on port 5000
sudo lsof -i:5000
4. After that, restart the nginx server:
sudo systemctl restart nginx
Now that the prerequisites are installed and configured, you can proceed to install and set up your nopCommerce store.
Install nopCommerce
After completing the previous steps, you can access the website through the following URL: http://nopcommerce.linuxwebhostingsupport.in. Upon visiting the site for the first time, you will be automatically redirected to the installation page as shown below:
Provide the following information in the Store Information panel:
In the Database Information panel, you will need to provide the following details:
Click on the Install button to initiate the installation process. Once the installation is complete, the home page of your new site will be displayed. Access your site from the following URL: http://nopcommerce.linuxwebhostingsupport.in.
Note:
You can reset a nopCommerce website to its default settings by deleting the appsettings.json file located in the App_Data folder.
Adding and Securing the nopCommerce
We will be using Let’s Encrypt to add a free, trusted SSL certificate.
Let’s Encrypt is a free, automated, and open certificate authority that allows you to obtain SSL/TLS certificates for your website. Certbot is a command-line tool that automates the process of obtaining and renewing these certificates, making it easier to secure your website with HTTPS.
Here are the steps to install SSL with Certbot Nginx plugins:
1. Install Certbot: First, make sure you have Certbot installed on your server. You can do this by running the following commands:
sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
2. Obtain SSL Certificate: Next, you need to obtain an SSL certificate for your domain. You can do this by running the following command:
sudo certbot --nginx -d yourdomain.com
Replace yourdomain.com with your own domain name. This command will automatically configure Nginx to use SSL, obtain a Let’s Encrypt SSL certificate and set an automatic redirect from http to https.
3. Verify SSL Certificate: Once the certificate is installed, you can verify it by visiting your website using the https protocol. If the SSL certificate is valid, you should see a padlock icon in your browser’s address bar.
4. Automatic Renewal: Certbot SSL certificates are valid for 90 days. To automatically renew your SSL certificate before it expires, you can set up a cron job to run the following command:
sudo certbot renew --quiet
This will check if your SSL certificate is due for renewal and automatically renew it if necessary.
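For example, a crontab entry (added with sudo crontab -e) that attempts renewal daily at 03:00; the exact schedule is arbitrary, since certbot renew only acts when a certificate is close to expiry:

```
0 3 * * * /usr/bin/certbot renew --quiet
```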
5. nopCommerce also recommends setting the UseProxy option to true in the appsettings.json file located in the App_Data folder when the site is served over SSL behind a proxy, so change this value too.
nopCommerce is a popular open-source e-commerce platform that offers users a flexible and scalable solution for creating and managing online stores. In this tutorial, we provided a step-by-step guide for installing and configuring nopCommerce on Ubuntu Linux with Nginx reverse proxy and SSL. We covered the installation of Microsoft key and feed, .NET Core Runtime, MySQL server, and Nginx reverse proxy. We also discussed how to configure Nginx as a reverse proxy for the nopCommerce application. By following this tutorial, you can set up a secure and reliable nopCommerce e-commerce store on Ubuntu Linux.
PHP GEOS is a PHP extension for geographic objects support, while RunCloud is a cloud server control panel designed for PHP applications. With PHP GEOS module installed on RunCloud, PHP applications can take advantage of geographic data and use the GEOS (Geometry Engine – Open Source) library to perform spatial operations.
In this blog post, I will show you how to install the PHP GEOS module on a server managed by RunCloud.
Steps
1. Install the required development tools
Before installing the PHP GEOS module, make sure that the required development tools are installed on your Ubuntu server. You can install them by running the following command:
apt-get install autoconf
2. Install GEOS library
Next, download and install the latest GEOS (Geometry Engine – Open Source)
wget http://download.osgeo.org/geos/geos-3.9.4.tar.bz2
tar xvf geos-3.9.4.tar.bz2
cd geos-3.9.4/
./configure
make
make install
3. Install PHP GEOS module
Now, it’s time to install the PHP GEOS module. Follow the steps below to install it for PHP 8.2:
# Set module name
MODULE_NAME="geos"
# Download the latest module files
git clone https://git.osgeo.org/gitea/geos/php-geos.git
mv php-geos/ php-geos_PHP82
# make clean will fail if you have never compiled the module before; that is safe to ignore
make clean
/RunCloud/Packages/php82rc/bin/phpize --clean
/RunCloud/Packages/php82rc/bin/phpize
./configure --with-php-config=/RunCloud/Packages/php82rc/bin/php-config
make && make install
This will install geos.so in the correct PHP extension directory.
4. Add the module to PHP.ini file
echo "extension=$MODULE_NAME.so" > /etc/php82rc/conf.d/$MODULE_NAME.ini
And finally restart the PHP FPM service
systemctl restart php82rc-fpm
It’s important to note that the above steps are specific to PHP 8.2. If you wish to install the module for a different version, you will need to modify the commands accordingly. For instance, to target PHP 8.1 instead of 8.2, make the following changes:
Replace /RunCloud/Packages/php82rc/bin/phpize with /RunCloud/Packages/php81rc/bin/phpize, replace ./configure --with-php-config=/RunCloud/Packages/php82rc/bin/php-config with ./configure --with-php-config=/RunCloud/Packages/php81rc/bin/php-config, replace /etc/php82rc/conf.d/$MODULE_NAME.ini with /etc/php81rc/conf.d/$MODULE_NAME.ini, and replace systemctl restart php82rc-fpm with systemctl restart php81rc-fpm.
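One way to avoid editing each command by hand is to derive all the paths from a single version variable. A sketch, assuming the RunCloud directory layout used above:

```shell
# Set once per target PHP version: "82" for PHP 8.2, "81" for PHP 8.1, ...
PHP_VER="81"

# All version-specific paths follow the same RunCloud naming pattern.
PHPIZE="/RunCloud/Packages/php${PHP_VER}rc/bin/phpize"
PHP_CONFIG="/RunCloud/Packages/php${PHP_VER}rc/bin/php-config"
INI_DIR="/etc/php${PHP_VER}rc/conf.d"
FPM_SERVICE="php${PHP_VER}rc-fpm"

echo "$PHPIZE"
echo "$PHP_CONFIG"
echo "$INI_DIR"
echo "$FPM_SERVICE"
```

You would then run `"$PHPIZE"`, `./configure --with-php-config="$PHP_CONFIG"`, write the ini file into `"$INI_DIR"`, and restart `"$FPM_SERVICE"`.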
You can contact me if you need help with installing any custom modules on RunCloud control panel.
Hi all
Recently, I helped one of my clients who was using an Amazon Lightsail WordPress instance provided by Bitnami. Bitnami is advantageous in that it provides a fully working stack, so you don’t have to worry about configuring LAMP or environments. You can find more information about the Bitnami Lightsail stack here.
However, the client’s stack was using the latest PHP 8.x version, while the WordPress site he runs uses several plugins that need PHP 7.4. I advised the client to consider upgrading the website to support the latest PHP versions. However, since that would require a lot of work, and he wanted the site to be up and running, he decided to downgrade PHP.
The issue with downgrading or upgrading PHP on a Bitnami stack is that it’s not possible. Bitnami recommends launching a new server instance with the required PHP, MySQL, or Apache version and migrating the data over. So, I decided to do it manually.
Here are the server details:
Debian 11
Current installed PHP: 8.1.x
Upgrading or downgrading PHP versions on a Bitnami stack is essentially the same as on a normal Linux server. In short, you need to:
Ensure the PHP packages for the version you want are installed.
Update any configuration for that PHP version.
Update your web server configuration to point to the correct PHP version.
Point PHP CLI to the correct PHP version.
Restart your web server and php-fpm.
What we did was install the PHP version provided by the OS. Then, we updated php.ini to use the non-default MySQL socket location used by the Bitnami server. We created a php-fpm pool that runs as the “daemon” user. After that, we updated the Apache configuration to use the new PHP version.
1. Make sure packages for your target version of PHP are installed
To make sure that the correct packages are available on your system for the PHP version you want, first make sure your system is up to date by running these commands:
sudo apt update
sudo apt upgrade
If you are prompted about changed configuration files, the default option (keeping the current config as-is) is usually the safe choice. Then, install the packages you need. For example, you can use the following command to install common PHP packages and modules:
sudo apt install -y php7.4-cli php7.4-dev php7.4-pgsql php7.4-sqlite3 php7.4-gd php7.4-curl php7.4-memcached php7.4-imap php7.4-mysql php7.4-mbstring php7.4-xml php7.4-imagick php7.4-zip php7.4-bcmath php7.4-soap php7.4-intl php7.4-readline php7.4-common php7.4-pspell php7.4-tidy php7.4-xmlrpc php7.4-xsl php7.4-fpm
2. Make sure PHP configuration for your target version is updated
Find the MySQL socket path used by your Bitnami stack by running this command:
# ps aux | grep --color mysql.sock
mysql 7700 1.1 2.0 7179080 675928 ? Sl Mar21 11:21 /opt/bitnami/mariadb/sbin/mysqld --defaults-file=/opt/bitnami/mariadb/conf/my.cnf --basedir=/opt/bitnami/mariadb --datadir=/bitnami/mariadb/data --socket=/opt/bitnami/mariadb/tmp/mysql.sock --pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
Edit the php.ini file:
vi /etc/php/7.4/fpm/php.ini
Find
[Pdo_mysql]
; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
pdo_mysql.default_socket=
Replace with
[Pdo_mysql]
; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
pdo_mysql.default_socket= “/opt/bitnami/mariadb/tmp/mysql.sock”
Find
mysqli.default_socket =
Replace with
mysqli.default_socket = “/opt/bitnami/mariadb/tmp/mysql.sock”
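If you prefer not to edit the file by hand, both socket settings can be applied with sed. This is a hedged one-liner version of the same edit; the php.ini and socket paths are the ones used above, and the patterns assume the directives are present and uncommented, as shown.

```shell
# Apply both MySQL socket settings in php.ini via sed (paths as used above).
INI=/etc/php/7.4/fpm/php.ini
SOCK=/opt/bitnami/mariadb/tmp/mysql.sock
sudo sed -i "s|^pdo_mysql.default_socket.*|pdo_mysql.default_socket= \"$SOCK\"|" "$INI"
sudo sed -i "s|^mysqli.default_socket.*|mysqli.default_socket = \"$SOCK\"|" "$INI"

# Confirm the result:
grep -E 'pdo_mysql.default_socket|mysqli.default_socket' "$INI"
```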
Create a php-fpm pool file
vi /etc/php/7.4/fpm/pool.d/wp.conf
[wordpress]
env[PATH] = $PATH
listen=/opt/bitnami/php/var/run/www2.sock
user=daemon
group=daemon
listen.owner=daemon
listen.group=daemon
pm=dynamic
pm.max_children=400
pm.start_servers=260
pm.min_spare_servers=260
pm.max_spare_servers=300
pm.max_requests=5000
Feel free to adjust the PHP FPM settings to match your server specifications or needs. Check out this informative article for more tips on optimizing PHP FPM performance. Just keep in mind that Bitnami configures their stack with the listen.owner and listen.group settings set to daemon.
This pool will listen on unix socket “/opt/bitnami/php/var/run/www2.sock”.
Enable and restart the PHP 7.4 FPM service:
systemctl enable php7.4-fpm
systemctl restart php7.4-fpm
3. Update your web server configuration to point to the correct PHP version
Edit file
vi /opt/bitnami/apache2/conf/bitnami/php-fpm.conf
On some installations, the file is located at:
vi /opt/bitnami/apache2/conf/php-fpm-apache.conf
Inside the file, find
SetHandler “proxy:fcgi://www-fpm”
Find and replace www.sock with www2.sock
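The find-and-replace can also be done with sed. The sketch below rehearses the substitution on a scratch file first; on the server, you would point CONF at your real php-fpm.conf (and keep a backup) before running the same sed. The sample handler line is an assumption shaped like Bitnami's directive, not copied from a live config.

```shell
# Rehearse the www.sock -> www2.sock substitution on a scratch copy.
CONF=$(mktemp)

# Sample line shaped like the Bitnami handler directive (an assumption):
echo 'SetHandler "proxy:fcgi://www-fpm#/opt/bitnami/php/var/run/www.sock"' > "$CONF"

# The actual substitution — run the same sed against your real config file:
sed -i 's/www\.sock/www2.sock/g' "$CONF"
cat "$CONF"   # the handler now points at www2.sock
```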
4. Make sure PHP-CLI points to the right PHP version
Rename the default PHP binary installed by Bitnami:
mv /opt/bitnami/php/bin/php /opt/bitnami/php/bin/php_8.1_bitnami
Then create a symlink to the newly installed PHP 7.4:
ln -s /usr/bin/php7.4 /opt/bitnami/php/bin/php
Test the installed version by running the command below:
~# php -v
PHP 7.4.33 (cli) (built: Feb 22 2023 20:07:47) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
with Zend OPcache v7.4.33, Copyright (c), by Zend Technologies
5. Restart PHP-FPM and your webserver
sudo systemctl restart php7.4-fpm; sudo /opt/bitnami/ctlscript.sh restart apache
What is a LAMP Stack?
The LAMP stack is a popular combination of open-source software used to run dynamic websites and web applications. The acronym LAMP stands for Linux (operating system), Apache (web server), MySQL (database management system), and PHP (scripting language).
Linux provides the foundation for the LAMP stack, serving as the operating system on which the other software components are installed. Apache is the web server that handles HTTP requests and serves web pages to users. MySQL is a powerful database management system that is used to store and manage website data. PHP is a popular scripting language used to create dynamic web content, such as interactive forms and web applications.
Together, these software components create a powerful platform for building and deploying web applications. The LAMP stack is highly customizable and widely used, making it an excellent choice for developers and system administrators alike.
Prerequisites
1. Ubuntu server: You will need an Ubuntu server to install the LAMP stack. You can use a virtual/cloud server or a physical server, as per your requirements.
2. SSH access: You will need SSH access to your Ubuntu server to be able to install the LAMP stack. SSH (Secure Shell) is a secure network protocol that allows you to access and manage your server remotely.
3. Non-root user with sudo privileges: It is recommended that you use a non-root user with sudo privileges to install and configure the LAMP stack, because running as root can pose a security risk and may lead to unintended consequences if something goes wrong. That said, you can also run the commands as the root user.
4. Basic familiarity with the Linux command line: A basic understanding of how to use the Linux command line interface (CLI) to run commands and navigate your Ubuntu server is recommended, though not mandatory.
Installing a LAMP Stack on Ubuntu
In this section, the process of installing a LAMP Stack on Ubuntu 22.04 LTS is outlined. These instructions can be applied to Ubuntu 20.04 LTS as well.
In this guide, we will walk you through the steps involved in installing and configuring a LAMP stack on an Ubuntu server.
Step 1: Update Your Ubuntu Server
Before we begin installing LAMP stack components, let’s update the server’s software packages by running the following command:
sudo apt update && sudo apt upgrade
Step 2: Install Apache
Apache is the most widely used web server software. To install it, run the following command:
sudo apt install apache2
Once the installation is complete, you can check the status of Apache by running the following command:
sudo systemctl status apache2
This will display Apache’s status as either active or inactive.
Step 3: Install MySQL
MySQL is a popular open-source database management system. To install it, run the following command:
sudo apt install mysql-server
Once the installation is complete, you can check the status of MySQL by running the following command:
sudo systemctl status mysql
This will display MySQL’s status as either active or inactive.
Step 4: Install PHP
PHP is a popular server-side scripting language used to create dynamic web content. To install it, run the following command:
sudo apt install php libapache2-mod-php php-mysql
There are several additional PHP modules recommended for a CMS like WordPress. You can install them by running the command below:
sudo apt-get install php-curl php-gd php-xml php-mbstring php-imagick php-zip php-xmlrpc
After installing these modules, you will need to restart your Apache server for the changes to take effect. You can do this by running the following command:
sudo systemctl restart apache2
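As a quick sanity check after the restart, you can confirm the PHP version and that the extra modules are actually loaded. The module names grepped for below are examples from the install commands above.

```shell
# Show the installed PHP version:
php -v

# Confirm some of the modules installed above are loaded:
php -m | grep -E 'curl|gd|mbstring|imagick'
```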
Setting up firewall rules to allow access to Apache web server
UFW is the default firewall on Ubuntu systems, providing a simple command-line interface to configure iptables, the software-based firewall used in most Linux distributions. UFW provides various application profiles that can be utilized to manage traffic to and from different services. To view a list of all the available UFW application profiles, you can run the command:
sudo ufw app list
Output
Available applications:
Apache
Apache Full
Apache Secure
OpenSSH
These application profiles have different configurations for opening specific ports on the firewall. For instance:
Apache: Allows traffic on port 80, which is used for normal, unencrypted web traffic.
Apache Full: Allows traffic on both port 80 and port 443, which is used for TLS/SSL encrypted traffic.
Apache Secure: Allows traffic only on port 443 for TLS/SSL encrypted traffic.
To allow traffic on both port 80 and port 443 (SSL), you can use the Apache Full profile by running the following command:
sudo ufw allow in "Apache Full"
You can verify that the change has been made by running the command:
sudo ufw status
Output
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache Full                ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache Full (v6)           ALLOW       Anywhere (v6)
To test if the ports are open and Apache web server is accessible, you can try visiting your server’s public IP address in a web browser using the URL http://your_server_ip. If successful, you should see the default Apache web page.
If you can view this page, your web server is correctly installed and accessible through your firewall.
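You can also check from the server itself with curl rather than a browser. This is an optional extra check, run locally on the server:

```shell
# Fetch only the response headers from the local Apache instance:
curl -I http://localhost/
```

An "HTTP/1.1 200 OK" status line means Apache is serving pages; repeat the request against your public IP from another machine to confirm the firewall rules are working.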
Configuring the MySQL Database server
Upon installation, MySQL is immediately available for use. However, to use it with web applications such as WordPress and to improve security, you should create a dedicated database and database user. To complete the MySQL configuration, follow these steps:
1. Log in to the MySQL shell as the root user:
sudo mysql -u root
2. Using the MySQL shell, you can create the wpdatabase database and generate a new user account for accessing the web application. Instead of using the placeholders “dbuser” and “password” in the CREATE USER query, you should provide a real username and password. Furthermore, you should grant complete permissions to the user. After each line, MySQL should respond with “Query OK.”
CREATE DATABASE wpdatabase;
CREATE USER 'dbuser' IDENTIFIED BY 'password';
GRANT ALL ON wpdatabase.* TO 'dbuser';
Exit the SQL shell:
quit
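You can verify the database and grants from the system shell before moving on. This is an optional, hedged check using the example names created above:

```shell
# Confirm the database exists and the user has the expected grants:
sudo mysql -e "SHOW DATABASES LIKE 'wpdatabase';"
sudo mysql -e "SHOW GRANTS FOR 'dbuser';"
```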
3. Set a password for 'root'@'localhost':
sudo mysql
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password by 'password';
Exit the SQL shell:
quit
Note: Replace “password” with a strong password.
4. Use the mysql_secure_installation tool to increase database security:
sudo mysql_secure_installation
When prompted to change the root password, leave it unchanged. Answer Y for the following questions:
Remove anonymous users?
Disallow root login remotely?
Remove test database and access to it?
Reload privilege tables now?
To log in to the MySQL shell as root after this change, use "sudo mysql -u root" and type "quit" to exit the SQL shell.
It’s worth noting that when connecting as the root user, there’s no need to enter a password, despite having defined one during the mysql_secure_installation script. This is due to the default authentication method for the administrative MySQL user being unix_socket rather than password. Although it may appear to be a security issue, it actually strengthens the security of the database server by only allowing system users with sudo privileges to log in as the root MySQL user from the console or through an application with the same privileges. As a result, you won’t be able to use the administrative database root user to connect from your PHP application. However, setting a password for the root MySQL account acts as a precautionary measure in case the default authentication method is changed from unix_socket to password.
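You can see for yourself which authentication plugin each root entry uses with a quick query against the mysql.user system table:

```shell
# List the authentication plugin for each root account entry:
sudo mysql -e "SELECT user, host, plugin FROM mysql.user WHERE user = 'root';"
```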
Creating a Virtual Host for your Website
In order to host multiple domains from a single server, Apache web server provides the capability to create virtual hosts. These virtual hosts are beneficial as they allow you to encapsulate configuration details for each domain. In this tutorial, we will walk you through setting up a domain named “example.com”. However, it is important to keep in mind that you should replace “example.com” with your own domain name.
By default, Ubuntu 22.04’s Apache web server has a single virtual host that is enabled and configured to serve documents from the /var/www/html directory. While this is a workable solution for a single site, it becomes cumbersome when hosting multiple sites. Therefore, instead of modifying /var/www/html, we will create a directory structure within the /var/www directory specifically for the example.com site. In doing so, we will leave /var/www/html in place as the default directory to be served if a client request does not match any other sites.
1. First, create a new directory for the “example.com” website files:
sudo mkdir /var/www/example.com
2. Assign the ownership of the directory to the web server user (www-data):
sudo chown -R www-data:www-data /var/www/example.com
3. Create a new virtual host configuration file for “example.com” using the nano text editor:
sudo nano /etc/apache2/sites-available/example.com.conf
4. Add the following configuration to the file, replacing “example.com” with your own domain name:
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com
    <Directory /var/www/example.com>
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/example.com_error.log
    CustomLog ${APACHE_LOG_DIR}/example.com_access.log combined
</VirtualHost>
This configuration specifies that the “example.com” domain should use the files located in the /var/www/example.com directory as its document root.
5. Disable the default Apache site configuration to avoid conflicts:
sudo a2dissite 000-default.conf
6. Enable the “example.com” site configuration:
sudo a2ensite example.com.conf
7. Restart Apache to apply the changes:
sudo systemctl restart apache2
8. Create a test “hello world” HTML file:
sudo nano /var/www/example.com/index.html
Add the following HTML code to the file:
<!DOCTYPE html>
<html>
<head>
    <title>Hello World</title>
</head>
<body>
    <h1>Hello World!</h1>
</body>
</html>
9. Save and close the file.
10. Finally, configure your DNS records to point the “example.com” domain to your server’s IP address. Once the DNS records are updated, you can access the website by visiting “http://example.com” in your web browser.
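Before relying on DNS propagation, you can validate the Apache configuration and exercise the new virtual host locally. "example.com" here is the placeholder domain used throughout this guide:

```shell
# Check the Apache configuration for syntax errors (should report "Syntax OK"):
sudo apache2ctl configtest

# Request the site via the Host header, without waiting for DNS:
curl -H "Host: example.com" http://localhost/
```

If the curl output shows your "Hello World!" page, the virtual host is wired up correctly and only the DNS records remain.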
Testing the LAMP Stack Installation on Your Ubuntu Server
To ensure that the LAMP stack configuration is fully functional, it's necessary to test the Apache, PHP, and MySQL components together. Apache's operational status and the virtual host configuration were verified earlier; now it's important to test how the web server interacts with PHP and MySQL.
The easiest way to verify the configuration of the Ubuntu LAMP stack is with a short test script. The PHP code does not need to be lengthy or complex, but it must establish a connection to MySQL. Place the test script in the DocumentRoot directory.
To validate the database, use PHP to invoke the mysqli_connect function with the username and password created in the "Configuring the MySQL Database server" section. On success, mysqli_connect returns a connection object. The script should report whether the connection succeeded or failed and print details of any error.
To verify the installation, follow these steps:
1. Create a new file called “phptest.php” in the /var/www/example.com directory.
<html>
<head>
    <title>PHP MySQL Test</title>
</head>
<body>
<?php
echo '<p>Welcome to the Site!</p>';

// When running this script on a local database, the servername must be
// 'localhost'. Use the name and password of the web user account created
// earlier. Do not use the root password.
$servername = "localhost";
$username = "dbuser";
$password = "password";

// Create MySQL connection
$conn = mysqli_connect($servername, $username, $password);

// If the conn variable is empty, the connection has failed. The output for
// the failure case includes the error message.
if (!$conn) {
    die('<p>Connection failed: </p>' . mysqli_connect_error());
}
echo '<p>Connected successfully</p>';
?>
</body>
</html>
2. To test the script, open a web browser and type the domain name followed by “/phptest.php” in the address bar. For example, if your domain name is “example.com”, you would enter “example.com/phptest.php” in the address bar. Make sure to substitute the actual name of the domain for “example.com” in the example provided.
http://example.com/phptest.php
3. Upon successful execution of the script, the web page should display without any errors. The page should contain the text “Welcome to the Site!” and “Connected successfully.” However, if you encounter the “Connection Failed” error message, review the SQL error information to troubleshoot the issue.
Bonus: Install phpMyAdmin
phpMyAdmin is a web-based application used to manage MySQL databases. To install it, run the following command:
sudo apt install phpmyadmin
During the installation process, you will be prompted to choose the web server that should be automatically configured to run phpMyAdmin. Select Apache and press Enter.
You will also be prompted to set a MySQL application password for phpMyAdmin. Enter a secure password and press Enter.
Once the installation is complete, you can access phpMyAdmin by navigating to http://your_server_IP_address/phpmyadmin in your web browser.
Congratulations! You have successfully installed and configured a LAMP stack on your Ubuntu server.
Summary
This guide walks through the process of setting up a LAMP stack, a combination of the Linux operating system, the Apache web server, the MySQL RDBMS, and the PHP programming language, to serve PHP websites and applications. The individual components are free and open source, designed to work together, and easy to install and use. Following the steps provided, you can install the LAMP stack on Ubuntu 22.04 LTS using apt, configure the Apache web server, create a virtual host for your domain, and integrate the MySQL database server by creating a new account to represent the web user. Additional PHP packages are required so that Apache, PHP, and the database can communicate. A short PHP test script can be used to verify the new installation by connecting to the database.
How to remove or compress huge MySQL general and slow query log tables
If you have enabled MySQL general or slow query logging, the logs can grow quite large, depending on your MySQL usage and queries.
So we may have to periodically clear them to save space.
Please note that MySQL can save logs to either table or files. This document assumes you are using table as log output.
When log output is set to TABLE, the log tables are backed by CSV files — by default slow_log.CSV and general_log.CSV (the location and file names can differ on your system).
MySQL supports clearing these logs at runtime, so there is no need to restart the MySQL service.
Never delete the CSV file directly. It can crash MySQL.
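Before clearing anything, it's worth confirming that log output really is set to TABLE and seeing how large the CSV files have grown. The datadir path below matches the example paths used later in this post; adjust it for your system:

```shell
# Confirm the log destination (should show TABLE for this procedure):
sudo mysql -e "SHOW VARIABLES LIKE 'log_output';"

# Check the current size of the log CSV files (example datadir path):
sudo ls -lh /var/db/mysql/mysql/*_log*.CSV
```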
Slow query log
SET GLOBAL slow_query_log = 'OFF';
DROP TABLE IF EXISTS slow_log2;
CREATE TABLE slow_log2 LIKE slow_log;
RENAME TABLE slow_log TO slow_log_backup, slow_log2 TO slow_log;
-- run from a system shell, not the MySQL prompt:
-- gzip /var/db/mysql/mysql/slow_log_backup.CSV
DROP TABLE slow_log_backup;
SET GLOBAL slow_query_log = 'ON';
General log
USE mysql;
SET GLOBAL general_log = 'OFF';
DROP TABLE IF EXISTS general_log2;
CREATE TABLE general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
-- run from a system shell, not the MySQL prompt:
-- gzip /var/db/mysql/mysql/general_log_backup.CSV
DROP TABLE general_log_backup;
SET GLOBAL general_log = 'ON';
In short: we create a fresh log table, rename the current one to a backup, compress the backup's CSV file, and then drop the backup table before re-enabling logging.