SSH (Secure Shell) relies on public-key cryptography for secure logins. But how can you be sure your public and private key pair are actually linked? This blog post will guide you through a simple method to verify their authenticity in Linux and macOS.
Understanding the Key Pair:
Imagine a lock and key. Your public key acts like the widely distributed lock – anyone can see it. The private key is the unique counterpart, kept secret, that unlocks the metaphorical door (your server) for SSH access.
Using ssh-keygen
This method leverages the ssh-keygen tool, already available on most Linux and macOS systems.
1. Locate the keys: Open a terminal and use cd to navigate to the directory where your private key resides (e.g., cd ~/.ssh).
2. List the keys: Run ls -al to list all files in the directory and locate the private/public key pair you wish to check.
Example:
ababwaha@ababwaha-mac .ssh % ls -al
total 32
drwx------   6 ababwaha  staff   192 Jun 24 16:04 .
drwxr-x---+ 68 ababwaha  staff  2176 Jun 24 16:04 ..
-rw-------   1 ababwaha  staff   411 Jun 24 16:04 id_ed25519
-rw-r--r--   1 ababwaha  staff   103 Jun 24 16:04 id_ed25519.pub
-rw-------   1 ababwaha  staff  3389 Jun 24 16:04 id_rsa
-rw-r--r--   1 ababwaha  staff   747 Jun 24 16:04 id_rsa.pub
3. Verify the Key Pair: Run the following command, replacing the path with the actual path to your private key file (e.g., ssh-keygen -lf ~/.ssh/id_rsa):
ssh-keygen -lf /path/to/private_key
This command displays fingerprint information about your key pair.
4. Match the Fingerprints: Run ssh-keygen -lf against your public key file as well and compare the two fingerprints. If they match, congratulations! Your public and private keys are a verified pair.
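For example, using the id_ed25519 pair from the listing above (a quick sketch; substitute your own key names):

# Fingerprint derived from the private key
ssh-keygen -lf ~/.ssh/id_ed25519

# Fingerprint of the corresponding public key
ssh-keygen -lf ~/.ssh/id_ed25519.pub

If both commands print the same SHA256 fingerprint, the two files belong to the same key pair.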
Remember:
Security: Always keep your private key secure. Avoid storing it in publicly accessible locations.
Permissions: Ensure your private key file has appropriate permissions (usually 600) to prevent unauthorized access.
By following this method, you can easily verify the authenticity of your public and private SSH key pair, ensuring a secure connection to your server.
Maintaining a secure system involves monitoring file system activity, especially tracking file deletions, creations, and other modifications. This blog post explores how to leverage two powerful tools, auditd and process accounting with /usr/sbin/accton (provided by the psacct package), to gain a more comprehensive understanding of these events in Linux.
Introduction
Tracking file deletions in a Linux environment can be challenging. Traditional file monitoring tools often lack the capability to provide detailed information about who performed the deletion, when it occurred, and which process was responsible. This gap in visibility can be problematic for system administrators and security professionals who need to maintain a secure and compliant system.
To address this challenge, we can combine auditd, which provides detailed auditing capabilities, with process accounting (psacct), which tracks process activity. By integrating these tools, we can gain a more comprehensive view of file deletions and the processes that cause them.
What We’ll Cover:
1. Understanding auditd and Process Accounting
2. Installing and Configuring psacct
3. Enabling Audit Tracking and Process Accounting
4. Setting Up Audit Rules with auditctl
5. Simulating File Deletion
6. Analyzing Audit Logs with ausearch
7. Linking Process ID to Process Name using psacct
8. Understanding Limitations and Best Practices
Prerequisites:
1. Basic understanding of Linux commands
2. Root or sudo privileges
3. auditd package installed (installed by default on most distributions)
1. Understanding the Tools
auditd: The Linux audit daemon logs security-relevant events, including file system modifications. It allows you to track who is accessing the system, what they are doing, and the outcome of their actions.
Process Accounting: Linux keeps track of resource usage for processes. By analyzing process IDs (PIDs) obtained from auditd logs and utilizing tools like /usr/sbin/accton and dump-acct (provided by psacct), we can potentially identify the process responsible for file system activity. However, it’s important to understand that process accounting data itself doesn’t directly track file deletions.
2. Installing and Configuring psacct
First, install the psacct package using your distribution’s package manager if it’s not already present:
# For Debian/Ubuntu based systems
sudo apt install acct
# For Red Hat/CentOS based systems
sudo yum install psacct
3. Enabling Audit Tracking and Process Accounting
Ensure auditd is running by checking its service status:
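A minimal sketch of these two steps, assuming a systemd-based system and the default pacct file shipped with psacct/acct:

# Confirm the audit daemon is active
sudo systemctl status auditd

# Turn on process accounting; records are written to the pacct file
sudo /usr/sbin/accton /var/log/account/pacct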
This will start saving the process information in the log file /var/log/account/pacct.
4. Setting Up Audit Rules with auditctl
To ensure audit rules persist across reboots, add the rule to the audit configuration file. The location of this file may vary based on the distribution:
For Debian/Ubuntu, use /etc/audit/rules.d/audit.rules
For Red Hat/CentOS, use /etc/audit/audit.rules
Open the appropriate file in a text editor with root privileges and add the following line to monitor deletions within a sample directory:
-w /var/tmp -p wa -k sample_file_deletion
Explanation:
-w: Specifies the directory to watch (here, /var/tmp)
-p wa: Monitors both write (w) and attribute (a) changes (deletion modifies attributes)
-k sample_file_deletion: Assigns a unique key for easy identification in logs
After adding the rule, restart the auditd service to apply the changes:
sudo systemctl restart auditd
5. Simulating File Deletion
Create a test file in the sample directory and delete it:
touch /var/tmp/test_file
rm /var/tmp/test_file
6. Analyzing Audit Logs with ausearch
Use ausearch to search audit logs for the deletion event:
sudo ausearch -k sample_file_deletion
This command will display audit records related to the deletion you simulated. Look for entries indicating a delete operation within your sample directory and note down the process ID for the action.
As you can see in the above log, the user root (uid=0) deleted (exe="/usr/bin/rm") the file /var/tmp/test_file. Note down the ppid=2358 and pid=2606 as well. If the file was deleted by a script or cron job, you would need these to track down the script or cron entry.
7. Linking Process ID to Process Name using psacct
The audit logs will contain a process ID (PID) associated with the deletion. Utilize this PID to identify the potentially responsible process:
Process Information from dump-acct
After stopping process accounting recording with sudo /usr/sbin/accton off, analyze the captured data:
sudo dump-acct /var/log/account/pacct
This output shows various process details, including PIDs, command names, and timestamps. However, due to the nature of process accounting, it might not directly pinpoint the culprit. Processes might have terminated after the deletion, making it challenging to definitively identify the responsible one. You can grep for the ppid or pid we received from the audit log in the output of the dump-acct command.
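For example, to search the accounting records for the PID noted from the audit log (2606 in this walkthrough; a rough filter, assuming your acct file is in the v3 format that records PIDs):

sudo dump-acct /var/log/account/pacct | grep 2606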
In some cases, you can try lastcomm to potentially retrieve the command associated with the PID, even if the process has ended. However, its effectiveness depends on system configuration and might not always be reliable.
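As a sketch, lastcomm can be filtered by command name to list recent executions of rm recorded by process accounting, along with the user, terminal, and time:

sudo lastcomm rm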
Important Note
While combining auditd with process accounting can provide insights, it’s crucial to understand the limitations. Process accounting data offers a broader picture of resource usage but doesn’t directly correlate to specific file deletions. Additionally, processes might terminate quickly, making it difficult to trace back to a specific action.
Best Practices
1. Regular Monitoring: Regularly monitor and analyze audit logs to stay ahead of potential security breaches.
2. Comprehensive Logging: Ensure comprehensive logging by setting appropriate audit rules and keeping process accounting enabled.
3. Timely Responses: Respond quickly to any suspicious activity by investigating audit logs and process accounting data promptly.
By combining the capabilities of auditd and process accounting, you can enhance your ability to track and understand file system activity, thereby strengthening your system’s security posture.
SSH (Secure Shell) is a fundamental tool for securely connecting to remote servers. While traditional password authentication works, it can be vulnerable to brute-force attacks. SSH keys offer a more robust and convenient solution for secure access.
This blog post will guide you through the world of SSH keys, explaining their types, generation process, and how to manage them for secure remote connections and how to configure SSH key authentication.
Understanding SSH Keys: An Analogy
Imagine your home has two locks:
Combination Lock (Password): Anyone can access your home if they guess the correct combination.
High-Security Lock (SSH Key): Only someone with a specific physical key (your private key) can unlock the door.
Similarly, SSH keys work in pairs:
Private Key: A securely stored key on your local machine. You never share this.
Public Key: A unique identifier you share with the server you want to access. The server verifies the public key against your private key when you attempt to connect. This verification ensures only authorized users with the matching private key can access the server.
Types of SSH Keys
There are many types of SSH keys; here we discuss the two main ones:
RSA (Rivest–Shamir–Adleman): The traditional and widely supported option. It offers a good balance of security and performance.
Ed25519 (Edwards-curve Digital Signature Algorithm): A newer, faster, and potentially more secure option gaining popularity.
RSA vs. Ed25519 Keys:
Security: Both are considered secure, but Ed25519 might offer slightly better theoretical resistance against certain attacks.
Performance: Ed25519 is generally faster for both key generation and signing/verification compared to RSA. This can be beneficial for slower connections or resource-constrained devices.
Key Size: RSA keys are typically 2048 or 4096 bits, while Ed25519 keys are 256 bits. Despite the smaller size, Ed25519 offers comparable security due to the underlying mathematical concepts.
Compatibility: RSA is widely supported by all SSH servers. Ed25519 is gaining popularity but might not be universally supported on older servers.
Choosing Between RSA and Ed25519:
For most users, Ed25519 is a great choice due to its speed and security. However, if compatibility with older servers is a critical concern, RSA remains a reliable option.
Generating SSH Keys with ssh-keygen
Here’s how to generate your SSH key pair using the ssh-keygen command:
Open your terminal.
Run the following command, replacing <key_name> with your desired name for the key pair:
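A typical invocation matching the flags described below (an assumption; for an Ed25519 key, use -t ed25519 and omit -b):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/<key_name> -C "your_email@example.com"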
-b 4096: Specifies the key size (4096 bits is recommended for strong security).
-C "your_email@example.com": Adds a comment to your key (optional).
You’ll be prompted to enter a secure passphrase for your private key. Choose a strong passphrase and remember it well (it’s not mandatory, but highly recommended for added security).
The command will generate two files:
<key_name>.pub: The public key file (you’ll add this to the server).
<key_name>: The private key file (keep this secure on your local machine).
Important Note: Never share your private key with anyone!
Adding Your Public Key to the Server’s authorized_keys File
Access the remote server you want to connect to (through a different method if you haven’t set up key-based authentication yet).
Locate the ~/.ssh/authorized_keys file on the server (the ~ represents your home directory). You might need to create the .ssh directory if it doesn’t exist.
Open the authorized_keys file with a text editor.
Paste the contents of your public key file (.pub) into the authorized_keys file on the server.
Save the authorized_keys file on the server.
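Alternatively, if you already have password access to the server, the same result can usually be achieved in one step with ssh-copy-id (a sketch, assuming <key_name> matches the file you generated):

ssh-copy-id -i ~/.ssh/<key_name>.pub <username>@<server_address>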
Permissions:
Ensure the authorized_keys file has permissions set to 600 (read and write access only for the owner).
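For example (assuming the default ~/.ssh location on the server; the directory itself is usually kept at 700):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys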
Connecting with SSH Keys
Once you’ve added your public key to the server, you can connect using your private key:
ssh <username>@<server_address>
You’ll be prompted for your private key passphrase (if you set one) during the connection. That’s it! You’re now securely connected to the server without needing a password.
Benefits of SSH Keys:
Enhanced Security: More secure than password authentication, making brute-force attacks ineffective.
Convenience: No need to remember complex passwords for multiple servers.
Faster Logins: SSH key-based authentication is often faster than password authentication.
By implementing SSH keys, you can significantly improve the security and convenience of your remote server connections. Remember to choose strong passwords and keep your private key secure for optimal protection.
nopCommerce is an open-source e-commerce platform that allows users to create and manage their online stores. It is built on the ASP.NET Core framework and supports multiple database systems, including MySQL, Microsoft SQL Server, and PostgreSQL, as its backend. The platform is highly customizable and offers a wide range of features, including product management, order processing, shipping, payment integration, and customer management. nopCommerce is a popular choice for businesses of all sizes because of its flexibility, scalability, and user-friendly interface. In this tutorial, we will guide you through the process of installing nopCommerce on Ubuntu Linux with an Nginx reverse proxy and SSL.
Register Microsoft key and feed
To register the Microsoft key and feed, launch the terminal and execute these commands:
1. Download the packages-microsoft-prod.deb file by running the command:
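For example, on Ubuntu 22.04 the package that registers the key and feed can be fetched and installed like this (adjust the version segment of the URL to your Ubuntu release):

wget https://packages.microsoft.com/config/ubuntu/22.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb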
2. Install the .NET runtime: To determine the appropriate version of the .NET runtime to install, you should refer to the documentation provided by nopCommerce, which takes into account both the version of nopCommerce you are using and the Ubuntu OS version. Refer to the link below:
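As an illustration only (the exact runtime package depends on your nopCommerce release; aspnetcore-runtime-7.0 here is an assumption):

sudo apt-get update
sudo apt-get install -y aspnetcore-runtime-7.0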
3. Verify the installed .NET runtimes by running the command:
dotnet --list-runtimes
4. Install the libgdiplus library:
sudo apt-get install libgdiplus
libgdiplus is an open-source implementation of the GDI+ API that provides access to graphic-related functions in nopCommerce and is required for running nopCommerce on Linux.
Install MySQL Server
The latest nopCommerce supports recent MySQL and MariaDB versions. We will install MariaDB 10.6.
1. To install mariadb-server for nopCommerce, execute the following command in the terminal:
sudo apt-get install mariadb-server
2. After installing MariaDB Server, you need to set the root password. Execute the following command in the terminal to set the root password:
sudo /usr/bin/mysql_secure_installation
This will start a prompt to guide you through the process of securing your MySQL installation and setting the root password.
3. Create a database and User. We will use these details while installing nopCommerce. Replace the names of the database and the database user accordingly.
mysql -u root -p
create database nopCommerceDB;
grant all on nopCommerceDB.* to nopCommerceuser@localhost identified by 'P@ssW0rD';
Please replace the database name, username and password accordingly.
4. Reload privilege tables and exit the database.
flush privileges;
quit;
Install nginx
1. To install Nginx, run the following command:
sudo apt-get install nginx
2. After installing Nginx, start the service by running:
sudo systemctl start nginx
3. You can verify the status of the service using the following command:
sudo systemctl status nginx
4. Nginx reverse proxy configuration
To configure Nginx as a reverse proxy for your nopCommerce application, you’ll need to create an Nginx server block file at /etc/nginx/sites-available/nopcommerce.linuxwebhostingsupport.in. Open the file in a text editor and replace its contents with the following:
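A typical ASP.NET Core reverse-proxy server block looks like this (an assumption; nopCommerce is assumed to listen on port 5000, which matches the lsof check later in this post):

server {
    listen 80;
    server_name nopcommerce.linuxwebhostingsupport.in;

    location / {
        # Forward requests to the Kestrel server running nopCommerce
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}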
You need to replace nopcommerce.linuxwebhostingsupport.in with your domain name.
5. Enable the virtual host configuration file: Enable the server block by creating a symbolic link in the /etc/nginx/sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/nopcommerce.linuxwebhostingsupport.in /etc/nginx/sites-enabled/
6. Reload Nginx for the changes to take effect:
sudo systemctl reload nginx
Install nopCommerce
In this example, we’ll use /var/www/nopCommerce for storing the files.
1. Create a directory:
sudo mkdir /var/www/nopCommerce
2. Navigate to the directory where you want to store the nopCommerce files, then download and unpack nopCommerce:
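A sketch of a typical download-and-unpack sequence (the release file name is a placeholder; pick the exact version from the nopCommerce releases page, and the chown to www-data matches the service user configured below):

cd /var/www/nopCommerce
sudo wget https://github.com/nopSolutions/nopCommerce/releases/download/release-X.YY.Z/nopCommerce_X.YY.Z_NoSource_linux_x64.zip
sudo apt-get install -y unzip
sudo unzip nopCommerce_X.YY.Z_NoSource_linux_x64.zip
sudo chown -R www-data:www-data /var/www/nopCommerce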
1. Create a file named nopCommerce.service in the /etc/systemd/system directory with the following content:
[Unit]
Description=Example nopCommerce app running on Xubuntu
[Service]
WorkingDirectory=/var/www/nopCommerce
ExecStart=/usr/bin/dotnet /var/www/nopCommerce/Nop.Web.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=nopCommerce-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
[Install]
WantedBy=multi-user.target
2. Start the nopCommerce service by running:
sudo systemctl start nopCommerce.service
3. To check the status of the nopCommerce service, use the following command:
sudo systemctl status nopCommerce.service
Also, check if the service is running on port 5000
sudo lsof -i:5000
4. After that, restart the nginx server:
sudo systemctl restart nginx
Now that the prerequisites are installed and configured, you can proceed to install and set up your nopCommerce store.
Install nopCommerce
After completing the previous steps, you can access the website through the following URL: http://nopcommerce.linuxwebhostingsupport.in. Upon visiting the site for the first time, you will be automatically redirected to the installation page as shown below:
Provide the following information in the Store Information panel:
Admin user email: This is the email address of the first administrator for the website.
Admin user password: You must create a password for the administrator account.
Confirm password: Confirm the admin user password.
Country: Choose your country from the dropdown list. By selecting a country, you can configure your store with preinstalled language packs, preconfigured settings, shipping details, VAT settings, currencies, measures, and more.
Create sample data: Check this box if you want sample products to be created. It is recommended so that you can start working with your website before adding your own products. You can always delete or unpublish these items later.
In the Database Information panel, you will need to provide the following details:
Database: Select either Microsoft SQL Server, MySQL, or PostgreSQL. Since we are installing nopCommerce on Linux with MariaDB, choose MySQL.
Create database if it doesn’t exist: We recommend creating your database and database user ahead of time to ensure a successful installation. Simply create a database instance and add the database user to it. The installation process will create all the tables, stored procedures, and more. Uncheck this option since we can use the database and database user we created earlier.
Enter raw connection string (advanced): Select this option if you prefer to enter a Connection string instead of filling the connection fields. For now, leave this unchecked
Server name: This is the IP, URL, or server name of your database. Use “localhost”.
Database name: This is the name of the database used by nopCommerce. Use the database we created earlier.
Use integrated Windows authentication: Leave it unchecked
SQL Username: Enter your database user name we created earlier.
SQL Password: Use your database user password we used earlier.
Specify custom collation: Leave this advanced setting empty.
Click on the Install button to initiate the installation process. Once the installation is complete, the home page of your new site will be displayed. Access your site from the following URL: http://nopcommerce.linuxwebhostingsupport.in.
Note: You can reset a nopCommerce website to its default settings by deleting the appsettings.json file located in the App_Data folder.
Adding SSL and Securing nopCommerce
We will be using Let’s Encrypt to add a free SSL certificate. Let’s Encrypt is a free, automated, and open certificate authority that allows you to obtain SSL/TLS certificates for your website. Certbot is a command-line tool that automates the process of obtaining and renewing these certificates, making it easier to secure your website with HTTPS.
Here are the steps to install SSL with Certbot Nginx plugins:
1. Install Certbot: First, make sure you have Certbot installed on your server. You can do this by running the following command:
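On Ubuntu, the usual way to get Certbot together with its Nginx plugin is (a sketch; snap-based installs are another common option):

sudo apt install certbot python3-certbot-nginx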
2. Obtain SSL Certificate: Next, you need to obtain an SSL certificate for your domain. You can do this by running the following command:
sudo certbot --nginx -d yourdomain.com
Replace yourdomain.com with your own domain name. This command will automatically configure Nginx to use SSL, obtain a Let’s Encrypt SSL certificate and set an automatic redirect from http to https.
3. Verify SSL Certificate: Once the certificate is installed, you can verify it by visiting your website using the https protocol. If the SSL certificate is valid, you should see a padlock icon in your browser’s address bar.
4. Automatic Renewal: Certbot SSL certificates are valid for 90 days. To automatically renew your SSL certificate before it expires, you can set up a cron job to run the following command:
sudo certbot renew --quiet
This will check if your SSL certificate is due for renewal and automatically renew it if necessary.
5. nopCommerce also recommends setting the "UseProxy" setting to true in the appsettings.json file located in the App_Data folder if you are using SSL. So change this value too.
nopCommerce is a popular open-source e-commerce platform that offers users a flexible and scalable solution for creating and managing online stores. In this tutorial, we provided a step-by-step guide for installing and configuring nopCommerce on Ubuntu Linux with Nginx reverse proxy and SSL. We covered the installation of Microsoft key and feed, .NET Core Runtime, MySQL server, and Nginx reverse proxy. We also discussed how to configure Nginx as a reverse proxy for the nopCommerce application. By following this tutorial, you can set up a secure and reliable nopCommerce e-commerce store on Ubuntu Linux.
PHP GEOS is a PHP extension for geographic objects support, while RunCloud is a cloud server control panel designed for PHP applications. With PHP GEOS module installed on RunCloud, PHP applications can take advantage of geographic data and use the GEOS (Geometry Engine – Open Source) library to perform spatial operations.
In this blog post, I will show you how to install the PHP GEOS module on a RunCloud-managed server.
Steps
1. Install the required development tools
Before installing the PHP GEOS module, make sure that the required development tools are installed on your Ubuntu server. You can install them by running the following command:
apt-get install autoconf
2. Install the GEOS library
Next, download and install the latest GEOS (Geometry Engine – Open Source):
wget http://download.osgeo.org/geos/geos-3.9.4.tar.bz2
tar xvf geos-3.9.4.tar.bz2
cd geos-3.9.4/
./configure
make
make install
3. Install PHP GEOS module
Now, it’s time to install the PHP GEOS module. Follow the steps below to install it for PHP 8.2:
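The build below assumes the php-geos extension sources have already been downloaded and that you are working inside that source directory; one common way to get them (an assumption — check the project page for the current location) is:

git clone https://github.com/libgeos/php-geos.git
cd php-geos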
# make clean will always fail if you have never compiled it before
make clean
/RunCloud/Packages/php82rc/bin/phpize --clean
/RunCloud/Packages/php82rc/bin/phpize
./configure --with-php-config=/RunCloud/Packages/php82rc/bin/php-config
make && make install
This will install geos.so in the correct PHP extension directory.
4. Add the module to the PHP configuration
echo "extension=geos.so" > /etc/php82rc/conf.d/geos.ini
And finally, restart the PHP-FPM service:
systemctl restart php82rc-fpm
It’s important to note that the above steps are specific to PHP 8.2. If you wish to install the module for a different version, you will need to modify the commands accordingly. For instance, to build it for PHP 8.1 instead:
Replace /RunCloud/Packages/php82rc/bin/phpize with /RunCloud/Packages/php81rc/bin/phpize
Replace ./configure --with-php-config=/RunCloud/Packages/php82rc/bin/php-config with ./configure --with-php-config=/RunCloud/Packages/php81rc/bin/php-config
Replace /etc/php82rc/conf.d/geos.ini with /etc/php81rc/conf.d/geos.ini
Replace systemctl restart php82rc-fpm with systemctl restart php81rc-fpm
You can contact me if you need help with installing any custom modules on RunCloud control panel.
Recently, I helped one of my clients who was using an Amazon Lightsail WordPress instance provided by Bitnami. Bitnami is advantageous in that it provides a fully working stack, so you don’t have to worry about configuring LAMP or environments. You can find more information about the Bitnami Lightsail stack here.
However, the client’s stack was using the latest PHP 8.x version, while the WordPress site he runs uses several plugins that need PHP 7.4. I advised the client to consider upgrading the website to support the latest PHP versions. However, since that would require a lot of work, and he wanted the site to be up and running, he decided to downgrade PHP.
The issue with downgrading or upgrading PHP on a Bitnami stack is that it’s not possible. Bitnami recommends launching a new server instance with the required PHP, MySQL, or Apache version and migrating the data over. So, I decided to do it manually.
Here are the server details:
Debian 11
Currently installed PHP: 8.1.x
Upgrading or downgrading PHP versions on a Bitnami stack is essentially the same as on a normal Linux server. In short, you need to:
Ensure the PHP packages for the version you want are installed.
Update any configuration for that PHP version.
Update your web server configuration to point to the correct PHP version.
Point PHP CLI to the correct PHP version.
Restart your web server and php-fpm.
What we did was install the PHP version provided by the OS. Then, we updated php.ini to use the non-default MySQL socket location used by the Bitnami server. We created a php-fpm pool that runs as the “daemon” user. After that, we updated the Apache configuration to use the new PHP version.
1. Make sure packages for your target version of PHP are installed
To make sure that the correct packages are available on your system for the PHP version you want, first make sure your system is up to date by running these commands:
sudo apt update
sudo apt upgrade
If it prompts you to do anything with config files, usually, you should just go with the default option and leave the current config as-is. Then, install the packages you need. For example, you can use the following command to install common PHP packages and modules:
sudo apt install -y php7.4-cli php7.4-dev php7.4-pgsql php7.4-sqlite3 php7.4-gd php7.4-curl php7.4-memcached php7.4-imap php7.4-mysql php7.4-mbstring php7.4-xml php7.4-imagick php7.4-zip php7.4-bcmath php7.4-soap php7.4-intl php7.4-readline php7.4-common php7.4-pspell php7.4-tidy php7.4-xmlrpc php7.4-xsl php7.4-fpm
2. Make sure PHP configuration for your target version is updated
Find the MySQL socket path used by your Bitnami stack by running this command:
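One way to do this (a sketch; it assumes the Bitnami MySQL/MariaDB client is on your PATH and the server is running):

mysql -u root -p -e "SHOW VARIABLES LIKE 'socket';"

Then edit the php.ini for the new PHP version (for the OS packages this is typically /etc/php/7.4/fpm/php.ini and /etc/php/7.4/cli/php.ini — paths are an assumption) and locate the following block: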
[Pdo_mysql]
; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
pdo_mysql.default_socket=
Replace with
[Pdo_mysql]
; Default socket name for local MySQL connects. If empty, uses the built-in
; MySQL defaults.
pdo_mysql.default_socket= "/opt/bitnami/mariadb/tmp/mysql.sock"
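Next, create a dedicated PHP-FPM pool for the Bitnami stack. A minimal sketch, assuming the OS package layout (/etc/php/7.4/fpm/pool.d/www2.conf) and the daemon user that Bitnami's Apache runs as:

[www2]
user = daemon
group = daemon
listen = /opt/bitnami/php/var/run/www2.sock
listen.owner = daemon
listen.group = daemon
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3

Then restart the new PHP-FPM service:

sudo systemctl restart php7.4-fpm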
Feel free to adjust the PHP FPM settings to match your server specifications or needs. Check out this informative article for more tips on optimizing PHP FPM performance. Just keep in mind that Bitnami configures their stack with the listen.owner and listen.group settings set to daemon.
This pool will listen on unix socket “/opt/bitnami/php/var/run/www2.sock”.
Test the installed version by running the command below:
~# php -v
PHP 7.4.33 (cli) (built: Feb 22 2023 20:07:47) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
    with Zend OPcache v7.4.33, Copyright (c), by Zend Technologies
LAMP stack is a popular combination of open-source software that is used to run dynamic websites and web applications. The acronym LAMP stands for Linux (operating system), Apache (web server), MySQL (database management system), and PHP (scripting language).
Linux provides the foundation for the LAMP stack, serving as the operating system on which the other software components are installed. Apache is the web server that handles HTTP requests and serves web pages to users. MySQL is a powerful database management system that is used to store and manage website data. PHP is a popular scripting language used to create dynamic web content, such as interactive forms and web applications.
Together, these software components create a powerful platform for building and deploying web applications. The LAMP stack is highly customizable and widely used, making it an excellent choice for developers and system administrators alike.
Prerequisites
1. Ubuntu server: You will need an Ubuntu server to install the LAMP stack. You can use a virtual/cloud server or a physical server as per your requirement.
2. SSH access: You will need SSH access to your Ubuntu server to be able to install the LAMP stack. SSH (Secure Shell) is a secure network protocol that allows you to access and manage your server remotely.
3. Non-root user with sudo privileges: It is recommended that you use a non-root user with sudo privileges to install and configure the LAMP stack. This is because running as root can pose a security risk and may lead to unintended consequences if something goes wrong. You can also run the commands as root user.
4. Basic familiarity with Linux command line: A basic understanding of how to use the Linux command line interface (CLI) to run commands and navigate your Ubuntu server is recommended, not mandatory.
Installing a LAMP Stack on Ubuntu
In this section, the process of installing a LAMP Stack on Ubuntu 22.04 LTS is outlined. These instructions can be applied to Ubuntu 20.04 LTS as well.
A LAMP stack is a popular combination of open-source software used to run dynamic websites or web applications. LAMP stands for Linux (operating system), Apache (web server), MySQL (database management system), and PHP (scripting language). In this guide, we will walk you through the steps involved in installing and configuring a LAMP stack on an Ubuntu server.
Step 1: Update Your Ubuntu Server
Before we begin installing LAMP stack components, let’s update the server’s software packages by running the following command:
sudo apt update && sudo apt upgrade
Step 2: Install Apache
Apache is the most widely used web server software. To install it, run the following command:
sudo apt install apache2
Once the installation is complete, you can check the status of Apache by running the following command:
sudo systemctl status apache2
This will display Apache’s status as either active or inactive.
Step 3: Install MySQL
MySQL is a popular open-source database management system. To install it, run the following command:
sudo apt install mysql-server
Once the installation is complete, you can check the status of MySQL by running the following command:
sudo systemctl status mysql
This will display MySQL’s status as either active or inactive.
Step 4: Install PHP
PHP is a popular server-side scripting language used to create dynamic web content. To install it, run the following command:
sudo apt install php libapache2-mod-php php-mysql
There are several additional PHP modules recommended for a CMS like WordPress. You can install them by running the command below:
sudo apt-get install php-curl php-gd php-xml php-mbstring php-imagick php-zip php-xmlrpc
After installing these modules, you will need to restart your Apache server for the changes to take effect. You can do this by running the following command:
sudo systemctl restart apache2
Setting up firewall rules to allow access to Apache web server
UFW is the default firewall with Ubuntu systems, providing a simple command-line interface to configure iptables, the software-based firewall used in most Linux distributions. UFW provides various application profiles that can be utilized to manage traffic to and from different services. To view a list of all the available UFW application profiles, you can run the command:
sudo ufw app list
Output
Available applications:
  Apache
  Apache Full
  Apache Secure
  OpenSSH
These application profiles have different configurations for opening specific ports on the firewall. For instance:
Apache: Allows traffic on port 80, which is used for normal, unencrypted web traffic.
Apache Full: Allows traffic on both port 80 and port 443, which is used for TLS/SSL encrypted traffic.
Apache Secure: Allows traffic only on port 443 for TLS/SSL encrypted traffic.
To allow traffic on both port 80 and port 443(SSL), you can use the Apache Full profile by running the following command:
sudo ufw allow in "Apache Full"
You can verify that the change has been made by running the command:
sudo ufw status
Output
Status: active
To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Apache Full                ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Apache Full (v6)           ALLOW       Anywhere (v6)
To test if the ports are open and Apache web server is accessible, you can try visiting your server’s public IP address in a web browser using the URL http://your_server_ip. If successful, you should see the default Apache web page.
If you can view this page, your web server is correctly installed and accessible through your firewall.
Configuring the MySQL Database Server
Upon installation, MySQL is immediately available for use. However, in order to utilize it for web applications such as WordPress and improve their security, you need to create a dedicated database and database user. To complete the configuration process for MySQL, adhere to the following steps.
To configure MySQL and improve application security, follow these steps:
1. Log in to the MySQL shell as the root user:
sudo mysql -u root
2. Using the MySQL shell, you can create the wpdatabase database and generate a new user account for accessing the web application. Instead of using the placeholders “dbuser” and “password” in the CREATE USER query, you should provide a real username and password. Furthermore, you should grant complete permissions to the user. After each line, MySQL should respond with “Query OK.”
CREATE DATABASE wpdatabase;
CREATE USER 'dbuser' IDENTIFIED BY 'password';
GRANT ALL ON wpdatabase.* TO 'dbuser';
Exit the SQL shell: quit
3. Set a password for 'root'@'localhost':
sudo mysql
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
Exit the SQL shell: quit
Note: Replace “password” with a strong password.
4. Use the mysql_secure_installation tool to increase database security:
sudo mysql_secure_installation
When prompted to change the root password, leave it unchanged. Answer Y for the following questions:
Remove anonymous users?
Disallow root login remotely?
Remove test database and access to it?
Reload privilege tables now?
To log in to the MySQL shell as root after this change, use “sudo mysql -u root” and type “quit” to exit the SQL shell.
It’s worth noting that when connecting as the root user, there’s no need to enter a password, despite having defined one during the mysql_secure_installation script. This is due to the default authentication method for the administrative MySQL user being unix_socket rather than password. Although it may appear to be a security issue, it actually strengthens the security of the database server by only allowing system users with sudo privileges to log in as the root MySQL user from the console or through an application with the same privileges. As a result, you won’t be able to use the administrative database root user to connect from your PHP application. However, setting a password for the root MySQL account acts as a precautionary measure in case the default authentication method is changed from unix_socket to password.
Creating a Virtual Host for your Website
In order to host multiple domains from a single server, Apache web server provides the capability to create virtual hosts. These virtual hosts are beneficial as they allow you to encapsulate configuration details for each domain. In this tutorial, we will walk you through setting up a domain named “example.com”. However, it is important to keep in mind that you should replace “example.com” with your own domain name.
By default, Ubuntu 22.04’s Apache web server has a single virtual host that is enabled and configured to serve documents from the /var/www/html directory. While this is a workable solution for a single site, it becomes cumbersome when hosting multiple sites. Therefore, instead of modifying /var/www/html, we will create a directory structure within the /var/www directory specifically for the example.com site. In doing so, we will leave /var/www/html in place as the default directory to be served if a client request does not match any other sites.
1. First, create a new directory for the “example.com” website files:
sudo mkdir /var/www/example.com
2. Assign the ownership of the directory to the web server user (www-data):
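A sketch of this step, assuming the standard Ubuntu Apache layout (www-data is the default Apache user):

sudo chown -R www-data:www-data /var/www/example.com

The virtual host itself is normally defined in a file such as /etc/apache2/sites-available/example.com.conf; the file name and directives below are an illustrative sketch — adjust them to your own domain:

<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com
    ErrorLog ${APACHE_LOG_DIR}/example.com_error.log
    CustomLog ${APACHE_LOG_DIR}/example.com_access.log combined
</VirtualHost>

The site can then be enabled and Apache reloaded:

sudo a2ensite example.com.conf
sudo systemctl reload apache2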
10. Finally, configure your DNS records to point the “example.com” domain to your server’s IP address. Once the DNS records are updated, you can access the website by visiting “http://example.com” in your web browser.
Testing the LAMP Stack Installation on Your Ubuntu Server
To ensure that the LAMP stack configuration is fully functional, it’s necessary to conduct tests on the Apache, PHP, and MySQL components. Verifying the Apache operational status and virtual host configuration was done earlier. Now, it’s important to test the interaction between the web server and the PHP and MySQL components.
The easiest way to verify the configuration of the Ubuntu LAMP stack is by using a short test script. The PHP code does not need to be lengthy or complex; however, it must establish a connection to MySQL. The test script should be placed within the DocumentRoot directory.
To validate the database, use PHP to invoke the mysqli_connect function. Use the username and password created in the “Configuring the MySQL Database server” section. If the attempt is successful, the mysqli_connect function returns a Connection object. The script should indicate whether the connection succeeded or failed and provide more information about any errors.
To verify the installation, follow these steps:
1. Create a new file called “phptest.php” in the /var/www/example.com directory with the following content:
<html>
<head><title>PHP MySQL Test</title></head>
<body>
<?php
echo '<p>Welcome to the Site!</p>';

// When running this script on a local database, the servername must be 'localhost'.
// Use the name and password of the web user account created earlier. Do not use the root password.
$servername = "localhost";
$username = "dbuser";
$password = "password";

// Create MySQL connection
$conn = mysqli_connect($servername, $username, $password);

// If the conn variable is empty, the connection has failed.
// The output for the failure case includes the error message.
if (!$conn) {
    die('<p>Connection failed: </p>' . mysqli_connect_error());
}
echo '<p>Connected successfully</p>';
?>
</body>
</html>
2. To test the script, open a web browser and type the domain name followed by “/phptest.php” in the address bar. For example, if your domain name is “example.com”, you would enter “example.com/phptest.php” in the address bar. Make sure to substitute the actual name of the domain for “example.com” in the example provided.
http://example.com/phptest.php
3. Upon successful execution of the script, the web page should display without any errors. The page should contain the text “Welcome to the Site!” and “Connected successfully.” However, if you encounter the “Connection Failed” error message, review the SQL error information to troubleshoot the issue.
Bonus: Install phpMyAdmin
phpMyAdmin is a web-based application used to manage MySQL databases. To install it, run the following command:
sudo apt install phpmyadmin
During the installation process, you will be prompted to choose the web server that should be automatically configured to run phpMyAdmin. Select Apache and press Enter.
You will also be prompted to enter a password for phpMyAdmin’s administrative account. Enter a secure password and press Enter.
Once the installation is complete, you can access phpMyAdmin by navigating to http://your_server_IP_address/phpmyadmin in your web browser.
Congratulations! You have successfully installed and configured a LAMP stack on your Ubuntu server.
Summary
This guide walks through the process of setting up a LAMP Stack, a combination of the Linux operating system, Apache web server, MySQL RDBMS, and PHP programming language, to serve PHP websites and applications. The individual components are free and open source, designed to work together, and easy to install and use. Following the steps provided, you can install the LAMP Stack on Ubuntu 22.04 LTS using apt, configure the Apache web server, create a virtual host for the domain, and integrate the MySQL database server by creating a new account to represent the web user. Additional PHP packages are required for Apache, PHP, and the database to communicate. A short PHP test script can be used to test the new installation by connecting to the database.
How to remove or compress huge MySQL general and slow query log tables
If you have enabled MySQL general or slow query logging, it can create quite large logs, depending upon your MySQL usage/queries. So we may have to periodically clear them to save space.
Please note that MySQL can save logs to either table or files. This document assumes you are using table as log output.
Files: slow_log.CSV and general_log.CSV (The location and the name of the file can be different)
By default, logging is to a CSV file.
MySQL supports runtime clearing of these logs, so there is no need to restart the MySQL service. Never delete the CSV file directly; it can crash MySQL.
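Before proceeding, you can confirm that logging goes to tables (a quick check; log_output should include TABLE):

SHOW VARIABLES LIKE 'log_output';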
Slow query log
USE mysql;
SET GLOBAL slow_query_log='OFF';
DROP TABLE IF EXISTS slow_log2;
CREATE TABLE slow_log2 LIKE slow_log;
RENAME TABLE slow_log TO slow_log_backup, slow_log2 TO slow_log;
gzip /var/db/mysql/mysql/slow_log_backup.CSV
DROP TABLE slow_log_backup;
SET GLOBAL slow_query_log = 'ON';
General log
USE mysql;
SET GLOBAL general_log = 'OFF';
DROP TABLE IF EXISTS general_log2;
CREATE TABLE general_log2 LIKE general_log;
RENAME TABLE general_log TO general_log_backup, general_log2 TO general_log;
gzip /var/db/mysql/mysql/general_log_backup.CSV
DROP TABLE general_log_backup;
What we did is create a new log table, rename the current log table to a backup copy, compress the backup CSV file, and then remove the backup table.
Ajenti is an open source, web-based control panel that can be used for a large variety of server management tasks. Optionally, an add-on package called Ajenti V allows you to manage multiple websites from the same control panel
Step 1: First make sure that all your system packages are up-to-date
sudo apt-get update
sudo apt-get upgrade
Step 2: Installing Ajenti Control Panel.
wget -O- https://raw.github.com/ajenti/ajenti/1.x/scripts/install-ubuntu.sh | sudo sh
Ajenti will be available on HTTP port 8000 by default. Open your favourite browser, navigate to http://yourdomain.com:8000 or http://server-ip:8000, and enter the default username “admin” or “root” with the password “admin”.
Change the password immediately to something secure.
#Open the incoming TCP port 5666 on your firewall. You will have to do this using your firewall software, such as ufw.
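For example, with ufw (adapt to whichever firewall you use):

sudo ufw allow 5666/tcp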
#Update Configuration File
The file nrpe.cfg is where the following settings will be defined. It is located at:
/usr/local/nagios/etc/nrpe.cfg
allowed_hosts=
At this point NRPE will only listen to requests from itself (127.0.0.1). If you want your Nagios server to be able to connect, add its IP address after a comma (in this example it's 10.25.5.2):
allowed_hosts=127.0.0.1,10.25.5.2
The following commands make the configuration changes described above.
sudo sh -c "sed -i '/^allowed_hosts=/s/$/,10.25.5.2/' /usr/local/nagios/etc/nrpe.cfg" sudo sh -c "sed -i 's/^dont_blame_nrpe=.*/dont_blame_nrpe=1/g' /usr/local/nagios/etc/nrpe.cfg"
#Start Service / Daemon
Different Linux distributions have different methods of starting NRPE.
Ubuntu 13.x / 14.x
sudo start nrpe
Ubuntu 15.x / 16.x / 17.x
sudo systemctl start nrpe.service
Test NRPE
Now check that NRPE is listening and responding to requests.
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1
You should see output similar to the following:
NRPE v3.2.0
If you get the NRPE version number (as shown above), NRPE is installed and configured correctly.
You can also test from your Nagios host by executing the same command above, but instead of 127.0.0.1 you will need to replace that with the IP Address / DNS name of the machine with NRPE running.
Service / Daemon Commands
Different Linux distributions have different methods of starting / stopping / restarting / checking the status of NRPE.
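On systemd-based distributions (Ubuntu 15.x and later, for example), the service can be managed with:

sudo systemctl start nrpe.service
sudo systemctl stop nrpe.service
sudo systemctl restart nrpe.service
sudo systemctl status nrpe.service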
NRPE needs plugins to monitor different parameters.
#Install Latest Nagios plugins
cd /usr/local/src/
wget --no-check-certificate -O nagios-plugins.tar.gz https://github.com/nagios-plugins/nagios-plugins/archive/release-2.2.1.tar.gz
tar zxf nagios-plugins.tar.gz
cd nagios-plugins-release-2.2.1/
./tools/setup
./configure --enable-perl-modules
make
make install
#Test NRPE + Plugins
Using the check_load command to test NRPE:
/usr/local/nagios/libexec/check_nrpe -H 127.0.0.1 -c check_load
You should see output similar to the following:
OK - load average: 0.01, 0.13, 0.12|load1=0.010;15.000;30.000;0; load5=0.130;10.000;25.000;0; load15=0.120;5.000;20.000;0;
You can also test from your Nagios host by executing the same command above, but instead of 127.0.0.1 you will need to replace that with the IP Address / DNS name of the machine with NRPE running.