
URL Monitoring With Nagios

Capabilities

Nagios provides complete URL monitoring of HTTP and HTTPS servers and protocols as well as full URL transaction monitoring.

Benefits

Implementing effective URL monitoring with Nagios offers the following benefits:
* Increased server, service, and application availability
* Fast detection of network outages and protocol failures
* Monitoring of the user experience when accessing URLs
* Web server performance monitoring
* Web transaction monitoring
* URL monitoring

URL monitoring

By using the check_http Nagios command, we can monitor a specific URL rather than just checking whether the Apache service is up. This is helpful for identifying cases where the website has been hacked and the URL injected with malicious code, or where an Apache or PHP error makes the page throw an error instead; a normal Apache service check would still return successful results in those cases.
We can check for a specific keyword string on the webpage. If that string is not present, an error will be returned.

Here is a real example:

define service{
    use                            urlmonitoring-service
    host_name                      server.linuxwebhostingsupport.in
    service_description            url_check
    check_command                  check_http!-H linuxwebhostingsupport.in -t 30 -R "Cpanel and WHM" -f follow
}

The above will check for the keyword "Cpanel and WHM" on the page "linuxwebhostingsupport.in". If the keyword is missing or the page is not responding, Nagios will return an error.
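You can test the check manually before adding the service definition; the plugin path below is a common default and may differ on your installation:

/usr/local/nagios/libexec/check_http -H linuxwebhostingsupport.in -t 30 -R "Cpanel and WHM" -f follow

A healthy site returns an "HTTP OK" status line, while a missing keyword or unreachable page produces a CRITICAL result.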

URL monitoring with SSL

You can refer to the example below if the web page has SSL/TLS enabled.

define service{
    use                            urlmonitoring-service
    host_name                      server.linuxwebhostingsupport.in
    service_description            url_check
    check_command                  check_http!-H linuxwebhostingsupport.in -t 30 -R "Cpanel and WHM" -f follow --ssl
}

Here we added the "--ssl" option to the check command.

URL monitoring on an htpasswd-protected page

The normal method will not work here, since we first have to pass the htpasswd authentication to see the page. You can use the following example for such pages.

define service{
    use                            urlmonitoring-service
    host_name                      server.linuxwebhostingsupport.in
    service_description            url_check_protected
    check_command                  check_http!-H linuxwebhostingsupport.in -a user:password -t 30 -R "Cpanel and WHM" -f follow --ssl 
}

Replace the username and password appropriately.
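To confirm the credentials themselves work before wiring them into Nagios, a quick curl test helps (user:password here is a placeholder):

curl -sI -u user:password https://linuxwebhostingsupport.in/ | head -n 1

A "200 OK" status means the authentication succeeded; "401 Unauthorized" means the credentials are wrong.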

FTP connectivity problem: No route to host


If you are getting the following error during an FTP directory listing, follow the solution provided below:

———-
ftp> ls
227 Entering Passive Mode (108,61,169,245,167,161).
ftp: connect: No route to host
———-

Solution

Edit /etc/sysconfig/iptables-config and add this line:

IPTABLES_MODULES="ip_conntrack_ftp"

Save it and restart iptables.
This is needed because passive mode uses non-standard ports to communicate; the ip_conntrack_ftp module keeps track of FTP connections so iptables can allow the related data connections when necessary.

However, you will need to do this every time you reboot your Red Hat server. As a more permanent solution, you can load this module automatically after each reboot by creating an executable shell script in the /etc/sysconfig/modules/ directory. Create the file /etc/sysconfig/modules/iptables.modules with the following content:

#!/bin/sh
exec /sbin/modprobe ip_conntrack_ftp >/dev/null 2>&1

Once you save this file you also need to make it executable:
# chmod +x /etc/sysconfig/modules/iptables.modules
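After a reboot (or an iptables restart) you can confirm the module is loaded; on newer kernels it may show up as nf_conntrack_ftp instead of ip_conntrack_ftp:

# lsmod | grep conntrack_ftp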

Another solution is to specify the passive ports to be used in the FTP server configuration, and then open those specific ports in the firewall, as in the sketch below.
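For example, with vsftpd you could pin the passive port range and allow it through iptables; the 30000-30100 range is arbitrary, so adjust it (and the config file path) to your environment:

# /etc/vsftpd/vsftpd.conf
pasv_enable=YES
pasv_min_port=30000
pasv_max_port=30100

# allow the passive range through iptables and save the rules
iptables -A INPUT -p tcp --dport 30000:30100 -j ACCEPT
service iptables save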

Plesk update error/autoinstaller error

If you are getting the error below while upgrading Plesk or installing microupdates:

—-
ERROR: Unable to download the MD5 sum for the new Parallels Installer binary.
Not all packages were installed.
Please, contact product technical support.
—-

Solution
—–
Remove the cache from /var/cache/parallels_installer/ and start the autoinstaller again.
/usr/local/psa/admin/sbin/autoinstaller --select-product-id plesk --select-release-current --reinstall-patch --install-component base
—–

CSR generation for UCC certificates

Unified Communications (UC) Certificates (also called SAN Certificates) use Subject Alternative Names to secure multiple sites (e.g. fully qualified domain names) with one certificate. Four SANs are included in the base price of the UC Certificate, but you can purchase additional names at any time during the lifetime of the certificate.

With a UC Certificate, you can secure:

www.linuxwebhostingsupport.in
www.example2.com
www.example3.net
mail.example.net
dev.example2.com

The CSR generation process is a little different for a UCC certificate. We will have to create an OpenSSL configuration file first, and then generate the private key and CSR from it.

Step 1: Create a custom OpenSSL Conf file.

The following is an example conf file that can be used to create a SAN/UCC cert. Save it as multissl.conf.

———–
[ req ]
default_bits = 2048
default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
req_extensions = req_ext # The extensions to add to the certificate request

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Iowa
localityName = Locality Name (eg, city)
localityName_default = Iowa City
organizationName = Organization Name (eg, company)
organizationName_default = The University of Iowa
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Domain Control Validated
commonName = Common Name (eg, YOUR SSL domain name)
commonName_max = 64

[ req_ext ]
subjectAltName = @alt_names

[alt_names]
DNS.1 = www.linuxwebhostingsupport.in
DNS.2 = www.example1.com
DNS.3 = example2.com
———–

Notes:

The alt_names section (DNS.1, DNS.2, ...) lists all of the other domain names you wish to secure with this cert. Additional entries can be added, such as DNS.4, etc.
The following examples assume that you named the above config file multissl.conf (if it is named differently, you must adjust the filename in the examples below accordingly).
Step 2: Generate the Private key and CSR with OpenSSL

Execute the following OpenSSL command

$ openssl req -nodes -newkey rsa:2048 -keyout serverfqdn.key -out multidomain.csr -config multissl.conf

* Replace "serverfqdn" with the fully qualified domain name of the server (e.g. sample.server.uiowa.edu). Note: it may also be helpful to add a year to the filename.

You will then see output and be prompted for configuration as seen in the following example. Enter your details accordingly.

——————————————
$ openssl req -nodes -newkey rsa:2048 -keyout serverfqdn.key -out multidomain.csr -config multissl.conf
Generating a 2048 bit RSA private key
………………………………….+++
…………………………………………………………+++
writing new private key to ‘serverfqdn.key’
—–
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter ‘.’, the field will be left blank.
—–
Country Name (2 letter code) [US]:US
State or Province Name (full name) [Iowa]:Iowa
Locality Name (eg, city) [Iowa City]:Iowa City
Organization Name (eg, company) [The University of Iowa]:My Company name
Organizational Unit Name (eg, section) [Domain Control Validated]:IT SUPPORT
Common Name (eg, YOUR SSL domain name) []:www.linuxwebhostingsupport.in
——————————————

Note: Replace www.linuxwebhostingsupport.in with the “primary” domain name you want secured with this certificate (likely, but not necessarily the hostname of the machine).

At this point you should have the new key file and CSR. Save the key file in a secure place; it will be needed when the new certificate is installed. The CSR can now be submitted to request the SSL cert.
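Before submitting it, you can confirm that all of the SAN entries made it into the CSR:

$ openssl req -in multidomain.csr -noout -text | grep -A 1 "Subject Alternative Name"

The output should list every DNS.x name from the config file.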

Strong TLS/SSL Security on your server

[SSL Labs report for www.linuxwebhostingsupport.in]

This is a simple guide for setting up a strong TLS/SSL configuration on your server.

When you configure a web server's TLS settings, you primarily have to take care of three things:

1. disable SSL 2.0 (FUBAR) and SSL 3.0 (POODLE),
2. disable TLS compression (CRIME),
3. disable weak ciphers (DES, RC4) and prefer modern ciphers (AES), modes (GCM), and protocols (TLS 1.2).

 

Your Server’s Certificate

Let's start with your digital certificate, which is at the core of HTTPS. The certificate enables clients to verify the identity of servers through a chain of trust, from your server's certificate through intermediate certificates up to a root certificate trusted by users' browsers. Your server certificate should be 2048 bits in length. A 4096-bit certificate is more secure, but it requires more computation and is therefore slower than a 2048-bit cert.
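As a quick check (using the same placeholder path as the Apache example below), you can confirm the key length of an issued certificate; the exact wording of the output varies slightly between OpenSSL versions:

$ openssl x509 -in /etc/ssl/certs/your_cert -noout -text | grep "Public-Key"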

 

Basic HTTPS Setup

Here are basic SSL configurations, first for Apache:

<VirtualHost *:443>
...
SSLEngine on
SSLCertificateFile /etc/ssl/certs/your_cert
SSLCertificateChainFile /etc/ssl/certs/chained_certs
SSLCertificateKeyFile /etc/ssl/certs/your_private_key
</VirtualHost>

And then for Nginx:

server {
...
ssl on;
ssl_certificate /etc/ssl/certs/your_cert_with_chain;
ssl_certificate_key /etc/ssl/certs/your_private_key;
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 10m;
}

In Nginx, the ssl_certificate parameter is confusing. It expects your certificate plus any necessary intermediate certificates, concatenated together.

Make sure all of these files are at least mode 0444, except your private key, which should be 0400.
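With the example paths above, that would be:

chmod 444 /etc/ssl/certs/your_cert /etc/ssl/certs/chained_certs
chmod 400 /etc/ssl/certs/your_private_key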

 

Software versions

On the server side you should update your OpenSSL to 1.0.1c+ so you can support TLS 1.2, GCM, and ECDHE as soon as possible. Fortunately that’s already the case in Ubuntu 12.04 and later.

On the client side the browser vendors are starting to catch up. As of now, Chrome 30, Internet Explorer 11 on Windows 8, Safari 7 on OS X 10.9, and Firefox 26 all support TLS 1.2.
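To confirm which OpenSSL version your server is running:

$ openssl version

Anything older than 1.0.1 cannot offer TLS 1.2 and should be upgraded.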

 

Cipher Suite Configuration

The recommended cipher suites for Apache are as follows:

SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLHonorCipherOrder on

The recommended cipher suite for backwards compatibility (IE6/WinXP):

SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4
SSLHonorCipherOrder on

 

And here’s the same configuration for Nginx:

ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;

The recommended cipher suite for backwards compatibility (IE6/WinXP):

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;

If your version of OpenSSL is old, unavailable ciphers will be discarded automatically. Always use the full ciphersuite above and let OpenSSL pick the ones it supports.

The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
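To see exactly which cipher suites a given string expands to, and in what priority order, you can feed it to the openssl ciphers command:

$ openssl ciphers -v 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'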

 

Prioritization logic

* ECDHE+AESGCM ciphers are selected first. These are TLS 1.2 ciphers and not yet widely supported, but no known attack currently targets them.
* PFS ciphersuites are preferred, with ECDHE first, then DHE.
* AES128 is preferred to AES256 because it provides good security, is fast, and seems to be more resistant to timing attacks.
* In the backward-compatible ciphersuite, AES is preferred to 3DES. BEAST attacks on AES are mitigated in TLS 1.1 and above and are difficult to achieve in TLS 1.0. In the non-backward-compatible ciphersuite, 3DES is not present.
* RC4 is removed entirely; 3DES is kept only for backward compatibility.

 

Protocol Support: SSL or no SSL

To prevent downgrade attacks and the POODLE attack, we will also disable the old SSL protocols.

For Apache:

SSLProtocol all -SSLv2 -SSLv3

For Nginx:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

This disables all versions of SSL, enabling only TLS 1.0 and up. All versions of Chrome and Firefox support at least TLS 1.0.

How to Disable SSLv3 for Apache, Nginx, LiteSpeed, and cPanel

The POODLE bug is a vulnerability discovered by Google in the SSLv3 protocol. The fix is easy: disable support for SSLv3.

See the Google security blog for more info on the bug: http://googleonlinesecurity.blogspot.nl/2014/10/this-poodle-bites-exploiting-ssl-30.html.

 

Fix POODLE

To fix the bug, disable SSLv3 and use a secure cipherlist. SSL v2 is also insecure, so we need to disable it too.

So edit the Apache config file and add the following:

SSLProtocol All -SSLv2 -SSLv3

All is a shortcut for +SSLv2 +SSLv3 +TLSv1 or, when using OpenSSL 1.0.1 and later, +SSLv2 +SSLv3 +TLSv1 +TLSv1.1 +TLSv1.2. The above line enables everything except SSLv2 and SSLv3.

And then restart the Apache service

service httpd restart

 

cPanel/WHM

If you have a cPanel server, you should not edit Apache configurations directly; instead, you can do this from WHM.

 


 

1. Visit your server’s WHM Panel ( https://<yourserversip>:2087 )
2. Navigate to the Apache Configuration Panel of WHM.
3. Scroll down to the ‘Include Editor’ Section of the Apache Configuration.
4. Click ‘Pre Main Include’, which will jump to the corresponding section. Via the drop-down selector, choose ‘All Versions’.
5. An empty dialogue box will appear allowing you to input the needed configuration updates. In this box, copy and paste the following:

SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On

 

For Nginx

If you're running an Nginx web server that currently uses SSLv3, you need to edit the Nginx configuration (nginx.conf) and add the following line to your server block:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
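Before restarting, it is worth validating the updated configuration:

nginx -t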

Then restart the nginx service

service nginx restart

For LiteSpeed:

Update to LiteSpeed version 4.2.18.

For more information about Litespeed & POODLE: http://www.litespeedtech.com/support/forum/threads/lsws-4-2-18-released-%E2%80%94-addresses-poodle-sslv3-vulnerability.9948/

Note about Mail Servers:

The POODLE attack requires the client to retry connecting several times in order to downgrade to SSLv3, and typically only browsers will do this. Mail Clients are not as susceptible to POODLE. However, users who want better security should switch to Dovecot until we upgrade Courier to a newer version.

For cpsrvd:

1. Go to WHM => Service Configuration => cPanel Web Services Configuration
2. Make sure that the “TLS/SSL Protocols” field contains “SSLv23:!SSLv2:!SSLv3”.
3. Select the “Save” button at the bottom.

For cpdavd:

1. Go to WHM => Service Configuration => cPanel Web Disk Configuration
2. Make sure that the “TLS/SSL Protocols” field contains “SSLv23:!SSLv2:!SSLv3”.
3. Select the “Save” button at the bottom.

For Dovecot:

1. Go to WHM => Service Configuration => Mailserver Configuration.
2. SSL Protocols should contain “!SSLv2 !SSLv3”. If it does not, replace the text in this field.
3. Go to the bottom of the page, and select the Save button to restart the service.

For Courier:

Courier has released a new version to mitigate this as of 10/22; until we have an opportunity to review, test, and publish the new version of Courier, please switch to Dovecot for enhanced security.

For Exim:

1. Go to Home » Service Configuration » Exim Configuration Manager
2. Under Advanced Editor, look for ‘openssl_options’.
3. Make sure the field contains “+no_sslv2 +no_sslv3”.
4. Go to the bottom of the page, and select the Save button to restart the service.

 

For Lighttpd:

Lighttpd releases before 1.4.28 allow you to disable SSLv2 only.

If you are running at least 1.4.29, put the following lines in your configuration file:

ssl.use-sslv2 = "disable"
ssl.use-sslv3 = "disable"

How to verify that SSLv3 is disabled

You can use a website like http://poodlebleed.com/ for a web-based check.

 

Manual check

To make sure services on your server are not accepting SSLv3 connections, you can run the openssl client on your server against the SSL ports. This command is run as follows:

openssl s_client -connect linuxwebhostingsupport.in:443 -ssl3

If it fails (which is what you want), you should see something like this at the top of the output:

3078821612:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1257:SSL alert number 40
3078821612:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
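The same test can be run against other SSL-enabled services on the server; the ports below are typical mail service ports and are only examples:

for port in 465 993 995; do
    echo "== port $port =="
    echo | openssl s_client -connect linuxwebhostingsupport.in:$port -ssl3 2>&1 | grep -i "handshake failure"
done

A handshake failure on each port means SSLv3 is disabled for that service as well.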

Shellshock: How to check if you are vulnerable

A new vulnerability has been found that potentially affects most versions of the Linux and Unix operating systems, in addition to Mac OS X. Known as the "Bash Bug" or "Shellshock," the GNU Bash remote code execution vulnerability could allow an attacker to gain control over a targeted computer if exploited successfully. And because Bash is everywhere on Linux and Unix-like machines and interacts with all parts of the operating system, everyone anticipates that it will have a lot of repercussions.

How does Shellshock work?

Shellshock exploits a flaw in how Bash parses environment variables; Bash allows functions to be stored in environment variables, but the issue is Bash will execute any code placed after the function in the environment variable value.

For example, an environment variable setting of VAR=() { ignored; }; /bin/id will execute /bin/id when the environment is imported into the bash process.

Am I vulnerable?

You can check if you’re vulnerable by running the following lines in your default shell.

env X="() { :;} ; echo vulnerable" `which bash` -c "echo Check completed"

If you see the word "vulnerable" echoed back, then you're at risk.

How Shellshock is Impacting the Web

The most likely route of attack is through web servers utilizing CGI (Common Gateway Interface), the widely used system for generating dynamic web content. An attacker can potentially use CGI to send a malformed environment variable to a vulnerable web server: under the CGI specification, the attacker is able to inject environment variables into every Bash process spawned by the web server. This happens directly if the CGI script is written in Bash, or indirectly via system calls inside other types of CGI scripts, since the environment propagates to the sub-shell. The vulnerability is triggered automatically when the shell process is instantiated. Furthermore, if specific headers are used as attack points, the payload may not appear in the web server logs, letting a compromise occur with virtually no trace of the intrusion.

Example:

CGI stores the HTTP headers in environment variables. Let's say example.com is running a CGI application written as a Bash script.

We can modify the HTTP headers so that they exploit the Shellshock vulnerability on the target server and execute our code.

curl -k http://example.com/cgi-bin/test -H "User-Agent: () { :;}; echo Hacked > /tmp/Hacked.txt"

Here, curl sends a request to the target website with a User-Agent header containing the exploit code. This code will create a file "Hacked.txt" in the "/tmp" directory of the server.

What can I do to protect myself?

Major operating system vendors, including Red Hat, CentOS, and others, have already released initial patches for this bug.

Debian—https://www.debian.org/security/2014/dsa-3032

Ubuntu—http://www.ubuntu.com/usn/usn-2362-1/

Red Hat—https://access.redhat.com/articles/1200223*

CentOS—http://centosnow.blogspot.com/2014/09/critical-bash-updates-for-centos-5.html

Novell/SUSE— http://support.novell.com/security/cve/CVE-2014-6271.html

If a patch is unavailable for a specific distribution of Linux or Unix, it is recommended that users switch to an alternative shell until one becomes available.

Need expert assistance?

I can help you to patch your server against this bug and make sure you and your customers are secure. Mail me at therealfreelancer[at]gmail[dot]com.

Limit /throttle rsync transfer speed

If you use the rsync utility to keep your backups synchronized between your servers or with a local machine, you might want to prevent the script from using too much bandwidth. rsync generates a lot of network I/O, and the point of limiting bandwidth is to make sure your backup scripts don't clog up the network connection.

Naturally, limiting the amount of bandwidth your backups are using is going to make them happen more slowly, but if you can deal with that, this is the way to do it.

 

Normal rsync command

rsync -avz -e 'ssh' /path/to/source user@remotehost:/path/to/dest/

What you'll want to do is use the --bwlimit parameter with a KB/second value, like this:

rsync --bwlimit=<kb/second> -avz -e 'ssh' /path/to/source user@remotehost:/path/to/dest/

So if you wanted to limit transfer to around 10000KB/s (9.7MB/s), enter:

rsync --bwlimit=10000 -avz -e 'ssh' /path/to/source user@remotehost:/path/to/dest/

 

Example:

rsync --bwlimit=10000 -avz -e 'ssh' /backup/ root@192.168.0.51:/backup/
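For instance, a nightly cron entry (the schedule and log path here are only illustrative) could apply the same limit:

# /etc/cron.d/backup-rsync
0 2 * * * root rsync --bwlimit=10000 -avz -e ssh /backup/ root@192.168.0.51:/backup/ >> /var/log/backup-rsync.log 2>&1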

Blocking spoofed outgoing mails from your cPanel server

Spoofing is where the mail headers are manipulated to appear as if the mail comes from some other domain. When emails are forged to be from an address on your domain and bounce, they are sent back to your server, attempting to deliver themselves to that mailbox. Generally, you will never see these emails; however, if the spoofer happens to set the "From:" header to a real mailbox, the bounce will come back and you will receive the email. There is a high chance that a very large number of spam messages have already been sent from the server. This can cause high load on the server and sometimes leads to your mail server IP address being blacklisted.

There are two ways in which a spoofed mail can be created.

1. Exploiting vulnerable form-to-mail scripts to send out spoofed mails through the local mail agent.
2. Using stolen mail account login details to send spoofed mails through SMTP authentication.
Let's look at how spoofing can be prevented on the Exim mail servers commonly used on cPanel/WHM servers.

I. Blocking all unauthenticated spoofed outbound emails
1. Log in to WHM >> EXIM CONFIGURATION MANAGER >> ADVANCED EDITOR

2. Add the following entry at the top using "Add additional configuration setting":

domainlist remote_domains = lsearch;/etc/remotedomains

3. Add the following code under acl_not_smtp:

deny
    condition = ${if ! match_domain{${domain:${address:$h_From:}}}{+local_domains : +remote_domains}}
    message = Sorry, you don't have permission to send email from this \
              server with a header that states the email is from \
              ${lc:${domain:${address:$h_from:}}}.
accept

Here, the ACL checks whether the domain part of the From address is present in either /etc/localdomains or /etc/remotedomains. If it is not, the server rejects the email.

 

II. Blocking all authenticated spoofed outbound emails
1. WHM >> EXIM CONFIGURATION MANAGER >> ADVANCED EXIM EDITOR

2. Search for acl_smtp_data and add the following lines under it:

deny
    authenticated = *
    condition = ${if or {{ !eqi{$authenticated_id}{$sender_address} }\
                         { !eqi{$authenticated_id}{${address:$header_From:}} }\
                        }}
    message = Your FROM address ( $sender_address , $header_From ) must match \
              your authenticated email user ( $authenticated_id ). \
              Treating this as a spoofed email.
Here, for all authenticated users, the rule checks whether the authenticated user ID matches the From address. If it matches, the email is allowed; otherwise, the message "Your FROM address must match your authenticated email user. Treating this as a spoofed email." is returned.

 

PS: If acl_smtp_data is set to something else (like acl_smtp_data = check_message), locate check_message and add the above lines just under it.
IMPORTANT points to keep in mind
a. POP before SMTP won't work with this setting. You will have to ask your customers to use the option "My Server Requires Authentication" in the SMTP settings of their email client.
b. Usernames in the format user+domain.com will not work. They have to use user@domain.com instead.

Also, your customers cannot set the From field to anything other than the original authenticated user; website contact forms that do this will be affected.

 

Setting SPF records

Another way to prevent spoofing is to use SPF records. Publish valid SPF records for your domain so that only the intended people or servers can send mail on behalf of your domain name.
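A simple example TXT record (the policy is illustrative; adjust it to match the hosts that actually send your mail) allows only the domain's A and MX hosts and soft-fails everything else:

linuxwebhostingsupport.in.  IN  TXT  "v=spf1 +a +mx ~all"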

Apache: Multiple SSL websites on a single IP address


Update: This is a new update from a cPanel technician:
"There is nothing to enable. As long as you are using cPanel & WHM version 11.38 on CentOS, RHEL, or CloudLinux version 6 or newer, SNI works out of the box."

One of the frustrating limitations in supporting secure websites has been the inability to share IP addresses among SSL websites.
When website administrators and IT personnel are restricted to a single SSL certificate per socket (a combination of IP address and port), it can cost a lot of money. We can, in fact, share an IP address among multiple secure websites. Solving this limitation required an extension to the Transport Layer Security (TLS) protocol that adds the hostname the client is connecting to when a handshake is initiated with the web server. The name of the extension is Server Name Indication (SNI). SNI is supported in Apache v2.2.12 and OpenSSL v0.9.8j or later.

With SNI, you can have many virtual hosts sharing the same IP address and port, and each one can have its own unique certificate.

Prerequisites to use SNI

* Use OpenSSL 0.9.8f or later.
* Build OpenSSL with the TLS extensions option enabled (option enable-tlsext; OpenSSL 0.9.8k and later has this enabled by default).
* Apache must have been built with that OpenSSL (./configure --with-ssl=/path/to/your/openssl). In that case, mod_ssl will automatically detect the availability of the TLS extensions and support SNI.
* Apache must use that OpenSSL at run-time, which might require setting LD_LIBRARY_PATH or equivalent to point to that OpenSSL, maybe in bin/envvars. (You'll get unresolved symbol errors at Apache startup if Apache was built with SNI but isn't finding the right OpenSSL libraries at run-time.)

Setting up SNI with Apache

The configuration is pretty simple and straightforward, though I recommend making a backup of your existing httpd.conf file before proceeding.

# Ensure that Apache listens on port 443
Listen 443

# Listen for virtual host requests on all IP addresses
NameVirtualHost *:443

# Go ahead and accept connections for these vhosts
# from non-SNI clients
SSLStrictSNIVHostCheck off

# Because this virtual host is defined first, it will
# be used as the default if the hostname is not received
# in the SSL handshake, e.g. if the browser doesn't support
# SNI.
<VirtualHost *:443>
    DocumentRoot /www/example2
    ServerName www.linuxwebhostingsupport.in

    # Other directives here
    SSLEngine On
    SSLCertificateFile /path/to/linuxwebhostingsupport.in.crt
    SSLCertificateKeyFile /path/to/linuxwebhostingsupport.in.key
    SSLCertificateChainFile /path/to/CA.crt
</VirtualHost>

<VirtualHost *:443>
    DocumentRoot /www/example2
    ServerName www.abdulwahabmp.co.in

    # Other directives here
    SSLEngine On
    SSLCertificateFile /path/to/abdulwahabmp.co.in.crt
    SSLCertificateKeyFile /path/to/abdulwahabmp.co.in.key
    SSLCertificateChainFile /path/to/CA.crt
</VirtualHost>

 

That's it! Just restart the Apache service, then check your websites over https; they should be working.
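You can verify that SNI serves the correct certificate for each name with openssl s_client; replace 203.0.113.10 with your server's actual IP address:

openssl s_client -connect 203.0.113.10:443 -servername www.linuxwebhostingsupport.in </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect 203.0.113.10:443 -servername www.abdulwahabmp.co.in </dev/null 2>/dev/null | openssl x509 -noout -subject

Each command should print the subject of the certificate configured for that ServerName.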

Plesk supports SNI from version 10.2.x onwards.

SNI will work on the following operating systems out of the box:

OpenSuSE Linux 11.3 or later
Ubuntu Linux 10.04 or later
Debian Linux 6.0 or later
Red Hat Enterprise Linux 6.0 or later
CentOS Linux 6.0 or later

Supported Desktop Browsers
Internet Explorer 7 and later
Firefox 2 and later
Opera 8 with TLS 1.1 enabled
Google Chrome:
Supported on Windows XP on Chrome 6 and later
Supported on Vista and later by default
OS X 10.5.7 in Chrome Version 5.0.342.0 and later
Chromium 11.0.696.28 and later
Safari 2.1 and later (requires OS X 10.5.6 and later or Windows Vista and later).
Note: No versions of Internet Explorer on Windows XP support SNI

 
