
OpenLiteSpeed vs NGINX


OpenLiteSpeed has been getting a lot of attention lately. It is the open source edition of LiteSpeed Enterprise Web Server and shares the same code base, so you get the same enterprise-grade performance. Today we will compare the performance of OpenLiteSpeed vs NGINX. We will look at various scenarios:

 

  1. Static file performance.
  2. Simple PHP file performance.
  3. WordPress site performance, with and without LSCache (OpenLiteSpeed) and FastCGI Cache (NGINX).

 

We will run our tests on a DigitalOcean $5 Droplet with the following specs:

 

  1. 1 GB RAM.
  2. 25 GB SSD disk space.

 

For the OpenLiteSpeed environment we will install CyberPanel, and for the NGINX environment we will use a clean VestaCP installation. We will use h2load for benchmarking, running from a separate DigitalOcean $10 droplet. (All of these virtual machines reside in the Frankfurt region.)


Install h2load (nghttp2)

 

As mentioned above, we are going to use h2load to perform the benchmarks. On our CentOS 7.6 DigitalOcean server ($10 plan) we ran the following commands to install h2load:

 

yum install epel-release -y

yum install nghttp2

 

Then confirm it is installed:

 

[[email protected] nghttp2]# h2load --version
h2load nghttp2/1.31.1

 

This server is dedicated solely to running the benchmarks.


Make sure to Enable HTTP/2 on the NGINX Server

 

By default on VestaCP, NGINX serves HTTP/1.1. Open the vhost configuration file to turn on HTTP/2:

 

nano /home/admin/conf/web/yourdomain.com.nginx.ssl.conf

 

Replace yourdomain.com with the domain you have on VestaCP. Once in the file, change

 

server {
listen 192.168.100.1:443;

 

Into

 

server {
listen 192.168.100.1:443 ssl http2;

 

Save the file and restart NGINX with systemctl restart nginx. On CyberPanel you get HTTP/2 by default.
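To double-check the change without opening a browser, a small sketch like the following greps the vhost file for the http2 flag. The default path follows the VestaCP layout shown above and is an assumption; pass your own config path as the argument.

```shell
#!/bin/sh
# check_http2: report whether a vhost config enables HTTP/2 on its TLS listener.
# The default path is an assumption based on the VestaCP layout used above.
check_http2() {
    conf="${1:-/home/admin/conf/web/yourdomain.com.nginx.ssl.conf}"
    if grep -Eq 'listen[^;]*443[^;]*http2' "$conf" 2>/dev/null; then
        echo "http2 enabled"
    else
        echo "http2 NOT enabled"
    fi
}

# usage: check_http2 /home/admin/conf/web/yourdomain.com.nginx.ssl.conf
```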


Let’s test a small static file of 725 bytes

 

In this test, we will use the following command (with the target URL appended):

 

h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000

 

 

OpenLiteSpeed completed the requests in almost half the time.

 

Result for OpenLiteSpeed

 

[[email protected] nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $domainols
starting benchmark…
spawning thread #0: 1000 total client(s). 100000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 7.77s, 12864.34 req/s, 4.71MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 36.64MB (38416114) total, 1.21MB (1267114) headers (space savings 94.34%), 32.81MB (34400000) data
min max mean sd +/- sd
time for request: 4.47ms 468.95ms 66.97ms 16.56ms 94.64%
time for connect: 186.83ms 1.97s 864.64ms 371.78ms 88.00%
time to 1st byte: 615.39ms 2.03s 970.81ms 343.46ms 90.80%
req/s : 12.90 13.47 13.23 0.14 70.60%

 

 

Result for NGINX

 

[[email protected] nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $domainnginx
starting benchmark…
spawning thread #0: 1000 total client(s). 100000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 17.68s, 5657.34 req/s, 2.57MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 45.35MB (47549000) total, 10.78MB (11300000) headers (space savings 35.80%), 32.81MB (34400000) data
min max mean sd +/- sd
time for request: 69.67ms 1.46s 104.37ms 74.98ms 96.91%
time for connect: 6.19s 7.76s 7.13s 521.05ms 61.80%
time to 1st byte: 7.66s 7.95s 7.75s 71.72ms 62.60%
req/s : 5.66 5.71 5.69 0.01 66.90%

 

When you run the test against NGINX, make sure the application protocol is h2.
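If you save each run's output to a file (e.g. h2load ... > ols.txt), a small helper makes the comparison easier to script. This is a sketch; the file names are hypothetical.

```shell
#!/bin/sh
# reqs_per_sec: extract the requests/sec figure from saved h2load output.
# h2load prints a summary line such as:
#   finished in 7.77s, 12864.34 req/s, 4.71MB/s
reqs_per_sec() {
    awk '/^finished in/ { print $4 }' "$1"
}

# usage: reqs_per_sec ols.txt
```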


Static file of size 2MB

 

In this test, we will use the following command (with the target URL appended):

 

h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000

 

 

OpenLiteSpeed completed the requests in 8.4 seconds, while for the same number of requests NGINX took 74.81 seconds.

 

Result for OpenLiteSpeed

 

[[email protected] nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000 $domainols
starting benchmark…
spawning thread #0: 100 total client(s). 1000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 8.40s, 119.05 req/s, 231.84MB/s
requests: 1000 total, 1000 started, 1000 done, 1000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.90GB (2041926867) total, 37.08KB (37967) headers (space savings 83.56%), 1.90GB (2036628000) data
min max mean sd +/- sd
time for request: 7.53ms 1.94s 791.62ms 185.17ms 75.20%
time for connect: 101.46ms 112.75ms 107.14ms 2.21ms 71.00%
time to 1st byte: 115.26ms 136.43ms 125.44ms 5.40ms 61.00%
req/s : 1.19 1.40 1.25 0.04 68.00%

 

 

Result for NGINX

 

[[email protected] nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000 $domainnginx
starting benchmark…
spawning thread #0: 100 total client(s). 1000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 74.81s, 13.37 req/s, 25.99MB/s
requests: 1000 total, 1000 started, 1000 done, 1000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.90GB (2039006900) total, 112.30KB (115000) headers (space savings 35.75%), 1.90GB (2036628000) data
min max mean sd +/- sd
time for request: 66.81ms 44.02s 7.04s 1.82s 92.30%
time for connect: 545.07ms 920.01ms 646.84ms 92.66ms 86.00%
time to 1st byte: 635.69ms 8.21s 4.34s 2.17s 59.00%
req/s : 0.13 0.15 0.14 0.00 61.00%

 

With both large and small files, OpenLiteSpeed is the clear winner.


Testing a simple PHP Hello World Application

 

We will now create a simple PHP file with the following content:

 

<?php

echo "hello world";

?>
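To put that file in place, something like the following works. The docroot path is an assumption (CyberPanel and VestaCP lay out document roots differently), so adjust it for your panel.

```shell
#!/bin/sh
# write_hello: create the hello-world PHP file in the given docroot.
# The docroot path is panel-specific; the usage example below is a placeholder.
write_hello() {
    docroot="$1"
    cat > "$docroot/hello.php" <<'EOF'
<?php
echo "hello world";
EOF
}

# usage: write_hello /home/admin/web/yourdomain.com/public_html
```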

 

Additional Configuration for OpenLiteSpeed

 

PHP_LSAPI_CHILDREN=10

LSAPI_AVOID_FORK=1

 

Additional Configuration for NGINX

 

pm.start_servers = 10

 

Command Used

 

h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000

 

 

OpenLiteSpeed completed the requests in 23.76 seconds, while NGINX took 115.02 seconds for the same number of requests. OpenLiteSpeed wins with a PHP application thanks to its own implementation of PHP processes, LSPHP (PHP + LSAPI), which performs much better than the PHP-FPM setup used with NGINX.


LiteSpeed Cache vs FastCGI Caching with NGINX

 

We will now discuss caching in OpenLiteSpeed and NGINX.

 

 

With the OpenLiteSpeed web server you get a built-in cache module, while with NGINX you get the FastCGI caching module.

 

Why is the OpenLiteSpeed Cache Module Better?

 

  1. Tag-based caching: pages can stay cached indefinitely and are purged only when the cached copy is invalidated.
  2. Built right into the web server.
  3. Multiple cache plugins available for popular CMSs.
  4. Uses disk to store cached copies.

 

What is Wrong with NGINX FastCGI Caching?

 

  1. Not tag-based; it is time-based caching.
  2. This type of caching is not intelligent: it does not know when to invalidate a cached copy.
  3. You can use it for microcaching, but microcaching is generally not recommended.
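For context, a typical time-based FastCGI cache setup in NGINX looks roughly like this. These are illustrative values, not a tuned config; note that the only expiry mechanism is the fastcgi_cache_valid timer.

```nginx
# Illustrative sketch of time-based FastCGI caching (values are examples).
# http {} block: define the cache storage on disk.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WPCACHE:100m
                   max_size=1g inactive=60m;

# server {} / PHP location block: use the cache.
fastcgi_cache WPCACHE;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_valid 200 301 302 10m;   # entries expire purely by time
add_header X-FastCGI-Cache $upstream_cache_status;
```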

Benchmarking LiteSpeed vs NGINX for WordPress

 

We will now benchmark LiteSpeed vs NGINX for WordPress by installing WordPress on both stacks.

 

  1. OpenLiteSpeed will use the official LiteSpeed Cache plugin for WordPress.
  2. The NGINX setup will use the Cache Enabler caching plugin.

 

Command used

 

h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000

 

 

The first question after seeing the graph above will be: why did OpenLiteSpeed take only 1.44 seconds while NGINX (even with a cache plugin) took 91.69 seconds to complete the same number of requests? Let’s recall the image we shared above.

 

 

Here you can see that with OpenLiteSpeed, when there is a cache hit, the request never reaches the PHP engine, which is the costly operation that causes the bottleneck. The OpenLiteSpeed cache module sits inside the web server and all the cache logic is handled there, so there is no need to invoke the PHP engine at all.

 

With NGINX, however, this is not the case: the Cache Enabler plugin lives on the PHP side. So even on a cache hit, a PHP process must be invoked, and that is where the bottleneck comes from. Let’s see the detailed results now.
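One way to confirm which responses are actually served from LSCache is to inspect the response headers; LiteSpeed's cache typically reports hits via an X-LiteSpeed-Cache header. The sketch below checks a saved headers file (capture it first, e.g. curl -sI https://yourdomain.com > headers.txt; the domain and file name are placeholders).

```shell
#!/bin/sh
# lscache_status: report whether saved response headers indicate an LSCache hit.
# Assumes the server sends X-LiteSpeed-Cache: hit/miss, which is LiteSpeed's
# usual behavior when the cache module is active.
lscache_status() {
    if grep -qi '^x-litespeed-cache: *hit' "$1"; then
        echo "cache hit"
    else
        echo "cache miss or uncached"
    fi
}

# usage: lscache_status headers.txt
```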

 

OpenLiteSpeed

 

[[email protected] nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000 $domainols
starting benchmark…
spawning thread #0: 100 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 1.44s, 6925.11 req/s, 25.10MB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 36.25MB (38006300) total, 118.55KB (121400) headers (space savings 95.55%), 35.87MB (37610000) data
min max mean sd +/- sd
time for request: 9.31ms 20.81ms 13.39ms 1.13ms 89.23%
time for connect: 89.91ms 100.89ms 95.89ms 2.78ms 64.00%
time to 1st byte: 101.79ms 113.77ms 107.89ms 3.45ms 61.00%
req/s : 69.35 70.00 69.66 0.19 62.00%

 

NGINX

 

[[email protected] nghttp2]# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000 $domainnginx
starting benchmark…
spawning thread #0: 100 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 91.69s, 109.06 req/s, 417.50KB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 37.38MB (39198270) total, 1.44MB (1513370) headers (space savings 27.93%), 35.76MB (37500000) data
min max mean sd +/- sd
time for request: 355.76ms 1.23s 907.05ms 78.63ms 76.91%
time for connect: 357.00ms 678.18ms 506.17ms 153.42ms 54.00%
time to 1st byte: 712.81ms 1.60s 1.15s 264.29ms 57.00%
req/s : 1.09 1.10 1.10 0.00 57.00%


OpenLiteSpeed and .htaccess

 

OpenLiteSpeed also has support for the .htaccess file (a very popular feature of the Apache Web Server). Some people associate .htaccess with slow performance, and it is true that with Apache, performance suffers when .htaccess processing is enabled. With OpenLiteSpeed, however, the .htaccess file in a directory is only read the first time it is needed, so you get the convenience of .htaccess along with high performance.


Conclusion

 

We ran multiple types of tests:

 

  1. Small static file.
  2. Large static file.
  3. Simple Hello World PHP application.
  4. WordPress site.

 

In all cases OpenLiteSpeed was the clear winner against NGINX. So what are you waiting for? You can start right now with our managed VPS service and let us handle the speed for you. You get a 3-day free trial (no credit card required) with free migration.
