
[Solved] HTTP Error 503. The service is unavailable


Web servers such as LiteSpeed and Apache use HTTP status codes to tell browsers how a request was handled. For example, if the web server replies with HTTP code 200, everything is fine and the response was generated successfully. There are many other response codes, but today we will discuss HTTP Error 503. When it occurs, your browser typically shows: HTTP Error 503. The service is unavailable.

 

If you are not the site's administrator, there is not much you can do, since this error almost always indicates a problem on the server side. You can refresh the page, try again later, or better yet, notify the site's administrator. If you are the administrator, however, there is a lot you can do to figure out what is wrong; this error has various causes, each with its own fix. Different web servers may each show a slightly different error message, such as:

 

  1. 503 Error
  2. Http/1.1 Service Unavailable
  3. 503 Service Temporarily Unavailable
  4. 503 Service Unavailable
  5. HTTP Error 503
  6. Service Unavailable – DNS Failure
  7. Error 503 Service Unavailable

 

Usually, the main thing to look for is the status code itself: HTTP error code 503. Below we will walk through the various causes and the respective ways to fix them.

 

Server Side Issue

 

Before diving into the various causes, I would again like to stress that this is a server-side issue. All status codes in the 5xx range indicate errors on the server side, including the 503 Service Unavailable error. Keep in mind, though, that a 503 means the server was up and received your request, but chose to return a 503 because some problem prevented it from processing the request the way it should have.
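A quick way to confirm what the server is actually returning is to check the status code from the command line. A minimal sketch (yourdomain.com is a placeholder for the affected site):

```shell
# -s silent, -o discard the body, -w print only the numeric status code;
# --max-time stops the check from hanging, "|| true" keeps the script going
# if the request itself fails (curl then reports code 000).
code=$(curl -s --max-time 5 -o /dev/null -w '%{http_code}' "https://yourdomain.com/" 2>/dev/null || true)

case "$code" in
  2??) echo "OK ($code)" ;;
  5??) echo "Server-side error ($code) -- investigate the server, not the browser" ;;
  *)   echo "Got status $code" ;;
esac
```

Any 5xx result means the fixes below apply; a 2xx means the problem was transient or client-side.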

 


Refresh the page

 

Sometimes you will get the following error:

 

503 Service Unavailable – The server is temporarily busy, try again later!

 

 

 

It may really be a temporary condition, as the message says, so wait a little and refresh the page. This can happen on high-traffic sites when there are not enough resources available to handle the request. On the user end, be careful if you see this error on payment-related pages, and make sure you won’t get charged twice.


Using our Apache as Backend Feature

 

If you are our customer and use our Apache as Backend feature, there is a chance that Apache is down. In this case the frontend server, OpenLiteSpeed, will return a 503 error because it failed to connect to Apache. Check whether Apache is running:

 

systemctl status httpd

 

If the Apache service is not running, you can start it with:

 

systemctl start httpd

 

And see if your issue is resolved.
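The two steps can be combined into an idempotent check that only starts Apache when it is actually down. A sketch, guarded so it is a no-op on machines without systemd (the unit name httpd is the CentOS convention; on Debian/Ubuntu it is apache2):

```shell
if command -v systemctl >/dev/null 2>&1; then
  SVC=httpd   # assumption: CentOS unit name; use apache2 on Debian/Ubuntu
  # Start the service only if it is not already active.
  systemctl is-active --quiet "$SVC" || systemctl start "$SVC" || true
  systemctl status "$SVC" --no-pager || true
fi
```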

 

PHP FPM is down

 

If you get something like the following (assuming you are using our Apache as Backend feature, or your stack includes PHP-FPM):

 

Service Unavailable

The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

 

This means your PHP-FPM service is down. You can start it with:

 

systemctl start php-fpm

 

If you are our customer, multiple PHP-FPM services are available for the different PHP versions, and their respective start commands are:

 

systemctl start php54-php-fpm

systemctl start php55-php-fpm

systemctl start php56-php-fpm

systemctl start php70-php-fpm

systemctl start php71-php-fpm

systemctl start php72-php-fpm

systemctl start php73-php-fpm
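If you are unsure which of these units exist on your server, you can list them first (a sketch, guarded so it is harmless on machines without systemd; the php*fpm* pattern matches the Remi-style unit names above, but yours may differ):

```shell
if command -v systemctl >/dev/null 2>&1; then
  # Shows every installed PHP-FPM unit file and whether it is enabled.
  systemctl list-unit-files 'php*fpm*' --no-pager || true
fi
```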

 


Check various log files

 

If your issue is still not resolved, start checking the various log files. For CyberPanel with LiteSpeed (OpenLiteSpeed), the log files to check are:

 

/usr/local/lsws/logs/error.log

/usr/local/lsws/logs/stderr.log

 

In the case of Apache

 

/etc/httpd/logs/error_log

 

You can search log files efficiently using the grep command, such as:

 

grep error log_file_path

grep notice log_file_path

 

These commands make sure you only see the most relevant lines; otherwise the log may be dominated by info-level messages that are not relevant here.
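For instance, against a small sample log the filter keeps only the error and notice lines. The sample file below is hypothetical, built just to demonstrate the pattern; point grep at your real log path on the server:

```shell
# Build a throwaway sample log to demonstrate the filter.
log=$(mktemp)
cat > "$log" <<'EOF'
2019-05-01 10:00:00 [INFO] request served in 12ms
2019-05-01 10:00:01 [ERROR] connection to lsphp failed
2019-05-01 10:00:02 [NOTICE] graceful restart requested
EOF

# -i ignores case, -E enables the error|notice alternation;
# both error and notice lines are printed, the INFO line is not.
grep -iE 'error|notice' "$log"
rm -f "$log"
```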


503 Errors due to PHP Malfunction

 

Most of the time, 503 errors come from a problem in your PHP code, or from PHP-FPM/LSPHP failing to produce a response, so the server starts returning 503. It is always recommended to first create a phpinfo page and check whether the PHP side is working. If the phpinfo page loads, you can move on to debug the cause further; otherwise, make sure LSPHP is working and the external application is created properly, or that PHP-FPM is up and running.
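Creating the phpinfo page is a one-liner. The document root below is an assumption; on CyberPanel it is usually /home/yourdomain.com/public_html:

```shell
# DOCROOT defaults to a temp dir here so the sketch is safe to run anywhere;
# set it to your real document root on the server.
DOCROOT="${DOCROOT:-$(mktemp -d)}"

printf '<?php phpinfo(); ?>\n' > "$DOCROOT/info.php"
echo "Created $DOCROOT/info.php -- open it in a browser, and delete it when done."
```

Remember to remove info.php afterwards, since it exposes server details to anyone who finds it.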

 

Disable PHP OPcode Caching (xCache, APC or eAccelerator)

 

On a default install of CyberPanel (OpenLiteSpeed or LiteSpeed Enterprise), OPcode caching is enabled. Different sorts of opcode caching can sometimes have compatibility issues with LSPHP (PHP), so if you are getting HTTP Error 503 it is worth disabling opcode caching. On CyberPanel you first need to find out which PHP version your site uses; run the following command:

 

grep php /usr/local/lsws/conf/vhosts/yourdomain.com/vhost.conf

 

Suppose, for example, that the PHP version of your site is 7.2. Go to the configuration directory of PHP 7.2 and disable opcode caching:

 

cd /usr/local/lsws/lsphp72/etc/php.d

mv 10-opcache.ini 10-opcache.ini.bak

systemctl restart lsws

or

/usr/local/lsws/bin/lswsctrl restart

 

This disables OPcode caching. If you are not on CyberPanel, find the php.ini file for your PHP installation and disable OPcode caching there; the php.ini location is usually shown on the phpinfo page. If your issue is still not resolved, move on to the next step.
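If you prefer editing php.ini directly, a sed one-liner can flip the opcache switch. The sketch below works on a throwaway demo copy so it is safe to run as-is; on a real server, point it at the "Loaded Configuration File" reported by your phpinfo page:

```shell
# Demo copy of a php.ini fragment (replace $ini with your real php.ini path).
ini=$(mktemp)
printf 'opcache.enable=1\nmemory_limit=128M\n' > "$ini"

# Flip opcache.enable from 1 to 0, then show the result.
sed -i 's/^opcache.enable=1/opcache.enable=0/' "$ini"
grep '^opcache.enable' "$ini"   # prints: opcache.enable=0
rm -f "$ini"
```

Restart the web server (or PHP-FPM) afterwards for the change to take effect.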

No space left on /tmp

 

Some web applications use the /tmp directory to store temporary files (session data, etc.). If /tmp is full you can get HTTP Error 503. Use the following commands to inspect the space available to /tmp:

 

df -h

df -i
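Beyond the overall free-space and inode totals, it helps to see what is actually eating /tmp. For example:

```shell
# Free space and inodes on the filesystem that holds /tmp.
df -h /tmp
df -i /tmp

# Ten largest entries inside /tmp, biggest first
# (stderr is silenced for entries we cannot read).
du -sh /tmp/* 2>/dev/null | sort -rh | head -n 10
```

If either df column shows 100% use, clear out stale session files or enlarge the filesystem.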

 

 

PHP memory_limit reached

 

memory_limit is a PHP directive that specifies how much memory a PHP script is allowed to allocate. Sometimes your application exceeds this limit and fails to produce a response for the web server, resulting in HTTP Error 503. As explained above, first find out the PHP version used by your site; then you can increase memory_limit directly from the CyberPanel interface.

 

Log in to your CyberPanel dashboard, then from the left sidebar go to PHP -> Edit PHP Configs.

 

 

  1. Select the PHP version whose memory_limit value you want to change.
  2. Set the new value of the directive.

 

Finally, scroll down and click Save Changes.

 

max_execution_time reached

 

max_execution_time works much like memory_limit: if your PHP script is cut off before producing a response, you will get the same error. You can follow the same procedure as described above to adjust max_execution_time as well. Make sure to set it high enough that your script can finish executing.
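If you manage php.ini by hand instead of going through the CyberPanel interface, both directives live there; the values below are only illustrative:

```ini
; Illustrative values -- size them to what your application actually needs.
memory_limit = 256M
max_execution_time = 120
```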


Conclusion

 

We have now covered the common causes of “HTTP Error 503. The service is unavailable”. We recommend moving your sites to CyberPanel, because CyberPanel uses LiteSpeed servers, which means you can host more sites on a low-cost VPS; and with the LSCache WordPress plugin you can avoid many such errors, including HTTP Error 503, because cached pages never invoke the PHP engine, leaving more resources for other applications. So on a low-cost server you can run multiple sites at high speed and avoid such errors. You can learn why you would use CyberPanel and OpenLiteSpeed in our OpenLiteSpeed vs NGINX comparison post.

 

You can also get our managed VPS and let us do this for you. We offer a 3-day trial (no credit card required), plus free migration and fully managed support.

OpenLiteSpeed vs NGINX

OpenLiteSpeed is getting a lot of attention lately. It is the open source version of the LiteSpeed Enterprise web server and shares the same code base, so you get the same enterprise-grade performance. Today we will compare the performance of OpenLiteSpeed vs NGINX. We will look into various scenarios, such as:

 

  1. Static file performance of openlitespeed vs nginx.
  2. Simple PHP file performance.
  3. WordPress site performance with and without LSCache and FastCGI Cache for NGINX.

 

We will run our tests on a DigitalOcean $5 droplet with the following specs:

 

  1. 1GB Ram.
  2. 25GB SSD Disk Space.

 

For the OpenLiteSpeed environment we will install CyberPanel, and for the NGINX environment we will use a clean VestaCP installation. We will run h2load for benchmarking from a separate DigitalOcean $10 droplet. (All of these virtual machines reside in the Frankfurt region.)


Install h2load (nghttp2)

 

As mentioned above, we are going to use h2load for the benchmarks. On our CentOS 7.6 DigitalOcean server ($10 plan) we ran the following commands to install h2load:

 

yum install epel-release -y

yum install nghttp2

 

Then make sure it is installed

 

# h2load --version
h2load nghttp2/1.31.1

 

This server is dedicated solely to running the benchmarks.


Make sure to Enable HTTP2 on NGINX Server

 

By default on VestaCP you get HTTP/1.1 with NGINX. Open the vhost configuration file to turn on HTTP/2:

 

nano /home/admin/conf/web/yourdomain.com.nginx.ssl.conf

 

Replace yourdomain.com with the domain you have on VestaCP. Once in the file, change:

 

server {
listen 192.168.100.1:443;

 

Into

 

server {
listen 192.168.100.1:443 ssl http2;

 

Save the file and restart NGINX using systemctl restart nginx. On CyberPanel you get HTTP/2 by default.
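You can confirm which protocol was actually negotiated with curl (yourdomain.com is a placeholder; expect 2 once HTTP/2 is active):

```shell
# -sI: silent HEAD request; -w prints only the HTTP version curl negotiated.
# --max-time and "|| true" keep the sketch from hanging or aborting
# if the host is unreachable from where you run it.
curl -sI --max-time 5 -o /dev/null -w '%{http_version}\n' "https://yourdomain.com/" || true
```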


Let’s test a small static file of 725 Bytes

 

In this test, we will be using the following command

 

h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000

 

 

OpenLiteSpeed completed the requests in almost half the time.

 

Result for OpenLiteSpeed

 

# h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $domainols
starting benchmark…
spawning thread #0: 1000 total client(s). 100000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 7.77s, 12864.34 req/s, 4.71MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 36.64MB (38416114) total, 1.21MB (1267114) headers (space savings 94.34%), 32.81MB (34400000) data
min max mean sd +/- sd
time for request: 4.47ms 468.95ms 66.97ms 16.56ms 94.64%
time for connect: 186.83ms 1.97s 864.64ms 371.78ms 88.00%
time to 1st byte: 615.39ms 2.03s 970.81ms 343.46ms 90.80%
req/s : 12.90 13.47 13.23 0.14 70.60%

 

 

Result for NGINX

 

# h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000 $domainnginx
starting benchmark…
spawning thread #0: 1000 total client(s). 100000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 17.68s, 5657.34 req/s, 2.57MB/s
requests: 100000 total, 100000 started, 100000 done, 100000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 100000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 45.35MB (47549000) total, 10.78MB (11300000) headers (space savings 35.80%), 32.81MB (34400000) data
min max mean sd +/- sd
time for request: 69.67ms 1.46s 104.37ms 74.98ms 96.91%
time for connect: 6.19s 7.76s 7.13s 521.05ms 61.80%
time to 1st byte: 7.66s 7.95s 7.75s 71.72ms 62.60%
req/s : 5.66 5.71 5.69 0.01 66.90%

 

Make sure that when you run the test against NGINX, the application protocol is h2.


Static file of size 2MB

 

In this test, we will be using the following command

 

h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000

 

 

OpenLiteSpeed completed the requests in 8.4 seconds, while for the same number of requests NGINX took 74.81 seconds.

 

Result for OpenLiteSpeed

 

# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000 $domainols
starting benchmark…
spawning thread #0: 100 total client(s). 1000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 8.40s, 119.05 req/s, 231.84MB/s
requests: 1000 total, 1000 started, 1000 done, 1000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.90GB (2041926867) total, 37.08KB (37967) headers (space savings 83.56%), 1.90GB (2036628000) data
min max mean sd +/- sd
time for request: 7.53ms 1.94s 791.62ms 185.17ms 75.20%
time for connect: 101.46ms 112.75ms 107.14ms 2.21ms 71.00%
time to 1st byte: 115.26ms 136.43ms 125.44ms 5.40ms 61.00%
req/s : 1.19 1.40 1.25 0.04 68.00%

 

 

Result for NGINX

 

# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000 $domainnginx
starting benchmark…
spawning thread #0: 100 total client(s). 1000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 74.81s, 13.37 req/s, 25.99MB/s
requests: 1000 total, 1000 started, 1000 done, 1000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.90GB (2039006900) total, 112.30KB (115000) headers (space savings 35.75%), 1.90GB (2036628000) data
min max mean sd +/- sd
time for request: 66.81ms 44.02s 7.04s 1.82s 92.30%
time for connect: 545.07ms 920.01ms 646.84ms 92.66ms 86.00%
time to 1st byte: 635.69ms 8.21s 4.34s 2.17s 59.00%
req/s : 0.13 0.15 0.14 0.00 61.00%

 

For both large and small files, OpenLiteSpeed is the clear winner.


Testing a simple PHP Hello World Application

 

We will now create a simple php file with the following content:

 

<?php

echo "hello world";

?>

 

Additional Configuration for OpenLiteSpeed

 

PHP_LSAPI_CHILDREN=10

LSAPI_AVOID_FORK=1

 

Additional Configuration for NGINX

 

pm.start_servers = 10

 

Command Used

 

h2load -t1 -H 'Accept-Encoding: gzip' -c1000 -n100000

 

 

OpenLiteSpeed completed the requests in 23.76 seconds, while NGINX took 115.02 seconds for the same number of requests. OpenLiteSpeed wins with PHP applications thanks to its own implementation of PHP processing, LSPHP (PHP + LSAPI), which performs much better than the PHP-FPM used with NGINX.


LiteSpeed Cache vs FastCGI Caching with NGINX

 

We will now discuss caching in OpenLiteSpeed and NGINX.

 

 

With OpenLiteSpeed web server you get a built-in cache module and with NGINX you get a FastCGI Caching Module.

 

Why is the OpenLiteSpeed Cache Module better?

 

  1. Tag-based caching: pages can stay cached indefinitely until the cached copy is invalidated.
  2. Built right into the web server.
  3. Multiple cache plugins available for popular CMSs.
  4. Uses disk to store cached copies.

 

What is wrong with NGINX FastCGI Caching?

 

  1. Not tag-based; it is essentially time-based caching.
  2. This type of caching is not intelligent and does not know when to invalidate a cached copy.
  3. You can use it for MicroCaching, but it is explained here why MicroCaching is not recommended.

Benchmarking LiteSpeed vs NGINX for WordPress

 

We will now benchmark LiteSpeed vs NGINX for WordPress by installing WordPress on both stacks.

 

  1. OpenLiteSpeed will use LiteSpeed Official WordPress Caching Plugin.
  2. On the NGINX setup we will use the Cache Enabler caching plugin.

 

Command used

 

h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n1000

 

 

The first question after seeing the graph above will be: why did OpenLiteSpeed take only 1.4 seconds while NGINX (even with a cache plugin) took 91.6 seconds to complete the same number of requests? Let's recall the image we shared above.

 

 

Here you can see that with OpenLiteSpeed, when there is a cache hit the request never reaches the PHP engine, which is the costly operation that causes the bottleneck. The OpenLiteSpeed cache module sits inside the web server and all the logic is handled there, so there is no need to invoke the PHP engine at all.

 

With NGINX, however, this is not the case: the Cache Enabler plugin lives on the PHP side, so even on a cache hit PHP must be forked and executed, and that is where the bottleneck comes from. Let's look at the detailed results now.

 

OpenLiteSpeed

 

# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000 $domainols
starting benchmark…
spawning thread #0: 100 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 1.44s, 6925.11 req/s, 25.10MB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 36.25MB (38006300) total, 118.55KB (121400) headers (space savings 95.55%), 35.87MB (37610000) data
min max mean sd +/- sd
time for request: 9.31ms 20.81ms 13.39ms 1.13ms 89.23%
time for connect: 89.91ms 100.89ms 95.89ms 2.78ms 64.00%
time to 1st byte: 101.79ms 113.77ms 107.89ms 3.45ms 61.00%
req/s : 69.35 70.00 69.66 0.19 62.00%

 

NGINX

 

# h2load -t1 -H 'Accept-Encoding: gzip' -c100 -n10000 $domainnginx
starting benchmark…
spawning thread #0: 100 total client(s). 10000 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-RSA-AES256-GCM-SHA384
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 10% done
progress: 20% done
progress: 30% done
progress: 40% done
progress: 50% done
progress: 60% done
progress: 70% done
progress: 80% done
progress: 90% done
progress: 100% done

finished in 91.69s, 109.06 req/s, 417.50KB/s
requests: 10000 total, 10000 started, 10000 done, 10000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 10000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 37.38MB (39198270) total, 1.44MB (1513370) headers (space savings 27.93%), 35.76MB (37500000) data
min max mean sd +/- sd
time for request: 355.76ms 1.23s 907.05ms 78.63ms 76.91%
time for connect: 357.00ms 678.18ms 506.17ms 153.42ms 54.00%
time to 1st byte: 712.81ms 1.60s 1.15s 264.29ms 57.00%
req/s : 1.09 1.10 1.10 0.00 57.00%


OpenLiteSpeed and .htaccess

 

OpenLiteSpeed also supports the .htaccess file (a very popular feature of the Apache web server). Some people associate it with slow performance, and indeed with Apache your performance suffers if you enable .htaccess processing. OpenLiteSpeed, however, only reads the .htaccess file in a directory the first time it is needed, which means you get the benefit of .htaccess along with high performance.


Conclusion

 

We ran multiple types of tests:

 

  1. Small static file.
  2. Large static file.
  3. Simple Hello World PHP application.
  4. WordPress site.

 

In all cases OpenLiteSpeed was the clear winner over NGINX. So what are you waiting for? Start right now with our managed VPS service and let us handle the speed for you. You get a 3-day free trial (no credit card required) with free migration.