Using Nginx as a load balancer
Nginx, as you may know, is a fast and powerful web server. It can also act as a simple load balancer, making it quick and easy to distribute traffic for applications, web sites, and services across multiple servers.
Note
Not included in this guide is the process for setting up DNS. If you have multiple servers serving a web site or application and you would like to load balance them, you will need to be able to change your DNS settings with your hosting provider so that your domain name points to the load balancer.
Why use a load balancer
What if you have a service that must support hundreds, thousands, or millions of connections per day? If a single server cannot keep up with that load and provide service without downtime, you need a way to distribute your traffic so that your servers do not become overwhelmed by connections. Load balancing ensures that no single server receives more connections than it can reasonably handle. It also provides redundancy, and the power to remove a server from your application without impacting the level of service, potentially giving you zero downtime.
Installing Nginx
As in other posts I am using Ubuntu 14.04 server edition. We will be installing the nginx-core package.
sudo apt-get update
sudo apt-get install nginx-core
After this package is installed you should have the following files and directories under /etc/nginx/:
conf.d
fastcgi_params
koi-utf
koi-win
mime.types
naxsi_core.rules
naxsi-ui.conf.1.4.1
naxsi.rules
nginx.conf
proxy_params
scgi_params
sites-available
sites-enabled
ssl
uwsgi_params
win-utf
Configure load balancing
We will define our load balancer in the sites-available directory using the default round-robin method. Round robin means that the load balancer starts with the first server in our list, works its way down the list with each new connection, and circles back to the top when it reaches the bottom.
Change to the nginx directory.
cd /etc/nginx/sites-available
Create a file for our nginx load balancer for the site example.com.
vi www.example.com.conf
Add the following to this file.
upstream example {
    server server1.example.com;
    server server2.example.com;
    server server3.example.com;
}

server {
    location / {
        proxy_pass http://example;
    }
}
The upstream directive creates a server group; in this case our group is called example. Within the example group we have three servers that will be selected in round-robin order.
The server directive defines our virtual host, which acts as a proxy for www.example.com. This directive will capture requests for www.example.com and pass them off to the "example" group, which will then assign the request to one of the servers in the group.
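The upstream server directive also accepts per-server parameters. As a hedged sketch of where you might take this configuration, the variation below matches the www.example.com hostname explicitly and sends extra traffic to one machine with the weight parameter. The listen port, server_name, and weight value are illustrative assumptions, not part of the original setup:

```
# Hypothetical expanded version of the same configuration.
upstream example {
    # weight=2 sends roughly twice as many requests to server1
    # (assumed here for illustration; all three were equal above).
    server server1.example.com weight=2;
    server server2.example.com;
    server server3.example.com;
}

server {
    listen 80;                    # assumed: plain HTTP on the default port
    server_name www.example.com;  # only handle requests for this hostname

    location / {
        proxy_pass http://example;
    }
}
```

Explicitly setting server_name means this virtual host only answers for www.example.com, rather than relying on being nginx's default server.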
Activate load balancing
To make our configuration active we have two more steps.

- Create a link for www.example.com from the sites-available directory to the sites-enabled directory.

sudo ln -s /etc/nginx/sites-available/www.example.com.conf /etc/nginx/sites-enabled

- Restart the nginx server.

sudo service nginx restart
Further configuration
One of the cool things you can do with this configuration is pull servers out for maintenance without impacting service by using the "down" parameter.
upstream example {
    server server1.example.com;
    server server2.example.com;
    server server3.example.com down;
}

server {
    location / {
        proxy_pass http://example;
    }
}
In the above example, server3.example.com is pulled out of the round-robin loop, so you can update it, reboot it, shut it down, or replace it with a new server, all without your user base being affected.
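Beyond the down flag, the upstream module supports a few other server parameters worth knowing about. The sketch below marks one server as a passive spare with backup and tunes failure detection with max_fails and fail_timeout; the specific values are illustrative assumptions, not recommendations from this post:

```
upstream example {
    # Consider a server failed after 3 errors within 30 seconds,
    # then stop sending it traffic for that 30-second window.
    server server1.example.com max_fails=3 fail_timeout=30s;
    server server2.example.com max_fails=3 fail_timeout=30s;

    # Only receives traffic when the servers above are unavailable.
    server server3.example.com backup;
}
```

With a setup like this, nginx routes around a failed server automatically instead of waiting for you to mark it down by hand.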
To read more about load balancing, check out the nginx website: https://www.nginx.com/resources/admin-guide/load-balancer/
Luke has an RHCSA for Red Hat Enterprise Linux 7 and currently works as a Linux Systems Administrator in Ohio.
This post, re-published here with permission, was originally published on Luke’s site here.