Load Balancing in Docker Compose with NGINX

Learn how to configure an NGINX load balancer inside Docker Compose for your containers


— UPDATE 11.09.2022 —

The article below describes how to use NGINX with a custom-compiled module that has some warts. I would recommend using Caddy instead, and I have written another article about how to do that here.

As I continue to build the new bare-metal infrastructure for exdividend.app, I have been looking for ways to balance the load between hosts and their services. I run Alpine Linux on the hosts, and I have yet to decide between containers and a process manager such as supervisord for running the services.

I decided to prototype both, and here I will walk through how to load balance between workloads in Docker.

NGINX Active Load Balancing

NGINX is a modern web server. The open source version is one of the top web servers in production today. Setting it up is fairly straightforward, and there are great websites to help you generate a configuration. I would recommend checking out the one from DigitalOcean.

One thing to be aware of is that NGINX puts the "active" load balancing feature behind a paid NGINX Plus license. The good news is that there is an open source upstream check module. This module enables NGINX to actively monitor the upstream destinations and take them out of circulation when they are having problems.

We first have to compile this module into NGINX from source.

FROM nginx:1.21.5-alpine

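# NGINX_VERSION is provided by the nginx base image, so the source download matches the FROM tag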
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz

# For reference, see https://github.com/nginxinc/docker-nginx/blob/master/mainline/alpine/Dockerfile
RUN apk add --no-cache --virtual .build-deps \
  gcc \
  libc-dev \
  make \
  openssl-dev \
  pcre-dev \
  zlib-dev \
  linux-headers \
  curl \
  gnupg \
  libxslt-dev \
  gd-dev \
  geoip-dev \
  git \
  patch

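# Configure arguments copied from the stock nginx alpine image (see nginx -V below)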
ENV CONFARGS --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os' --with-ld-opt=-Wl,--as-needed,-O1,--sort-common

RUN mkdir -p /usr/src

RUN tar -zxC /usr/src -f nginx.tar.gz && \
    cd /usr/src/nginx-$NGINX_VERSION && \
    git clone --depth 1 --single-branch --branch master https://github.com/nginx-modules/nginx_upstream_check_module && \
    patch -p1 < nginx_upstream_check_module/check_1.16.1+.patch && \
    ./configure $CONFARGS --add-module=nginx_upstream_check_module && \
    make && make install && \
    cd / && \
    apk del .build-deps && \
    rm -rf /usr/src

EXPOSE 80

STOPSIGNAL SIGTERM

CMD ["nginx", "-g", "daemon off;"]

This Dockerfile downloads the NGINX source for the version specified in the FROM directive; the base image exposes it as the NGINX_VERSION environment variable, so you can change the version of NGINX in one place and have it cascade down.

It then clones the source for the upstream check module, applies the module's patch to the NGINX source tree, and compiles the two together.

The CONFARGS line looks scary, but I grabbed these arguments from the nginx:1.21.5-alpine container. You can run the container locally and read them straight from the nginx executable itself.

docker run nginx:1.21.5-alpine nginx -V

nginx version: nginx/1.21.5
built by gcc 10.3.1 20211027 (Alpine 10.3.1_git20211027)
built with OpenSSL 1.1.1l  24 Aug 2021
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-perl_modules_path=/usr/lib/perl5/vendor_perl --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-Os -fomit-frame-pointer -g' --with-ld-opt=-Wl,--as-needed,-O1,--sort-common

Everything after configure arguments: can then be passed to the configure script when building NGINX from source.
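
If you do not want to copy the arguments by hand, you can pull them out with a quick one-liner (note that nginx -V writes to stderr, hence the redirect):

docker run --rm nginx:1.21.5-alpine nginx -V 2>&1 \
  | grep '^configure arguments:' \
  | sed 's/^configure arguments: //'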

NGINX Reverse Proxy Config

I am not going to dive into the main nginx.conf since it is taken nearly directly from the DigitalOcean config generator. However, let's take a peek at the proxy configuration that contains the upstream block.

upstream www {
    least_conn;
    server www01:8080;
    server www02:8080;

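    # Poll each server every 1.5 seconds; 3 failed checks take it out of
    # rotation and a single successful check brings it back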
    check interval=1500 rise=1 fall=3 timeout=1500;
}

server {

    listen 80;

    location / {
        gzip_static on;
        proxy_pass http://www;
        include include/proxy.conf;
    }

    # Dedicated status page provided by the upstream check module
    location /status {
        check_status;
    }

}
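
The location block also pulls in include/proxy.conf, which I am not showing here. A minimal sketch of what such an include typically carries (this is an assumption about its contents, not the project's actual file):

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;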

The upstream block is what tells NGINX where the workloads are. The check directive inside it is what enables the active monitoring: NGINX polls each server on the configured interval and takes it out of circulation when it fails. The check_status directive, placed in its own /status location, serves a small page showing the current health of each upstream.

You can customize these checks further by taking a look at the upstream module documentation. Different endpoints and types of checks can be configured if needed, as sketched below.
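
For example, the module can do real HTTP checks against a specific path instead of plain TCP connects. Here is a rough sketch of what that looks like; the /healthz path is my own placeholder and not part of this example project:

upstream www {
    least_conn;
    server www01:8080;
    server www02:8080;

    check interval=3000 rise=2 fall=3 timeout=2000 type=http;
    check_http_send "GET /healthz HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}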

Load Balancer in Docker Compose

Let's now set up a Docker Compose file. We will set up two containers that NGINX will load balance for us. These can be any container workload of your choosing; in this case I will use an echo server that prints out the hostname of the container handling the request along with the request details.

The two workloads will listen on port 8080, and the NGINX load balancer will be exposed to our local machine on port 80. We will isolate these containers in a network called example.

version: "3.9"
services:
  nginx-lb:
    networks:
      - example
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/loadbalancer/include:/etc/nginx/include:ro
      - ./nginx/loadbalancer/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - www01
      - www02
    ports:
      - "80:80"
  www01:
    image: jmalloc/echo-server
    networks:
      - example
    ports:
      - "8080"
  www02:
    image: jmalloc/echo-server
    networks:
      - example
    ports:
      - "8080"

networks:
  example:

Let's run this example.

docker-compose up --build --remove-orphans

Then try to hit localhost in your browser of choice.
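
If you prefer the command line, a quick curl loop shows the rotation as well; the first line of the echo server's response should name the container that served it:

for i in 1 2 3 4; do
  curl -s localhost | head -n 1
done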

Try refreshing a few times and you will see the request served by a different container as NGINX switches between the upstream destinations. Now let's try killing one of the containers.

docker kill $ID

Replace $ID with one of the IDs from the browser.
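
If you would rather grab an ID from Docker itself, you can list the running echo-server containers by filtering on the image:

docker ps --filter "ancestor=jmalloc/echo-server" --format "{{.ID}}  {{.Names}}"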

Try refreshing a few more times and you will see NGINX no longer serves traffic from that container.
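
Since the check is configured with rise=1, a single successful health check is enough to put the container back into rotation. Start it again and it should begin receiving traffic within a couple of seconds:

docker start $ID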

Example project

You can download the files for this example project here.
