Blue / Green Deployment

What is Blue / Green Deployment?

Blue / Green deployment is a technique companies use to release new versions of their applications with no downtime. It involves two identical environments, running on one or more servers, which we will call the Blue and Green environments. At a high level, just imagine that there is a magical switch that lets you alternate between the two environments instantly. In this article, I will walk you through how to execute a Blue / Green deployment with DigitalOcean and Docker.

Using DigitalOcean’s Floating IPs

DigitalOcean provides services that allow you to create and access Droplets, which are essentially servers sitting in their data centers. They are very similar to AWS’s EC2 instances, if you are familiar with those. Once you create an account and are logged in, navigate to Floating IPs by clicking Networking at the top, then Floating IPs underneath. From this page, you can create Floating IPs and assign them to your Droplets.

By doing this, you can point your DNS at the Floating IP and reassign that Floating IP to a different Droplet at any time; your domain will automatically serve the corresponding application. This is definitely one of the easiest ways to implement Blue / Green deployment for starters.
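To make the “magical switch” concrete, here is a minimal local sketch of the idea using a symlink; the directories and file contents are made up for illustration, and in the DigitalOcean setup reassigning the Floating IP plays the role of the symlink flip:

```shell
# Two identical environments, plus a "current" pointer that a web server
# would use as its document root.
mkdir -p blue green
echo "app v1" > blue/index.html
echo "app v2" > green/index.html

ln -sfn blue current        # the switch points at Blue
cat current/index.html      # prints: app v1

ln -sfn green current       # flip the switch to Green, no downtime
cat current/index.html      # prints: app v2
```

The switch is a single atomic pointer update, which is exactly why traffic never sees a half-deployed state.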

Using Docker/Docker Swarm

Docker is a technology that allows you to containerize your applications. To learn more about Docker and how to get started with it, please read my other article, Getting started with Docker on Ubuntu.

It’s just as simple to implement the Blue / Green deployment strategy with Docker. You can start by creating your first service; let’s call it test:

docker service create --name test -p 8080:8080 --replicas 5 example:1.0

Then let’s assume that you have an updated version of the app example, call it example:2.0. We can update the service by executing this command:

docker service update --image example:2.0 --update-parallelism 2 --update-delay 15s test

This command updates the 5 replicas of test that you created with the first command to the newer version, two at a time, with a 15-second delay between batches. There are many more flags for the docker service update command; you can read about them in Docker’s documentation.
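To visualize the batching, here is a plain-shell simulation of that rollout. The replica names and the rollout.log file are made up for illustration, and nothing here actually calls Docker:

```shell
# Simulate updating 5 replicas, 2 at a time (--update-parallelism 2).
: > rollout.log                            # hypothetical rollout log
for batch in "r1 r2" "r3 r4" "r5"; do      # batches of two replicas
  for replica in $batch; do
    echo "updating $replica to example:2.0" >> rollout.log
  done
  # sleep 15                               # --update-delay between batches
done
cat rollout.log
```

Because three replicas keep serving traffic while each batch of two restarts, the service as a whole never goes down.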


There are many other ways to implement the Blue / Green deployment technique in your production environment, but the purpose of this article is to give you an understanding of what it is and why companies use it. Be sure to explore other strategies, such as Nginx and AWS, as well. Being able to switch between environments with no downtime is a very valuable capability; hopefully this article has given you a good outline of what it is and how to integrate it into your own production environments.

Nginx configuration cheatsheet

Nginx vs Apache

  • Nginx interprets incoming requests as URI locations, whereas Apache prefers to interpret requests as filesystem locations
  • Nginx can handle more concurrent connections
  • Nginx requires fewer resources



Main Context

  • worker_processes
    • sets the number of worker processes; matching the number of CPU cores gives the best performance
    • “auto” sets it to the number of CPU cores available
    • use nproc or lscpu to find out the number of CPU cores available and use no more than that

Events { }

  • worker_connections
    • use ulimit -n to find the open-file limit, which caps connections per worker
    • worker_processes * worker_connections = total number of connections acceptable
  • multi_accept
    • allows multiple connections at once
    • multi_accept on;
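Put together, the settings above look something like this at the top of nginx.conf (the numbers are illustrative examples, not recommendations):

```nginx
# Top-level (main) and events settings described above.
worker_processes auto;          # one worker per CPU core

events {
    worker_connections 1024;    # per worker; check `ulimit -n`
    multi_accept on;            # accept multiple new connections at once
}
```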

Http { }

  • types
    • map of types to corresponding mime types
    • can be replaced with default list mime.types
      • include mime.types
  • basic settings
    • charset utf-8;
    • sendfile on;
    • tcp_nopush on;
    • tcp_nodelay off;
    • types_hash_max_size 2048;
  • file cache
    • open_file_cache max=1000 inactive=20s;
    • open_file_cache_valid 30s;
    • open_file_cache_min_uses 2;
    • open_file_cache_errors on;
  • fast cgi cache
    • good for caching static responses from the backend
    • fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=microcache:10m max_size=500m;
      • max_size = disk space or mem space depending on the location of the cache
    • fastcgi_cache_key “$scheme$request_method$host$request_uri”;
      • leaving a $variable out means responses that differ only in that variable share a cache entry
      • $scheme = http/https
        • leaving out would cache the same for http/https
      • $request_method = GET/POST/etc…
        • leaving out would cache the same for all methods
    • location { fastcgi_cache microcache; }
      • can be set in multiple location blocks
    • location { fastcgi_cache_valid 200 60m; }
    • location { fastcgi_pass [backend_address]; }
    • add_header microcache-status $upstream_cache_status;
      • adds a header showing whether the request hit the cache
  • buffer sizes
    • client_body_buffer_size 16k;
    • client_header_buffer_size 1k;
    • client_max_body_size 8m;
    • large_client_header_buffers 2 1k;
  • timeouts
    • client_body_timeout 12;
    • client_header_timeout 12;
    • keepalive_timeout 300;
      • keeps connections alive to avoid repeated handshakes
    • send_timeout 10;
  • server token
    • server_tokens off;
      • hide nginx version info
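A sketch of an http block combining the settings above (the values are the examples from the list, not tuning advice):

```nginx
# Example http block using the settings described above.
http {
    include mime.types;
    charset utf-8;

    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;

    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;

    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;

    client_body_timeout 12;
    client_header_timeout 12;
    keepalive_timeout 300;
    send_timeout 10;

    server_tokens off;   # hide the nginx version
}
```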

Server { }

  • listen
    • port that the server listens on
  • server_name
    • domain name
    • ip
  • root
    • path to the static files the server serves from
  • compressing
    • gzip on;
    • gzip_min_length 100;
      • minimum file length to compress
    • gzip_comp_level 3;
      • keep it between 2 and 4, since higher values cost more CPU
    • gzip_types text/plain;
      • types of files that need to be compressed
    • gzip_disable “msie6”;
      • disable gzip for certain browsers, some browsers aren’t compatible with compressing
  • adding headers
    • add_header “Cache-Control” “no-transform”;
      • can be repeated to add multiple headers to the response
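A sketch of a server block using the directives above (the domain and root path are placeholders):

```nginx
# Example server block with the compression settings described above.
server {
    listen 80;
    server_name example.com;        # placeholder domain
    root /var/www/html;             # placeholder static root

    gzip on;
    gzip_min_length 100;
    gzip_comp_level 3;
    gzip_types text/plain;
    gzip_disable "msie6";

    add_header "Cache-Control" "no-transform";
}
```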

Location { }

  • maps URIs to the content that is served
  • several modifiers control which URIs match; look up the details while implementing
    • =
      • Exact match
    • ^~
      • Preferential prefix
    • ~ and ~*
      • Regex match (case-sensitive and case-insensitive, respectively)
    • no modifier
      • Prefix match
  • error_log [directory]
    • specify where to log
  • turning the logs off
    • access_log off
    • error_log off
  • try_files
    • try_files $uri =404;
    • try_files $uri [preferred html] =404;
  • rewrite
    • rewrite ^ /index.html;
  • cache expirations
    • expires 1M;
      • time until the cached response expires and must be requested again
    • add_header Pragma public;
    • add_header Cache-Control public;
    • add_header Vary Accept-Encoding;
  • logs
    • /var/log/nginx (default)
      • access.log
      • error.log
        • 404 is not logged as error, check access.log
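A sketch showing the four matching modifiers and the directives above together (the paths and patterns are made up for illustration):

```nginx
# Example location blocks using the matching rules described above.
server {
    listen 80;
    root /var/www/html;

    location = /health {             # exact match
        return 200;
    }
    location ^~ /static/ {           # preferential prefix match
        expires 1M;
        add_header Cache-Control public;
        access_log off;              # don't log static hits
    }
    location ~* \.(png|jpg)$ {       # regex match, case-insensitive
        expires 1M;
    }
    location / {                     # plain prefix match
        try_files $uri /index.html =404;
    }
}
```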


Security

  • listen on 443
    • listen 443 ssl;
  • apply ssl certs
    • ssl_certificate [path_to_.crt];
    • ssl_certificate_key [path_to_.key];
  • autoindex
    • autoindex off;
      • disable auto indexing directories on the server
  • server tokens
    • server_tokens off;
      • hide version of nginx from the headers
  • buffer
    • set the buffer size as described above to prevent buffer overflow attacks
  • user agents
    • if ($http_user_agent ~* [bad_agent_name]) { return 403; }
    • if ($http_referer ~* [bad_referer_name]) { return 403; }
      • both block the corresponding bad agents by returning 403 when seen
  • X-Frame-Options
    • add_header X-Frame-Options SAMEORIGIN;
      • only allows the browser to render the page in a frame or iframe from the same origin
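Put together, a hardened server block might look like this (the certificate paths and the bad-agent pattern are placeholders):

```nginx
# Example server block combining the security settings described above.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder path
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder path

    autoindex off;                  # don't list directory contents
    server_tokens off;              # hide the nginx version

    add_header X-Frame-Options SAMEORIGIN;

    if ($http_user_agent ~* "badbot") {   # "badbot" is a placeholder pattern
        return 403;
    }
}
```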

Reverse Proxy

  • proxy_pass
  • header
    • add_header proxied nginx;
      • adds the proxied header to the response sent to the client
    • proxy_set_header proxied nginx;
      • adds the proxied header to requests forwarded to the upstream
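A minimal reverse proxy sketch; the backend address localhost:3000 is an assumption:

```nginx
# Example reverse proxy using the directives described above.
server {
    listen 80;
    location / {
        proxy_pass http://localhost:3000;   # assumed backend address
        add_header proxied nginx;           # header visible to the client
        proxy_set_header proxied nginx;     # header sent to the backend
    }
}
```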

Load Balancer

  • upstream
        upstream servers {
          server localhost:10001;
          server localhost:10002;
          server localhost:10003;
        }

        server {
          listen 8888;
          location / {
            proxy_pass 'http://servers';
          }
        }
  • options
    • ip_hash
      • ties a client to the same server based on its IP; if that server goes down, the next one is used
    • least_conn
      • sends requests to the server with the fewest active connections
    • usage
      • upstream servers{ip_hash;}
      • upstream servers{least_conn;}

Getting started with Docker on Ubuntu

Docker is a technology that allows you to run many types of applications in containers. For those of you who are not familiar with the concept of containers, I like to think of them as super lightweight VMs that do not require their own operating system. Here’s a diagram that I made to help you understand the difference between VMs and containers. Before we get started, I suggest you go through the Docker training on the Docker training platform to get a good understanding of how it works.



Prerequisites

  • Docker
  • Ubuntu 14.04 or higher

Install Docker

Get the latest Docker Engine.
wget -qO- | sh
Add your user to the docker group (replace rjzheng with your own username).
sudo usermod -aG docker rjzheng


Docker Hub

Docker has a cloud storage platform for users to store their Docker images. It’s a bit like GitHub, but they call it Docker Hub. Go ahead and make an account so you can have a Docker ID. Docker Hub is completely free for regular users, but they also offer many other features at an affordable price.

Build image

This step assumes that you have your application tested and ready to be built into a Docker image. Before running the build command, make sure you have a Dockerfile ready. Once everything is set, run the following command in your terminal inside the project’s directory.
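If you don’t have a Dockerfile yet, a minimal one looks something like this; it assumes a Node.js app that listens on port 8080 with a server.js entry point, so adjust it to your own stack:

```dockerfile
# Hypothetical Dockerfile for a Node.js app (adjust to your application)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```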

docker build -t $DOCKERID/[image_name]:[version] .

-t = tag; every image that is going to be uploaded to Docker Hub needs to be tagged with the owner’s Docker ID, followed by the image name and image version, in the syntax above.

Upload image

docker push $DOCKERID/[image_name]

Pull image

docker pull $DOCKERID/[image_name]

Run image

docker run -d -p [desired_port]:[exposed_port] --name [app_name] $DOCKERID/[image_name]:[version]

-d = detached; this allows the app to run in the background so the terminal isn’t hung by the container process. (Note: if you don’t add -d, the container takes over your terminal, and exiting it stops the container.)

-p = port, this maps the port that the application was exposed on to the port that you would like the users to have access to.

--name = name of the Docker container; containers can be stopped, started, restarted, or deleted anytime, and assigning a name makes it easier to perform those tasks.


Now you have a running Docker application, and you can monitor all of your running containers with

docker container ps


For more Docker-related commands and information, Docker provides very detailed documentation.