Essential Data Structures

Array & String


Hash Tables

  • Compute the key’s hash code -> int/long
    • different keys can share the same hash code
  • Map the hash code to an index in the array
    • hash % array_length
  • Use a linked list to store the key and value at that index (to handle collisions)
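A minimal sketch of this flow in JavaScript (the class name, bucket count, and hash function are illustrative, and an array of [key, value] pairs stands in for the linked list at each index):

class SimpleHashTable {
  constructor(size) {
    this.buckets = new Array(size == null ? 16 : size);
  }

  // Naive string hash -> int
  hash(key) {
    var h = 0;
    for (var i = 0; i < key.length; i++) {
      h = (h * 31 + key.charCodeAt(i)) | 0;
    }
    return Math.abs(h);
  }

  set(key, value) {
    var index = this.hash(key) % this.buckets.length;
    if (!this.buckets[index]) this.buckets[index] = [];
    // Each bucket holds [key, value] pairs to handle collisions
    var bucket = this.buckets[index];
    for (var i = 0; i < bucket.length; i++) {
      if (bucket[i][0] === key) {
        bucket[i][1] = value;
        return;
      }
    }
    bucket.push([key, value]);
  }

  get(key) {
    var index = this.hash(key) % this.buckets.length;
    var bucket = this.buckets[index];
    if (!bucket) return undefined;
    for (var i = 0; i < bucket.length; i++) {
      if (bucket[i][0] === key) return bucket[i][1];
    }
    return undefined;
  }
}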

ArrayList & Resizable Arrays

  • Inserting into an ArrayList takes O(1) amortized
    • Doubling the underlying array takes O(n), but it happens rarely enough that the amortized insertion time is still O(1)
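A minimal sketch of the doubling behavior (a hypothetical ResizableArray, not from the original notes):

class ResizableArray {
  constructor() {
    this.capacity = 2;
    this.length = 0;
    this.data = new Array(this.capacity);
  }

  push(item) {
    // Double the capacity when full: an O(n) copy, but amortized O(1) per push
    if (this.length === this.capacity) {
      this.capacity = this.capacity * 2;
      var bigger = new Array(this.capacity);
      for (var i = 0; i < this.length; i++) bigger[i] = this.data[i];
      this.data = bigger;
    }
    this.data[this.length++] = item;
  }
}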


StringBuilder

  • Creates a resizable array and copies strings over to it only when necessary, rather than copying on every concatenation



Stack

  • LIFO (last in first out)


  • pop()
    • Remove the top item from the stack
  • push(item)
    • Add an item to the top of the stack
  • peek()/top()
    • Return the top item of the stack without removing it
  • isEmpty()
    • Return true iff the stack is empty


  • O(1) add/remove
  • O(n) access


class Stack {
  constructor(items) {
    this.items = items == null ? [] : items;
  }

  push(data) {
    this.items[this.items.length] = data;
  }

  pop() {
    if (this.items.length === 0) return undefined;

    var removed = this.items[this.items.length - 1];
    // Cannot do
    // delete this.items[this.items.length - 1]
    // because that would leave undefined at the index instead of removing the index altogether
    this.items.splice(this.items.length - 1, 1);
    return removed;
  }

  peek() {
    if (this.items.length === 0) return undefined;
    return this.items[this.items.length - 1];
  }

  isEmpty() {
    return this.items.length === 0;
  }
}
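A quick usage example of the class above (values are illustrative):

var stack = new Stack();
stack.push(1);
stack.push(2);
stack.push(3);
console.log(stack.peek());    // 3
console.log(stack.pop());     // 3
console.log(stack.isEmpty()); // false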



Use Cases

  • Good for certain recursive algorithms
  • Can be used to implement a recursive algorithm iteratively




Queue

  • FIFO (first in first out)


  • add(item)
    • Add an item to the end of the queue
  • remove()
    • Remove the first item in the queue
  • peek()/top()
    • Return the first item in the queue without removing it
  • isEmpty()
    • Return true iff the queue is empty


class Queue {
  constructor(items) {
    this.items = items == null ? [] : items;
  }

  enqueue(data) {
    return this.items.push(data);
  }

  dequeue() {
    // Note: shift() is O(n); a linked-list-backed queue would make this O(1)
    return this.items.shift();
  }

  peek() {
    return this.items[0];
  }

  isEmpty() {
    return this.items.length === 0;
  }
}


Use Cases

  • BFS (breadth-first search)
  • Implementing a cache
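As an example of the BFS use case, here is a minimal sketch using the Queue class above (the adjacency-list graph shape is an assumption for illustration):

// Breadth-first search over an adjacency list, e.g. { a: ['b', 'c'], b: ['d'] }
function bfs(graph, start) {
  var visited = {};
  var order = [];
  var queue = new Queue();

  queue.enqueue(start);
  visited[start] = true;

  while (!queue.isEmpty()) {
    var node = queue.dequeue();
    order.push(node);
    var neighbors = graph[node] || [];
    for (var i = 0; i < neighbors.length; i++) {
      if (!visited[neighbors[i]]) {
        visited[neighbors[i]] = true;
        queue.enqueue(neighbors[i]);
      }
    }
  }
  return order;
}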


Linked Lists

  • Sequence of nodes; each points to the next node (and, in a doubly linked list, the previous node as well)
  • Can add or remove from the beginning of the list in constant time

Deleting a node

  • Singly linked list
    • Set prev.next = n.next (find prev by walking from the head)
  • Doubly linked list
    • Set n.prev.next = n.next
    • Set n.next.prev = n.prev
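A short sketch of these pointer updates for the doubly linked case (assumes node objects with prev/next fields; the null checks handle the ends of the list):

// Remove node n from a doubly linked list
function removeNode(n) {
  if (n.prev) n.prev.next = n.next;
  if (n.next) n.next.prev = n.prev;
  n.prev = null;
  n.next = null;
}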


class LinkedList {
  constructor(head) {
    this.head = head;
  }

  push(data) {
    var node = new LinkedListNode(data, null);
    if (!this.head) {
      this.head = node;
    } else {
      var current = this.head;
      while (current.next) {
        current = current.next;
      }
      current.next = node;
    }
  }

  remove(data) {
    var current = this.head;
    if (!current) return;

    // Remove the first node in the LinkedList
    if (current.data === data) {
      this.head = current.next;
      return;
    }

    var prev = current;
    current = current.next;
    while (current) {
      // Remove a node in between the nodes or the last node
      if (current.data === data) {
        prev.next = current.next;
        return;
      }
      prev = current;
      current = current.next;
    }
  }
}

class LinkedListNode {
  constructor(data, next) {
    this.data = data;
    this.next = next;
  }

  toString() {
    var node = this;
    var output = String(node.data);
    while (node.next) {
      output = output + " -> " + String(node.next.data);
      node = node.next;
    }
    return output;
  }
}

The “Runner” Technique

  • Use two pointers to iterate through the linked list
  • Advance the slow pointer one node at a time and the fast pointer two nodes at a time
  • By the time the fast pointer reaches the end, the slow pointer will be at the midpoint
  • From the midpoint you can, for example, weave the two halves together by connecting their nodes alternately
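A minimal sketch of finding the midpoint with the runner technique (assumes the LinkedListNode class above):

function findMidpoint(head) {
  var slow = head;
  var fast = head;
  // Fast moves two nodes per step, slow moves one;
  // when fast hits the end, slow is at the midpoint
  while (fast && fast.next) {
    slow = slow.next;
    fast = fast.next.next;
  }
  return slow;
}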

Recursive Strategy

  • Takes at least O(n) space, where n is the recursion depth (each call adds a stack frame)
  • All recursive algorithms can be implemented iteratively
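For example, a recursive factorial uses O(n) stack space, while the iterative version of the same computation uses O(1) extra space (an illustrative example, not from the original notes):

// O(n) stack space: each call adds a frame
function factorialRecursive(n) {
  if (n <= 1) return 1;
  return n * factorialRecursive(n - 1);
}

// O(1) extra space: same result, done with a loop
function factorialIterative(n) {
  var result = 1;
  for (var i = 2; i <= n; i++) result *= i;
  return result;
}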

Blue / Green Deployment

What is Blue / Green Deployment?

Blue / Green deployment is a technique companies use to release new versions of their application with no downtime. It involves two identical environments, running on a single server or across multiple servers; we will call these the Blue and Green environments. On a high level, imagine there is a magical switch that lets you alternate between the two environments instantly. In this article, I will walk you through how to execute Blue / Green deployment with DigitalOcean and Docker.

Using DigitalOcean’s Floating IP

DigitalOcean provides services that allow you to create and access Droplets, which are essentially servers sitting in their data centers. They are very similar to AWS’s EC2 instances, if you are familiar with those. Once you create an account and are logged in, you can navigate to the Floating IP page by clicking Networking on top, then Floating IPs underneath. Once you have reached this page, you will be able to create Floating IPs and assign them to your Droplets.

By doing this, you can map the Floating IP to your DNS, then reassign the Floating IP to a different Droplet at any time, and your domain will automatically serve the corresponding application. This is definitely one of the easiest ways to implement Blue / Green deployment for starters.

Using Docker/Docker Swarm

Docker is a technology that allows you to containerize your applications. To learn more about Docker and how to get started with Docker, please read my other article Getting started with Docker On Ubuntu.

It’s just as simple to implement the Blue / Green deployment strategy with Docker. You can start by creating your first service; let’s call it test:

docker service create --name test -p 8080:8080 --replicas 5 example:1.0

Then let’s assume that you have an updated version of the example app; call it example:2.0. We can update the service by executing this command:

docker service update --image example:2.0 --update-parallelism 2 --update-delay 15s test

This command updates the 5 replicas of test that you created with the first command to the newer version, two of the five replicas at a time, with a 15-second delay between batches. There are many more flags for the docker service update command; you can read about them here.


There are many other ways to implement the Blue / Green deployment technique in your production environment, but the purpose of this article is to give you an understanding of what it is and why companies use it. Be sure to explore other strategies, such as Nginx, AWS, and more, as well. Being able to switch between environments with no downtime is a very valuable capability; hopefully this article has given you a good outline of what Blue / Green deployment is and how to integrate it into your own production environments.

Nginx configuration cheatsheet

Nginx vs Apache

  • Nginx interprets incoming requests as URI locations, whereas Apache prefers to interpret requests as filesystem locations
  • Nginx can handle more concurrent connections
  • Nginx requires fewer resources



Main Context

  • worker_processes
    • set to the number of CPU cores for the best performance
    • “auto” sets it to the number of CPU cores available
    • use nproc or lscpu to find the number of CPU cores available, and use no more than that

Events { }

  • worker_connections
    • use ulimit -n to find the per-process limit
    • worker_processes * worker_connections = total number of connections that can be accepted
  • multi_accept
    • lets each worker accept multiple new connections at once
    • multi_accept on;

Http { }

  • types
    • map of file extensions to their corresponding MIME types
    • can be replaced with the default list mime.types
      • include mime.types;
  • basic settings
    • charset utf-8;
    • sendfile on;
    • tcp_nopush on;
    • tcp_nodelay off;
    • types_hash_max_size 2048;
  • file cache
    • open_file_cache max=1000 inactive=20s;
    • open_file_cache_valid 30s;
    • open_file_cache_min_uses 2;
    • open_file_cache_errors on;
  • fast cgi cache
    • good for caching static responses from the backend
    • fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=microcache:10m max_size=500m;
      • max_size = disk space or mem space depending on the location of the cache
    • fastcgi_cache_key “$scheme$request_method$host$request_uri”;
      • leaving a $variable out means responses that differ only in that variable share one cache entry
      • $scheme = http/https
        • leaving it out would cache the same response for http and https
      • $request_method = GET/POST/etc…
        • leaving it out would cache the same response for all methods
    • location { fastcgi_cache microcache; }
      • can be set in multiple location blocks
    • location { fastcgi_cache_valid 200 60m; }
    • location { fastcgi_pass [backend_address]; }
    • add_header microcache-status $upstream_cache_status;
      • adds a header that shows whether the response was served from the fastcgi cache
  • buffer sizes
    • client_body_buffer_size 16k;
    • client_header_buffer_size 1k;
    • client_max_body_size 8m;
    • large_client_header_buffers 2 1k;
  • timeouts
    • client_body_timeout 12;
    • client_header_timeout 12;
    • keepalive_timeout 300;
      • keeps connections open to avoid repeated handshakes
    • send_timeout 10;
  • server token
    • server_tokens off;
      • hide nginx version info
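Putting several of these settings together, a minimal http block might look like this (a sketch built from the values listed above, not a complete config):

http {
  include mime.types;
  charset utf-8;
  sendfile on;
  tcp_nopush on;
  server_tokens off;

  open_file_cache max=1000 inactive=20s;
  open_file_cache_valid 30s;

  fastcgi_cache_path /tmp/nginx_cache levels=1:2 keys_zone=microcache:10m max_size=500m;
  fastcgi_cache_key "$scheme$request_method$host$request_uri";

  client_body_buffer_size 16k;
  client_max_body_size 8m;
  keepalive_timeout 300;
}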

Server { }

  • listen
    • port that the server listens on
  • server_name
    • domain name
    • ip
  • root
    • the directory the server serves static files from
  • compressing
    • gzip on;
    • gzip_min_length 100;
      • minimum file length to compress
    • gzip_comp_level 3;
      • keep it between 2-4, since higher levels use more CPU
    • gzip_types text/plain;
      • file types to compress
    • gzip_disable “msie6”;
      • disable gzip for certain browsers; some older browsers can’t handle compressed responses
  • adding headers
    • add_header “Cache-Control” “no-transform”;
      • can be repeated to add multiple headers to the response

Location { }

  • URI mapping that determines which pages are served
  • there are many formats and syntaxes for matching URIs; look up the details while implementing
    • =
      • Exact match
    • ^~
      • Preferential prefix
    • ~ and ~*
      • Regex match (case-sensitive and case-insensitive, respectively)
    • no modifier
      • Prefix match
  • error_log [directory]
    • specify where to log
  • turning the logs off
    • access_log off
    • error_log off
  • try_files
    • try_files $uri =404;
    • try_files $uri [preferred_html] =404;
  • rewrite
    • rewrite ^ /index.html;
    • cache expirations
      • expires 1M;
        • how long clients may cache the response before requesting it again (1M = one month)
      • add_header Pragma public;
      • add_header Cache-Control public;
      • add_header Vary Accept-Encoding;
  • logs
    • /var/log/nginx (default)
      • access.log
      • error.log
        • 404s are not logged as errors; check access.log


Security

  • listen on 443
    • listen 443 ssl;
  • apply ssl certs
    • ssl_certificate [path_to_.crt];
    • ssl_certificate_key [path_to_.key];
  • autoindex
    • autoindex off;
      • disable auto indexing directories on the server
  • server tokens
    • server_tokens off;
      • hide version of nginx from the headers
  • buffer
    • set the buffer size as described above to prevent buffer overflow attacks
  • user agents
    • if ($http_user_agent ~* [bad_agent_name]) { return 403; }
    • if ($http_referer ~* [bad_referer_name]) { return 403; }
      • both block the corresponding bad agents/referrers by returning 403 when they are seen
  • X-Frame-Options
    • add_header X-Frame-Options SAMEORIGIN;
      • only allows the browser to render the page within a frame or iframe from the same origin

Reverse Proxy

  • proxy_pass
    • forwards matching requests to the backend server
  • headers
    • add_header proxied nginx;
      • adds the header to the response sent to the client
    • proxy_set_header proxied nginx;
      • adds the header to the requests forwarded to the backend

Load Balancer

  • upstream
        upstream servers {
          server localhost:10001;
          server localhost:10002;
          server localhost:10003;
        }

        server {
          listen 8888;
          location / {
            proxy_pass 'http://servers';
          }
        }
  • options
    • ip_hash
      • pins each client to one server based on the client’s IP; if that server goes down, the next one is used
    • least_conn
      • routes each new connection to the server with the fewest active connections
    • usage
      • upstream servers { ip_hash; }
      • upstream servers { least_conn; }

Getting started with Docker on Ubuntu

Docker is a technology that allows you to run many types of applications in containers. For those of you who are not familiar with the concept of containers, I like to think of them as super-lightweight VMs that do not require their own operating system. Here’s a diagram that I made to help you understand the difference between VMs and containers. Before we get started, I suggest you go through the training on the Docker training platform to get a good understanding of how it works.



Requirements

  • Docker
  • Ubuntu 14.04 or higher

Install Docker

Get the latest Docker Engine.
wget -qO- https://get.docker.com/ | sh
Add yourself to the docker group so you can run Docker without sudo (replace rjzheng with your username).
sudo usermod -aG docker rjzheng


Docker Hub

Docker has a cloud storage platform for users to store their Docker images. It’s sort of like GitHub, but they call it Docker Hub. Go ahead and make an account so you can have a Docker ID. Docker Hub is completely free for regular users, and they also offer many other features at an affordable price.

Build image

This step assumes that you have your application tested and ready to be built into a Docker image. Before running the build command, make sure to have a Dockerfile ready; a minimal sketch follows.
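A bare-bones Dockerfile for a Node app might look like this (the base image, port, and start command are assumptions — adjust them to your application):

FROM node:6

# Copy the app into the image and install dependencies
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .

# Port the application listens on
EXPOSE 8080

CMD ["npm", "start"]

Once everything is set and done, run the following command in your terminal inside the project’s directory.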

docker build -t $DOCKERID/[image_name]:[version] .

-t = tag; every image that is going to be uploaded to Docker Hub needs to be tagged with the owner’s Docker ID, followed by the image name and image version, in the syntax above.

Upload image

docker push $DOCKERID/[image_name]

Pull image

docker pull $DOCKERID/[image_name]

Run image

docker run -d -p [desired_port]:[exposed_port] --name [app_name] $DOCKERID/[image_name]:[version]

-d = detached, this allows the app to run in the background so the terminal won’t be hung by the container process. (Note: if you don’t add -d, your terminal stays attached to the container’s output, and exiting will stop the container.)

-p = port, this maps the port that the application was exposed on to the port that you would like the users to have access to.

--name = name of the Docker container. Containers can be stopped, started, restarted, or deleted at any time, and assigning a name makes it easier to perform those tasks.


Now you have a running Docker application, and you can monitor all of your Docker containers by running

docker container ps


For more Docker-related commands and information, Docker provides very detailed documentation.

Redirect HTTP to HTTPS on Nginx

This post assumes that you have your app running and are using Nginx as the reverse proxy for your web app/server.


Requirements

  • Nginx


Separate Servers

server {
       listen         80;
       return         301 https://[server_name]$request_uri;
}

server {
       listen         443 ssl;
}

Concatenated Servers

server {
       listen 80;
       listen 443 ssl;

       if ($scheme != "https") {
           return 301 https://[server_name]$request_uri;
       }
}


Simply put, you listen on both port 80 (HTTP) and port 443 (HTTPS) to expose both HTTP and HTTPS for your web application/server; then, on port 80, you return 301 https://[server_name]$request_uri, which redirects all traffic to HTTPS.

After you add this to the nginx.conf file, make sure to reload the nginx service

sudo service nginx reload

and see if it takes effect.

How to use promises in nodejs

This post is about writing promises in Node.js. In many projects that use a database server, the application often needs to wait for data to be fetched before performing any operations on it, so we need to implement asynchronous functions using promises.


Requirements

  • Node – v6.11.5


Promise is built into Node.js. The executor function you pass to the Promise constructor receives two parameters: resolve and reject. They can be named something else, but the first parameter always represents resolve and the second reject. A promise is always in one of these states: pending, fulfilled, or rejected. Before the operation completes, the promise is in the pending state, and you settle it as either fulfilled or rejected by calling the resolve or reject callback.

An example of a promise may look like this:

var testFunction = () => {
  return new Promise(function(resolve, reject) {
    // Some database operations here
    // (value represents the result of those operations)
    if (value) {
      resolve(value);
    } else {
      reject(new Error('Operation failed'));
    }
  });
};
When calling this function, you can use .then to consume the result of the promise. Here is an example:

testFunction()
  .then(function(result) {
    // Do something with the result data
  }, function(error) {
    // Handle error
  });
The result parameter will receive the value passed to resolve(value), and error will receive the value passed to reject(value). I have seen a lot of developers either forget or purposefully leave out the error-handling part of the promise, but I would recommend keeping it in, especially if the promise is used for REST APIs, so you can send res.status(500).send() there, as sketched below.
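A sketch of that pattern in an Express route (the route path and response shape are assumptions for illustration, and testFunction is the promise-returning function from above):

var express = require('express');
var app = express();

app.get('/data', function(req, res) {
  testFunction()
    .then(function(result) {
      res.status(200).send(result);
    }, function(error) {
      // Without this handler, a rejected promise would leave the request hanging
      res.status(500).send();
    });
});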

For people who are still confused about promises, here is a real-life example: you are at a fast food place placing your order, but you left your wallet in your car. So you make a promise with the cashier that you will go get your wallet and pay when you come back. The cashier goes ahead and sends in the order to prepare your food, and when you come back you pay what you owe. Notice that by making the promise, the cashier was able to continue the process by sending in your order rather than blocking every other order from being processed.

React Nodejs Webpack Boilerplate Project

React is one of the hottest frontend libraries today. Its component-based structure makes projects easy to manage. Although it is popular and powerful, it’s still only the view of the MVC framework. Personally, I enjoy using Node.js and Express for backend development. In the past 2 years, I have started many personal and work projects with this stack, so I put together a light boilerplate project to help me kick things off. I am not going to go into too much detail, but I’ll give a brief explanation of the project structure and what each folder and file is responsible for.

Git repo:


Technologies

  • React
  • Express
  • Nodejs
  • Webpack
  • Karma

Project Structure






– app

The app folder contains all of the React files as follows





— actions

Actions contains a file called actions.js, which is responsible for managing communication with Redux. You can learn about Redux and its role in the project through its documentation and some googling, but on a high level, Redux is a state manager for React projects. Unlike regular HTML and JavaScript sites, when Redux is updated with newer state, the React components connected to the reducers automatically update their views with the most recent state. actions.js contains the logic that inserts, removes, or does whatever magic you want in order to manage the reducers.

— components

Components are the core of React projects. This folder contains the JS files that return each view of the project. Components can be included in other components to put together the entire web page, and can also be routed to a path in routes.js.

— reducers

As introduced earlier, reducers are the building blocks of Redux. Inside each reducer, you can initialize different state variables, which can be connected to React components to display their values using connect from the react-redux library. A minimal sketch is shown below.
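A bare-bones reducer might look like this (the state shape and action types are illustrative, not from the boilerplate):

// A reducer returns new state objects in response to actions
var initialState = { items: [] };

function itemsReducer(state, action) {
  if (state == null) state = initialState;
  switch (action.type) {
    case 'ADD_ITEM':
      return { items: state.items.concat(action.item) };
    case 'REMOVE_ITEM':
      return { items: state.items.filter(function(i) { return i !== action.item; }) };
    default:
      return state;
  }
}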

— store

Store allows you to integrate different middleware into your React project. I have seen a lot of developers add this block of code in app.js, but I find it more organized to give it its own location.
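For instance, a store file wiring in middleware might look like this (redux-thunk is an assumed example middleware, and itemsReducer is the illustrative reducer from above):

var redux = require('redux');
// redux-thunk 2.x exports the middleware on .default when using require()
var thunk = require('redux-thunk').default;

// Create the store with the reducer and apply the middleware
var store = redux.createStore(
  itemsReducer,
  redux.applyMiddleware(thunk)
);

module.exports = store;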

— test

This folder is self-explanatory: it contains the tests for your project.

Getting started

First start by cloning the git repo

 git clone [app_name]

Before doing anything else, we need to install all of the node libraries

 npm install

Now that you have a local copy of this boilerplate project, you need to use webpack to build your React code into plain JavaScript in a generated file, /public/bundle.js


At this point, all of the preparations are done, and we can start up our project

 npm start

The project should now be running; open your favorite browser and navigate to “localhost:8080” to access your web app. If everything finished with no errors, you should see the boilerplate running successfully.

Please let me know if I left out any important details and if you have any questions.

How to add Comodo PositiveSSL Certificate to a Node/Express server

This blog will teach you how to add a Comodo PositiveSSL Certificate to a Node/Express server.


Requirements

  • Express – v4.13.4
  • Node – v6.11.5
  • Comodo PositiveSSL
  • Digital Ocean
  • GoDaddy

Before reading further, you should have a Node server set up using Express, listening on port 80 (HTTP) and port 443 (HTTPS) (learn how to set up a Node/Express server), and have this server uploaded to your DigitalOcean droplet (learn how to set up a droplet on DigitalOcean). There are four steps to applying the Comodo PositiveSSL certificate to the server:

  1. Purchase a domain
  2. Purchase the cert from Comodo
  3. Activate the cert on Digital Ocean
  4. Apply the cert to server

1. Purchase a domain

Purchasing a domain is one of the easier steps. Personally, I bought my domain from GoDaddy. You can create an account first and then search for the domain, or vice versa. After you search for your domain name, you should see something like the following:


The next steps are pretty intuitive, so I’m not going to give you a step-by-step. After all the payments are done, you should see your domain under your account, and you are done with step one.


Then click DNS to modify your nameservers to point to DigitalOcean’s:

2. Purchase PositiveSSL Certificate from Comodo

From what I have seen, many people recommend a free SSL certificate provider called Let’s Encrypt. I’m sure it works well for most, but I assume there are people like me who are interested in using Comodo’s SSL certificates. Personally, I purchased the cheapest certificate to play around with, and that is the PositiveSSL certificate.

To purchase the PositiveSSL certificate, you simply sign up on their website and click Add to cart next to the PositiveSSL row under STANDARD DV SSL CERTIFICATES. After following the instructions and making the payment, you will be asked to provide the website’s domain and method of activation. Activation will be explained in the next section, but just select the CNAME option for now. Next, you will be asked to provide the CSR for the certificate. This can be done using their CSR generation tool, which should be a clickable link in the instructions above the text window. The CSR generation tool should open in a new tab, and it should be a straightforward form-filling process. After that is done, store the generated private key in a location that is safe and accessible from your server, and copy and paste the public key into the previous tab, where it originally asked you to provide the CSR.

3. Activate the cert on Digital Ocean

Activating the SSL certificate with DigitalOcean is very simple. At this point, your server droplet should be connected properly, and you should be able to visit your site at your domain without a problem. After you purchase the SSL cert, you should receive an order summary as follows:

The blurred lines are sensitive information which you should keep to yourself, but you do not need to store them anywhere, since you will only need them to activate your SSL certificate. Following the instructions provided above, go to your DigitalOcean account and click Networking from the top menu. Once the page loads, you can add your domain, and it should redirect you to a DNS records management page. Select CNAME to create a record, and copy and paste the Alias/Host Name and Point To values into the corresponding fields. Lastly, fill in 3600 for TTL and click Create record to complete the record creation. After 15-20 minutes, refresh your Comodo site, and the status should change from pending to active, which means your SSL certificate is ready to be used.

4. Apply the cert to server

This is the final step to making your server secure with an SSL certificate! Once your SSL certificate is activated, you should receive an email from Comodo with all of your certificates (.crt files) in a zip file. I had a hard time transferring these certificates to the server, but I was able to do it using SFTP. Once you have transferred these certificates to a safe and accessible folder, open your server.js file and populate the following fields that you left blank before.

var options = {
  key: fs.readFileSync('./ssl/private.key'),
  ca: [
    // the CA / intermediate certificates from the Comodo zip file go here
  ],
  cert: fs.readFileSync('./ssl/[your_domain_name].crt')
};
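To use these options, pass them to Node’s built-in https module alongside your Express app (a minimal sketch; app is assumed to be your Express instance):

var https = require('https');
var fs = require('fs');

// Serve the Express app over HTTPS with the certificates above
https.createServer(options, app).listen(443, function() {
  console.log('HTTPS server listening on port 443');
});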

Now try starting your server again, go to your domain in the browser, and you should see the SSL certificate applied correctly!

Feel free to reach out to me with questions and suggestions!