Deploying NodeJs (ExpressJs) project with Docker on Kubernetes - folio3

Deploying NodeJs (ExpressJs) project with Docker on Kubernetes



Prerequisites: an understanding of Node.js and of Kubernetes/Docker architecture theory

We are going to learn how to:

  1. Deploy an Express.js app (as a Docker image) to Kubernetes
  2. Add a Kubernetes health check to it.

I will keep this article simple and hopefully you will understand things easily.

I will start by dockerizing the Express.js app; the following is a sample index.js file.
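The original code snippet does not survive in this copy; a minimal index.js sketch, assuming Express and the port 2087 mentioned later (route paths and messages are hypothetical), might look like this:

```javascript
// index.js - minimal Express app sketch (hypothetical reconstruction)
const express = require('express');
const app = express();
const PORT = 2087; // the port our Dockerfile will expose later

app.get('/', (req, res) => {
  res.send('Hello from Express on Kubernetes!');
});

// Health endpoint used later by the Kubernetes liveness probe
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'UP' });
});

app.listen(PORT, () => console.log(`Listening on port ${PORT}`));
```

Run it with `node index.js` after `npm install express`. Note that for brevity this sketch serves plain HTTP, while the deployment section later assumes the project runs on HTTPS.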


Docker

Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same kernel as the system that they’re running on and only requires applications be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.

Docker makes it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
Source: https://opensource.com/resources/what-docker

Download Docker for windows:
https://docs.docker.com/docker-for-windows/install/

Dockerfile

We will have to add this Dockerfile in the root of our project.
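The Dockerfile itself is not reproduced in this copy; a sketch consistent with the steps described below, assuming a Node 10 base image and the usual npm scripts of an Express project, might be:

```dockerfile
# Hypothetical Dockerfile matching the steps explained below
FROM node:10

# Create the app directory inside the image
WORKDIR /usr/src/app

# Copy package.json and package-lock.json, then install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the project into the image
COPY . .

# Our project listens on port 2087
EXPOSE 2087

# Start the app
CMD ["npm", "start"]
```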

Our Docker file is simple and easy to understand:

  1. The first line refers to the Node version that we are using.
  2. We then create an app directory and copy npm’s package*.json file(s). The asterisk (*) means that all files whose names start with “package” are copied.
  3. Then, we run npm install and copy our project into the image.
  4. Since our project listens on port 2087, we expose that port here.
  5. At the end, we run npm start.

Docker Repository

Before building the image, we will need a repository to push the Docker image to in order to deploy it to Kubernetes. I would suggest that you create an account and a repository on Docker Hub.

Note: Docker Hub provides only one private repository for free; otherwise, you get public repositories with a free account.

After you have created a Docker repository, all we have to do is run the following commands in a terminal from the root folder of our project.
Login to repository:
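The login command (the username placeholder is hypothetical) would be along these lines:

```shell
# Log in to Docker Hub; you will be prompted for your password
docker login --username <your-dockerhub-username>
```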

Building Docker Image:
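A build-and-push sequence might look like this (the repository and tag names are hypothetical placeholders):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t <your-dockerhub-username>/express-k8s-demo:1.0 .

# Push the tagged image to the Docker Hub repository
docker push <your-dockerhub-username>/express-k8s-demo:1.0
```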

What we have done here is build a Docker image and push it to our repository. We will need this image when we deploy our app to Kubernetes.

Kubernetes and Minikube

Kubernetes

Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.

Source: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Minikube

We will be deploying our app to Minikube because it allows us to run Kubernetes locally very easily. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
Source: https://kubernetes.io/docs/setup/minikube/

There are other alternatives besides Minikube as well, for example:

  • Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you to start coding and deploying in containers in minutes on a single-node Kubernetes cluster.
  • Minishift installs the community version of the Kubernetes enterprise platform OpenShift for local development & testing. It offers an all-in-one VM (minishift start) for Windows, macOS, and Linux. The container start is based on oc cluster up (Linux only). You can also install the included add-ons.
  • MicroK8s provides a single command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick, fast (~30 sec) and supports many plugins including Istio with a single command.

Source: https://kubernetes.io/docs/setup/pick-right-solution/


Setup

Please follow this article for minikube and kubernetes setup:
https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c

Note (a big trouble saver!): You might face issues when stopping Minikube, so here is the workaround for it in advance:

While minikube is running:
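The original workaround snippet is missing from this copy; a commonly used workaround (an assumption on my part, not the article's verified commands) is to power the VM off from inside before stopping it:

```shell
# Assumption: power off the OS inside the Minikube VM first,
# then stop Minikube, instead of relying on `minikube stop` alone
minikube ssh "sudo poweroff"
minikube stop
```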


Deployment to Minikube

Kubernetes has config files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with the API server of a cluster. With kubeconfig files, you can organize your clusters, users, accesses, and namespaces. You can also define contexts to quickly and easily switch between clusters and namespaces.

A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster.

So, after you have successfully installed Minikube and k8s, don’t forget to set the config to minikube, as we will be using our local cluster (i.e. Minikube) for our deployment.
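Setting the config can be done with kubectl's standard context commands:

```shell
# Point kubectl at the local Minikube cluster
kubectl config use-context minikube

# Verify that the current context is now minikube
kubectl config current-context
```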

Now we want to deploy our containerized applications on top of kubernetes. To do so, we need to create a Kubernetes Deployment configuration.

Once we’ve created a Deployment, the Kubernetes master schedules the mentioned application instances onto individual Nodes in the cluster.

Once the application instances are created, a Kubernetes Deployment Controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces it. This provides a self-healing mechanism to address machine failure or maintenance. Related to this, we will be using this mechanism in our deployment in the form of a liveness probe.

In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure.

By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.

When we create a Deployment, we’ll need to specify the container image for our application and the number of replicas that we want to run.

So, after we have set our config to minikube, we will need a deployment file that deploys our Docker image to our local Minikube cluster, along with an HTTPS liveness health check (supposing our project is running on HTTPS).
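The original deployment.yml is not reproduced in this copy; a sketch consistent with the details the article describes (3 replicas, imagePullPolicy: Always, an HTTPS liveness probe on /health with initialDelaySeconds: 40 and periodSeconds: 3) might be as follows — the resource names and image are hypothetical placeholders:

```yaml
# Hypothetical deployment.yml reconstructed from the article's description
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: express-app
  template:
    metadata:
      labels:
        app: express-app
    spec:
      containers:
        - name: express-app
          image: <your-dockerhub-username>/express-k8s-demo:1.0
          imagePullPolicy: Always   # always pull a fresh image on deploy
          ports:
            - containerPort: 2087
          livenessProbe:
            httpGet:
              path: /health
              port: 2087
              scheme: HTTPS         # certificates are not verified
            initialDelaySeconds: 40 # allow the app time to start
            periodSeconds: 3        # probe every 3 seconds
```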

Save this file as deployment.yml; I will explain this deployment file later in detail. For now, let’s deploy our project to our Minikube cluster with a simple command:
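Assuming the file is saved as deployment.yml, the command would be:

```shell
# Create (or update) the Deployment described in the file
kubectl apply -f deployment.yml
```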

Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.

A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.

Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.

Source: https://kubernetes.io/docs/concepts/

So, after you have deployed your app on Minikube, type this command in the terminal:
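The command to list the pods in the cluster is:

```shell
# List the pods created by our Deployment
kubectl get pods
```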

You will be able to see 3 pods after entering the above command in the terminal.

Why do we see 3 pods? Because in our deployment.yml file we have assigned 3 replicas for the project, so we now have 3 instances of our project in 3 different pods.

Below the containers section, image refers to the Docker image we pushed to our Docker repository, and imagePullPolicy is set to Always, which means that Kubernetes always has to pull a fresh copy of the project’s Docker image whenever we deploy our project to Kubernetes.

Liveness Probe

In our deployment.yml file, you can see that we have added a liveness probe.
A liveness probe is our project’s savior. If, for any reason (for example, your server goes down or stops responding to requests), our project fails in any pod, the liveness probe will restart the respective pod, hence restarting our entire project.

In our Node.js project, we have exposed a ‘/health’ API (see below), and in our deployment.yml, within the liveness probe section, we have told Kubernetes to hit the project’s ‘/health’ API, defined under the httpGet section. If the project fails for any reason, our API will not return a successful status, and hence Kubernetes will restart the respective pod.
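The original ‘/health’ snippet is not shown in this copy; a minimal Express sketch (the handler body is a hypothetical example) could be:

```javascript
const express = require('express');
const app = express();

// Hypothetical health endpoint; the Kubernetes liveness probe
// calls GET /health and only needs a successful (2xx) status code
app.get('/health', (req, res) => {
  res.status(200).json({ status: 'UP' });
});
```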

You can also see that under the httpGet section, we have set the scheme to HTTPS, assuming that our project is running on HTTPS. The liveness check does not verify HTTPS certificates, hence you can easily check the health of your project even if it runs on HTTPS with self-signed certificates. If you do not set any scheme, the default is HTTP.

We have set initialDelaySeconds to 40 seconds, which is the time the project needs to successfully start after it is deployed or restarted. If this interval is set to a lower value (one smaller than your project’s startup time), the liveness probe will keep failing, because it will get no reply from the project, which has not yet started; the probe will therefore restart the pod, and will keep doing so, and your project will never run on that deployment.

periodSeconds in the liveness probe specifies the delay between every hit of the liveness probe to our health API; in other words, the liveness probe will hit our health API every 3 seconds.

Conclusion

So, in this article we have learned how to deploy a Docker image to Kubernetes, along with a health check that acts as a savior for our project if any failure occurs, automating the entire process of failure handling.

Kubernetes liveness and readiness checks have vastly improved the reliability of deployments and provide a better end-user experience. However, if these probes are not handled correctly, they will make our deployment worse rather than better.

For more information about health check, visit:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
