Blue-green deployment: the basic concept in Kubernetes

Introduction

In this post we'll show how to implement the basic blue-green deployment concept using native Kubernetes, with no additional components required. In Kubernetes there are a few different ways to release an application, and it is necessary to choose the right strategy so your application updates are more reliable. Kubernetes provides some strategies natively, and there are also several tools that support deployments in Kubernetes and open up other possibilities. Here are some of them:

  • Recreate: This strategy usually involves downtime, because version A of your app is shut down before version B is deployed. Depending on your needs, that may or may not be acceptable. It is easy to set up because native Kubernetes supports it through the strategy type; no extra steps or additional components are required. Use the Recreate strategy when your environment cannot support the old and new versions running at the same time.

  • Rolling Update: This strategy rolls out a new version of an application slowly by replacing one instance at a time, an incremental deployment. This is the default strategy for a Kubernetes Deployment. No extra steps or additional components are required, and it is easy to set up: the release takes place slowly across instances, with the number of replicas being updated and shut down at the same time under the Deployment's control. The inconvenience is that you have no control over traffic, and rollout and rollback can be slow.

  • Blue-green Deployment: The strategy we are going to show in action in this post. It means releasing a new version alongside the old version and then switching traffic. Both versions run the same number of instances, and you only switch off the old version after making sure the new version meets all the requirements. Unlike the strategies above, an extra step is needed to accomplish it, but no additional components are required. It can be expensive, because running two versions simultaneously demands double the resources, and before switching the traffic you must properly test the entire platform.

  • Canary Release: A new version of the software is released to a subset of users. This deployment gradually shifts production traffic from the old version to the new one by splitting traffic based on weight. For instance, 80 percent of the requests go to the old version while 20 percent go to the new one, and the percentage is changed incrementally until the new version receives 100% of the traffic. Precise traffic shifting requires an additional tool such as Istio or Linkerd.

  • A/B Testing: This strategy focuses on testing new features; it involves routing some users to new functionality under specific conditions. It gives full control over traffic distribution, because several versions run in parallel. It requires an additional tool such as Istio or Linkerd to provide the load balancing.

  • Shadow: This strategy focuses on performance testing the application with production traffic. It is complex to set up, because it can require mocking/stubbing services in certain cases, and it is expensive, as it requires double the resources, like blue-green.
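For the first two strategies, the choice is made directly on the Deployment object through spec.strategy.type. A minimal sketch (the deployment name and image are illustrative, not from this post's example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: Recreate        # all old pods are killed before new ones start;
                          # use RollingUpdate (the default) for incremental replacement
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:2.0 # illustrative image
```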

How the Rolling Update strategy works in Kubernetes

Before going over how the Rolling Update strategy works, it is useful to explain how a ReplicaSet works, because when we create a Deployment in Kubernetes, it creates a corresponding ReplicaSet behind the scenes. A ReplicaSet ensures that a specified number of pod replicas are running at any given time. It is defined by a selector that specifies how to identify the Pods it can acquire, a number of replicas indicating how many Pods it should maintain, and a pod template specifying the data for the new Pods it should create to meet the replica count.

A Kubernetes Deployment provides declarative updates for pods and ReplicaSets. A rolling update strategy provides a controlled, phased replacement of the application's pods, ensuring that a minimum number are always available. A Deployment creates a ReplicaSet; when you create or update the Deployment, it creates a new ReplicaSet to bring up the desired pods whose labels match. By default, the Deployment makes sure that a maximum of 25% of pods are unavailable at any time, and it also won't over-provision more than 25% above the number of pods specified in the desired state. The Deployment won't kill old pods until there are enough new pods available to maintain the availability threshold, and it won't create new pods until enough old pods are removed. The Deployment object lets you control the range of available and excess pods through the maxSurge and maxUnavailable fields.
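Those two fields sit under spec.strategy in the Deployment manifest. A sketch of the relevant fragment (the values shown are the defaults, spelled out explicitly):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # at most 1 extra pod above the desired 4 during rollout
      maxUnavailable: 25%  # at most 1 pod below the desired 4 during rollout
```

Both fields also accept absolute numbers (e.g. maxSurge: 1), which is often easier to reason about for small replica counts.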

readiness check
As we know, Kubernetes keeps a minimum number of pods running during the rollout. However, it is strongly recommended that you add a readiness check to your pods so that Kubernetes knows when they are truly ready to receive traffic.
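A minimal sketch of such a probe on an nginx container (the path and timings are illustrative and should match whatever endpoint confirms your app can serve traffic):

```yaml
spec:
  containers:
  - name: nginx
    image: nginx:1.16
    readinessProbe:
      httpGet:
        path: /            # endpoint that confirms the pod can serve traffic
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```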

Upgrade using Blue-Green Deployments

A blue-green deployment lets you replace the existing version of your application across all pods at once. It differs from a rolling update because the “green” version of the application is deployed alongside the “blue” version. After testing that the new version meets the requirements, we update the Kubernetes Service object that plays the role of load balancer so that it sends traffic to the new version, by replacing the version label in its selector field.

Here’s how blue-green deployments work:

  1. Version blue of your application is already deployed.
  2. Push version green of your application to your container image repository.
  3. Deploy version green of your application to a new group of pods. Both versions blue and green pods are now running in parallel. However, only version blue is exposed to external clients.
  4. Run internal testing on version green and make sure it is ready to go live.
  5. Flip a switch and the ingress controller in front of your clusters stops routing traffic to the version blue pods and starts routing it to the version green pods.

Now let’s see an example in practice.

We are going to deploy a simple nginx application. Note that there are other, more reliable ways to do this in Kubernetes, such as Istio or Linkerd, but here we use only native Kubernetes resources.

To make this work, we'll deploy an nginx app. First we'll create the blue Deployment with the appropriate labels and verify that everything works. Then we'll create the corresponding Service. Next, alongside the blue Deployment, we'll deploy the green Deployment with the new version. After validating that the new version works, we'll replace the version label in the Service so it points at the green Deployment. Finally, we can delete the blue Deployment.

First, let's generate our Deployment manifest with an imperative command. Then we are going to change and add some labels.

vagrant@master:~$ kubectl create deploy blue-deploy --image=nginx:1.16 --replicas=3 --dry-run=client -o yaml > blue-deploy.yaml

vagrant@master:~$ cat blue-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy    #change
    version: nginx-1.16  #add
  name: blue-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy    #change
      version: nginx-1.16  #add
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy    #change
        version: nginx-1.16  #add
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
        resources: {}
status: {}

We've added the app label (common to both deployments) and the version label, which will change in the new version (the green deployment). A Deployment object defines a spec.selector section that matches the spec.template.metadata labels; these labels are how the Deployment keeps track of its pods. This is the key to setting up a blue-green deployment: using different labels, you can deploy multiple versions of the same application. Note that the version label mirrors the container image version, which helps us identify and track versions when we deploy the new one.

If we were using a Rolling Update here, it would be important to have a readiness probe to check whether a new pod is able to receive connections. Since we deploy the new version alongside the old one and switch traffic manually, it is not strictly necessary.

Now let’s apply our blue deployment.

vagrant@master:~$ kubectl apply -f blue-deploy.yaml 
deployment.apps/blue-deploy created

vagrant@master:~$ kubectl get pods --show-labels
NAME                          READY   STATUS    RESTARTS   AGE   LABELS
blue-deploy-9c78c5f79-67q9l   1/1     Running   0          12s   app=nginx-deploy,pod-template-hash=9c78c5f79,version=nginx-1.16
blue-deploy-9c78c5f79-68tlz   1/1     Running   0          12s   app=nginx-deploy,pod-template-hash=9c78c5f79,version=nginx-1.16
blue-deploy-9c78c5f79-8wd5v   1/1     Running   0          12s   app=nginx-deploy,pod-template-hash=9c78c5f79,version=nginx-1.16

Now our Deployment, ReplicaSet, and pods are deployed and labeled with the “blue” version.

In the next step we are going to expose the blue Deployment so we can access our application in the cluster. The following Service definition targets that same selector:

vagrant@master:~$  kubectl expose deployment blue-deploy --port=80 --target-port=80 --type=ClusterIP --name=service-bg --dry-run=client -o yaml > svc_blue-green.yaml

vagrant@master:~$ cat svc_blue-green.yaml 
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: service-bg
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-deploy
    version: nginx-1.16
  type: ClusterIP
status:
  loadBalancer: {}

Let's apply the generated Service:

vagrant@master:~$ kubectl apply -f svc_blue-green.yaml 
service/service-bg created
vagrant@master:~$ kubectl describe svc service-bg 
Name:              service-bg
Namespace:         default
Labels:            app=nginx-deploy
Annotations:       <none>
Selector:          app=nginx-deploy,version=nginx-1.16
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.44.243
IPs:               10.96.44.243
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.32.0.4:80,10.32.0.5:80,10.46.0.2:80
Session Affinity:  None
Events:            <none>


vagrant@master:~$ kubectl get ep
NAME         ENDPOINTS                                AGE
kubernetes   192.168.5.11:6443                        13d
service-bg   10.32.0.4:80,10.32.0.5:80,10.46.0.2:80   16s

Note the endpoint IPs: they are the pods created by the “blue” Deployment/ReplicaSet. You can also access the application through the Service IP shown above.

vagrant@master:~$ curl 10.96.44.243
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Now let's start our upgrade by deploying the new version to run in parallel with our blue Deployment. That is, we are going to deploy the green version, nginx:1.17 in this case. It is similar to the blue Deployment, as you can see in the generated YAML, but it uses the new application version (image=nginx:1.17) and the version label was changed too. We'll refer to this label when we change our Service. See:

vagrant@master:~$ kubectl create deploy green-deploy --image=nginx:1.17 --replicas=3 --dry-run=client -o yaml > green-deploy.yaml

vagrant@master:~$ cat green-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy    #change
    version: nginx-1.17  #add
  name: green-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-deploy    #change
      version: nginx-1.17  #add
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy    #change
        version: nginx-1.17  #add
    spec:
      containers:
      - image: nginx:1.17
        name: nginx
        resources: {}
status: {}

After applying the new Deployment, we have two Deployments, blue and green, each with its respective labels, running at the same time.

vagrant@master:~$ kubectl apply -f green-deploy.yaml 
deployment.apps/green-deploy created
vagrant@master:~$ kubectl get pods --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
blue-deploy-9c78c5f79-67q9l     1/1     Running   0          84m   app=nginx-deploy,pod-template-hash=9c78c5f79,version=nginx-1.16
blue-deploy-9c78c5f79-68tlz     1/1     Running   0          84m   app=nginx-deploy,pod-template-hash=9c78c5f79,version=nginx-1.16
blue-deploy-9c78c5f79-8wd5v     1/1     Running   0          84m   app=nginx-deploy,pod-template-hash=9c78c5f79,version=nginx-1.16
green-deploy-7dc4bf4599-cxkmn   1/1     Running   0          8s    app=nginx-deploy,pod-template-hash=7dc4bf4599,version=nginx-1.17
green-deploy-7dc4bf4599-rbnlt   1/1     Running   0          8s    app=nginx-deploy,pod-template-hash=7dc4bf4599,version=nginx-1.17
green-deploy-7dc4bf4599-xvxrw   1/1     Running   0          8s    app=nginx-deploy,pod-template-hash=7dc4bf4599,version=nginx-1.17

Note that we now have two Deployments running alongside each other. At this point, both blue and green are deployed, but only the blue instances receive traffic from the Service. To make the switch, update the version selector in your Service definition from version=nginx-1.16 to version=nginx-1.17. You can use kubectl edit svc service-bg and change the label to the new version.
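If you prefer a non-interactive switch (handy in scripts or CI), the same change can be made with kubectl patch. A sketch using the names from this example:

```shell
# Merge the new version label into the Service selector
kubectl patch service service-bg \
  -p '{"spec":{"selector":{"app":"nginx-deploy","version":"nginx-1.17"}}}'
```

Because kubectl patch uses a strategic merge by default, only the keys you supply are changed.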

vagrant@master:~$ kubectl edit svc service-bg 
service/service-bg edited
vagrant@master:~$ kubectl describe service service-bg 
Name:                     service-bg
Namespace:                default
Labels:                   app=nginx-deploy
Annotations:              <none>
Selector:                 app=nginx-deploy,version=nginx-1.17  #changed
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.44.243
IPs:                      10.96.44.243
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                10.32.0.6:80,10.46.0.3:80,10.46.0.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Now the Service selects the pods that belong to the new green ReplicaSet/Deployment.

vagrant@master:~$ kubectl get ep
NAME         ENDPOINTS                                AGE
kubernetes   192.168.5.11:6443                        13d
service-bg   10.32.0.6:80,10.46.0.3:80,10.46.0.4:80   79m

Note that the endpoint IPs are now 10.32.0.6:80,10.46.0.3:80,10.46.0.4:80 instead of 10.32.0.4:80,10.32.0.5:80,10.46.0.2:80. That's it: we switched the traffic from the blue Deployment to the green Deployment with the new application version. You can verify the pod IPs and the image of any of the new endpoint pods.

vagrant@master:~$ kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP          NODE       NOMINATED NODE   READINESS GATES
blue-deploy-9c78c5f79-67q9l     1/1     Running   0          108m   10.32.0.5   worker-1   <none>           <none>
blue-deploy-9c78c5f79-68tlz     1/1     Running   0          108m   10.32.0.4   worker-1   <none>           <none>
blue-deploy-9c78c5f79-8wd5v     1/1     Running   0          108m   10.46.0.2   worker-2   <none>           <none>
green-deploy-7dc4bf4599-cxkmn   1/1     Running   0          24m    10.32.0.6   worker-1   <none>           <none>
green-deploy-7dc4bf4599-rbnlt   1/1     Running   0          24m    10.46.0.4   worker-2   <none>           <none>
green-deploy-7dc4bf4599-xvxrw   1/1     Running   0          24m    10.46.0.3   worker-2   <none>           <none>
vagrant@master:~$ kubectl describe pod green-deploy-7dc4bf4599-xvxrw | grep -i image
    Image:          nginx:1.17
    Image ID:       docker-pullable://nginx@sha256:6fff55753e3b34e36e24e37039ee9eae1fe38a6420d8ae16ef37c92d1eb26699
  Normal  Pulled     24m   kubelet            Container image "nginx:1.17" already present on machine
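Note that while the blue Deployment still exists, rolling back is as simple as pointing the selector back at the old version. A sketch with this example's names:

```shell
# Merge the old version label back into the Service selector;
# traffic immediately returns to the blue pods
kubectl patch service service-bg \
  -p '{"spec":{"selector":{"version":"nginx-1.16"}}}'
```

This instant rollback path is one of the main selling points of blue-green over a rolling update.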

After verifying that everything works and making sure the new version meets all the requirements, you can switch off the old version. So let's delete the blue Deployment:

vagrant@master:~$ kubectl delete deployments blue-deploy
deployment.apps "blue-deploy" deleted 

and the Service now load-balances to the new green Deployment.

vagrant@master:~$ kubectl describe svc service-bg 
Name:                     service-bg
Namespace:                default
Labels:                   app=nginx-deploy
Annotations:              <none>
Selector:                 app=nginx-deploy,version=nginx-1.17
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.44.243
IPs:                      10.96.44.243
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
Endpoints:                10.32.0.6:80,10.46.0.3:80,10.46.0.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

This post was intended as a learning exercise, so if you plan to adopt a blue-green strategy you should also analyze whether the app being upgraded is stateful or stateless, add readiness and liveness probes to check the app lifecycle, and so on.

In upcoming posts, continuing the series on deployment strategies in Kubernetes, we are going to show an example of a canary release deployment.

I hope this post was useful.