Software Containerisation

Deployment updates

Rolling updates

If you change a Deployment’s pod template (.spec.template), a rollout is triggered. To observe the rollout, you can use e.g. kubectl rollout status deployment/<name>
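As a sketch of what this looks like in practice (the deployment name `web` and the image are assumptions for illustration), changing the image in the pod template triggers a rollout, which you can then watch:

```
# Changing .spec.template (here, the container image) triggers a rollout
kubectl set image deployment/web web=nginx:1.25

# Watch the rollout until it completes (or fails)
kubectl rollout status deployment/web
```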

Each pod and ReplicaSet created by the Deployment controller gets the same pod-template-hash label, generated by hashing the ReplicaSet’s PodTemplate. Its purpose is to ensure that ReplicaSets created from a Deployment don’t overlap.

So that the application remains available, the Deployment ensures that during an update:

- at most a certain number of pods are unavailable at any time (maxUnavailable, 25% by default), and
- at most a certain number of pods are created above the desired replica count (maxSurge, 25% by default).
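These availability guarantees are configured under the Deployment’s update strategy. A minimal sketch (the name `web` and image are assumptions):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%    # at most 1 of the 4 pods may be down during the update
      maxSurge: 25%          # at most 1 extra pod above the desired 4 may exist
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```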

You can check past revisions with kubectl rollout history deployment/<name>. To save a change cause, use the --record flag (deprecated in newer Kubernetes versions, where setting the kubernetes.io/change-cause annotation is the replacement).

To roll back, use kubectl rollout undo deployment/<name> --to-revision=<n>
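Put together, inspecting and undoing a rollout looks roughly like this (again assuming a deployment named `web`):

```
# List recorded revisions and their change causes
kubectl rollout history deployment/web

# Inspect one revision in detail
kubectl rollout history deployment/web --revision=2

# Roll back to that revision
kubectl rollout undo deployment/web --to-revision=2
```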

Canary Deployments

The problem with rolling updates is that while one is in progress, you have no way of testing that the new version works correctly before it reaches all users.

Canary Deployments are used to test a new release with a subset of users before rolling it out to everyone.

This involves using at least one Service to direct traffic to the pods running the old code and the pods running the new code. You add a label to the pods indicating whether each one is the original (stable) type or the canary. If the Service’s selector does not discriminate based on that label, both types of pods receive traffic.
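A minimal sketch of this setup (the names `web`, the `track` label, and the images are assumptions for illustration): the Service selects only on `app`, so it balances traffic across both the stable pods and the single canary pod.

```
# Service selects on "app" only; it does not discriminate on "track",
# so both stable and canary pods receive a share of the traffic.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Canary Deployment: a small number of pods running the new release.
# The stable Deployment would carry the label track: stable instead.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1              # small subset compared to the stable replicas
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
        - name: web
          image: web:2.0   # hypothetical new release
```

Because the canary runs far fewer replicas than the stable Deployment, only a proportional fraction of users hit the new code; promoting the release then means updating the stable Deployment’s image.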