- Building Google Cloud Platform Solutions
- Ted Hunter Steven Porter Legorie Rajan PS
Rolling updates
Rolling updates, while quite simple to execute, are a vital part of the GKE story and ecosystem. They are key in that they support scenarios in GKE such as resiliency and continuous deployment. To gain the zero-downtime benefits of a rolling update, your Deployment needs to be running multiple replicas of your container. The update works as a multistep process: Kubernetes brings up Pods running the new image, waits for them to become ready, and then terminates Pods running the old image. This process is repeated until every replica in the Deployment is running the latest image.
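The replica-by-replica behavior described above is governed by the Deployment's update strategy. The following manifest is a minimal sketch, not taken from this chapter: the replica count, surge settings, and the `v1` image tag are illustrative assumptions, while the `hello-world` name and image path match the commands used later in this section.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3            # multiple replicas keep the app available during the update
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most one extra Pod above the desired count
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/my-project/hello-node:v1  # illustrative project ID and tag
```

With `maxUnavailable: 0`, Kubernetes only removes an old Pod once a replacement running the new image is ready, which is what keeps the application serving traffic throughout the rollout.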
GKE's rolling update mechanism ensures that your application remains up and available even as the system replaces instances of your old container image with your new one across all the running replicas. From beginning to end, the process to initiate a rolling update can be completed in three steps. First, build an updated version of your container image:
docker build -t gcr.io/${PROJECT_ID}/hello-node:v2 .
After you have an updated image, you will want to upload your updated image, either manually or via an automated process, to Google Container Registry:
gcloud docker -- push gcr.io/${PROJECT_ID}/hello-node:v2
Now that you have an updated image uploaded to Container Registry, you're ready to execute the actual rolling update. The rolling update command is quite straightforward, with all the heavy lifting being performed under the covers by GKE and Kubernetes:
kubectl set image deployment/hello-world hello-world=gcr.io/${PROJECT_ID}/hello-node:v2
Once a rolling update is initiated, Kubernetes will replace Pods incrementally, bringing up replicas with the new image before terminating those running the old one. There are several tools you can use to monitor the progress of rolling updates. The Kubernetes CLI is a good option if you are just testing out deployments or want to include progress checks in an automated process. GKE also offers a workload dashboard that provides a view into your cluster's nodes and deployed workloads.
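As a sketch of the CLI approach, the standard `kubectl rollout` subcommands can follow and, if needed, reverse the rollout of the `hello-world` Deployment updated above (these commands assume a configured connection to a live cluster):

```shell
# Watch the rollout until it completes; exits non-zero if the rollout fails,
# which makes it suitable as a gate in an automated pipeline
kubectl rollout status deployment/hello-world

# Review the revision history recorded for the Deployment
kubectl rollout history deployment/hello-world

# Roll back to the previous revision if the new image misbehaves
kubectl rollout undo deployment/hello-world
```

Because `kubectl rollout status` blocks until the rollout succeeds or fails, it pairs naturally with the `kubectl set image` command from the previous step in a deployment script.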