Cluster Upgrades

Justin Garrison

Justin is a Senior Developer Advocate at Amazon Web Services (AWS).

If you’ve been using Kubernetes for any amount of time, you need to plan for regular upgrades. Starting with Kubernetes 1.19, each open source release receives one year of patches. You need to upgrade to the latest available minor or patch release to keep receiving security and bug fixes. But how can you upgrade a critical part of your infrastructure without downtime? This article will guide you through common patterns to consider when upgrading Kubernetes in any environment.

We won’t dive into all of the tools and considerations needed to perform an upgrade. If you are using a cluster management tool or hosted Kubernetes service, consult its documentation for the best option for your environment. Also be aware that some workloads and environments may restrict which upgrade strategy you can choose.

We will discuss a few high-level patterns for cluster upgrades:

  • In place
  • Blue/Green
  • Rolling
  • Canary

These patterns are similar to application upgrade options, with some unique considerations because of their potential blast radius. Upgrading infrastructure can incur a considerable cost, depending on how long your upgrade takes and how large your environment is.

Control Plane Components

The Kubernetes control plane consists of the Kubernetes API server, the etcd database, the controller manager, the scheduler, and any additional controllers, such as cloud or ingress controllers, that you run in your environment. Upgrading the API server is the first step when upgrading a cluster. Kubernetes stores its state in etcd, and as with any major application upgrade, you need to make sure you have at least one backup and have verified that it can be restored. In some cases, an API server upgrade may also require an etcd upgrade, but we won’t be covering that in this post.
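
If you run your own control plane, the backup step can be scripted. Below is a minimal Python sketch that shells out to etcdctl to take and sanity-check a snapshot; the endpoint and certificate paths are assumptions for a typical kubeadm-style node and will differ in your environment, and a real verification should also include restoring the snapshot into a non-production cluster.

    import subprocess
    from datetime import datetime

    # Assumed paths for a kubeadm-style control plane node; adjust for your setup.
    ETCD_ENDPOINT = "https://127.0.0.1:2379"
    CERTS = [
        "--cacert=/etc/kubernetes/pki/etcd/ca.crt",
        "--cert=/etc/kubernetes/pki/etcd/server.crt",
        "--key=/etc/kubernetes/pki/etcd/server.key",
    ]

    def backup_etcd() -> str:
        """Take an etcd snapshot and verify the file can be read back."""
        snapshot = f"/var/backups/etcd-{datetime.utcnow():%Y%m%d-%H%M%S}.db"
        base = ["etcdctl", f"--endpoints={ETCD_ENDPOINT}", *CERTS]

        # Write the snapshot to disk (older etcdctl may need ETCDCTL_API=3 set).
        subprocess.run([*base, "snapshot", "save", snapshot], check=True)

        # "snapshot status" fails if the file is unreadable or truncated,
        # which is a cheap sanity check before you start the upgrade.
        subprocess.run([*base, "snapshot", "status", snapshot], check=True)
        return snapshot

    if __name__ == "__main__":
        print(f"Snapshot written to {backup_etcd()}")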

Data Plane Components

The Kubernetes data plane consists of the kubelet, a container runtime, and any network, logging, or storage drivers your cluster workloads use. For many clusters this will, at a minimum, require a kube-proxy and CNI plugin update. Your data plane components must be equal to, or one minor version behind, your API server version. Ideally, your host OS, container runtime, and data plane components can be upgraded independently of each other. Decoupling these components ensures that you can upgrade quickly when there are bug fixes, new features, or security patches.
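
As a quick pre-upgrade check, you can compare each node's kubelet version against the API server. This is a minimal sketch using the official Python client (the kubernetes package); it assumes your kubeconfig already points at the cluster.

    from kubernetes import client, config

    def check_version_skew() -> None:
        """Print the API server version and flag kubelets more than one minor version behind."""
        config.load_kube_config()  # or config.load_incluster_config() inside a pod

        server = client.VersionApi().get_code()        # e.g. major="1", minor="20"
        server_minor = int(server.minor.rstrip("+"))   # some providers report minors like "20+"
        print(f"API server: {server.git_version}")

        for node in client.CoreV1Api().list_node().items:
            kubelet = node.status.node_info.kubelet_version   # e.g. "v1.19.6"
            kubelet_minor = int(kubelet.split(".")[1])
            if server_minor - kubelet_minor > 1:
                print(f"  {node.metadata.name}: kubelet {kubelet} is more than one minor behind")
            else:
                print(f"  {node.metadata.name}: kubelet {kubelet} OK")

    if __name__ == "__main__":
        check_version_skew()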

Kubernetes Hosted Service

If you’re using a hosted Kubernetes service such as Amazon Elastic Kubernetes Service (EKS), control plane upgrades are handled for you. If you’re using a managed data plane service, such as Managed Node Groups (MNG), your data plane upgrades should also be automatically handled by your provider.
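
On EKS, for instance, both managed upgrades can be triggered through the AWS API. Below is a minimal boto3 sketch; the cluster name, node group name, and target version are placeholders, and a production script should poll the returned update IDs (for example with describe_update) before moving on.

    import boto3

    def upgrade_eks(cluster: str, nodegroup: str, version: str) -> None:
        """Ask EKS to upgrade the managed control plane, then a managed node group."""
        eks = boto3.client("eks")

        # Control plane first; EKS performs the upgrade for you behind the scenes.
        cp_update = eks.update_cluster_version(name=cluster, version=version)
        print("control plane update id:", cp_update["update"]["id"])

        # In a real script, wait for the control plane update to succeed
        # (eks.describe_update) before upgrading the node group.
        ng_update = eks.update_nodegroup_version(
            clusterName=cluster, nodegroupName=nodegroup, version=version
        )
        print("node group update id:", ng_update["update"]["id"])

    if __name__ == "__main__":
        upgrade_eks("prod-cluster", "prod-workers", "1.20")  # hypothetical names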

Even with a hosted service, you are still responsible for verifying the workloads, additional controllers, and third-party plugins (such as your CNI) that you have installed in your cluster. Those components should be tested for API compatibility in a test or development environment before you upgrade your production cluster. We talked about making sure your workloads and controllers are compatible with different Kubernetes API versions in a previous blog post.
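
One way to catch incompatible manifests ahead of an upgrade is to scan them for API versions the target release removes. Tools such as pluto or kube-no-trouble do this for you; the sketch below is a simplified, hand-rolled version that checks an illustrative (not exhaustive) list of removed apiVersion values and assumes your rendered manifests live in a local directory.

    import pathlib
    import yaml  # pip install pyyaml

    # Illustrative only: consult the Kubernetes deprecation guide for the real
    # list of API versions removed by the release you are moving to.
    REMOVED_API_VERSIONS = {"extensions/v1beta1", "apps/v1beta1", "apps/v1beta2"}

    def scan_manifests(manifest_dir: str) -> None:
        """Warn about manifests that still use removed API versions."""
        for path in pathlib.Path(manifest_dir).rglob("*.y*ml"):
            for doc in yaml.safe_load_all(path.read_text()):
                if not isinstance(doc, dict):
                    continue
                api_version = doc.get("apiVersion", "")
                if api_version in REMOVED_API_VERSIONS:
                    kind = doc.get("kind", "?")
                    name = doc.get("metadata", {}).get("name", "?")
                    print(f"{path}: {kind}/{name} uses removed apiVersion {api_version}")

    if __name__ == "__main__":
        scan_manifests("./manifests")  # hypothetical path to your rendered manifests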

In all of these upgrade strategies, you should avoid application upgrades during a cluster upgrade. If possible, keep your workloads on the same version to minimize failures that may be falsely attributed to the Kubernetes upgrade. Also avoid other potential sources of issues, such as schema upgrades or application API changes.

With any Kubernetes upgrade, you should upgrade components in the following order:

  1. Control plane
  2. Data plane and nodes
  3. Add-ons
  4. Workloads

The following upgrade patterns will help you decide how to upgrade those components in the way that best suits your cluster and environment.

In-Place Upgrades

When performing an in-place upgrade, you must be extra careful that components remain healthy, because you are performing work on a cluster that is currently serving production traffic. In-place upgrades can consist of package updates (e.g., yum, apt), config management automation (e.g., Ansible, Chef), or VM/container image changes. Ideally, your upgrade will be scripted and automated, including rollback, but if this is your first time upgrading, doing it manually in a development or test environment may be helpful.
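
As an illustration of what a scripted in-place upgrade can look like, here is a minimal Python sketch that drains a node, upgrades the Kubernetes packages on a Debian-based host with kubeadm, and uncordons it. The node name and target package version are assumptions, the drain flag names vary slightly by kubectl version, and the apt and kubeadm steps must run on the node itself; treat this as a starting point to test in a non-production cluster.

    import subprocess

    TARGET = "1.20.4-00"  # hypothetical target package version; pin to your release

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def upgrade_node_in_place(node: str) -> None:
        """Drain, upgrade, and uncordon a single node on a Debian-based host."""
        # Move workloads off the node before touching packages.
        run(["kubectl", "drain", node, "--ignore-daemonsets", "--delete-emptydir-data"])

        # Upgrade the node's Kubernetes packages (run these commands on the node itself).
        run(["apt-get", "update"])
        run(["apt-get", "install", "-y",
             f"kubeadm={TARGET}", f"kubelet={TARGET}", f"kubectl={TARGET}"])
        run(["kubeadm", "upgrade", "node"])
        run(["systemctl", "restart", "kubelet"])

        # Let the scheduler place pods on the node again.
        run(["kubectl", "uncordon", node])

    if __name__ == "__main__":
        upgrade_node_in_place("worker-1")  # hypothetical node name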

In-place upgrades mean that all the components will upgrade at roughly the same time. If you change your desired API server version with configuration management and push a new configuration, all of the API servers will upgrade once they receive the new configuration. This is different from rolling upgrades, which we discuss later.

The main benefits of in-place upgrades are:

  • It is the fastest to perform at any scale.
  • If done manually, it can allow for more control of the components and upgrade process.
  • It is easily suited for multiple environments (on-prem or cloud).
  • It is the cheapest from an infrastructure cost perspective.

Depending on your process, scale, and tooling, an in-place upgrade is probably the most straightforward approach to script and roll out. Scripts can be tested locally or in development clusters, without needing to reconfigure resources that cluster administrator teams may not have access to, such as load balancers or DNS.

In-place upgrades also come with restrictions to consider before you choose this method:

  • It is possible to cause downtime if all of your API servers or controllers upgrade at the same time.
  • If you want to move from Kubernetes 1.16 to 1.20, you’ll have to upgrade the entire cluster four times, once for each minor version.
  • Validating each step may be a manual process, which can add additional time and opportunity for mistakes.
  • You should test a rollback plan in case of failure, because some upgrades (e.g., schema changes) cannot be easily reverted.

Blue/Green Upgrades

Blue/green cluster upgrades require you to create a second cluster running a new version of Kubernetes. You will need to deploy a new control plane and data plane, then copy all of your workloads to the new cluster before you switch traffic from the old cluster to the new one. You can use blue/green to update each component of your cluster individually, but a holistic cluster upgrade is easier to deploy and roll back.

The good news is that setting up a new cluster is generally easier than upgrading a cluster in place. You have multiple options for how you deploy workloads to the new cluster. If your workloads are already part of a GitOps or continuous delivery workflow, you can have deployments go to the new cluster alongside the old cluster, prior to or during your upgrade. If your deployments are not automated, you can use a tool like Velero to back up your existing workloads and restore them into the new cluster.
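
If you go the Velero route, the backup and restore steps can also be scripted. The sketch below shells out to the velero CLI; the backup name and kubeconfig context names are hypothetical, it assumes a recent velero CLI and that both clusters share the same backup storage location, and exact flags may differ by Velero version.

    import subprocess

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def copy_workloads(backup_name: str, old_ctx: str, new_ctx: str) -> None:
        """Back up workloads from the old cluster and restore them into the new one."""
        # Create the backup against the old (blue) cluster.
        run(["velero", "backup", "create", backup_name,
             "--kubecontext", old_ctx, "--wait"])

        # Restore it into the new (green) cluster, which reads the same backup location.
        run(["velero", "restore", "create",
             "--from-backup", backup_name,
             "--kubecontext", new_ctx, "--wait"])

    if __name__ == "__main__":
        copy_workloads("pre-upgrade-1-20", "blue-cluster", "green-cluster")  # hypothetical names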

Creating a new “green” cluster can give you a lot of confidence that the new version works as you expect and puts you in control of when you switch versions. The new cluster can also be used to validate automation tooling, such as Terraform modules or GitOps repos. You can make the change via DNS or load balancers whenever you are ready, for example during a maintenance window or a period of low utilization.

Key benefits of a blue/green upgrade are:

  • Pre-validate that all components are healthy before sending traffic.
  • You can upgrade multiple versions at a time (e.g., 1.16 directly to 1.20).
  • You can change other parts of the infrastructure that might be difficult to test (e.g., switch regions, add zones, change instance types).
  • It is the safest and easiest to roll back.

Downsides for blue/green deployments to consider include:

  • This is the most expensive strategy in infrastructure cost, because you have to run twice the compute capacity during the migration.
  • You may not be able to get all of the compute capacity you need to run a complete second cluster if you have thousands of worker nodes.
  • This strategy is hard to scale to dozens or hundreds of clusters if you have multiple concurrent cluster upgrades.
  • Blue/green is not easy to do on-prem without virtualization, unless you have spare servers.
  • Switching all traffic at once may not be easy if you have lots of endpoints to update. Load balancers may need to be pre-scaled and caches warmed. Beware of DNS time to live (TTL) values, which can delay the cutover and make DNS an unreliable way to spread load.
  • Switching all cluster traffic at once requires cross-team coordination to migrate to the new cluster, as well as engineering cycles to verify that workloads are running at the correct scale.

Blue/green can be a great strategy when you have a small number of clusters or fewer than a couple of hundred worker nodes. It allows you to skip versions and it is safe to roll back, but be aware of how much it may cost you in infrastructure spend and coordination time.

Rolling Upgrades

If you’re familiar with Kubernetes deployment strategies, you’ll recognize rolling upgrades. A rolling upgrade deploys one new copy of a component and then scales down one old copy. It continues this pattern until all of the old components have been removed. The incremental nature of rolling upgrades has some advantages over in-place and blue/green strategies.

Similar to in-place upgrades, you’ll need to upgrade one minor version of Kubernetes at a time. This is additional work when you need to move across multiple versions, but it’s the only supported option. Depending on the component you’re upgrading, you may use a different tool for each.

For resources like the control plane, you may want to add a new server running an upgraded API server to the control plane and then shut down an old server. If you’re in AWS, you could change the AMI in an Auto Scaling group launch configuration or launch template and replace one instance at a time. Other control plane components (e.g., the scheduler) may be running as containers inside your cluster, so you can upgrade those with standard Kubernetes rolling deployment upgrades.
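
On AWS, one way to roll worker nodes behind an Auto Scaling group is an instance refresh, which replaces instances in small batches after you point the group at an upgraded node AMI. This is a minimal boto3 sketch using a launch template (launch configurations work similarly); the group name, launch template ID, AMI ID, and health-percentage values are placeholders.

    import boto3

    def roll_nodes(asg_name: str, launch_template_id: str, new_ami: str) -> str:
        """Point the ASG's launch template at a new AMI and start a rolling instance refresh."""
        ec2 = boto3.client("ec2")
        autoscaling = boto3.client("autoscaling")

        # Create a new launch template version with the upgraded node AMI
        # and make it the default version the ASG will launch from.
        version = ec2.create_launch_template_version(
            LaunchTemplateId=launch_template_id,
            SourceVersion="$Latest",
            LaunchTemplateData={"ImageId": new_ami},
        )["LaunchTemplateVersion"]["VersionNumber"]
        ec2.modify_launch_template(
            LaunchTemplateId=launch_template_id, DefaultVersion=str(version)
        )

        # Replace instances a few at a time, keeping most of the group healthy.
        refresh = autoscaling.start_instance_refresh(
            AutoScalingGroupName=asg_name,
            Preferences={"MinHealthyPercentage": 90, "InstanceWarmup": 300},
        )
        return refresh["InstanceRefreshId"]

    if __name__ == "__main__":
        print(roll_nodes("k8s-workers", "lt-0123456789abcdef0", "ami-0123456789abcdef0"))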

The key difference in a rolling upgrade compared to blue/green is that your external traffic routing (DNS and load balancers) will stay pointed to the same place. You will want to make sure you test all add-ons and workloads in a different cluster or environment before you move forward with a production cluster upgrade.

Note that AWS managed node groups, kOps, Cluster-API and many other Kubernetes cluster management tools use a rolling upgrade strategy. Benefits include:

  • Safer rollouts and rollbacks compared to in-place upgrades.
  • Costs less than blue/green and is less likely to run out of compute capacity.
  • Can be paused mid-upgrade if something breaks.
  • Can be adapted to on-prem environments.

Rolling upgrades are the most common approach for automated tools. They strike a good balance between speed and cost, and even though they’re not fully immutable, they replace components immutably where it matters most, which reduces manual work and risk.

When upgrading a production cluster, all of your existing workloads will still be deployed, and as long as you’ve tested their compatibility, your upgrades should be automatable.
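
For example, the per-node cordon-and-drain step of a rolling upgrade can be automated with the official Python client. This is a simplified sketch: it cordons a node and evicts its pods (skipping DaemonSet-managed pods), but leaves PodDisruptionBudget retries and timeouts to you. The node name is hypothetical.

    from kubernetes import client, config

    def cordon_and_drain(node_name: str) -> None:
        """Mark a node unschedulable, then evict its pods so the node can be replaced."""
        config.load_kube_config()
        v1 = client.CoreV1Api()

        # Cordon: no new pods will be scheduled onto the node.
        v1.patch_node(node_name, {"spec": {"unschedulable": True}})

        pods = v1.list_pod_for_all_namespaces(
            field_selector=f"spec.nodeName={node_name}"
        ).items
        for pod in pods:
            owners = pod.metadata.owner_references or []
            if any(o.kind == "DaemonSet" for o in owners):
                continue  # DaemonSet pods would be recreated on the node anyway
            eviction = client.V1Eviction(
                metadata=client.V1ObjectMeta(
                    name=pod.metadata.name, namespace=pod.metadata.namespace
                )
            )
            # The Eviction API respects PodDisruptionBudgets; a 429 response means
            # the budget would be violated and the call should be retried later.
            v1.create_namespaced_pod_eviction(
                name=pod.metadata.name, namespace=pod.metadata.namespace, body=eviction
            )

    if __name__ == "__main__":
        cordon_and_drain("worker-1")  # hypothetical node name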

Further considerations when using rolling upgrades include:

  • Rolling upgrades can be slow depending on your scale.
  • You may need to coordinate controller, DaemonSet, or plugin upgrades during the rollout.
  • You might not be able to make cluster-wide changes, such as adding an availability zone or changing architecture.

Canary Upgrades

Canary application deployments send small increments of traffic to a new version of an application at a time. Canary cluster upgrades can be thought of as rolling upgrades with blue/green benefits.

With canary upgrades, you will create a new Kubernetes cluster running the version you want to deploy. Then add a small data plane and deploy your existing applications to the new cluster at a smaller scale. Finally, add the new cluster’s workloads to your existing production traffic via your load balancer configuration, DNS round robin or weighted records, or a service mesh.
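
If you split traffic with DNS, weighted records are one common mechanism. The boto3 sketch below shifts a percentage of traffic to the new cluster’s load balancer via Amazon Route 53 weighted routing; the hosted zone ID, record name, and load balancer DNS names are placeholders, it assumes the record is already set up for weighted routing, and low TTLs matter for the shift to take effect quickly.

    import boto3

    def set_cluster_weights(zone_id: str, record: str,
                            old_lb: str, new_lb: str, new_weight: int) -> None:
        """Send new_weight percent of DNS answers to the new cluster, the rest to the old one."""
        route53 = boto3.client("route53")
        targets = [
            {"SetIdentifier": "old-cluster", "Weight": 100 - new_weight, "DNSName": old_lb},
            {"SetIdentifier": "new-cluster", "Weight": new_weight, "DNSName": new_lb},
        ]
        route53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={
                "Changes": [
                    {
                        "Action": "UPSERT",
                        "ResourceRecordSet": {
                            "Name": record,
                            "Type": "CNAME",
                            "TTL": 60,  # keep TTLs low so weight changes take effect quickly
                            "SetIdentifier": t["SetIdentifier"],
                            "Weight": t["Weight"],
                            "ResourceRecords": [{"Value": t["DNSName"]}],
                        },
                    }
                    for t in targets
                ]
            },
        )

    if __name__ == "__main__":
        # Start by sending 10% of traffic to the upgraded (canary) cluster.
        set_cluster_weights("Z0123456789ABC", "app.example.com",
                            "old-cluster-lb.example.com", "new-cluster-lb.example.com", 10)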

Now you can monitor traffic going to the new cluster, slowly scale up workloads in the new cluster, and scale down workloads in the old cluster. You can do this one workload at a time, as slowly or quickly as you’re comfortable with. If any individual workload starts returning errors, you can scale it back down in the new cluster so its traffic automatically falls back to the old cluster.
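
Scaling a workload up in the new cluster while scaling it down in the old one can be driven from a single script. The sketch below uses the Python client with two kubeconfig contexts; the context, namespace, and deployment names are assumptions, and a real script would wait for the new replicas to become ready before scaling the old cluster down.

    from kubernetes import client, config

    def shift_replicas(deployment: str, namespace: str,
                       old_ctx: str, new_ctx: str, total: int, new_count: int) -> None:
        """Keep total replicas constant while moving capacity from the old cluster to the new one."""
        old_api = client.AppsV1Api(config.new_client_from_config(context=old_ctx))
        new_api = client.AppsV1Api(config.new_client_from_config(context=new_ctx))

        # Scale up in the new cluster first so overall capacity never drops below `total`.
        new_api.patch_namespaced_deployment_scale(
            deployment, namespace, {"spec": {"replicas": new_count}}
        )
        old_api.patch_namespaced_deployment_scale(
            deployment, namespace, {"spec": {"replicas": total - new_count}}
        )

    if __name__ == "__main__":
        # Move 2 of 10 replicas of a hypothetical "web" deployment to the canary cluster.
        shift_replicas("web", "default", "old-cluster", "new-cluster", total=10, new_count=2)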

The benefits to canary cluster upgrades include:

  • New clusters are easier to create and validate.
  • You can skip minor Kubernetes versions during upgrades (e.g., 1.16 to 1.20).
  • Application deployments can be opt-in on a per-team basis.
  • Errors have minimal impact because traffic shifts incrementally.
  • You can make large infrastructure changes during upgrades.
  • Clusters start small and scale, so infrastructure costs are lower and you can warm caches and load balancers as you scale.

If you want to make large changes (such as changing architecture) or you want to add an additional availability zone, then canary is a great option. By starting the cluster small and growing it per workload, you can make sure that you’re not over-provisioning infrastructure if your new instances are more efficient or workload requests and limits have changed.

As with anything, there are trade-offs. When using canary deployments, you should be aware of some of the following concerns:

  • Rolling back an application may require manual intervention to change a load balancer or scale down the new cluster.
  • You may end up running the old cluster longer than you want, as applications slowly deploy and scale up.
  • Debugging applications can be harder because you need to know which cluster an error is coming from.
  • If you have dozens or hundreds of clusters, you will likely increase your cluster count by 50% or more as clusters are being upgraded.
  • Canary is the most complex upgrade strategy; it works best with automated deployments, health checks, and performance monitoring.

Conclusion

No matter which upgrade strategy you pick, it’s important to know how they work and any possible concerns as your Kubernetes usage grows. You need to have an upgrade strategy because Kubernetes has frequent releases and (like any software) occasional bugs.

Keeping up to date with new versions can be an important part of your infrastructure security process and will enable applications to take advantage of new features quickly. If you deployed Kubernetes and migrated all of your workloads without considering how you would upgrade, now is the perfect time to start planning.

If you do not have a business need to run your own Kubernetes clusters, I highly recommend you use one of the hosted Kubernetes options available. Opting into a managed control plane and data plane can save you days or weeks of planning and upgrades each year. Each hosted option may perform upgrades differently, but they will all allow you to focus on your workloads and business value instead of control plane high availability or data plane compatibility.
