Use Multi-Availability Zone Kubernetes for Disaster Recovery
Nicolas Vermandé
Nicolas is the principal developer advocate at Ondat. He is an experienced hands-on technologist, evangelist and product owner who has been working in the fields of cloud native technologies, open source software, virtualization and data center networking for the past 17 years. Passionate about enabling users and building cool tech that solves real-life problems, he often speaks at global tech conferences and online events.
Outages and degraded performance are inevitable. Operators make mistakes, new protocols introduce errors, natural disasters damage equipment and more.
That's why, rather than trust Amazon's ability to design a hurricane-proof data center, most platform managers opt to spread their application's infrastructure across multiple availability zones (AZs).
AZ outages aren’t terribly common, but they do occur, and degraded performance is even more common. And a catastrophic outage such as one caused by a hurricane is possible, so responsible platform managers design their infrastructure around failure.
This is a good instinct and one that should be applauded. But few platform managers consider the new design challenges that multi-AZ Kubernetes deployments can create. Let’s dive in and explore the new factors a multi-AZ production environment poses.
Can’t I Just Distribute My Infrastructure Across AZs?
Multi-AZ Kubernetes services will likely require more legwork than simply declaring that you want your nodes distributed across AZs. While it’s technically possible that you’ll be able to simply deploy replica nodes across multiple AZs, most applications will require additional configuration.
Specifically, stateless applications will work fine across multiple AZs. If one AZ goes down, Kubernetes will reschedule your app in the next AZ. There won't be any stateful workloads to preserve over the transition, so the end user won't notice anything different.
A stateful application, however, won’t just work across multiple AZs. Here’s why.
Statefulness Across AZs
Let’s say you’ve launched an application across multiple AZs to gain disaster recoverability, but you didn’t take its stateful workloads into account.
First, your application won't function correctly for long, since its stateful workloads are running across containers, and containers on their own aren't built to preserve state.
Say there’s a spike in demand or some other kind of failure that forces Kubernetes to kill a container. When Kubernetes’ scheduler restarts your stateful app, it will only run if it happens to be restarted in the same AZ where its volume was initially located. If it’s restarted elsewhere, it won’t have access to that volume anymore. Thus, platform administrators must create and maintain affinity rules for apps with attached volumes to ensure they restart only in the AZ associated with their volumes.
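To make that concrete, here is a minimal sketch of such an affinity rule, assuming the standard `topology.kubernetes.io/zone` node label; the pod name, image, PVC and zone value are hypothetical:

```yaml
# Pins a stateful pod to the AZ where its volume lives, so the scheduler
# never restarts it in a zone that can't attach the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-stateful-app              # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a             # must match the AZ of the attached volume
  containers:
  - name: app
    image: example.com/app:latest    # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-app-data         # PVC bound to a volume in us-east-1a
```

Rules like this work, but they also defeat part of the purpose of a multi-AZ cluster: if us-east-1a goes down, the pod has nowhere to go.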
But let’s assume that you’ve established a workaround to this issue, possibly by leveraging a suitable Container Storage Interface (CSI) provider that is not constrained by AZ boundaries. If disaster recovery is a goal, then you’ll want to ensure that your solution enables you to create persistent volume replicas. That way, if one container fails, your replicas can immediately pick up the slack with little to no downtime.
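As a hedged sketch of what that might look like, here is a StorageClass for a hypothetical CSI driver that supports volume replicas; the provisioner name and the `replicas` parameter are illustrative, not any particular driver's API:

```yaml
# A StorageClass asking a (hypothetical) replication-capable CSI driver
# to keep an extra copy of every volume it provisions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated-fast
provisioner: csi.example.com         # hypothetical CSI driver
parameters:
  replicas: "2"                      # illustrative parameter name
volumeBindingMode: WaitForFirstConsumer
```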
But now, you have another challenge: ensuring that your replicas are evenly distributed across AZs. For example, it's possible that your primary volume and its replicas could all be located in the same AZ, which will obviously affect your disaster recoverability in the event that AZ goes down or experiences a service interruption.
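Kubernetes can at least spread the pods evenly, and with a `WaitForFirstConsumer` StorageClass each pod's primary volume is provisioned in that pod's zone; where the storage layer places its internal replicas, however, is still up to the CSI driver. A minimal sketch with hypothetical names:

```yaml
# Spreads a StatefulSet's pods evenly across zones (maxSkew: 1 means no
# zone may run more than one pod more than any other zone).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-db                        # hypothetical
spec:
  serviceName: my-db
  replicas: 3
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-db
      containers:
      - name: db
        image: example.com/db:latest # placeholder
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: replicated-fast   # the hypothetical class above
      resources:
        requests:
          storage: 10Gi
```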
Furthermore, you’ll need to identify a way to change where your replicas are hosted over time. What happens if you decide to add another AZ or change the layout of your replicas? Kubernetes and your cloud service providers’ infrastructure won’t do this for you automatically.
To get around this, you might script something custom in Lambda if you're primarily using AWS for cloud services. But then there's something else for you to maintain, and of course, this won't work across different cloud service providers.
The Solution: Kubernetes’ Topology Key
Fortunately, Kubernetes provides a feature that we can use to solve the problem of stateful multi-AZ Kubernetes deployments: the topology key. It enables Kubernetes services to route traffic based on the node topology of a cluster.
So, one way to use the topology key is to have Kubernetes label each node with the AZ in which it is located. With this information, a CSI solution can then place volume replicas so that they are evenly distributed across AZs, creating a robust disaster recovery capability that is otherwise missing from Kubernetes' feature set.
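On managed cloud clusters, nodes usually already carry that well-known label; `kubectl get nodes -L topology.kubernetes.io/zone` shows it. A storage driver can also be told which zones it may provision into through a StorageClass's `allowedTopologies`; the provisioner and zone names below are illustrative:

```yaml
# Restricts a (hypothetical) CSI driver to provisioning volumes only in
# the listed zones, using the standard zone label as the topology key.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zoned-storage
provisioner: csi.example.com         # hypothetical CSI driver
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - us-east-1a
    - us-east-1b
    - us-east-1c
```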
Kill Two Birds with One Stone
The above approach is exactly the one Ondat takes. Using Kubernetes' topology key, Ondat distributes volume replicas across AZs in a process called topology-aware placement.
Ondat also ensures your stateful workloads persist whenever an AZ, cluster, node or container goes down. As a storage orchestrator, Ondat pools your individual nodes' storage and acts as an intermediary, Kubernetes-native distributed storage engine, separating and abstracting your frontend persistent volumes from the underlying platform topology.
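As a hedged illustration of what this looks like from the application side, the claim below requests replicated, topology-aware storage through PVC labels in the style of Ondat's feature labels; the exact label keys and StorageClass name are assumptions here, so verify them against Ondat's current documentation:

```yaml
# Requests an Ondat-style volume with two replicas, placed with awareness
# of the cluster's zone topology. Label keys and class name may differ in
# your Ondat version; treat this as a sketch, not a reference.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data                       # hypothetical
  labels:
    storageos.com/replicas: "2"           # assumed replica-count label
    storageos.com/topology-aware: "true"  # assumed placement label
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: storageos             # assumed default class name
  resources:
    requests:
      storage: 5Gi
```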
If you’re curious about how Ondat works, you can learn more here.
Feature image via Pixabay