The Growing Complexity of Kubernetes — And What’s Being Done to Fix It
Honeycomb is sponsoring InApps Technology’s coverage of KubeCon + CloudNativeCon North America 2020.
In a KubeCon + CloudNativeCon presentation this week, IBM Cloud’s Doug Davis asked: what happened to the promise of cloud computing? The promise for developers, he explained, was that “the complexities of the infrastructure would be abstracted away from us and we could focus on what really matters… the code in our applications.” But cloud computing hasn’t completely lived up to that promise. As Davis showed throughout his presentation, cloud native technologies have become overly complicated — especially for developers.
While Kubernetes (the most prominent example) has abstracted how we manage containers, Davis noted that “in many cases, the complexities of the underlying infrastructure are not just still visible but ‘in your face.’”
Interestingly, newly released 2020 survey results from the Cloud Native Computing Foundation (CNCF) surfaced the same concerns. “This year, complexity joined cultural changes with the development team as the top challenges in using and deploying containers,” the report noted.
Given that containers are now almost universally used in IT departments (“The use of containers in production has increased to 92%, up from 84% last year, and up 300% from our first survey in 2016,” the CNCF report stated), it’s a big concern that IT departments think of containers as being too complex. Especially since, as Doug Davis pointed out in his KubeCon presentation, the big selling point of cloud computing in the early days was that it would make IT much simpler. So why has it seemingly done the opposite?
Comparing Container Solutions
Davis used the examples of Kubernetes (containers), Cloud Foundry (Platform-as-a-Service, a.k.a. PaaS), and Apache OpenWhisk (an open source serverless platform) to compare and contrast features in container-based cloud platforms. For example, in his comparison matrix, Cloud Foundry and OpenWhisk offer a “simplified UX,” while Kubernetes does not.
Davis’ comments comparing Kubernetes and Cloud Foundry were particularly revealing in terms of the pros and cons of complexity in container technology. He started by listing the many benefits of Cloud Foundry: it’s very easy for developers to use, it does the build for you, and “the push-deploy model is wonderful.” But there is a tradeoff for developers.
“What you don’t get is […] necessarily access to the advanced features,” explained Davis. “In many cases, they try to hide it from you — because that’s the whole point, right? They want you to focus on: just give us your source code, and we’ll host it for you.”
The tradeoff with Kubernetes is a bit different. It’s potentially a very powerful platform, with a lot of advanced features, but it’s not necessarily easy to implement or manage those features. As Davis put it, these advanced features are “either missing, or you have to do it yourself.” He listed endpoint management, getting the container image, and load balancing as a few examples.
Davis’ point is that while Kubernetes does abstract the infrastructure, it still exposes that infrastructure to you. Even if you plug in a third-party tool to manage a certain feature, it adds complexity to the overall job of running Kubernetes.
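To make that tradeoff concrete, here is a minimal, hypothetical sketch of what a developer typically writes just to get one containerized app running and reachable on a bare Kubernetes cluster: a Deployment plus a Service. The names, image, and ports below are placeholders, not anything from Davis’ talk.

```yaml
# Hypothetical example: names, labels, image, and ports are placeholders.
# The developer builds and pushes the container image separately, then
# declares replicas, routing, and in-cluster load balancing explicitly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2                       # scaling is declared by hand
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: registry.example.com/hello-app:1.0   # built and pushed by the developer
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service                       # endpoint management and load balancing, spelled out
metadata:
  name: hello-app
spec:
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 8080
```

On a PaaS like Cloud Foundry, a single cf push hides roughly this much detail; on Kubernetes it stays in the developer’s hands, and that’s before adding ingress, TLS, or autoscaling.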
Kubernetes Is Not The End Game
Later in the presentation, Davis made a really interesting point, one that perhaps the cloud native community sometimes forgets: “Kubernetes is not the end game here.”
“It may be the end game, at least for today, from a technology perspective,” he continued, “but it’s not the end game in terms of what we should expose to our users.”
He then talked about Knative, a serverless platform that Davis described as “a simplified user experience on top of Kubernetes.” Since Davis himself works on the Knative project, he’s very familiar with it. He explained that the goal of Knative is to manage “what I would call modern or advanced [Kubernetes] features under the covers, automatically, so I don’t have to manage those things myself.”
In other words, Knative doesn’t expose those advanced features to you — it’s all done “under the covers,” to use Davis’ memorable wording.
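As a rough, hypothetical illustration of that “under the covers” idea, the equivalent resource in Knative Serving is a single Service object; revisions, routing, and scale-to-zero are handled by the platform rather than declared by the developer. The name and image below are placeholders, not taken from Davis’ talk.

```yaml
# Hypothetical sketch of a Knative Serving Service; name and image are placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello-app:1.0
          env:
            - name: TARGET          # an ordinary environment variable for the app
              value: "World"
```

Compared with the Deployment-plus-Service example above, the developer describes only the container to run; the platform fills in the rest.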
Back to the CNCF 2020 survey and another interesting trend that I think speaks to this theme of complexity needing to be swept under the covers: serverless. 51% of survey respondents are now using serverless, either in production (30%) or evaluating it (21%). That total figure of 51% is up from 41% in 2019, so serverless as a trend continues to rise.
60% of the respondents using serverless are using a “hosted” solution only, with a further 22% using a hosted solution in tandem with an “installable” solution like Knative. The top hosted serverless platforms are AWS Lambda (57%), Google Cloud Functions (27%), and Azure Functions (24%).
Another trend, not mentioned in the CNCF survey but a growing theme in my own writing this year, is that serverless is now branching out beyond Functions-as-a-Service (FaaS). AWS Lambda is basically a FaaS platform, which (among other things) means it’s designed for stateless applications. But newer projects, such as Lightbend’s Cloudstate, are trying to bring the serverless experience to stateful applications.
Similarly, Doug Davis pointed out that Knative is starting to look at a wider set of use cases. He said that Knative has been “focused on sort of low latency applications,” meaning “it doesn’t necessarily do a good job of processing requests that take, say, three hours to run.” But the framework is there, he said, to use Knative as “a stepping stone to something bigger.”
Next Steps to Tackle Complexity
Davis ended his presentation by discussing a new project from his employer, IBM Cloud, that he said might be the next step in reducing Kubernetes complexity. Code Engine is built on Kubernetes and is being pitched as “a fully-managed serverless platform that runs your containerized workloads.” The main purpose of Code Engine, explained Davis, “is to get your developers back to coding, not managing infrastructure.”
Code Engine is currently in a free beta period, but Davis said that IBM Cloud intends to charge for it as a managed service once it leaves beta.
More broadly, Davis called on the cloud native community to do more to tackle complexity. He suggested less proprietary infrastructure and “more Knative-like projects,” more collaboration on infrastructure projects, and a renewed focus on usability.
He also suggested that end-users should “push back” at the cloud native community and “demand less complexity,” and that end-users shouldn’t be afraid to “demand more dev and less ops.”
The Cloud Native Computing Foundation and Cloud Foundry are sponsors of InApps Technology.
Feature image via Pixabay.