Kubernetes-Autoscaling KEDA Moves into CNCF Incubation


KEDA, the Kubernetes Event-Driven Autoscaler project, moved up from the sandbox tier at the Cloud Native Computing Foundation (CNCF) this week, joining the 21 other incubating projects, among them Argo, Falco, gRPC and Rook.

First created in 2019 by Microsoft and Red Hat, KEDA joined the CNCF in March 2020, and since then has seen the release of KEDA 2.0 and been adopted by companies including Alibaba, CastAI, KPMG, Meltwater and Microsoft.

KEDA consists of two primary components: the KEDA agent, which activates and deactivates Kubernetes deployments to scale them to and from zero, and the metrics server, which exposes event data to the Horizontal Pod Autoscaler to drive scale-out. KEDA can be added to any Kubernetes cluster and provides event-driven autoscaling based on data supplied by scalers, which integrate KEDA with a variety of databases, messaging systems, telemetry systems, CI/CD tools and more.

During its time in the sandbox, KEDA grew the number of available scalers from 15 to 37, and KEDA maintainer Tom Kerkhove says that more are on the way. Currently, applications can scale not only on basic metrics such as CPU or memory, but also on information provided by an Apache Kafka topic or by Prometheus metrics, for example; Kerkhove says an HTTP-based autoscaler is also in progress.
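As a rough illustration of how those pieces fit together (a sketch, not an example from the article; the deployment name, namespace, addresses, topic, query and thresholds below are hypothetical), a single ScaledObject points KEDA at a workload and declares the event sources it should scale on:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-consumer-scaler            # hypothetical example name
  namespace: default
spec:
  scaleTargetRef:
    name: my-consumer                 # the Deployment that KEDA activates and deactivates
  minReplicaCount: 0                  # lets the KEDA agent scale the workload down to zero
  maxReplicaCount: 20                 # upper bound for scale-out via the Horizontal Pod Autoscaler
  triggers:
    - type: kafka                     # scale on consumer lag for an Apache Kafka topic
      metadata:
        bootstrapServers: kafka.example.svc:9092
        consumerGroup: my-consumer-group
        topic: orders
        lagThreshold: "50"
    - type: prometheus                # scale on the result of a Prometheus query
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090
        metricName: http_requests_per_second
        threshold: "100"
        query: sum(rate(http_requests_total{app="my-consumer"}[2m]))
```

Here, minReplicaCount: 0 is what allows the agent to deactivate the deployment entirely when the triggers are idle, while the trigger data flows through KEDA's metrics server to the Horizontal Pod Autoscaler for scaling out.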

Beyond adding scalers, KEDA also used its time in the CNCF sandbox to rearchitect its security approach and separate out authentication, adding the TriggerAuthentication and ClusterTriggerAuthentication resources.

“For example, if you want to reuse that identity across multiple applications, if you want to have the separation between dev and ops, or if you want to use secrets from a Hashicorp vault, by using TriggerAuthentication, you can do that,” said Kerkhove.

Similarly, ClusterTriggerAuthentication means that “one person can define how to authenticate with, let’s say Microsoft Azure, and then everybody inside the cluster can use that identity if they want to,” explained Kerkhove.
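To make the idea concrete (again a sketch rather than an example from the article; the Azure queue scaler is just one trigger that needs credentials, and the secret, queue and resource names are hypothetical), a TriggerAuthentication pulls a connection string from a Kubernetes Secret, and a trigger references it through authenticationRef; changing the kind to ClusterTriggerAuthentication shares the same identity across the whole cluster:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication            # namespaced; use kind: ClusterTriggerAuthentication to define it once per cluster
metadata:
  name: azure-queue-auth
spec:
  secretTargetRef:                     # other sources, such as a HashiCorp Vault block or pod identity, can be used here instead
    - parameter: connection            # the scaler parameter this value satisfies
      name: queue-connection-secret    # hypothetical Kubernetes Secret
      key: connectionString
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker-scaler
spec:
  scaleTargetRef:
    name: queue-worker                 # hypothetical Deployment
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "10"
      authenticationRef:
        name: azure-queue-auth         # add kind: ClusterTriggerAuthentication to reference the cluster-scoped variant
```

Because the credentials live in their own resource, the same identity can be reused by multiple applications and managed by a different team than the one deploying the workloads, which is the separation between dev and ops that Kerkhove describes.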

Kerkhove said the due diligence process for moving to the CNCF's incubation tier helped the project iron out some governance issues. For example, maintainers from the same company now share their votes, which helps prevent any one company from holding a majority.

Looking ahead, KEDA has many plans, including potential adoption into the Kubernetes project itself, though Kerkhove said that goal is still a long way off.

“Eventually we want to do this and more, but making a change in Kubernetes is hard for a good reason, because it’s a sustainable product. You need to make sure that if you change it, you will not break anybody,” said Kerkhove.

One thing holding KEDA back from this end goal, explained Kerkhove, is that KEDA can only run as a single instance on Kubernetes, which means it cannot be highly available. This is because of a Kubernetes limitation, and Kerkhove said that “what we’re trying to do is look at that Kubernetes limitation and see if we can fix that, so that both Kubernetes and KEDA now benefit from it.” For now, the project will continue iterating on its own while considering making upstream contributions of parts of the project.

Other potential plans, said Kerkhove, include splitting out part of the Service Mesh Interface (SMI) specification, a fellow CNCF project, and broadening its use beyond service meshes.

“We’re trying to see if there’s a place in the community to introduce a new standard for traffic metrics so that KEDA can rely on one specification and basically serve the full customer base and with all the scenarios,” said Kerkhove. “We want to take that traffic metric API, take it out of the SMI spec and create a traffic metrics spec.”

One final near-term goal, said Kerkhove, is predictive autoscaling.

“There’s nothing started there yet, because we only came up with the plans recently, but it’s certainly something we want to do as the next major feature,” he said. “This comes back to using data to be cost efficient and saving the environment by doing so.”

On saving the environment, Kerkhove noted a panel of interest at the upcoming KubeCon+CloudNativeCon North America 2021 in October: “How event-driven autoscaling in Kubernetes can combat climate change.”



Source: InApps.net
