‘Right-Fitting’ Performance Feedback Loops into the Pipeline


Paul Bruce

Paul is director of customer engineering on NeoLoad at Tricentis and a DevOps advisor, helping to transform enterprise software teams and delivery practices. He organizes DevOpsDays Boston, the Boston DevOps community, and chairs o11yfest. You can learn more at: https://paulsbruce.io.

What do we want? Performance feedback! When do we want it? It depends.

Feedback loops become increasingly critical the faster you deliver software. Effectively implementing automated approaches to feedback loops is all about right-fitting the who/what/when/where aspects. Typically, automated feedback loops live in pipelines and continuous integration (CI). But what about performance testing? Isn’t it too complicated to fit into DevOps and automated contexts?

The short answer is, of course not. But it takes a little engineering thought and setting the right goals first. Why do performance engineers need pipeline-driven performance feedback sooner? A few solid reasons:

  • It’s often cheaper to address obvious faults early in product and feature lifecycles.
  • Long cycles introduce delays and context switching, which increase the likelihood of faults.
  • Automating feedback requires us to drive unnecessary variance out of our processes.
  • Having feedback helps us make better-informed decisions moment by moment.
  • Frequently generating performance data allows us to understand trends over time, not just “big bang” point samples (a minimal trend-tracking sketch follows this list).
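
To make that last point concrete, here is a minimal trend-tracking sketch; the CSV file name, metric and 20% regression threshold are invented for illustration:

```python
# Track a headline metric (e.g. p95 latency) across builds and flag drift.
# A sketch only: the CSV path, metric and regression factor are assumptions.
import csv
import statistics
from pathlib import Path

HISTORY = Path("perf-history.csv")  # hypothetical file kept as a build artifact
REGRESSION_FACTOR = 1.2             # fail if 20% worse than the recent median

def record_and_check(build_id: str, p95_ms: float, window: int = 10) -> bool:
    """Append this build's p95 latency and return True if it is within trend."""
    history = []
    if HISTORY.exists():
        with HISTORY.open() as f:
            history = [float(row[1]) for row in csv.reader(f)]
    with HISTORY.open("a", newline="") as f:
        csv.writer(f).writerow([build_id, p95_ms])
    if len(history) < 3:            # not enough data points for a trend yet
        return True
    baseline = statistics.median(history[-window:])
    return p95_ms <= baseline * REGRESSION_FACTOR

print(record_and_check("build-1234", 287.5))
```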

Traditional load testing and performance testing have focused on large, complex end-to-end testing of fully baked pre-release systems. Most automated build and packaging cycles take minutes, not hours, so traditional performance test cycles don’t fit well into delivery cycles from a time perspective. Is end-to-end testing still necessary? Yes. Is it necessary for this process to inherit flaws, faults, anti-patterns and bad assumptions that we could have addressed earlier? Absolutely not.


An intelligent approach requires “right-fitting” load and performance testing into automated pipelines so that we get early, actionable signals (i.e., feedback loops). From an engineering perspective, to avoid a mess of late-cycle end-to-end validation, we need to break the performance verification process into smaller, meaningful feedback loops that fit into shorter work cycles. This way, developers and teams receive performance feedback about their changes earlier and more often, so that by the time we test end-to-end, it’s not a dumpster fire.
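
As a minimal sketch of what one such small feedback loop might look like, the script below exercises a single API during a pipeline stage and fails the step when a latency budget is breached. The endpoint, request volume, concurrency and budget are illustrative assumptions, and it uses the third-party requests library:

```python
# Minimal API-level performance smoke check for a CI step (illustrative sketch).
# The endpoint, request count, concurrency and threshold are all assumptions;
# tune them to the service and the stage of the pipeline.
import statistics
import sys
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://staging.example.com/api/health"  # hypothetical target
REQUESTS = 200
CONCURRENCY = 10
P95_BUDGET_MS = 300  # example latency budget agreed up front

def timed_call(_):
    start = time.perf_counter()
    requests.get(ENDPOINT, timeout=5).raise_for_status()
    return (time.perf_counter() - start) * 1000  # milliseconds

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_call, range(REQUESTS)))

p95 = statistics.quantiles(latencies, n=100)[94]  # 95th percentile
print(f"p95={p95:.1f}ms (budget {P95_BUDGET_MS}ms)")
sys.exit(0 if p95 <= P95_BUDGET_MS else 1)  # non-zero exit fails the pipeline step
```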

3 Challenges to Modernizing Load and Performance Testing

So, what are the main obstacles in migrating from traditional performance testing to a modern automated approach?

  • Not all systems and components are created equal, so no single approach applies across the board. Fitness and candidacy for early vs. late testing require discussion; pick the right targets and apply a progressive testing strategy.
  • The more frequently you do something, the more its “tricky bits” add up to toil unless they are automated. For load testing specifically, manual pre-configuration of infrastructure, data and environments often takes a lot of time.
  • Anything we do only infrequently takes more effort than it would if we did it regularly. Flaky environments, brittle test scripts, out-of-sync data… because we don’t exercise them continuously, our ability to address these kinds of problems never improves.

A proper performance testing tool belt includes a number of practices that inherit these challenges: load testing (i.e., simulating realistic conditions), monitoring, mocking/virtualization, test data management/sanitization, experience sampling, among others. Most enterprises I work with have either partially or fully automated many of these components, with the help of tools like Delphix, Mockiato, Tricentis NeoLoad, Grafana and Selenium. The good news is that there is nothing insurmountable to any of these approaches when automating them; having a solid vision and approach aligns and organizes our efforts.

Automated Continuous Performance and Load Testing

It’s easy to say that the goal of test automation is to “go faster” or “accelerate delivery.” That applies when it works reliably and produces actionable outcomes, but getting to “reliable” and “actionable” takes more than load-testing tools. It’s like the DevOps “people, process, technology” Venn diagram. If you’re only solving for one of these aspects of the problem, you’re not really solving the whole problem. So how do we solve for the “people” and “process” elements involved in continuous load and performance testing?


Enter automation. Automating our processes challenges us to articulate our goals, requirements and activities in a way that machines can execute on our behalf. It shines a light on gaps in our processes, technologies and skills, which is a good thing: we need to know our gaps in order to address them properly.

To automate performance tests that produce meaningful and actionable outcomes, people need to communicate about goals and outcomes. Discussing SLAs, SLOs and SLIs up front is an important part of “left shifting” performance practices. There should be some kind of performance criteria intake process. It could be something as simple as a form or questionnaire about the systems and timeframes that performance testing should target. The goal is to help produce baseline automation artifacts like SLA definitions and API test details.
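
For example, the output of that intake process could be captured as a small, machine-readable artifact that the pipeline consumes. The sketch below is illustrative only; the field names, service and thresholds are invented, not a standard schema:

```python
# Illustrative shape for a performance-criteria intake artifact (all names and
# values are hypothetical). Captured once per service, versioned alongside the
# code, and consumed by the pipeline to build SLA checks and API test details.
from dataclasses import dataclass, field

@dataclass
class Slo:
    indicator: str   # the SLI, e.g. "p95_latency_ms" or "error_rate"
    objective: float # the target value for that indicator
    window: str      # evaluation window, e.g. "per_test" or "30d"

@dataclass
class PerformanceIntake:
    service: str                      # system under test
    endpoints: list[str]              # APIs that early tests should target
    test_window: str                  # when load may be applied
    slos: list[Slo] = field(default_factory=list)

intake = PerformanceIntake(
    service="checkout-api",           # hypothetical service
    endpoints=["/cart", "/payment"],
    test_window="on-merge",
    slos=[
        Slo("p95_latency_ms", 300, "per_test"),
        Slo("error_rate", 0.01, "per_test"),
    ],
)
```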

As you combine these artifacts into a pipeline (i.e., our “process”), you will also have to automate provisioning and deprovisioning of load infrastructure so that your load tests run properly isolated from the systems under test (SUTs). Containers and Kubernetes have gone a long way toward providing a standards-based approach to automating infrastructure, but there are always a million other ways to manage these resources too. Whatever provisioning strategy you employ, it shouldn’t be complicated for teams to run a test: autonomy to obtain feedback when necessary is a key component of accelerating delivery.
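
As one sketch of that provisioning step, assuming the official Kubernetes Python client plus a hypothetical load-generator image and namespace, a pipeline stage could create an isolated load-generator Job and remove it when the test completes:

```python
# Provision and deprovision an isolated load generator as a Kubernetes Job.
# A sketch only: the image, namespace and arguments are assumptions, and a
# real pipeline would add polling/retry logic and result collection.
from kubernetes import client, config

NAMESPACE = "perf"               # hypothetical isolated namespace
JOB_NAME = "loadgen-build-1234"  # unique per pipeline run

config.load_kube_config()        # or load_incluster_config() inside CI
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name=JOB_NAME),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="loadgen",
                    image="registry.example.com/loadgen:latest",  # hypothetical
                    args=["--target", "https://staging.example.com",
                          "--duration", "5m"],
                )],
            ),
        ),
    ),
)

batch.create_namespaced_job(namespace=NAMESPACE, body=job)  # provision
# ... poll job status and collect results here ...
batch.delete_namespaced_job(                                # deprovision
    name=JOB_NAME, namespace=NAMESPACE,
    propagation_policy="Foreground",
)
```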

If Your Performance Practices Aren’t Automated, You’ll Scale-Fail

Finally, automation shouldn’t serve just a single team. In larger organizations, that kind of work has no force multiplier when it remains a local efficiency (for example, within a performance center of excellence team). Conversely, automation that many teams can use, even if they aren’t the experts who set it up, is what really helps organizations go faster.

In DevOps, the work of performance and reliability engineering isn’t just running tests or analyzing and consolidating results. It’s providing product teams a way to do these things themselves, while also providing safety guardrails and “good practices” around these processes so teams can grow their own performance competencies over time.
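
A guardrail can be as simple as a shared go/no-go gate that every team’s pipeline calls with its measured results. The sketch below is a minimal illustration; the metric names mirror the hypothetical intake artifact above and assume lower-is-better indicators:

```python
# A minimal shared go/no-go gate (illustrative): product teams run their own
# tests, then call this with measured metrics; the platform team owns the rule.
# Assumes every indicator is lower-is-better (latency, error rate, etc.).
def evaluate_gate(measured: dict[str, float],
                  objectives: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (go, violations) comparing measurements against objectives."""
    violations = [
        f"{sli}: measured {measured[sli]} > objective {target}"
        for sli, target in objectives.items()
        if measured.get(sli, float("inf")) > target
    ]
    return (not violations, violations)

go, why = evaluate_gate(
    measured={"p95_latency_ms": 412.0, "error_rate": 0.002},
    objectives={"p95_latency_ms": 300.0, "error_rate": 0.01},
)
print("GO" if go else "NO-GO", why)
```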


I see many mature, large-scale performance practices that have transitioned from a “performance team as a siloed service” mindset to a “performance processes as self-service” offering. One reason is that there’s simply no way to scale “people” performance expertise to hundreds of teams by adding more full-time bodies. But mainly it’s because we can now formalize (i.e., automate) our performance testing practices so that machines help with the heavy lifting.

In many ways, automating your load-testing practices appropriately “brings the pain forward” and puts forward tension on organizational aspirations to accelerate delivery. Rather than reflexively leaving traditional practices behind, we need to rethink them, inherit their useful points of wisdom and adjust them to the constraints and challenges of today.

Where Do We Go From Here?

Running performance tests early and often, at smaller volumes, is the key to “right-fitting” automated performance feedback loops into pipelines. Start small, by targeting APIs, to build confidence and competence.

Practical how-to guidance on laying the foundation for a successful transition to an automated continuous testing approach — including strategies for prioritizing what to automate, best practices for developing dedicated performance pipelines, overcoming test infrastructure obstacles and ensuring trustworthy go/no-go decisions — is detailed in my Practical Guide to Continuous Performance Testing.

InApps Technology is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Tricentis.
