Continuous Integration Pitfalls at Scale

Sarjeel Yusuf

Sarjeel is a product manager at Atlassian, responsible for orienting Atlassian tools toward facilitating DevOps capabilities in their feature sets.

In a world where moving quickly while maintaining availability and quality are key factors of product success, many are looking toward DevOps practices to aid their development journey. At the core of DevOps lies continuous integration/continuous delivery (CI/CD), which bridges the gap between dev and ops in the development pipeline.

As a result, the CI/CD stage is crucial to an effective DevOps practice. Failure at this stage could curtail any benefits expected from the arduous shifts in culture and development process that come with adopting DevOps. Therefore, when building out this stage of the development pipeline, it is essential to put in place an effective, robust solution that also meets current development needs. This applies to both the CI pipeline and the CD pipeline.

Within the scope of this piece, we will focus on the CI pipeline, the issues that arise when scaling up, and the solutions to those issues.

Issues at Scale

Given the CI pipeline’s responsibilities and outcomes discussed above, several pain points stand out. These issues can adversely affect the metrics used to measure DevOps success, namely the four DORA metrics (a minimal sketch of how they can be computed follows the list):

  • Lead time (LT).
  • Deployment frequency (DF).
  • Change failure rate (CFR).
  • Mean time to resolve (MTTR).
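
To make these metrics concrete, here is a minimal sketch of how a team might derive them from its own deployment and incident records. The record shape (commit_time, deploy_time, failed, opened, resolved) is an assumption made for illustration rather than a standard schema; in practice these numbers usually come from dedicated DevOps reporting tools.

```python
from datetime import datetime, timedelta

# Illustrative records; field names are assumptions, not a standard schema.
deployments = [
    {"commit_time": datetime(2022, 3, 1, 9), "deploy_time": datetime(2022, 3, 1, 17), "failed": False},
    {"commit_time": datetime(2022, 3, 2, 10), "deploy_time": datetime(2022, 3, 3, 12), "failed": True},
    {"commit_time": datetime(2022, 3, 4, 8), "deploy_time": datetime(2022, 3, 4, 11), "failed": False},
]
incidents = [
    {"opened": datetime(2022, 3, 3, 12), "resolved": datetime(2022, 3, 3, 15)},
]

# Lead time (LT): average time from commit to deploy.
lead_time = sum(
    ((d["deploy_time"] - d["commit_time"]) for d in deployments), timedelta()
) / len(deployments)

# Deployment frequency (DF): deploys per day over the observed window.
window_days = (
    max(d["deploy_time"] for d in deployments) - min(d["deploy_time"] for d in deployments)
).days or 1
deploy_frequency = len(deployments) / window_days

# Change failure rate (CFR): share of deployments that caused a failure.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

# Mean time to resolve (MTTR): average incident resolution time.
mttr = sum(((i["resolved"] - i["opened"]) for i in incidents), timedelta()) / len(incidents)

print(f"LT={lead_time}, DF={deploy_frequency:.2f}/day, CFR={change_failure_rate:.0%}, MTTR={mttr}")
```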

A Hidden Bottleneck

As development scales and the number of developers working on the codebase increases, more releases are created and the CI pipeline can become a bottleneck.

In October 2020, engineering manager Mayank Sawney and I gave a talk on DevOps for monoliths at Atlassian’s team.work 2020 conference. In it, we highlighted how the frontend codebase in several areas of the Atlassian ecosystem was a monolith. The most obvious course of action would be to break the monolith down into microservices. Yet even acknowledging the benefits microservices would offer, there remains the hidden threat of all releases flowing through a single CI pipeline.

As a result, it does not matter how you segment the teams or how you break down the monolith: you will hit a wall when all these different teams or developers try to create a build for release at the same time. Suddenly, developers are waiting in a queue while someone else uses the CI pipeline and testing environments.

Hence, the CI pipeline becomes a huge bottleneck in the development pipeline. Lead time and deployment frequency take a big hit as teams are held up waiting for one another. This is not a major concern for smaller teams and organizations, but it can be devastating for any organization that is scaling.
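
As a rough, back-of-the-envelope illustration of why this hurts at scale, the sketch below models the shared pipeline as a single-server (M/M/1) queue. The per-team build rate and build duration are assumed numbers chosen only to show the shape of the curve: waits grow slowly at first, then explode as the pipeline approaches full utilization.

```python
# Model a single shared CI pipeline as an M/M/1 queue.
# Assumed numbers, for illustration only:
BUILDS_PER_TEAM_PER_HOUR = 0.5   # each team triggers a build every two hours
BUILD_DURATION_MIN = 20          # average pipeline run takes 20 minutes

service_rate = 60 / BUILD_DURATION_MIN  # builds the pipeline can finish per hour

for teams in (2, 4, 6, 8, 10):
    arrival_rate = teams * BUILDS_PER_TEAM_PER_HOUR  # builds arriving per hour
    utilization = arrival_rate / service_rate
    if utilization >= 1:
        print(f"{teams} teams: pipeline saturated (utilization {utilization:.0%}), queue grows without bound")
        continue
    # Mean time a build waits before it even starts (M/M/1: Wq = rho / (mu - lambda)).
    wait_hours = utilization / (service_rate - arrival_rate)
    print(f"{teams} teams: utilization {utilization:.0%}, average wait {wait_hours * 60:.0f} min")
```

With these assumed numbers, going from two teams to four quadruples the average wait, and by six teams the pipeline can no longer keep up at all. The exact figures are not the point; the nonlinearity is.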

Crippling Configurations

With the right change management and release scheduling practices, organizations can mitigate the bottleneck effect to some extent. However, as the codebase scales, the CI pipeline has to accommodate the requirements of every type of release, reflecting the different parts of the system being built.

As a result, the number of parameters within the build process can become unmanageable, and changes to the CI pipeline become risky. In trying to make pipeline builds and tests more flexible, a single failed pipeline can stall the entire release operation, and everyone must then wait for the ops team to fix it. The pipeline becomes a single point of failure.

As expected, this also affects lead time and deployment frequency. It can also affect the change failure rate: a rickety pipeline with many configuration parameters can produce an incorrectly configured build that passes all the tests yet still introduces potential points of failure. This would not be expected with a simple codebase, but as application complexity grows, a more complex CI setup is required, and as discussed, that setup cannot scale effectively along the trajectory of the application itself.
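
One modest defense against builds that are misconfigured yet still green is to validate the build parameters themselves before the pipeline runs, rather than hoping the tests will catch a configuration mistake. The sketch below is purely hypothetical: the parameter names, allowed values and the validate helper are invented for illustration and do not come from any particular CI system.

```python
# Hypothetical build parameters for one release; names and values are illustrative only.
build_params = {
    "target": "frontend",
    "node_version": "14",
    "deploy_env": "staging",
    "feature_flags": ["new-editor"],
}

# A minimal schema of what each parameter may contain.
ALLOWED = {
    "target": {"frontend", "backend", "billing"},
    "node_version": {"14", "16"},
    "deploy_env": {"staging", "production"},
}
REQUIRED = {"target", "node_version", "deploy_env"}


def validate(params: dict) -> list[str]:
    """Return a list of configuration errors; an empty list means the build may proceed."""
    errors = [f"missing parameter: {key}" for key in REQUIRED - params.keys()]
    for key, allowed in ALLOWED.items():
        if key in params and params[key] not in allowed:
            errors.append(f"{key}={params[key]!r} not in {sorted(allowed)}")
    return errors


if errors := validate(build_params):
    raise SystemExit("refusing to build:\n" + "\n".join(errors))
print("build parameters look sane; handing off to the pipeline")
```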

Scaling Development and Shrinking Confidence

A common plague that infects the CI domain is flaky tests. Flaky tests are tests that fail or succeed seemingly at random when executed on the exact same code. The problem is identifying which tests are flaky: a test deemed flaky in the past may fail for valid reasons in a specific build, yet still be dismissed as flaky now.

This, in turn, reduces the team’s confidence and adds frustration to the build process. We are no longer sure whether a failing test is flaky or a genuine failure. The frustration only grows as development scales: as the number of tests in the pipeline increases, so does the likelihood of flaky tests.

Consequently, we can expect higher change failure rates. Dismissing tests as flaky in order to push the development process on to deployment means witnessing more disruptions and incidents in production, disruptions that could have been avoided had the tests been given more attention.

Moreover, proactively investigating these flaky tests can inflate lead times, especially when many teams or developers rely on the same pipelines, feeding back into the exasperating bottleneck issue discussed earlier and further hurting the deployment frequency and lead time metrics.
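
One lightweight way to start rebuilding that confidence is to mine existing CI history for tests that have both passed and failed on the same commit; since the code did not change between those runs, the flapping must come from the test or its environment. The sketch below assumes CI history can be exported as simple (test name, commit, passed) records; how you obtain that export depends on your CI system.

```python
from collections import defaultdict

# Assumed shape of exported CI history: (test name, commit SHA, passed?).
results = [
    ("test_checkout_flow", "a1b2c3", True),
    ("test_checkout_flow", "a1b2c3", False),  # same commit, different outcome
    ("test_login", "a1b2c3", True),
    ("test_login", "d4e5f6", False),          # failed, but only on a new commit
]

outcomes = defaultdict(set)
for test, commit, passed in results:
    outcomes[(test, commit)].add(passed)

# A test is flagged flaky if any single commit saw it both pass and fail.
flaky = sorted({test for (test, _), seen in outcomes.items() if len(seen) == 2})
print("likely flaky:", flaky)  # ['test_checkout_flow']; test_login's failure may be a real regression
```

Tests flagged this way can be quarantined or automatically retried, while unflagged failures keep their meaning as likely genuine regressions.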

Effective Scaling

From the scaling issues identified in the previous section, it is clear that CI pipelines, like many other components, should be modularized, and that effective measures are needed to tackle flaky tests.

When considering scaling, the best place to start is separating CI servers, so that each CI server and pipeline runs in an isolated environment. These separate servers can then serve the needs of the different teams producing different kinds of builds. This prevents CI from becoming a bottleneck while also keeping individual pipelines less complex.

However, this is easier said than done. Replicating servers while keeping them in sync is a challenge in itself. This is where Infrastructure as Code (IaC) tooling comes in, allowing us to define and automate the spin-up of these isolated CI servers. This is crucial given the operational cost of running multiple CI pipelines and servers as we scale; without IaC practices, we would run into all the problems of managing many CI pipelines by hand.
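
As a minimal sketch of what defining that spin-up might look like, the snippet below generates one configuration per team in Terraform’s JSON syntax (.tf.json files, which Terraform reads alongside regular .tf files). The team names, AMI ID and instance type are placeholders; in practice you would more likely reach for Terraform modules, Pulumi or your CI vendor’s own agent provisioning rather than hand-rolled generation, but the principle of declaring isolated, per-team CI servers is the same.

```python
import json
from pathlib import Path

# Placeholder inputs; team names, AMI and instance size are assumptions for illustration.
TEAMS = ["frontend", "billing", "search"]
AMI_ID = "ami-0123456789abcdef0"   # hypothetical image with the CI agent baked in
INSTANCE_TYPE = "c5.xlarge"

out_dir = Path("ci-servers")
out_dir.mkdir(exist_ok=True)

for team in TEAMS:
    # One isolated CI server per team, expressed in Terraform's JSON syntax.
    config = {
        "resource": {
            "aws_instance": {
                f"ci_{team}": {
                    "ami": AMI_ID,
                    "instance_type": INSTANCE_TYPE,
                    "tags": {"Team": team, "Role": "ci-server"},
                }
            }
        }
    }
    path = out_dir / f"ci_{team}.tf.json"
    path.write_text(json.dumps(config, indent=2))
    print(f"wrote {path}")

# After generation: `terraform init && terraform apply` in ./ci-servers
# (plus a provider block) would create the isolated servers.
```

Because the servers are declared rather than hand-built, adding a team is a small, reviewable change and every environment stays reproducible, which is what keeps the operational cost of running many pipelines manageable.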

Additionally, we need to think about keeping the different CI servers in sync with the environment in which we run tests. This is imperative when working with a monolith, to ensure the latest version of the application is available to be served. Ian Barfield, a principal engineer at Oracle, introduced a “divide-and-conquer” method to mitigate this problem. Brainly’s DevOps team later put this into practice, as described by lead infrastructure engineer Mateusz Trojak in his article on their experience scaling CI pipelines.

As Brainly’s story shows, scaling CI is not a new idea. However, concepts and technologies are still being developed to facilitate it better. One such concept now being adopted to help CI scale is CI observability.

This concept aims to provide the right insights into issues such as flaky tests, in order to restore confidence in the CI tests. Thundra is one SaaS solution building toward providing the CI observability metrics that address this issue.

The Thundra Foresight offering, currently in an early access program, aims to help answer crucial questions such as why tests are failing and what is slowing builds down, thereby addressing the core problems plaguing CI at scale.

Conclusion

CI, together with CD, forms the core of a DevOps practice, as it is the step that brings dev and ops together. A failure at this stage of the development pipeline can thwart an organization or team aiming for the benefits of DevOps, regardless of how well it performs in the other areas of the pipeline.

Considering the pain points that emerge with CI at scale, it is evident that CI modularization and observability are crucial. Scaling in any form can hamper a team’s ability to maintain velocity, quality and availability. Only through introspection and a willingness to adopt the right practices can teams scale effectively.

Featured image via Pixabay. 



Source: InApps.net
