How to Avoid Bias and Find Customer Metrics That Matter


Stackery sponsored this post.

Toby Fee

Toby is a community developer at Stackery. Her roles and experience combine working as a software engineer, writer and technology instructor, building interesting projects with emerging tools and sharing her findings with the world. Prior to joining Stackery, Toby was an engineer at NWEA, Vacasa and New Relic.

I was reading an amazing article yesterday: Allen Downey’s piece on “The Inspection Paradox,” a statistical illusion that occurs frequently in daily life.

Here is an example that illustrates this paradox: When we ask the dean what the average college class size is, she says 35. When we ask students, their answer is 95. No one here is lying.

The explanation is easy without doing any math: students are much more likely to be in a large class. A random sample of students is far more likely to turn up someone sitting in a 500-student lecture than one of the kids in a 10-person seminar.

The inspection paradox observes that when we try to find an average, it matters if we’re asking the observers or participants.
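To make the arithmetic concrete, here’s a minimal Python sketch (the class sizes are made up, not from the article): the dean averages over classes, while the students’ answer is effectively the same data weighted by class size.

```python
# Hypothetical class sizes: nine 10-person seminars and one 260-person lecture.
class_sizes = [10] * 9 + [260]

# The dean's view: each class counts once.
dean_average = sum(class_sizes) / len(class_sizes)

# The students' view: each class counts once per enrolled student,
# so the big lecture dominates the sample.
total_students = sum(class_sizes)
student_average = sum(s * s for s in class_sizes) / total_students

print(f"Ask the dean:     {dean_average:.1f}")    # 35.0
print(f"Ask the students: {student_average:.1f}")  # 195.7
```

Same enrollment data, two honest answers; the only difference is whether a class or a student is the unit of sampling.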


How the Inspection Paradox Relates to Development

I was thinking about this as I considered the situation of a web developer who’s trying to describe what constitutes “good” performance for their service.

Okay, before we dive into this, I want to say the above scenarios aren’t real-life examples of the inspection paradox. They’re really more closely related to selection or survivorship bias. But it was a fun entry point to the concept.

As Downey describes in his article, the inspection paradox will give you an odd feeling on the racetrack: everyone you pass seems to be going much slower, and everyone who passes you seems to be really zooming. This makes sense: when someone is moving at very close to your speed, it’s very unlikely you’ll ever pass each other, and if you’re moving at identical speeds, you never will!

Our Distorted View of Average

Upon learning about the inspection paradox, I was immediately reminded of what developers experience when comparing the performance of their web apps. It’s easy to think of examples of apps that deliver incredibly short response times or web services that really crawl. But it’s much harder to think of examples of apps whose performance is very similar to our own.

This is also evident when we go looking for Medium blog posts about performance. There are way more articles on “how we beat speed-of-light limitations in our latest service” or “how we stopped taking two weeks to deliver an email” than on “how we render our app in 7.2 seconds, which is fine.”

Okay, I said it already but I’m saying it again: I know this is selection bias.

The problem with focusing on extremely poor or unrealistically great examples is that it encourages a kind of paralysis.

“We’ll never get the responsiveness of Netflix, so why bother tweaking our CDN?” a developer might reasonably ask.


It’s very easy to feel daunted by examples where other dev teams seem to deliver the impossible. However, here is a more positive — and ultimately, rational — way to look at the performance gap:

“Yeah, we can be slow during peak usage, but at least we’re not Thingiverse!”

(I love you, Thingiverse — I’m sorry). Just because you’re beating the industry’s worst performers doesn’t mean you have reason to be complacent.

Above all, a focus on outliers tends to take the focus off incremental changes. Small improvements and small regressions both look tiny next to the extreme cases. But here’s the thing: no service reaches top performance in a single step. Great improvements are built from small ones. If you don’t take the time to celebrate a 50 ms average performance boost, you’ll also accept the changes that cause small slowdowns, and eventually those slowdowns accumulate until you find yourself heading to the bottom of the pack.
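As a toy illustration (the numbers are made up): accept a “negligible” 3% regression ten times, and compounding turns it into a very visible slowdown.

```python
# Toy example: ten "negligible" 3% regressions, accepted one at a time.
latency_ms = 200.0
for _ in range(10):
    latency_ms *= 1.03  # each change looks like noise on its own

print(f"After ten small regressions: {latency_ms:.0f} ms")  # ~269 ms, about +34%
```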

Why These Metrics Are Hiding the Real Contest

Finally, all of these number comparisons can be a distraction. At the end of the day, the most important thing is to listen to your users.

As Charity Majors, engineer and co-founder of honeycomb.io, once tweeted: “Nines don’t matter if your customer is unhappy.”

In short: user expectations around site performance are subtle, and minor user interface (UI) tweaks can vastly change the perception of performance.

A Real Example

Were you annoyed that I didn’t use real examples of The Inspection Paradox? Okay, here’s a real one:

Something prevalent back in the day when I managed enterprise services for the successful startup New Relic was a paradox I’d describe as the tension between two generalized questions:

“What’s our average page load time?” vs. “How often do we risk $100,000 in revenue through poor performance?”

The average user of any software tool is going to be a small dev shop or a lone developer. Those small accounts make up the bulk of your usage, meaning most of the time your tool gets used, it’s under minimal load and performs pretty well.


But high-value customers are likely using your tool to its very limits (huge amounts of data, unusual environments, etc.). This often means the more a customer matters to your business, the worse your product performs for them.
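Here’s a hedged sketch of that gap in Python, with entirely hypothetical accounts and latencies: a plain average across accounts looks healthy because small, fast accounts dominate the count, while weighting by revenue surfaces the pain of the one customer who matters most financially.

```python
# Hypothetical accounts: twenty small, fast ones and one big, slow one.
customers = (
    [{"revenue": 1_000, "p95_ms": 150}] * 20      # small shops, light load
    + [{"revenue": 100_000, "p95_ms": 2_400}]     # enterprise pushing the limits
)

# Per-account average: dominated by the many small, fast accounts.
plain_avg = sum(c["p95_ms"] for c in customers) / len(customers)

# Revenue-weighted average: dominated by the account that suffers most.
total_revenue = sum(c["revenue"] for c in customers)
weighted_avg = sum(c["p95_ms"] * c["revenue"] for c in customers) / total_revenue

print(f"Average p95 across accounts: {plain_avg:.0f} ms")    # ~257 ms
print(f"Revenue-weighted p95:        {weighted_avg:.0f} ms")  # ~2025 ms
```

Neither number is wrong; they simply answer the two different questions above.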

Measure, as Best You Can, Satisfaction, Not Speed

The inspection paradox means that averaging performance across all of your traffic can give you a different answer than asking what the average user experiences. But more importantly, your users know what counts as acceptable performance. If you measure their happiness, either directly or with proxies like session time, you can see where you’re really succeeding. And always celebrate small improvements — they’re the building blocks for bigger wins.
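If you want a metric closer to satisfaction than raw speed, one widely used proxy is an Apdex-style score (a score New Relic, among others, has long used): each response is classified as satisfied, tolerating, or frustrated against a threshold T that you pick for your app. A minimal sketch, with a made-up threshold and sample data:

```python
# An Apdex-style satisfaction score: (satisfied + tolerating/2) / total,
# where satisfied means <= T and tolerating means <= 4T. T is app-specific.
def apdex(response_times_ms, t_ms=500):
    satisfied = sum(1 for r in response_times_ms if r <= t_ms)
    tolerating = sum(1 for r in response_times_ms if t_ms < r <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)

samples = [120, 300, 450, 800, 1_200, 2_500]  # hypothetical response times in ms
print(f"Apdex score: {apdex(samples):.2f}")   # 0.67 (1.0 = everyone satisfied)
```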

New Relic is a sponsor of InApps Technology.

Feature image via Pixabay.
