Published December 14, 2023

Measuring the Impact: How to Test Feature Success Metrics Early

7 min read

The pressure to deliver new features can lead organizations into what Ant Murphy calls the “feature factory build trap”: constantly churning out new features without pausing to measure their true impact. This relentless push for output over outcome leads to missed opportunities for improvement and a poor understanding of feature success metrics and what truly drives user value.

It’s crucial to identify and track the right feature success metrics to shift the focus from mere production to actual value creation. But how do you determine if a feature is truly “done”? Is it when the code is deployed, or is it something more? Let’s dive into how you can ensure that each feature you release isn’t just completed but also contributes positively toward your business goals.

If we can all agree that the outcome is what’s important, not the output (i.e. a feature) then we shouldn’t consider any item of work as ‘done’ until it’s created the desired outcome.

– Ant Murphy

Beyond Deployment: When is a Feature Truly “Done”?

Let’s start by setting the record straight: a feature is not “done” simply because it’s live. True completion is about impact — has the feature made a difference in the way you intended? To know that, we must look beyond the production line and start measuring success in terms of outcomes.

As Ant Murphy emphasizes, we need to focus on the outcomes rather than outputs. This means redefining "done" as reaching the desired outcome, not just ticking off a task on your to-do list.

But what does that look like in practice? It’s not just about looking at the numbers post-launch; it involves integrating the measurement of impact right from the conception of the feature.

The Role of Helio in Feature Success Measurement

Helio offers an intriguing approach to this. It suggests testing for desirability, usability, and sentiment well before a feature even makes it onto your to-do list. This proactive approach allows you to conduct quick concept testing, which serves as a risk mitigator against building features that your users may not need or want.

Incorporating this kind of testing early on helps you establish baseline metrics before a single line of code is written. With Helio’s strategy, you’re not just building a feature; you’re crafting an experience based on actual user data and insights.

Defining Your Feature Success Metrics

So, what feature metrics should you track to ensure a feature is “done”? There are several key metrics to consider, each paired with a data example to illustrate how Helio can provide early signals around feature adoption.

1. User Engagement

This metric helps you understand how users are interacting with your feature. Are they using it as expected? Is it enhancing their experience? Track metrics like daily active users (DAUs), session length, and frequency of use.
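As a rough illustration, here is a minimal Python sketch of how DAUs, session length, and frequency of use might be tallied from an event log. The data shape and field names are assumptions made for this example, not Helio’s API or a real analytics schema.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical event log: one record per feature interaction.
# The fields below are illustrative assumptions, not a real schema.
events = [
    {"user_id": "u1", "day": date(2023, 12, 1), "session_id": "s1", "duration_sec": 240},
    {"user_id": "u2", "day": date(2023, 12, 1), "session_id": "s2", "duration_sec": 90},
    {"user_id": "u1", "day": date(2023, 12, 2), "session_id": "s3", "duration_sec": 310},
]

def engagement_summary(events):
    """Summarize daily active users, average session length, and use frequency."""
    users_by_day = defaultdict(set)
    session_lengths = {}
    uses_per_user = defaultdict(int)

    for e in events:
        users_by_day[e["day"]].add(e["user_id"])
        session_lengths[e["session_id"]] = e["duration_sec"]
        uses_per_user[e["user_id"]] += 1

    return {
        "dau": {day: len(users) for day, users in users_by_day.items()},
        "avg_session_sec": mean(session_lengths.values()),
        "avg_uses_per_user": mean(uses_per_user.values()),
    }

print(engagement_summary(events))
```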

A key component of engagement with a new feature is whether users truly understand how to use it. Advent, a digital marketing tech platform, decided to test graphic elements of a dashboard for their new Audience Targeting feature. Using Helio, Advent was able to source survey responses from 100 advertisers in the U.S. in minutes.

First, they asked a freeform response question, “Based on what you see on this page and your own experiences with marketing software, what do you think the ‘compass’ icon represents (located in the right sidebar)?”

[Image: Advent’s campaign measurement dashboard, where participants were prompted to locate the compass icon and describe what they think it means]

The compass icon used for navigating to the new audience page was intended to convey the idea of targeting people in specific locations. The participants’ written feedback showed that the icon clearly suggested geographical locations, though the concept of targeting people in those locations seemed to be lost.

After presenting the user with their dashboard, complete with new feature updates, Advent reviewed the open-text responses from participants, like this one:

“The compass, in my opinion, represents the place of usage or target.”


– Helio Participant, Advertiser (US)

With this feedback in hand, Advent knew that the next iteration of their new feature’s iconography needed to intuitively convey the concept of targeting people, not just locations.

2. Conversion Rates

If your feature is designed to facilitate or encourage a specific action, such as signing up or making a purchase, conversion rates can provide clear insight into its effectiveness.

In the Advent example, one desired action is for users to create a new audience. When presented with a screen for campaign measurement, we asked our audience of 100 advertisers in the U.S. to click on where they would go to create a new audience.

[Image: Advent’s campaign measurement dashboard]

Measuring user clicks across designs was key to understanding how Advent’s new Audience feature was converting, even before it went live. By providing specific directives, the Advent team was able to gauge how successful participants were at engaging with their new feature.

The resulting click map illustrated just how scattered users’ actions were when trying to complete that task:

[Image: Click test results asking 100 U.S. advertisers where they would first go to create a new audience within this marketing technology platform]

The Advent team felt they were planning ahead by including multiple success actions across the user dashboard and navigation. However, upon closer look, only 35% of participants successfully found where to access the new feature from their dashboard:

[Image: The same click test results, filtered to the 35% of participants who successfully found where to access the new feature]

Part of this goes back to what the team learned in their engagement testing: the navigation icon isn’t intuitive and doesn’t provide a clear top-level area to view audience information.

They also found that other actions on the page distract from the directive, such as the bright orange Save button in the top-right corner (which shouldn’t even appear active at that point). These usability tests surface clear pain points for Advent to solve for users.
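For teams that want to run this arithmetic themselves, here is a minimal sketch of how a click-test conversion rate could be tallied from raw click coordinates. The target regions and clicks below are hypothetical stand-ins, not Advent’s actual dashboard layout or Helio’s data format.

```python
# Success regions as (x_min, y_min, x_max, y_max) bounding boxes, e.g. a
# hypothetical "New Audience" button and a hypothetical nav entry.
success_regions = [
    (840, 40, 960, 80),   # hypothetical "New Audience" button
    (0, 300, 180, 340),   # hypothetical navigation entry
]

# One recorded click per participant (illustrative coordinates).
clicks = [(850, 60), (120, 310), (400, 500), (855, 55), (700, 200)]

def hits_target(click, regions):
    """Return True if the click lands inside any success region."""
    x, y = click
    return any(x_min <= x <= x_max and y_min <= y <= y_max
               for x_min, y_min, x_max, y_max in regions)

successes = sum(hits_target(c, success_regions) for c in clicks)
conversion_rate = successes / len(clicks)
print(f"{successes}/{len(clicks)} participants succeeded ({conversion_rate:.0%})")
```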


3. Performance Benchmarks

Speed and reliability can be critical for user adoption. Monitor load times, downtime, and error rates to ensure the feature performs well under different conditions.

Tracking time to action allowed the Advent team to paint a clear picture of which new elements are working better than others.

The multiple success clicks in the conversion rate testing above were also evaluated to understand participants’ reaction times.

[Image: Time-to-action results showing that successful participants took a median of almost 30 seconds to find the new audience action]

For instance, successful participants took a median of almost 30 seconds to find the new audience action. That alone is a significant amount of time to complete an important new action on the platform.

[Image: Time-to-action results showing a 10+ second increase for participants who wandered before finding the appropriate action on the page]

However, there was a 10+ second increase in time to action for participants who wandered around the page before finding the appropriate action. What’s more, those participants missed the clearer audience action where the majority of successful users clicked within 30 seconds.

Advent knows that these time-to-action numbers need to improve, either through iconography, CTA emphasis, or user education on their dashboard.
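As a simple sketch, median time to action can be compared across participant groups with a few lines of Python; the timings below are invented for illustration and are not Advent’s actual results.

```python
from statistics import median

# Hypothetical time-to-action readings in seconds (illustrative only).
successful = [22.4, 27.8, 28.1, 29.5, 30.2, 33.0]  # clicked a success target directly
wandered = [38.9, 39.5, 41.2, 42.3, 44.7]          # explored the page before succeeding

print("median (successful):", median(successful))  # 28.8 s
print("median (wandered):  ", median(wandered))    # 41.2 s
print("increase:", round(median(wandered) - median(successful), 1), "seconds")
```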

4. Customer Satisfaction and Sentiment

Utilize surveys, net promoter scores (NPS), and sentiment analysis to gauge how your users feel about the feature. Are they satisfied? Would they recommend it to others?

Building off the click tests participants had just completed, Advent was able to gather emotional reactions to the platform they had just engaged with.

The numerical scale is a common quantitative test type used to create the tried-and-tested Net Promoter Score, which measures the net difference between the proportion of promoters (9 or 10) and detractors (6 or less). 

[Image: Net Promoter Score (NPS) results from Advent’s survey of 100 U.S. advertisers]

Scores range from -100 to +100, with anything above 0 indicating a generally positive reaction. With an NPS of 31, Advent’s platform shows that, despite the comprehension and usability issues mentioned earlier, the overall experience still meets users’ expectations, and therefore the team’s. View the Helio Example.
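The NPS arithmetic described above maps to a few lines of Python. The ratings below are made-up sample data, not Advent’s raw responses.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6), on a -100 to +100 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical 0-10 ratings from ten survey participants.
ratings = [9, 10, 7, 8, 6, 9, 10, 4, 9, 8]
print(nps(ratings))  # 5 promoters, 2 detractors over 10 responses -> NPS of 30
```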

These NPS scores can be measured over time to show improvement—or decline—based on rapid iterative testing of your designs.

5. Business Outcomes

Ultimately, your feature should contribute to meaningful outcomes for both your customers and your company. High NPS, daily active users, session length, and frequency of use should all feed into customer lifetime value by keeping users around longer.

Another way to look at it is this: improving your feature success metrics will help reduce customer churn, creating a more favorable environment for business growth.

Depending on your product and business goals, there will likely be other relevant KPIs to monitor. If you want to talk to an expert for guidance, let us know. We’re always happy to help makers and doers in business.

Conducting Impactful Concept Testing

Before you commit resources to developing a feature, use concept testing to gather user feedback and predictions on how it might perform. This step can save you from investing in features that don’t align with user needs or business objectives.

Here’s how Helio can amplify this idea: you can test desirability, usability, and sentiment way before a feature ever makes it onto your to-do list. Before you commit to an idea, quick concept testing limits the risk of building unnecessary features, and it helps you define baseline metrics before a feature is ever built.

Making Data-Driven Decisions

You can move from guesswork to data-driven decisions with these metrics. Instead of assuming a feature’s success, you have tangible proof of its impact.

In conclusion, escaping the feature factory build trap is not just about changing how we work; it’s about transforming how we measure success. By focusing on the impact and outcome of features, and rigorously testing and validating before development begins, we can create more meaningful products that deliver real value. Armed with the right metrics and a focus on user-driven outcomes, your features will not only be “done” but will drive success for both your users and your business.

Feature Success Metrics FAQs

What are the key feature success metrics to track for a software product?

Understanding the fundamental metrics that indicate feature adoption is crucial for evaluating the success of a product. What specific metrics, such as user engagement, active usage, or user retention, should be prioritized to measure feature adoption effectively?


How can we differentiate between successful and unsuccessful feature adoption through metrics?

Identifying the threshold or criteria that distinguish successful feature adoption from unsuccessful adoption is essential. What benchmarks or patterns in the metrics can help us determine if users are truly embracing a new feature or if improvements are needed?


What user behaviors should we focus on when analyzing feature success metrics?

It’s important to delve into user behaviors that directly impact feature adoption. Which user actions or interactions should be closely observed in the metrics to gain insights into how users are integrating the new feature into their workflows or routines?


How do external factors influence feature success metrics, and how can we account for them?

Recognizing external factors, such as market trends, competition, or changes in user demographics, is crucial for interpreting feature success metrics accurately. How can we factor in these external variables to ensure that the metrics reflect the true impact of the introduced feature?


What strategies can be implemented to improve feature success metrics over time?

Continuously optimizing and enhancing feature adoption is an ongoing process. What strategies, based on the analysis of metrics, can be implemented to encourage greater user adoption? This may include UX/UI improvements, targeted marketing efforts, or user education initiatives.


Build something your users truly want