10 A/B Testing Metrics To Track Results and Measure Success


What is A/B testing?

A/B testing is a scientific, evidence-based method to optimize the performance of a product or landing page. It tests a variation of a product, a marketing campaign, or an ad against a control group to:

1) See if it outperforms the original version.
2) Determine with statistical significance which elements bring better results (e.g. which headline makes a winning landing page for e-commerce companies); a quick way to run that check is sketched right below.
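
A common way to run that check on a conversion metric is a two-proportion z-test comparing the variant against the control. Below is a minimal sketch, assuming made-up visitor and conversion counts; the function name and numbers are purely illustrative:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z-score and two-sided p-value for the difference in conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of "no difference"
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative numbers: control (A) vs. variant (B) with an even traffic split
z, p = two_proportion_z_test(conversions_a=120, visitors_a=2_400,
                             conversions_b=156, visitors_b=2_400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a statistically significant lift
```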

However, the key to making A/B testing a worthy investment is to have a clear metric that aligns with business objectives.

How to choose the right A/B testing metrics?

To conduct productive A/B tests, there needs to be a structured framework and a measurable definition of success.

First, you must define the testing goal by stating what you want to accomplish. Do you want to increase sales? Expand brand recognition? Or achieve revenue growth?

Then, to determine the right metric for your experiments, you must follow a proper metric framework such as:

  • Google’s HEART framework, which aims to improve the user experience across five dimensions: happiness, engagement, adoption, retention, and task success.
  • Pirate Metrics framework (AARRR), which organizes your key metrics around the stages of the user journey: acquisition, activation, retention, referrals, and revenue.
  • North Star Metric framework, where your whole company chooses a “north star metric” that every single team should aim to improve (for example, a per-user metric for an e-commerce business). Such metrics should have specific characteristics:
North Star metrics checklist.

Why follow a framework? Because you’re going to need a clear goal and structure in order to define your primary and secondary metrics, where:

  • The primary metric is the main indicator of success (which you actively want to improve).
  • The secondary metrics can help you understand how performance is improved (although not representative of success).

For instance, if your primary metric is conversion rates, metrics such as average session duration, bounce rate, and click-through rate will help you get valuable insights into changes in conversion rates.

Now, let’s go over 10 metrics every SaaS business can A/B test to optimize its success:

Top 10 A/B testing metrics for measuring success

As we mentioned, the best metric for you will depend on your business, goals, and the metric framework you follow.

Still, at least 10 metrics are common among SaaS businesses. So let’s take a look at what they are and how they’re calculated.

Note: When calculating your A/B testing KPIs, remember that your metrics should be time-bound: measure them only over the duration of the A/B test when comparing the two versions.

Conversion rate

Conversion rate is the percentage of users who perform a specific action (a conversion), such as signing up, purchasing a plan, or upgrading. To calculate it, divide the number of website visitors who performed the desired action by the total number of visitors you got during the same period.

In SaaS, for instance, you can A/B test free trial conversion rates. To do so, you’d need to divide the number of free trial users who converted into premium users by the number of free trial users generated during the testing period.

Free trial conversion rate formula.
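
As a worked example, here’s what that calculation might look like in code; the function name and figures are hypothetical:

```python
def free_trial_conversion_rate(trial_users_converted, trial_users_started):
    """Conversion rate (%) = converted trial users / trial users started in the period * 100."""
    return trial_users_converted / trial_users_started * 100

# Hypothetical testing period: 1,000 free trials started, 85 upgraded to a paid plan
print(free_trial_conversion_rate(trial_users_converted=85, trial_users_started=1_000))  # 8.5
```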

Active users

Active users are the number of users who interact with your product on a regular basis. In short, it’s the number of people who actually use your app, and it indicates general user engagement.

Google Analytics and customer success tools like Userpilot can track active users for you and let you filter by month or by user segment. When it comes to A/B testing, you can run experiments to optimize user engagement by testing product changes against a control group and checking whether the variation represents a significant improvement.

Tracking user activity on Userpilot.

Average session duration

Average session duration measures how long a user stays active in your app per session. Depending on your product, longer sessions can indicate that users are having a positive experience and are satisfied with it.

When A/B testing, an increase in session duration can represent a positive experience if your product is designed for longer engagement, or it can reflect a key element that makes your product more efficient to use. That makes it a great secondary metric to track alongside conversion rates or customer satisfaction.

Analyzing user sessions on Userpilot.

Events per session

Events per session, as the name suggests, tracks the number of interactions a user has with your product during a session (clicks, tasks completed, etc.). You need a platform with feature tagging to measure it.

This metric reflects customer behavior, so an increase in events per session can indicate better usability when paired with a primary metric, as long as you didn’t make changes that artificially increase the number of steps needed to complete a task.

Feature tagging with Userpilot.

Goal completion

Goal completion is the number of users who achieve a specific milestone using your product, such as creating their first campaign or adding teammates.

With a tool like Userpilot, you can, for example, A/B test your in-app onboarding to optimize the number of users who reach the activation stage, making goal completion a great metric for optimizing user behavior in a SaaS business.

Tracking goals on Userpilot.

Retention rate

Retention rate measures the percentage of paying users who stay with you over a month, quarter, or year. It can be a great secondary metric to track alongside business growth, for example, to understand what leads to better satisfaction and user experience.

To calculate it, divide the number of paying users at the end of the period (excluding new users acquired during that period) by the number of paying users at the beginning of the period, then multiply by 100.

Retention rate formula.
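
To make the formula concrete, here’s a small sketch with made-up numbers (the function name and figures are illustrative only):

```python
def retention_rate(paying_users_end, new_users_added, paying_users_start):
    """Retention rate (%) = (users at the end of the period - new users) / users at the start * 100."""
    return (paying_users_end - new_users_added) / paying_users_start * 100

# Hypothetical month: 500 paying users at the start, 540 at the end, 80 of them new
print(retention_rate(paying_users_end=540, new_users_added=80, paying_users_start=500))  # 92.0
```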

Churn rate

In contrast to retention rate, churn rate is the rate at which customers cancel their subscriptions or stop using a particular product or service. It can indicate customer dissatisfaction or a lack of product-market fit.

To calculate it, divide the number of users lost during the period of interest by the number of users you had at the start of the period. Then multiply the result by 100.

Churn rate formula.
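
And a matching sketch for churn, again with hypothetical figures:

```python
def churn_rate(users_lost, users_at_start):
    """Churn rate (%) = users lost during the period / users at the start of the period * 100."""
    return users_lost / users_at_start * 100

# Hypothetical quarter: 500 paying users at the start, 40 of them cancelled
print(churn_rate(users_lost=40, users_at_start=500))  # 8.0
```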

Customer satisfaction score

The customer satisfaction score (CSAT) measures the overall satisfaction users feel at any point in the customer journey. It can only be measured through CSAT surveys, which rely on a sample of self-reported responses, so it’s not a reliable primary metric for A/B testing.

However, CSAT can be a great supporting metric when, for example, testing an onboarding interactive walkthrough against a control group. You can trigger a survey at the end of the walkthrough to see if an increase in satisfaction correlates with better onboarding and activation rates.

Creating CSAT surveys with Userpilot.
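
If you need to turn those survey responses into a single score, a common convention is to count the share of respondents who pick 4 or 5 on a five-point scale. Here’s a minimal sketch assuming that convention, with made-up responses:

```python
def csat_score(responses, satisfied_threshold=4):
    """CSAT (%) = share of respondents rating at or above the threshold on a 1-5 scale."""
    satisfied = sum(1 for rating in responses if rating >= satisfied_threshold)
    return satisfied / len(responses) * 100

# Hypothetical ratings collected at the end of the interactive walkthrough
walkthrough_responses = [5, 4, 3, 5, 4, 2, 5, 4, 4, 5]
print(csat_score(walkthrough_responses))  # 80.0
```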

Customer lifetime value

Customer lifetime value (LTV or CLV) is the total revenue a customer will bring to your company throughout their time as a user of your products or services. It’s calculated by multiplying the average order value by the purchase frequency, then multiplying the result by the average customer lifespan.

LTV is a great metric to test alongside conversion rates. This way, you can determine the revenue generated by product changes and optimize for it.

LTV formula.
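
Here’s the same formula as a short sketch; the subscription figures are hypothetical:

```python
def customer_lifetime_value(avg_order_value, purchase_frequency, avg_lifespan_years):
    """LTV = average order value * purchase frequency (per year) * average lifespan (years)."""
    return avg_order_value * purchase_frequency * avg_lifespan_years

# Hypothetical SaaS account: $50 average monthly payment, 12 payments a year, 3-year lifespan
print(customer_lifetime_value(avg_order_value=50, purchase_frequency=12, avg_lifespan_years=3))  # 1800
```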

Revenue

For most companies, revenue is the first A/B testing metric worth optimizing—with conversion rates as a close second.

By tracking revenue, you can directly optimize your business bottom line by experimenting with different variations that drive more conversions, sales and account expansion.

In SaaS, you can track it as your average revenue per user (ARPU), which is calculated by dividing your monthly recurring revenue (MRR) by the number of paying users.

ARPU formula.
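
And a minimal sketch of the ARPU calculation, again with made-up figures:

```python
def average_revenue_per_user(monthly_recurring_revenue, paying_users):
    """ARPU = MRR / number of paying users."""
    return monthly_recurring_revenue / paying_users

# Hypothetical month: $42,000 in MRR across 1,200 paying users
print(average_revenue_per_user(monthly_recurring_revenue=42_000, paying_users=1_200))  # 35.0
```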

How to measure and improve your A/B test metrics with Userpilot?

In a SaaS, much of your A/B testing will happen inside your app. Userpilot is a platform that can help you optimize your product performance in different ways, including:

  • Setting up in-app goals and, for example, testing an interactive walkthrough to see if it leads to more goals completed.
  • Tracking in-app behavior such as clicks (through feature tagging), milestone progress (custom events), or specific interactions, allowing you to A/B test a feature or in-app experience and optimize the user experience.
  • Triggering CSAT surveys automatically at specific touchpoints, allowing you to measure whether users are satisfied with your self-service content or with the results they’ve achieved with your product.
  • Tracking secondary metrics such as active users, product usage, interactions per user, and events per session through advanced product analytics, allowing you to support your A/B testing results with additional, actionable data.
Userpilot product analytics.

Conclusion

At the end of the day, the best A/B testing metrics are the ones that align with your business goals.

You should always be on the lookout for areas of improvement within the product. So come up with your hypotheses, run multiple tests, and make the most evidence-based decisions you can.

Plus, if you’re a product manager who needs to run in-app A/B tests without code, book a demo call with our team and get our help!

