13 A/B Testing Mistakes And How to Fix Them

Are A/B testing mistakes limiting your ability to drive engagement and conversion?

You can’t really know until you analyze your process for errors. But here’s a fact: many SaaS companies are going about A/B testing the wrong way.

This article discusses 13 common mistakes you should avoid when doing growth experiments.

What is an A/B test?

A/B testing, often called split testing, compares how a variation of a single element performs against a control version, measured by its impact on specific metrics.

When you vary multiple elements at once and compare the different combinations, that’s called multivariate testing.

Through experimentation, A/B testing helps you answer critical questions like:

  • Which headline drives more conversions?
  • Does that redesigned onboarding flow enhance user retention?
  • Is the pricing page layout optimal for conversion rates?
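Under the hood, all an A/B test needs is a stable way to split users between a control and a variation. Here’s a minimal sketch in Python of deterministic, hash-based assignment (the user ID, experiment name, and 50/50 split are placeholders for illustration):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("control", "variant_b")) -> str:
    """Deterministically assign a user to a variant so they always see the same version."""
    # Hash the user + experiment name; the same inputs always produce the same bucket
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # roughly even split across variants
    return variants[bucket]

# Example: this hypothetical user lands in the same group on every visit
print(assign_variant("user_42", "pricing_page_headline"))
```

Because assignment is deterministic, each user gets a consistent experience for the duration of the test.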

However, there’s more to A/B testing than using fancy tools to set up your tests. Without the right approach, you’ll be wasting time and money.

Below are some common mistakes SaaS companies make:

1. Thinking that A/B tests work for testing landing pages only

What comes to your mind when someone mentions A/B testing?

Most people think only of landing pages and ads. While the practice was popularized by marketers testing campaigns, A/B testing’s scope goes way beyond that.

From user onboarding to feature adoption, pricing strategies to user interface design—A/B testing has different applications in SaaS.

By comparing website pages, lead generation landing pages, in-product experiences, ad campaigns, and more, you can test how customer behavior is influenced across the entire user journey.

For example, you can test if adding a welcome screen on your onboarding journey increases the activation rate, then implement what works.

How to A/B test with Userpilot.

How to fix this:

Start by taking a holistic view of the user journey within your product.

This involves tracking user interactions from the moment they land on your website or app to the point where they achieve their desired outcomes.

Armed with a deeper understanding of customer behavior, formulate hypotheses for potential improvements at various touchpoints along the customer journey.

Each hypothesis should address a specific pain point or opportunity for enhancement. A/B test your hypothesis and note how users respond.

Funnel analysis on Userpilot.

2. Choosing the wrong A/B testing hypothesis

An A/B testing hypothesis is your informed guess about why you’re not getting the desired results, paired with a prediction of the changes you need to make.

The right hypothesis is foundational to an effective A/B testing process. Without one, you’ll keep spending resources on inconclusive tests or get distracted by the wrong metrics.

This mistake happens when you don’t do enough research and, therefore, don’t properly understand what to test.

How to fix this:

To avoid this mistake, make sure to do your research before you run any test.

Talk to your users, analyze their behavior data, and think about what you’re trying to achieve with your test. Once you have a good understanding of the situation, you can create a hypothesis that is more likely to be accurate.

3. Testing too many hypotheses at the same time

Testing multiple elements at once may seem like a good way to save time, but it’s counterproductive as you won’t know what actually impacted the results.

Imagine your demo sign-up page has many drop-offs and you want to change several things to improve the results.

You play with CTA wording, color variations, and the length of the sign-up form.

Your demo sign-ups may go up, but you won’t be able to tell exactly which change drove the improvement, and you’ll learn little about your users, which means you’ll have to experiment all over again the next time a similar page underperforms.

How to fix this:

If you have enough users, try multivariate testing—a testing method that allows you to compare multiple variables at the same time.

To do multivariate testing, you need to identify the variables that you want to test and create variations of your page or product that test different combinations of those items.

You can then use a suitable analysis tool to determine which variation performs the best.
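To picture how quickly variant combinations multiply, here’s a rough sketch using the demo sign-up page example above (the element names and variations are invented):

```python
from itertools import product

# Hypothetical elements of a demo sign-up page and their variations
cta_wording = ["Book a demo", "See it in action"]
button_color = ["blue", "green"]
form_length = ["3 fields", "5 fields"]

# In a multivariate test, every combination becomes its own variant
combinations = list(product(cta_wording, button_color, form_length))
print(f"{len(combinations)} variants to test")  # 2 x 2 x 2 = 8
for combo in combinations:
    print(combo)
```

Eight variants means your traffic gets split eight ways, which is exactly why multivariate tests call for a larger user base.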

In cases where your user base is limited, conducting multivariate tests with statistical significance may be challenging.

Instead, use surveys to gather user opinions and understand if a specific change has improved their experience.

Building user experience survey in Userpilot.

4. Using A/B testing instead of multivariate testing

Not everything needs to be an A/B test. Sometimes, a multivariate test is the best fit for your hypothesis and the results you want to achieve.

How to fix this:

To avoid this mistake, start by understanding the difference between a typical A/B test, which compares a single variation against a control, and a multivariate test, where you’re analyzing multiple elements in combination.

Use A/B tests when you have a large sample size and want to assess the impact of small, incremental changes or variations in your product.

Common examples include testing specific elements like call-to-action buttons, headlines, images, or forms.

Multivariate testing is best suited when you need to run multiple tests at the same time and see how multiple page elements interact with each other. It helps uncover the most effective combination of changes for optimal results.

Userpilot allows you to run controlled A/B and multivariate tests.

5. Running A/B tests without enough users

A/B tests work best when you have a large enough audience. Without a sufficient number of users, it will take much longer to reach statistically significant results.

We’re talking about weeks or even months that could be wasted if your hypothesis turns out to be incorrect.

How to fix this:

You can implement concept testing to get quick user feedback on your hypothesis.

This technique involves using surveys to present new ideas to users and asking what they think. For example, imagine you plan to redesign the user interface of your SaaS to improve user experience.

Decide on the factors you wish to play around with, then create mockups or interactive prototypes showcasing the new UI design. Share these concepts with a group of current users.

Use surveys to gather their input on the aesthetics, ease of navigation, and overall user-friendliness of the new design. Identify potential areas of confusion or resistance and use that data to make informed decisions.

Build surveys and collect feedback data with Userpilot.

6. Failing to identify relevant experiments

People make this mistake when they don’t start with the user journey in mind.

Without picturing the steps users take to use your tool, it can be difficult to know the exact page to prioritize.

Not all pages on your app or website are created equal; focus on optimizing the high-value ones.

How to fix this:

So, what are examples of high-value pages for SaaS? The major ones include:

  • Pricing page
  • Checkout/demo booking page
  • Signup page or signup flow

How do you identify the right pages to improve on your website and in your product?

Check the trends report in your analytics tool to spot patterns in how users interact with different pages. Use the data to determine what to improve and where.

Behavior analysis on Userpilot.

For any page you want to work on, start by identifying the metrics you hope to improve. This does two things: it gives you an opportunity to gauge if the test is worth it and makes it easy to measure results.
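As a rough illustration of that first step (the funnel steps and numbers below are invented), you can compute drop-off between stages from your analytics data to see which page deserves a test first:

```python
import pandas as pd

# Hypothetical user counts at each step of a signup funnel
funnel = pd.DataFrame({
    "step": ["Visited pricing page", "Started signup", "Completed signup", "Booked demo"],
    "users": [10_000, 3_200, 2_100, 600],
})

# Conversion from the previous step and from the top of the funnel
funnel["step_conversion"] = (funnel["users"] / funnel["users"].shift(1)).fillna(1.0)
funnel["overall_conversion"] = funnel["users"] / funnel["users"].iloc[0]
print(funnel)
```

The biggest drop between steps is usually the most promising place to start testing.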

7. Not setting a sufficient test duration

A/B tests are based on statistical principles. In order to get accurate results, you need to collect enough data.

It’s easy to end the test after you see results that confirm your hypothesis, but you can’t be sure until you’ve allowed the test enough time.

Consider this scenario: You’re testing a revamped pricing page for your product, aiming to boost subscription sign-ups.

If you prematurely end the test after only a week because it’s convenient or you’re eager to see results, you risk drawing conclusions based on limited data.

Customer behavior can be influenced by various factors, such as weekdays versus weekends, marketing campaigns, or seasonal trends. A short test period might inadvertently capture these fluctuations instead of the true impact of your pricing page changes.

The primary factor that should influence when to stop an A/B test is statistical significance.

Continue the test until you have gathered enough data to confidently determine whether the observed differences between the A and B groups are statistically significant. Aim for a confidence level of 95% or higher.
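If you want to check this yourself, a two-proportion z-test is a common way to do it. The sketch below uses invented numbers; with real data you’d plug in the visitors and conversions for each group:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitors for control (A) and variation (B)
conversions = [120, 152]
visitors = [4_000, 4_050]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:  # 95% confidence level
    print("The difference is statistically significant")
else:
    print("Not significant yet - keep the test running")
```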


How to fix this:

How long you need to run the test to achieve statistical significance depends on the number of users and what you’re trying to impact.

For instance, if you want to improve your trial-to-paid conversion but only get 10 new users per month on average, your test will probably need to run for much longer than a month to gather relevant data.
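As a back-of-the-envelope check (the baseline and target rates are assumptions), you can estimate how many users each variant needs with the standard two-proportion sample size formula, then divide by your traffic to get a rough duration:

```python
from scipy.stats import norm

def sample_size_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect a change from rate p1 to rate p2."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided test at 95% confidence
    z_beta = norm.ppf(power)            # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# Hypothetical: baseline trial-to-paid rate of 10%, hoping to lift it to 15%
n = sample_size_per_group(0.10, 0.15)
print(f"~{n} users per group needed")
print(f"At 10 new trial users per month, that's roughly {2 * n / 10:.0f} months of data")
```

With traffic that low, the math quickly shows why the concept testing approach from mistake #5 is often the more practical option.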

8. A/B testing using the wrong traffic source

A common example is when teams prioritize traffic data from desktop users over mobile users. This is a huge oversight because a good chunk of traffic typically comes from mobile.

There are exceptions depending on your target audience, but generally, testing without including mobile traffic can give you skewed, misleading results.

How to fix this:

Segment your users when examining data. This will help you understand the different user segments engaging with your tool, so you know which groups to include when running experiments.

For example, here’s a screenshot of analyzing trial signups by country with Userpilot.

By gathering such data, you can tell where most of your traffic comes from and keep that information in mind when running tests.

Website-traffic-analysis-to-check-ab-testing-mistakes
Analyzing trial signups with Userpilot.
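In practice, the segmentation step can be as simple as comparing conversion rates across segments before deciding which traffic to include (the columns and numbers below are invented):

```python
import pandas as pd

# Hypothetical visit-level data: device type and whether the visit converted
visits = pd.DataFrame({
    "device": ["desktop", "mobile", "mobile", "desktop", "mobile", "desktop"],
    "converted": [1, 0, 1, 0, 0, 1],
})

segments = visits.groupby("device")["converted"].agg(visits="count", conversions="sum")
segments["conversion_rate"] = segments["conversions"] / segments["visits"]
print(segments)
```

A segment with a very different conversion rate is a strong hint that it needs to be represented in your test.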

9. Using a tool for your A/B tests that changes the parameters of your test

One of the most critical aspects of A/B testing is maintaining consistency in the test environment to isolate the impact of the changes being tested.

When a testing tool changes parameters mid-test, it muddies the waters and makes it hard to attribute any observed differences to the changes you’ve implemented.

This can lead to misleading conclusions about the effectiveness of your optimizations.

For example, imagine you’re running website split testing to compare two different versions of your website and see which one leads to higher user engagement.

But unknown to you, the A/B testing tool slows down page load time when working in the background.

The testing tool’s impact on page speed interferes with the A/B test. Users experiencing slower page load times might abandon the site, impacting engagement and conversion rate.

The consequence: Your A/B test may incorrectly attribute differences in user behavior to the website versions when, in fact, the tool’s performance impact is a significant factor.

This can lead to flawed conclusions and misguided website optimization decisions.

How to fix this:

Conduct an A/A test before your A/B test.

An A/A test involves splitting your audience into two identical groups and exposing them to the same content or experience. This type of test serves as a control, with the expectation that there should be no significant differences in customer behavior between the two groups.

By conducting an A/A test, you can evaluate the impact of the A/B testing tool on your website or application’s performance, such as page load times, user experience, and server resources.

It helps you understand if the tool introduces any unintended variables or slowdowns that could interfere with future A/B tests.

You might need to find alternative solutions if the tool affects your results.
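The evaluation itself can reuse the same significance check as any other test: run the A/A split, then confirm the two identical groups don’t differ. The numbers below are purely illustrative; a low p-value here would point at the tool or the split, not your product:

```python
from statsmodels.stats.proportion import proportions_ztest

# A/A test: both groups saw exactly the same experience (hypothetical counts)
conversions = [210, 198]
visitors = [5_000, 5_020]

_, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"A/A p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Warning: identical groups differ - check the tool and the randomization")
else:
    print("The split looks healthy - safe to proceed with real A/B tests")
```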

10. Building new tests vs iterating on existing A/B tests

A null result doesn’t mean the end of your investigation; rather, it’s a starting point for deeper exploration.

Don’t just write your report and move on to the next hypothesis. You’ll miss out on the valuable insights you could have generated by pressing further.

How to fix this:

If your hypothesis was based on a problem you spotted after analyzing customer behavior data, then moving to the next problem doesn’t solve anything.

A null result simply means your assumption was wrong; investigate further and see if you can develop another angle.

You don’t have to stop even after finding a new hypothesis and proving it right.

You could experiment more to see how the change affects other aspects of your platform or website.

For example, if you A/B tested banner designs for your homepage and your hypothesis that small changes to the CTA and value prop could increase conversion proved right, why stop there?

You could explore adding testimonials to the banner and see if the extra credibility further boosts engagement and conversion.

11. Not running your A/B tests strategically

Testing is fun; SaaS product teams enjoy the process for the most part.

However, not running A/B tests strategically leads to several pitfalls.

First and foremost, it becomes challenging to draw significant conclusions from these experiments.

Without a clear plan, you may stumble upon positive results occasionally, but it’s hard to replicate these successes or understand why they occurred.

Moreover, it can lead to a waste of resources and time, as you chase fleeting wins that don’t contribute meaningfully to your product’s growth or user satisfaction.

How to fix this:

Document learnings:

Maintain a record of your A/B test results and learnings. Record not only what worked but also what didn’t and why.

This knowledge repository becomes invaluable in guiding future strategic decisions.
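One lightweight way to keep that record consistent is a simple structured entry per experiment. The fields below are only a suggested starting point:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExperimentRecord:
    """A minimal log entry for one A/B test; the fields are a suggestion, not a standard."""
    name: str
    hypothesis: str
    metric: str
    start: date
    end: date
    result: str      # e.g. "significant uplift", "null result"
    learnings: str

log = [
    ExperimentRecord(
        name="Pricing page headline",
        hypothesis="A benefit-led headline increases demo bookings",
        metric="demo_booking_rate",
        start=date(2024, 1, 8),
        end=date(2024, 2, 5),
        result="null result",
        learnings="Headline alone didn't move the metric; test the value prop next",
    )
]
```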

Test sequentially:

Instead of running random tests concurrently, consider a sequential testing approach.

Conduct one test at a time, analyze the results, and use the insights to inform the next test. This iterative process allows you to build on your learnings strategically.

Note that this doesn’t rule out the occasional radical test you might run when you need to see bigger changes.

Continual feedback loop:

Establish a continual feedback loop between A/B testing and product development.

Regularly share insights from tests with your development team, designers, and other stakeholders to inform product enhancements and updates.

12. Copying A/B testing case studies

One tempting yet risky mistake is to blindly copy A/B testing case studies from other companies without a deep understanding of your unique user base, context, and goals.

It’s understandable; case studies provide inspiration and save time.

However, they shouldn’t dictate your testing strategy. Learn what worked for others, extract the main ideas, and adapt them to your specific circumstances.

How to fix this:

  • Always base your tests on data. Look at analytics and past test results to decide what to work on.
  • If you must copy, do it incrementally and test as you go. Validate changes in controlled A/B experiments to gauge their impact on your specific user base. This allows you to learn from real user interactions and iterate based on actual results.
  • Prioritize based on impact: What worked for one company may not align with your primary objectives, so focus on improvements that matter most to your users and business.

13. Overestimating the impact of your tests

There’s a degree of excitement that comes with running successful A/B tests. You might be tempted to replicate the same changes across different parts of your website or app.

For example, when you find that adding an exit-intent popup to your signup page increases signups, you might overestimate the experiment’s implications and add one to every page.

Doing this without proper investigation can have unintended consequences, as what worked on one page won’t necessarily work everywhere.

How to fix this:

In this case, don’t rely solely on the A/B test data. Use tools like Userpilot to understand how users engage inside your product, or Google Analytics to see how they behave on your website.

Examine the overall data and segment the users who converted to draw specific conclusions.

Conclusion

As you go on to apply the lessons from this article, remember to align your A/B testing efforts with your product roadmap and broader business objectives. This alignment ensures that your testing efforts contribute directly to your product’s strategic growth, and you won’t be making random decisions.

Userpilot can help you collect customer data through in-app surveys and user behavior tracking. You can use this data to determine the A/B tests that are worth your time and effort. Once you have the data, you can also conduct the actual A/B and multivariate tests using Userpilot.

Ready to start avoiding A/B testing mistakes and making changes that improve the user experience and boost engagement? Book a demo now to get started.

About the author
Sophie Grigoryan

Content Project Manager