3 Product Experiments to Reduce Product Churn

If there’s one thing every SaaS company is worrying about now – it’s product churn.

Amid the uncertainties of the coronacrisis, a lot of businesses are looking for unnecessary costs to cut.

As we wrote in our previous post on customer retention tactics for the time of crisis – trying to stop people from cancelling by giving them discounts is a quick-fix that will backfire in the long run.

It’s like trying to cure an illness with paracetamol – you may get quick pain relief, but the underlying cause remains unaddressed. And actually – while you’re distracted from the immediate symptoms – the bug is causing even more damage in the background.

To translate this metaphor into the product churn situation in SaaS: if you tackle churn with discounts, you’re simply undermining the value of your product in the eyes of the customer.

What happens next? They still see your product as an unnecessary expense (albeit now lower) and will – at some point – cancel anyway. So you’re basically deferring the inevitable, while at the same time distracting yourself from facing the underlying cause of your churn:

The customers are not getting enough value from your product.

Maybe your onboarding process isn’t great, and they are not adopting the key features that drive them to the ‘AHA moment’, where they get the value of your product.

Or maybe you haven’t provided enough in-app guidance and education to help them achieve their goals with your product.

One thing is sure: there is something wrong with your post-registration growth process (which includes your onboarding, engagement, and retention strategy).

But how do you know what is not working? And how do you know which approach will be best to fix your churn rate?

In this post, we’ll address exactly that: how you can reduce product churn in a data-driven way by running the right product experiments. We’ll cover:

  1. Why you should run product experiments
  2. Which product experiments you should run
  3. Real-world examples from SaaS products to give you some inspiration
  4. How you can start running product experiments of your own

Why should you run product experiments? How do they reduce product churn?

When you’re first designing your signup and onboarding flows, you need to make some assumptions.

Product experiments are an incredibly effective way of validating those assumptions and optimizing your onboarding flows accordingly.

SaaS businesses are obsessed with user behaviour and marketing data, and nobody disputes the need for A/B testing and marketing experiments.

Why not apply the same logic to product then?

RELATED: See how to set up A/B tests inside your product with Userpilot, in minutes and without any coding, in the how-to section below.

For years now, most business decisions have been aided by data. Marketing teams use detailed analytics to measure the success of campaigns and to identify areas for improvement. Customer success teams measure customer health data to understand which users are struggling and need extra guidance.

And more recently, Product teams have started using data to inform the direction of their product. Product usage analytics tools, like Heap or Mixpanel, have become a common part of SaaS tech stacks. They enable you to see how users are interacting with your product.

Collecting all of this data enables Product teams to start experimenting. You can split-test different aspects of your product to see which is most effective.

For instance, knowing basic phenomena of user onboarding psychology such as the Zeigarnik effect (check out our earlier blog post on it to find out more), you know that ticking the first item off your onboarding checklist increases engagement and completion rates.

🤔 But what would happen if you ticked off two items?

Would it boost your engagement rate the way putting three CTAs instead of one in your newsletter boosts your click-through rate?

As a social media scheduler, Postfity has two main activation points that users need to reach to hit the initial ‘AHA moment’: 1) connecting their social media accounts, and 2) scheduling their first post.

But without any guidance, a lot of the new users didn’t complete these two critical steps after signing up.

This changed when Postfity did away with their standard lengthy product tour and implemented an onboarding checklist with an interactive walkthrough instead:

At first, Postfity used an onboarding checklist without any pre-ticked points:

The engagement rate for this checklist was 67%, but the completion rate was only 6%.

The Userpilot Customer Success team suggested that Postfity run an experiment employing the Zeigarnik effect. Since people are more motivated to complete a task list when something is already checked off, Postfity created two more checklists: one with a single ‘dummy’ item pre-checked, and another with two.

Postfity Checklist with two dummies

The results quickly showed that the ‘Zeigarnik effect’ was working:

postfity checklist product experiment

As you can see from this experiment, the checklist with two dummies was the winner in terms of both engagement and completion rates. For Postfity, a social media scheduler, that means more people linking their accounts and scheduling their first posts:

postfity checklist userpilot

These two key activation points are critical for the social media scheduler users to see the value of the tool.

Running a real experiment (which checklist performs better?) allowed Postfity to get more users to that ‘AHA moment’, and hence to improve conversion and adoption rates and ultimately reduce churn.
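
If you ever want to sanity-check that a winner like this isn’t just statistical noise, a quick significance test on the raw counts does the trick. Here’s a minimal sketch in Python (the user counts are invented for illustration, not Postfity’s actual numbers):

```python
# Sketch: comparing checklist-completion rates across the three variants.
# The counts below are made up for illustration only.
from scipy.stats import chi2_contingency

# Each row: [users who completed the checklist, users who did not]
counts = [
    [60, 940],   # no pre-checked items
    [95, 905],   # one 'dummy' item pre-checked
    [130, 870],  # two 'dummy' items pre-checked
]

labels = ["no dummies", "one dummy", "two dummies"]
for label, (done, not_done) in zip(labels, counts):
    print(f"{label}: {done / (done + not_done):.1%} completion")

chi2, p_value, dof, _ = chi2_contingency(counts)
print(f"p-value: {p_value:.4f}")  # a small p-value means the gap is unlikely to be chance
```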

The benefits of product experiments

Before we dive into building them (plus examples), let’s recap the benefits of product experiments:

#1 Data doesn’t lie. Unlike user testing and other qualitative methods, product experiments provide you with clear, hard numbers. These numbers are easy to compare, and you can analyze them to find out exactly what you need to know.

#2 Product experiments run themselves. Well, almost. Once you have your product experiments up and running, you can sit back and wait for the results to come in. With other forms of testing, you often need to guide users through the process, which can be time-consuming and costly.

#3 Product experiments take place in a natural setting. They run while your users are using your product as normal. In user testing, by contrast, users know they’re in an experiment setting and may act differently.

So ultimately, product experiments are more cost-effective, easier to analyze, and provide more reliable data.

That’s what makes them such a powerful way of improving your onboarding.

Which experiments should you run to reduce product churn?

Good news: Running product experiments is now easier than it’s ever been before.

Thanks to code-free tools that allow you to create in-app experiences with a visual editor, you can build experiments in your product without engaging the development team.

The question is: What product experiments should you be running?

Now we’re going to look at the essential product experiments that every SaaS company should be running.

These product experiments will help you understand where your onboarding is falling short, and how you can improve it.

This will help you improve product adoption, decrease product churn, and maximize your MRR.

Product experiment 1: To welcome or not to welcome?

The first interaction your users have with your product can make or break the entire product experience.

This first product experiment focuses on exactly that.

In our recent research into the State of SaaS Onboarding, we found that 60% of SaaS products greet users with a welcome screen when they sign in for the first time.

A welcome screen can take many different forms, and contain all kinds of information. That’s why it’s such an important product experiment to run.

Of course, the other two-fifths of products don’t have a welcome screen at all. In some cases, that might be more effective.

Let’s take a look at each of the product experiments you should run in relation to welcome screens.

1a: Having no welcome screen

First of all, this is the ‘default state’, which gives you a baseline to compare your conversion benchmarks against the welcome-screen variants.

But then again: while a welcome screen can provide new users with some essential information, it’s not always required.

In some instances, your users may prefer to simply jump in and get started.

As far as I can see, there are three reasons you might have for skipping a welcome screen entirely.

One reason is that your product is really simple. If you only really have one core feature, then a welcome screen may seem like overkill.

Another reason to omit the welcome screen is if your users have already had an in-depth software demo, either given to them by your team or provided by the product itself.

Stripo do a great job at providing a software demo to onboard users BEFORE they’ve even started paying.

The final reason is if you provide your users with a detailed walkthrough or other form of onboarding flow. Perhaps your welcome screen would simply end up repeating that information, and so may not be needed.

If any of the three reasons I’ve mentioned applies to your product, then this is a product experiment worth running.

1b: Welcome screen with information

The most basic form of welcome screen will greet your users and hopefully provide some form of useful information.

The information you include can vary. Some products keep to a bare minimum, simply saying hello. Others will provide some tips and tricks to get started, or the important first step to take. Others still will remind users of the key benefits that the product provides.

YNAB (You Need A Budget) has a fairly simple welcome screen to greet new users:

ynab onboarding

It gives you a healthy dose of motivation to get started, and sets expectations. It also points you in the right direction to take your first budgeting steps.

Any of the welcome screen approaches I’ve mentioned is valid. The only way to really know which is most effective for your users is by testing. That’s why this is one of the more useful product experiments you should be running.

1c: Welcome screen with questions

Welcome screens can also be more than just a bulletin board. Some products will use them to collect valuable information about the users.

Often, this information can then be used to direct users towards the most relevant onboarding flow. Information such as a user’s name, their job role, or even what they want to achieve from your product can help make your onboarding even more effective.

It also means that users are engaged from the start, and therefore are more likely to get up and running with your product once the welcome screen has gone.

Here’s a great example of a welcome screen with questions from Sprout Social:

sprout onboarding product experiments

As you can see, Sprout asks new users about how much knowledge they have of social media tools. This will help Sprout to show the most appropriate level of onboarding.

It’s also interested in finding out what users are hoping to achieve with the product. Understanding the user’s goals will enable Sprout to point them in the direction of relevant features.

Welcome screens that ask new users a couple of questions when they first sign in can help you tailor your onboarding flow to each and every user, thus improving the product experience.

If you have a more complex product, then asking questions at the start is one of the most important product experiments to run.
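
If you were wiring this kind of routing up yourself rather than using a no-code tool, the logic behind it is simple. Here’s a minimal sketch, with the answer values and flow names assumed purely for illustration:

```python
# Sketch: routing a new user to an onboarding flow based on their
# welcome-screen answers. Answer values and flow names are illustrative.
def pick_onboarding_flow(experience_level: str, goal: str) -> str:
    if experience_level == "new to social media tools":
        return "basics-walkthrough"      # start with core concepts
    if goal == "schedule posts":
        return "scheduling-quickstart"   # jump straight to the key feature
    return "advanced-tour"               # experienced users get the short version

print(pick_onboarding_flow("used similar tools before", "schedule posts"))
```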

Product experiment 2: Do you need a checklist?

Checklists are one of the most common onboarding methods you’ll see in SaaS products.

Our State of User Onboarding research found that over half of SaaS products included an onboarding checklist.

There are, however, a number of factors to consider when it comes to building your checklist. How many steps should it have? What should those steps be?

That’s why if you’re thinking of adding an onboarding checklist, it’s essential that you run a number of different product experiments.

2a: No checklist

This first product experiment is simple: Should you even include a checklist at all?

Checklists are most effective if they guide the user towards your product’s Aha! Moment. This is the moment where your product provides value to the user and everything clicks into place. This is an important milestone for your product’s onboarding.

However, if you have a relatively straightforward product, then the Aha! Moment might well be obvious from the first interaction. In that case, an interactive tour may be more effective.

Of course, the best way of knowing whether to include an onboarding checklist is – you guessed it – by running product experiments to find out.

2b: Checklist with incentives

You could design the best onboarding checklist in the world, but sometimes users will still ignore it. How do you help users who don’t want to be helped?

One potential answer is with incentives. If you can provide some form of reward to users as they complete your checklist, then they’re more likely to be onboarded successfully.

These incentives can take different forms, and it’s worth running product experiments to see which kind of reward provides the best incentive.

One approach is to reward users with something genuinely useful, such as a trial extension, or account credit.

That’s what Deputy do:

deputy onboarding

For every onboarding task a new user completes, time is added on to their trial, essentially meaning they can enjoy the product free of charge for longer.

This also means users are more likely to spend time with Deputy, and as a result will receive more value from it.

An alternative is to gamify your checklist. Gamification is a popular psychological technique that makes using your product more enjoyable.

You could offer badges or achievements to users who complete tasks on the checklist.

A more subtle approach, as employed by Sked Social, is to add a progress bar to your checklist:

sked onboarding product experiments

Sked Social used Userpilot to add this checklist, complete with progress bar, to their product. The result? Tripled conversions.

Incentivizing your checklist can mean the difference between a properly onboarded user and a half-onboarded user. That makes it a useful product experiment to run.

2c: Checklist with success message

Our State of User Onboarding research flagged up a surprising statistic: Only 17% of products celebrate users’ achievements.

This suggests a lack of empathy within the SaaS ecosystem.

It’s a shame, because celebrating your users’ successes can help strengthen their relationship with your product. It tells them that they’re on the right track, and encourages them to keep making progress.

In short, it’s a good idea to run some product experiments around it.

Here’s a good example of celebrating when a user completes a checklist:

stripo product onboarding

Stripo greets users with a pop-up modal congratulating them when they reach the end of the onboarding checklist. It’s a nice way to let users know they’re on the right track, and it deepens their relationship with the product.

Product experiment 3: Should you offer a tour?

Product tours and interactive walkthroughs have become two of the most popular methods of onboarding.

They have slight differences. A product tour focuses on quickly showing new users around the product. It points out the core features, and then finishes by prompting users to take a first step.

An interactive walkthrough, on the other hand, has the user actually complete tasks within the product as they’re shown around. This means users learn by doing.

Both product tours and interactive walkthroughs have their use cases, which means the only way of really knowing which works for your product is by running product experiments.

3a: No tour or walkthrough

As with some of the other onboarding elements, if you have a fairly straightforward product then you may not need a tour or walkthrough at all.

There’s an argument that having new users sit through a tour or walkthrough when they first sign in can actually add more friction. As a result, users may get bored and quit before they’ve even got to your product.

With that in mind, it’s worth making sure you actually need a tour or walkthrough. If your product is overwhelming at first glance, then sure, you’ll probably need one.

The best way of knowing for sure is, as always, to run product experiments to find out.

3b: Adding a product tour

A product tour is designed to give new users a whistle-stop tour of your product. It shows users the key features of your product, and where to find them.

The key is to only focus on the most essential elements of your product. You don’t want your tour to be so long that users switch off before it’s even over.

Mint uses a product tour to good effect:

mint product onboarding

A series of tooltips guide new users around the product, directing their attention to each of the core features.

The key to the success of this product tour is that each tooltip explains what each feature is, rather than simply showing them off.

If you’re going to add a product tour, make sure to run plenty of product experiments to make it as effective as possible.

3c: Adding an interactive walkthrough

Interactive walkthroughs not only show new users around the product, but also force them to interact with it.

This learning-by-doing approach is often far more effective at teaching users how to use the product.

One of the easiest ways of adding an interactive walkthrough to your product is with driven actions. This is an onboarding element that is exclusive to Userpilot.

userpilot driven actions product experiments

It’s essentially a tooltip that requires the user to interact. Those interactions can be a click, a hover, or even a text input.

Driven actions can be used to build an engaging interactive walkthrough, much like the one built by Platformly:

platformly onboarding

They use driven actions to force new users to set up their account. As a result, the interactive walkthrough is more engaging and users are practically guaranteed to experience Platformly’s Aha! Moment.

When you design them well, interactive walkthroughs can be one of the most effective ways of onboarding users. Make sure you run lots of product experiments to optimize them as much as you possibly can.

How to run product experiments

Now that you know the kind of product experiments you should be running, it’s time to figure out how you’re actually going to run them.

There are three key steps to take:

1 — Measure product usage

2 — Test hypotheses

3 — Analyze results

1 — Measure product usage

As I mentioned earlier in the article, it’s impossible to run these kinds of product experiments if you aren’t measuring product usage.

There are plenty of analytics tools out there you can use to do this. Two of our favorites are Heap and Mixpanel.

heap product experiments

Both are designed to integrate closely with your product. You can then find out practically anything you want to about how your users are interacting with your product.

Some of the key metrics you’ll want to be keeping an eye on are:

  • Logging in
  • Time spent in app
  • Features adopted
  • Team members invited (if relevant)

It’s good to have a period of time in which you measure product usage before you run any product experiments. This gives you a baseline to compare results to.
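
If you’re instrumenting events yourself, the idea is the same whichever tool you pick. Here’s a minimal sketch using Mixpanel’s Python library; the project token and event names are placeholders, assumed for illustration:

```python
# Sketch: tracking the baseline metrics above with Mixpanel's Python library.
# 'YOUR_PROJECT_TOKEN' and the event names are placeholders for illustration.
from mixpanel import Mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")

def on_login(user_id: str) -> None:
    mp.track(user_id, "Logged In")

def on_feature_used(user_id: str, feature_name: str) -> None:
    mp.track(user_id, "Feature Used", {"feature": feature_name})

def on_teammate_invited(user_id: str, invitee_email: str) -> None:
    mp.track(user_id, "Team Member Invited", {"invitee": invitee_email})
```

Time spent in-app is usually derived from raw events like these (for example, from session start and end timestamps) rather than tracked directly.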

2 — Test hypotheses

The foundation of any good experiment is a good hypothesis. Without hypotheses, your product experiments will be stabs in the dark.

A strong hypothesis will take the following form:

“Doing X will lead to Y within Z.”

Here, X is the change you’re making. This could be adding a checklist, for example.

Y is the result you expect. In this case, it might be increased time in-app. It’s important that you specify a direction.

Finally, Z is a time frame. It’s useful to include a time frame because otherwise your product experiments could end up running indefinitely.

An example of a good hypothesis, therefore, would be:

“Adding a checklist will increase users’ time in app within two weeks.”

Once you’ve got your hypothesis, you need to test it. The most effective way of doing that is with A/B testing (also known as split testing).

A/B testing can be as simple as having two groups. One group of users sees the experiment (in this case, the checklist). The other group sees the normal product as a control, letting you test against the null hypothesis that the change makes no difference.
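
Userpilot handles this split for you (see the walkthrough below), but if you were rolling your own, assignment can be as simple as hashing the user ID into a bucket. A minimal sketch, with the variant names assumed for illustration:

```python
# Sketch: deterministic A/B assignment by hashing the user ID.
# Hashing (rather than random assignment on each visit) keeps every
# user in the same group across sessions. Variant names are illustrative.
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Two-group test: control (no checklist) vs. treatment (checklist)
print(assign_variant("user-123", "onboarding-checklist", ["control", "checklist"]))

# The same function handles the multi-group experiments mentioned later:
print(assign_variant("user-123", "checklist-length",
                     ["no-checklist", "three-steps", "five-steps"]))
```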

Setting up A/B tests inside your product with Userpilot

Setting up in-app split tests with Userpilot takes minutes – no development required!

  1. Go to the ‘Features’ tab and tag the feature you want users to use more:

feature tagging in Userpilot

2. Create an experience that will push users to use the feature more often:

experience Userpilot Postfity 1

in-app experience experiment Userpilot Postfity 2

3. Go to Settings and set a goal for this experience:

In the GOAL section, choose ‘Feature Tags’ and select the feature you tagged earlier.

experience settings

4. Next, choose ‘Run A/B test’. This will show the experience to half of the audience you’ve chosen in your experience settings, and nothing to the other 50% (selected at random).

5. Choose the length of the experiment: run it either until you hit a statistically significant result, or for a specific period of time.

A/B testing enables you to compare the results of both groups. Any significant difference can then be attributed to the element you’ve changed as part of the product experiment.

This is a very simple way to see the ROI of the experiences you’re creating!

Some product experiments will be more complex. You might, for example, have different groups such as:

  • No checklist
  • Checklist with three steps
  • Checklist with five steps

This is how you can optimize your onboarding elements.

3 — Analyze results: did they reduce product churn?

Once your product experiments have run their course, you need to figure out the results.

Fortunately, if you’ve nailed your hypothesis, this bit is easy.

To return to our example hypothesis:

“Adding a checklist will increase users’ time in app within two weeks.”

Now that two weeks have passed, and we’ve collected our data, all we need to do is compare the time spent in app by users in each group.

Let’s imagine that we found the following:

  • Users who saw the checklist spent an average of 2 hours a day in the app.
  • Users who saw no checklist spent an average of 1 hour a day in the app.

From those results, we can see that adding a checklist doubled the time our users spent in the app.

That means our product experiment was a success, and it would now make sense to add a checklist to our onboarding flow.

For product experiments with multiple groups, you may need to do a little more analyzing. But as long as you keep your hypothesis in mind throughout, it’ll steer you in the right direction.
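
To check that a difference like the one above isn’t down to chance, you can run a simple significance test on the two groups. Here’s a minimal sketch; the per-user numbers are made up for illustration:

```python
# Sketch: testing whether the checklist group really spends more time in-app.
# The per-user daily averages (in hours) below are invented for illustration.
from scipy.stats import ttest_ind

checklist_hours = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7]  # saw the checklist
control_hours = [1.0, 0.9, 1.2, 1.1, 0.8, 1.0, 1.3, 0.9]    # no checklist

t_stat, p_value = ttest_ind(checklist_hours, control_hours)
print(f"p-value: {p_value:.4f}")
# A small p-value (conventionally below 0.05) suggests the difference
# is unlikely to be random noise, supporting the hypothesis.
```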

Key takeaways on reducing product churn with experimentation:

  • Product experiments enable you to make data-driven decisions about how to improve your onboarding and reduce churn
  • You should test all kinds of different onboarding elements, including welcome screens, checklists, and product tours and walkthroughs
  • You can run product experiments in three simple steps: measure baseline product usage, create and test your hypothesis, and then analyze the results

About the author

Joe is a content writer with several years of experience working with SaaS startups. He’s also the founder of Turing, a conversation design agency, making chatbots more human.
