Taking advantage of AB testing

Experimentation
By Mariana Bonanomi

It can be tempting to use your intuition to predict why people connect with your marketing communications and click your CTAs. But the only way to get accurate insights into which copy, images, and layouts actually lead to conversions is AB testing.

As more CRO professionals and companies take advantage of AB tests, the market for testing software keeps growing. No wonder: AB tests improve the user experience and boost conversion rates. If you're not AB testing your website and assets, you're leaving money on the table.

Here we'll tell you what you should know about AB testing and how you can start running tests for your business.

What is an AB test?

An AB test compares the performance of two versions of any given content: a webpage, a landing page, an email, or another marketing element. The test pits the default version (A) against an alternative version (B), where version B includes a change you think will improve the user experience.

You usually begin with a hypothesis about what could improve the user experience and convert more visitors, often by changing a specific element of your website. Then you run AB tests to check whether your assumption is correct.

Suppose you create two versions of a landing page, changing only the color of the CTA button on the second one. You can then run both to see which color gives the best result. After analyzing the collected data, you can make strategic decisions.
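
Under the hood, testing tools typically assign each visitor to a variant at random but deterministically, so the same user always sees the same version. Here's a minimal Python sketch of that idea (the function and experiment names are illustrative, not any specific tool's API):

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Return 'A' or 'B' for a 50/50 split, stable for a given user."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # bucket in 0..99
    return "A" if bucket < 50 else "B"

# The same user always lands in the same variant across sessions:
print(assign_variant("user-42", "cta-button-color"))  # e.g. 'B'
```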

Why is it effective?

AB testing, also known as split testing, sets the foundation for your marketing strategies. It takes the guesswork out of your analysis, so you make decisions backed by data. Suppose your goal is to boost the conversion rate. In that case, AB testing lets you try different tactics and see which performs best, so you don't commit to a single strategy that might underperform.

Difference from other tests

While an AB test involves two variants, A and B, there are also AB/n and AA tests.

Unlike AB tests, AB/n tests let you measure more than two variations. You can test as many variations of your hypothesis as you need against the control version of the content; the "n" stands for the number of variations under test.

In AA tests, the two groups of users see the same variation, not a modified B version. An AA test does not declare a winner. It tests if you can trust the AB testing software and if your AB test setup is working.

There are situations when it makes more sense to test many elements simultaneously. That is where multivariate tests come in handy.

We use multivariate tests to test multiple elements on a website at once. An AB test checks the user's response to variations of the same element, such as the color of a CTA button. In contrast, multivariate tests check how users respond to combinations of multiple elements, like a video next to a contact form on a webpage.
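
To see why multivariate tests grow expensive quickly, consider this short Python sketch (the element values are made up for illustration): every combination of elements becomes its own variation, so the number of variations multiplies with each element you add.

```python
from itertools import product

headlines = ["Try it free", "Start today"]
cta_colors = ["green", "orange"]
media = ["video", "image"]

# Every combination becomes its own variation to test.
variations = list(product(headlines, cta_colors, media))
print(len(variations))  # 2 x 2 x 2 = 8 variations
for headline, color, medium in variations:
    print(headline, color, medium)
```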

What kind of company uses AB testing?

AB testing cuts across multiple practices. No matter your type of business, you can use AB testing to understand your audience and improve your optimization processes, reaching your audience at every touchpoint.

Developers use AB tests to try out features by randomly displaying them to different user segments. Suppose you're building an e-commerce application and want to run AB tests. You could test different checkout buttons to learn which one leads to more purchases.

Marketers run AB tests too, trying different approaches to their prospects in ads or email campaigns, and use them to optimize conversion rates, user acquisition, and audience retention on the site.

How it can help your business

AB tests are a low-risk, high-reward tactic for your business. They help you extract the most value from your website, ad campaigns, email campaigns, and other assets, which translates into a higher ROI and a lower customer acquisition cost (CAC).

AB testing further helps your business by:

  • Increasing profits: It provides better user experiences, which reduces bounce rates, improves brand trust and loyalty, and brings repeat customers. As a result, you'll sell more and increase your profits.
  • Identifying issues: It helps identify little errors that ruin marketing campaigns, like poor UX design. That is critical because a great design can increase conversions.
  • Improving content and engagement: You can use the results from your AB test to inform the decision for future content and campaigns. This way, you can improve your content, increasing engagement.
  • Boosting your business image: With AB testing, you can cut out processes or steps that leave a bad impression on customers, increasing your goodwill and improving your business image.
  • Increasing conversion rates: AB testing helps determine the type of content that converts visitors into buyers. Testing different user experience elements makes it easy to see what works for your target audience and what doesn't. For example, you might change your sign-up button copy from "sign up now" to "sign up now!" and compare both versions to see which one the audience prefers.

AB testing best practices

Following these AB testing best practices can help you create experiments that improve conversion rates, drive user engagement, and add value to your business.

1. Choose one variable to test

There are tons of potential variables to test, but it's essential to zero in on one independent variable at a time. That way, you can evaluate the change properly and pinpoint the exact variable responsible for the results.

2. Test one element at a time

Once you have chosen a variable to test, you should test one element at a time. For example, if you're testing an opt-in form, you can test for changes in the following elements:

  • Headline
  • CTA button
  • Form fields

Testing too many elements at once would make it hard to determine which parts were successful.

3. Tie your test to specific KPIs

To set up your tests effectively, consider which metrics are essential to your goals and how the changes will affect user behavior. Define the key performance indicators you'll need to track and analyze your experiments. That'll make it easy to categorize your results and understand the impact of a specific variant, and whether it supports your hypothesis.

4. Set your initial test duration

Setting your initial test duration helps you determine the reliability of your results. You want to let the test run long enough to collect a sample size large enough for statistical significance. On the other hand, if your test takes too long to reach your target reliability rate, the tested element probably doesn't have a significant impact on your tracking metrics. So decide how long to run your AB tests and choose the best time frame.
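
As a rough guide to test duration, here's a Python sketch of the standard two-proportion sample-size estimate at 95% confidence and 80% power (the conversion rates are illustrative). Divide the result by your daily traffic per variant to estimate how long the test needs to run:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_target: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a lift from p_base to p_target."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_avg = (p_base + p_target) / 2
    numerator = (z_alpha * (2 * p_avg * (1 - p_avg)) ** 0.5
                 + z_power * (p_base * (1 - p_base)
                              + p_target * (1 - p_target)) ** 0.5) ** 2
    return int(numerator / (p_target - p_base) ** 2) + 1

# Detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.06))  # ~8,158 visitors per variant
```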

5. Test both versions simultaneously

When running AB tests, you should run both variants simultaneously. Suppose you run version A one month and version B the next. It would be impossible to know whether the change in performance was due to the variation itself or to seasonality caused by the difference in timing.

6. Target the right audience

Targeting a specific group of customers makes it easier to define what to change in your test variants and gives you the proper context. That way, you can validate assumptions about your target audience and see what makes them take the desired actions on your website.

Shortcomings of AB testing

AB testing helps you experiment with new ideas and discover whether they lead to better results, but it only focuses on the variable you're testing. It can't show whether other usability problems on your site are dragging down your conversion rates.

Also, AB testing doesn't tell you the whole story of your campaign efforts. Take the retention campaign experiment reported by Harvard Business Review: two companies ran AB tests to track churn rates for over 14,000 customers. One randomly assigned group received a churn-reduction intervention, while the other didn't.

They collected a dataset of customer information and other metrics used to predict churn risk. The results showed almost no correlation between customers' churn risk levels and their response to the intervention campaigns. So companies shouldn't use AB tests to measure a campaign's overall effectiveness; instead, they should use them to explore which types of customers are most sensitive to particular interventions.

How to use AB testing in segmentation

Segmentation takes your AB testing to the next level, and vice versa. It's a strong ally for delivering the most relevant experiences to your users because it lets you incorporate AB testing into your personalization strategies. To do that, segment your target audience based on their data and run your tests within each segment. This way, you'll know which version of your site to serve each audience segment.

You can also set up rule-based personalization and activate an AB test within each rule (for example, new users or people coming from a specific ad campaign). This way, you'll only show the test to users who match that rule, so each user gets better performance than if you showed everyone the experience that performs best on average.
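
Here's a hypothetical Python sketch of that idea (the rule names and data structures are ours for illustration, not Croct's API): only users who match a rule enter the nested test, and everyone else gets the default experience.

```python
import hashlib

# Illustrative targeting rules: each maps a name to a predicate over user data.
RULES = {
    "new_users": lambda user: user.get("sessions", 0) <= 1,
    "summer_campaign": lambda user: user.get("utm_campaign") == "summer",
}

def experience_for(user: dict) -> str:
    """Pick the experience: an AB variant inside the first matching rule."""
    for rule_name, matches in RULES.items():
        if matches(user):
            # Run the AB test only inside this segment, with stable assignment.
            digest = hashlib.sha256(f"{rule_name}:{user['id']}".encode()).hexdigest()
            variant = "A" if int(digest, 16) % 2 == 0 else "B"
            return f"{rule_name}/{variant}"
    return "default"  # users outside every rule never enter the test

print(experience_for({"id": "u1", "sessions": 0}))                            # new_users/A or /B
print(experience_for({"id": "u2", "sessions": 5, "utm_campaign": "summer"}))  # summer_campaign/A or /B
print(experience_for({"id": "u3", "sessions": 12}))                           # default
```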

See how to run AB tests with personalization.

How to plan your tests

AB tests usually come from hypotheses about your audience or content. You should also have a proposed solution, an expected outcome, and the reasoning behind it.

Here are some steps on how to plan your test:

  • Define the problem you want to solve
  • Study the users you wish to create the solution for
  • Come up with a rationale for the expected change in user behavior or metrics
  • Create a hypothesis based on qualitative and quantitative research insights, such as user interviews, surveys, heat maps, analytics data, and customer feedback

The quality of your insights depends on how well your AB test separates the effect of your hypothesis from random noise. After planning your test, the next step is to run it and measure the results using a sound statistical approach.

Statistical significance of AB testing

The tested variation should keep performing the same way long after the test ends. That's where statistics comes in: it helps you know whether the tested variable directly affects user behavior or whether the difference is due to random chance.

Your statistical significance shouldn't be less than 95%. If it's 65%, say, there's a high chance that your winning variation isn't really the winner. Running a test to statistical significance is crucial to knowing whether your results justify a change.
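
For a concrete picture of that threshold, here's a minimal Python sketch of a frequentist check using a two-proportion z-test (the numbers are illustrative):

```python
from statistics import NormalDist

def significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Confidence (in %) that the difference between A and B is real."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(z))           # two-tailed test
    return (1 - p_value) * 100

# 500 conversions out of 10,000 visitors (A) vs. 590 out of 10,000 (B):
print(f"{significance(500, 10_000, 590, 10_000):.1f}%")  # ~99.5%, above the 95% bar
```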

How does Croct perform AB tests, and how is it different from competitors?

At Croct, we use a Bayesian approach to analyze AB tests. Our engine recalculates the metrics in real time as new data arrives, ensuring the quality of the results and avoiding common pitfalls of statistical testing. To stand out from competitors, we build our AB tests on dynamic mechanisms: the engine takes the variant with the worst conversion rate as the baseline, or control, for comparison.
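
As a generic illustration of the Bayesian approach (a simplified sketch, not our actual engine), you can model each variant's conversion rate as a Beta posterior and estimate the probability that one variant beats the other by sampling:

```python
import random

def prob_b_beats_a(conv_a: int, n_a: int, conv_b: int, n_b: int,
                   samples: int = 100_000) -> float:
    """Monte Carlo estimate of P(conversion rate of B > conversion rate of A)."""
    wins = 0
    for _ in range(samples):
        # Beta(1 + conversions, 1 + non-conversions): posterior under a uniform prior.
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / samples

# 500/10,000 conversions on A vs. 590/10,000 on B:
print(prob_b_beats_a(500, 10_000, 590, 10_000))  # ~0.997
```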

Do you have further questions about how our AB testing engine works? Learn more here. And if you want to set up your own AB tests, create your free account and explore our platform for yourself.
