A/B Testing to Inform Business Decisions

February 20, 2018

What is A/B Testing?

A/B testing, also known as split testing, is a method of comparing two versions of something to determine which one performs better. 

By displaying two different versions of something to two comparable, randomly assigned audiences, you can measure the impact of the differences between the two versions. 

The variables that can be tested with A/B testing are nearly endless. For example, maybe you’re considering changing your brand guidelines to adhere to a new color palette.

By performing an A/B test, you can measure which color palette your customers respond to more positively — the old one, or the new one being considered. The results of this study could then inform your final decision of whether or not you will adopt the new color palette. 

While A/B testing has been around for centuries — think of farmers evaluating which plots of land yielded better crops — the internet has changed everything.

Businesses are now able to conduct split test research online, in real-time environments, and at a massive scale in terms of the number of participants and the number of experiments. 

A/B testing is a simple, straightforward means of examining a variable of interest. The data resulting from split tests can inform business decisions across functional areas, from marketing and sales to product development and customer service. 

Why Should Businesses Use A/B Testing?

Businesses that choose to run A/B tests to inform their decisions and strategies often see a positive impact quickly.

The internet has made the A/B testing of endless versions and variables easier than ever, especially for marketers. 

With people spending so much time online these days, marketers are tasked with things like investigating which versions of buttons or images people are more likely to click on. A/B testing is a fantastic tool for determining just that.

Today, A/B testing is used to test everything from the design and layout of a business’s website, to the headlines used in articles, the images in social posts, the subject lines in email campaigns, and the positioning of online advertisements. 

Another great advantage of A/B testing in today’s internet-driven age is that most A/B experiments can be conducted without the subjects even being aware. When changes are tested in public-facing materials, subjects rarely realize they are taking part in a behind-the-scenes study.

The more variables you are able to A/B test, the more data you are able to receive, which will then allow you to get the most out of your efforts.

How To Conduct an A/B Test

Step #1: Decide what you want to test.

The first step of conducting an A/B test is to decide what it is that you want to test.

Let’s say you’re on the marketing team at a SaaS company. You might be curious about how enlarging the size of a call-to-action (CTA) button on a webinar registration page will affect the number of people that click the CTA.

Step #2: Decide how you want to measure performance.

In this example, we’ve identified our metric: the number of visitors who click the CTA and register for the webinar. This is the variable we’ll use to measure each button’s performance.

Step #3: Show two audiences the different versions of what you’re testing.

In this case we have two different sized CTA buttons that we’re testing — small and large. 

Our two audiences can be determined by randomizing which version of the CTA is displayed when visitors come to the webinar registration web page. Note that the only difference between the two versions being tested should be your variable of interest. Everything else should be identical.  

In this example, some people will see a small button and some will see a large button. All other content is exactly the same.

By randomizing which version of the CTA button visitors to the web page will see, you minimize the chance that other variables, such as viewing from a desktop vs. mobile, are impacting your results.
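
To make the randomization concrete, here’s a minimal sketch in Python. The function name and the hash-based 50/50 split are illustrative assumptions, not the API of any particular testing tool:

    import hashlib

    def assign_variant(visitor_id: str) -> str:
        """Deterministically assign a visitor to the 'small' or 'large' CTA."""
        # Hashing a stable visitor ID (e.g. a cookie value) means the same
        # visitor always sees the same version, while the audience as a
        # whole splits roughly 50/50 at random.
        digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
        return "large" if int(digest, 16) % 2 == 0 else "small"

    # Example: assign a few hypothetical visitors
    for vid in ["visitor-001", "visitor-002", "visitor-003"]:
        print(vid, "->", assign_variant(vid))

Hashing a stable ID, rather than flipping a coin on every page load, also ensures a returning visitor keeps seeing the same version for the duration of the test.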

Step #4: Determine which version performed better.

Here we would want to examine whether we received more webinar registrations from the small CTA or the large CTA. Because the number of visitors who saw each version won’t be exactly equal, compare conversion rates rather than raw counts, and check whether the difference is statistically significant.
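
As a rough sketch of how that comparison might look in Python, here is a standard two-proportion z-test. The click and visitor counts are invented for illustration:

    from math import sqrt
    from statistics import NormalDist

    # Hypothetical results: clicks and visitors for each version
    small_clicks, small_visitors = 120, 2400   # 5.0% conversion
    large_clicks, large_visitors = 156, 2400   # 6.5% conversion

    p_small = small_clicks / small_visitors
    p_large = large_clicks / large_visitors

    # Pooled rate under the null hypothesis that the buttons perform the same
    p_pool = (small_clicks + large_clicks) / (small_visitors + large_visitors)
    se = sqrt(p_pool * (1 - p_pool) * (1 / small_visitors + 1 / large_visitors))

    z = (p_large - p_small) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

    print(f"small: {p_small:.1%}  large: {p_large:.1%}  z = {z:.2f}  p = {p_value:.3f}")

With these made-up numbers the large button wins (p ≈ 0.03), but a different set of counts could easily tell another story.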

Step #5: Retest.

No matter the size of the change or the scale of your study, it’s always best practice to retest and compare results.

Step #6: Use insights to inform business decisions.

If the large CTA drove more registrants, it’s best to move forward using the large CTA on all versions of the webinar registration web page. 

Mistakes to Avoid while A/B Testing

There are three common mistakes that you should avoid while A/B testing:

Making decisions too quickly.

It’s important to let an A/B test run its course. Because results are often displayed in real time, many managers rush their decision-making.

While this proactivity comes from the right place, a randomized test needs time to accumulate enough participants for its results to stabilize. 

If these hasty managers simply let their tests run to their planned end, it’s very possible they’d reach different conclusions from the ones they acted on early.
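
One practical guard against stopping early is to estimate the required sample size before the test begins and commit to it. The sketch below uses the standard two-proportion sample-size formula; the 5% baseline rate and the one-point minimum detectable lift are placeholder assumptions you’d replace with your own numbers:

    from statistics import NormalDist

    def required_sample_size(baseline: float, lift: float,
                             alpha: float = 0.05, power: float = 0.8) -> int:
        """Visitors needed per variant to detect an absolute `lift`
        over a `baseline` conversion rate with a two-sided test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
        z_power = NormalDist().inv_cdf(power)          # 0.84 for 80% power
        p1, p2 = baseline, baseline + lift
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / lift ** 2
        return int(n) + 1

    # Example: 5% baseline conversion, hoping to detect a 1-point absolute lift
    print(required_sample_size(0.05, 0.01))  # about 8,200 visitors per variant

Until each variant has been shown to roughly that many visitors, early “results” are mostly noise.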

Considering too many metrics.

While metrics should most definitely inform business decisions, looking at a large number of metrics at the same time puts you at risk of interpreting chance patterns as real effects (a pitfall known as the multiple comparisons problem). 

The metrics you’re going to be looking at should be defined prior to the start of the experiment. 

The more metrics you’re evaluating, the more likely you are to witness random fluctuations in the data. If too many metrics are being accounted for, you run the risk of mistaking these fluctuations for genuine effects when in fact they are noise.
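
One simple, if conservative, safeguard is the Bonferroni correction: divide your significance threshold by the number of metrics you defined up front. The metric names and p-values below are invented for illustration:

    # Hypothetical p-values for five pre-registered metrics from one test
    p_values = {
        "cta_clicks": 0.012,
        "registrations": 0.048,
        "time_on_page": 0.210,
        "bounce_rate": 0.034,
        "scroll_depth": 0.590,
    }

    alpha = 0.05
    threshold = alpha / len(p_values)  # Bonferroni: 0.05 / 5 = 0.01

    for metric, p in p_values.items():
        verdict = "significant" if p < threshold else "not significant"
        print(f"{metric}: p = {p:.3f} -> {verdict} (threshold {threshold:.3f})")

Notice that three of these metrics would have cleared the usual 0.05 bar, yet none survive the corrected threshold, which is exactly the kind of fluctuation this mistake is about.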

Neglecting to retest.

If you’ve conducted an A/B test one time, you should definitely plan on retesting. 

It’s natural for humans to perform a study and then interpret the results as fact. But even statistically significant results carry the possibility, and risk, of a false positive: at a 95% confidence level, roughly one in twenty comparisons of identical versions will look significant by pure chance.

The only way to mitigate this risk is to retest with an open mind. No one likes to undermine previous findings by retesting and receiving contradictory results. That’s why it’s essential to remove human emotion from the equation, test then retest, and allow yourself to consider that your first results weren’t necessarily flawless.

A good rule of thumb: the smaller the change you are testing, the smaller its likely effect, and the less reliable any single result will be. If you’re examining the impact of a tiny change to a web page, you’ll need to perform more retests than if you’re examining the impact of two entirely different web pages. 
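
To see why retesting matters, consider a quick simulation: run many “A/A tests,” where both audiences see the identical page, and count how often a seemingly significant difference appears anyway. The traffic numbers here are arbitrary:

    import random
    from math import sqrt
    from statistics import NormalDist

    def aa_test(rate: float = 0.05, n: int = 2000) -> float:
        """Simulate an A/A test (both variants share the same true rate)
        and return the two-sided p-value for the observed difference."""
        a = sum(random.random() < rate for _ in range(n))
        b = sum(random.random() < rate for _ in range(n))
        p_pool = (a + b) / (2 * n)
        se = sqrt(p_pool * (1 - p_pool) * (2 / n))
        if se == 0:
            return 1.0
        z = ((a - b) / n) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    random.seed(42)
    trials = 1000
    false_positives = sum(aa_test() < 0.05 for _ in range(trials))
    print(f"{false_positives} of {trials} A/A tests looked 'significant'")
    # Expect roughly 50 of 1,000 (about 5%), even though nothing changed.

If a do-nothing change can look like a winner one time in twenty, a single promising result from a real test deserves a confirmation run before you bet on it.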


Does your business currently leverage A/B testing to inform business decisions? If so, we’d love to hear what variables you’re investigating, and the impact you’ve seen. Drop us a line in the comments below!

 
