We compiled a list of A/B testing best practices to help publishers optimize their ad inventory and maximize their ad revenue.

A/B testing is one of the most effective ways to increase your conversion rates. 

The problem is that most publishers go about the process wrong. 

A poorly executed A/B experiment can cause a lot of problems for your website. Implementing changes from a poor test will negatively affect your conversion rates and, therefore, your revenue.

On the other hand, running an A/B test correctly can help you double or triple your conversion rates without needing to increase traffic or resort to arbitrage.

In this article, we will look at 10 essential A/B testing best practices to follow in 2021.

Let’s get started.

1. Divide your audience into segments

A/B testing gives you insights into what users are doing; it doesn’t tell you why they are taking a specific action. To run a successful test, you need to analyze the results in segments, not as a whole.

Most publishers run A/B tests with just two variations. So if variation A wins, they assume that users prefer it.

This strategy can be misleading.

Even if variation A wins overall, variation B might still win in some segments. Several different factors affect the conversion rate of your ads, and you need to take each of them into consideration.

For example, if you segment your A/B tests by mobile and desktop, you might notice that the ads on desktop are converting better than the ones on mobile. This could be because your ad format isn’t optimized for mobile, so mobile users aren’t having a good experience; hence the lower conversion rate.

Even demographics can affect the results of your test. 

Let’s say you are running an ad to sell baby products. Who is your ideal audience? New or expectant mothers. So if you were A/B testing this ad and set your test parameters to segment women by age alone, you would get a skewed result. If your ad is shown to all women, it will convert poorly.

You might decide to ask advertisers to tweak the ad copy or call-to-action, thinking that is the problem, when in fact the problem is that you are not targeting the right audience: new or expectant mothers.

Knowing tiny details like this can help you set up your ads better. 

Note: You need to be careful when segmenting. You don’t want to test for 10-20 things at once; you will just end up with random data that makes no sense.

For a reliable result, divide your test into no more than 4 segments at a time. 

Here are the common segments you can divide your ads into:

Ads Targeting

  • Gender
  • Country
  • Age
  • Interests
  • Purchase behaviors
  • Customer audiences (as in the example above: you test based not just on age but also on whether a woman is expecting or has a newborn child)
  • Relationship status
  • Education Level

Ad Placement

  • Format
  • Location
  • Size
  • Page load speed

You can also segment your ads based on ad type, bidding, and what you are optimizing the ads for: engagement, conversions, or clicks.
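
To make this concrete, here is a minimal segment-level analysis sketch in Python using pandas. The column names ("variant", "device", "converted") are hypothetical; adapt them to whatever your testing tool exports.

```python
# Break conversion rate down by (segment, variant) instead of reading
# one overall number. Column names are hypothetical placeholders.
import pandas as pd

results = pd.DataFrame({
    "variant":   ["A", "A", "B", "B", "A", "B", "A", "B"],
    "device":    ["desktop", "mobile", "desktop", "mobile"] * 2,
    "converted": [1, 0, 0, 1, 1, 0, 0, 1],
})

by_segment = results.groupby(["device", "variant"])["converted"].mean()
print(by_segment)
# In this toy data, variant A wins on desktop while variant B wins on
# mobile; a single overall conversion rate would hide that split.
```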

2. Ensure you get your sample size right for your A/B test

To get a reliable result from your tests, you need to use an appropriate sample size. 

Calculating the minimum sample size required for an A/B test prevents you from running a test with a sample so small that it gives you inconclusive results.

One way to calculate your ideal sample size is with AB Tasty’s sample size calculator. All you need to do is input your current conversion rate, the minimum detectable effect, and the statistical significance level to get the number of visitors you will need for the test.

The minimum detectable effect is the smallest percentage change in conversion rate you want the test to be able to detect.

The standard statistical significance level is 95%, although you can adjust yours based on your risk tolerance.
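
If you prefer to compute this yourself rather than rely on a calculator, here is a rough Python sketch of the standard normal-approximation formula for comparing two proportions. The baseline rate, lift, and defaults below are hypothetical examples, not recommendations.

```python
# Approximate per-variant sample size for a two-variant conversion test.
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.8):
    p1 = base_rate                          # current conversion rate
    p2 = base_rate * (1 + relative_lift)    # rate at the minimum detectable effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Example: 5% baseline, detect a 10% relative lift at 95% significance
# and 80% power -> roughly 31,000 visitors per variant.
print(sample_size_per_variant(0.05, 0.10))
```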

3. Do not end your A/B testing too soon

Your test duration plays a critical role in determining how reliable your test results will be. 

When A/B testing, decide the full length of the test up front. If you finish your tests too soon, you might end up making decisions based on inconclusive results that will negatively affect your conversion rates.

To get the best results, run your test for multiple weeks at a time. I recommend you run each test for at least 3-4 weeks. The longer your test, the more conclusive the results. 

Why this long?

Your website traffic isn’t constant; it changes from day to day. Various factors like seasonal changes, promotions, and even actions from your competition can affect your traffic. By running A/B tests for several weeks at a time, you capture a more representative sample of your traffic, which ensures a more conclusive result at the end of the test.
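
As a back-of-the-envelope check, you can turn the required sample size into a duration and round up to whole weeks. The traffic numbers below are hypothetical.

```python
# Convert a required sample size into a test duration in whole weeks.
from math import ceil

required_per_variant = 31_000   # e.g. from the sample size sketch above
variants = 2
daily_visitors = 4_000          # hypothetical average daily traffic

days = ceil(required_per_variant * variants / daily_visitors)
weeks = max(3, ceil(days / 7))  # never shorter than ~3 weeks
print(f"Run the test for about {weeks} weeks ({days} days of traffic).")
```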

4. Be aware of validity threats

Having the right sample size and test duration and segmenting your audience are still not enough to guarantee valid results. Other threats might affect the validity of your results.

Here are some of the common threats you might encounter:

  1. Instrumentation threat

    This is a very common threat, and it is also easy to miss. It occurs when the tool you are using for your test sends flawed data, usually because the test code was implemented incorrectly on your website. When you notice that you are receiving flawed data, end the test and find the root of the problem. Then reset the whole test and start over. A quick sanity check, like the sample-ratio test sketched after this list, can catch this kind of problem early.

  2. History effect

    This is when external factors cause a flaw in your data.
    For example, a competitor might drop its paywall on topics that you run under a subscription plan. That could skew variables in your test, because many of your visitors might head there to get the same content for free.
    It is critical that you pay attention to external factors like these that can skew the results of your test.

  3. Selection effect

    This threat happens when you assume that a portion of your traffic represents the whole.
    For example, the visitors you get at the end of the week might differ from the visitors who arrive on your site at the beginning of the week. It is not valid to treat one slice of your traffic as representative of all the traffic you receive. That is why best practice 3 matters: running your tests for several weeks helps you avoid this threat.
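
One practical guard against the instrumentation threat is a sample-ratio-mismatch (SRM) check: if a test configured as a 50/50 split drifts far from 50/50, the instrumentation is probably broken and the data should not be trusted. Here is a minimal Python sketch; the visitor counts are hypothetical.

```python
# Sample-ratio-mismatch check for a test configured as a 50/50 split.
from math import sqrt
from statistics import NormalDist

def srm_check(visitors_a, visitors_b, alpha=0.001):
    n = visitors_a + visitors_b
    # Under a correct 50/50 split, visitors_a is approximately
    # Normal(n/2, n/4) for large n.
    z = (visitors_a - n / 2) / sqrt(n / 4)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value >= alpha     # True = the split looks healthy

print(srm_check(50_400, 49_600))  # True: plausible 50/50 split
print(srm_check(53_000, 47_000))  # False: investigate your instrumentation
```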

5. Do not make changes mid-test

Do not rush to implement changes mid-test. If you end the test too soon or introduce new elements that were not part of your initial test variables, you will get unreliable results. 

When you make changes mid-test, you won’t be able to pinpoint whether the new changes are responsible for an increase or reduction in conversions. Only take action when you have finished the initial test. 

6. Regularly test your ad placement

Your ad placement is one of the most important factors in the success of an ad. Optimizing your best-performing ad units can turn a casual visitor into a potential lead. For example, above-the-fold ads that are optimized for user experience are usually the first thing a visitor sees, which is what catches their attention.

So how do you go about A/B testing your ad placement?

  • Test multiple placement variations for your ad: We generally encourage publishers to strike a balance between above-the-fold and below-the-fold placements by not overpopulating either and choosing the ad formats that bring the highest conversions (the sketch after this list shows one way to split traffic between placement variants).
  • Trust your tech platform: An increasing number of publishers rely solely on Google publisher products to monetize their ad inventories. That is excellent news, because Google Ad Manager reporting is often the best starting point for optimizing your ad placement. Google also regularly updates its guidance on the best-performing ad sizes and formats based on its own data.
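
If you manage the traffic split yourself rather than through a platform, one common approach (not tied to any particular ad stack) is to assign each visitor to a placement variant deterministically by hashing their ID, so returning visitors always see the same placement. A minimal sketch with hypothetical names:

```python
# Deterministic variant assignment: hashing the visitor ID together with
# an experiment name keeps each visitor in the same variant for the
# whole test. All identifiers here are hypothetical placeholders.
import hashlib

def assign_variant(visitor_id, experiment, variants):
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-123", "placement-test",
                     ["above_the_fold", "below_the_fold"]))
```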

Also Read: How to Find the Best Ad Placement on Your Website?

7. Do not run too many tests in succession

It is not recommended to run many tests in a short period. The reason is that you should take a significant amount of time to gather data before running any experiment. 

When you run too many tests in a short period, your sample size will not be large enough to give you reliable results. You will be implementing changes based on half-baked results, which will only decrease conversions. And since you aren’t seeing any positive results, you keep running more tests and get stuck in a cycle.

After running a test, measure the results and decide what changes to implement. When you implement any change, wait a couple of weeks to see if it will have any positive effect on your bottom line. This way, you can say for certain what is working and what isn’t. 

8. Create a hypothesis before testing

When you have determined the problem you want to solve with your test, create a strong hypothesis. 

A solid hypothesis will do the following:

  • Explain the problem you are trying to solve with the test
  • Postulate a potential solution
  • Anticipate the result of the experiment

A good hypothesis is measurable and will help you determine whether your test succeeded or failed. Without a hypothesis, your testing will be mere guesswork. 

The simplest formula for finding your hypothesis is:

Changing A into B will cause C

Where;

A = What your analysis indicates is the problem

B = What changes you think will solve the problem

C = The effect the changes will have on your key performance indicator.

For example: changing a static sidebar banner (A) into a responsive native ad unit (B) will increase ad click-through rate by 10% (C).

9. Ask your visitors for feedback

Your A/B test will help you visualize your visitor’s path to conversion. But that is about it. While the science of A/B testing is crucial, you need to understand how your customers feel when they interact with your website and ads. 

How do you know the reason a visitor landed on your website? How do you know why they are not clicking on your ads or signing up for your services?

This is where asking for feedback comes in. Collecting direct feedback from your visitors removes the need for guesswork. Survey your visitors to help you understand what their goals are and the difficulties they encounter on your website. This will help you identify what to test for. 

With feedback from your visitors, you will be able to identify variables that have the highest impact on your conversion.

The best way to collect feedback is with a survey tool like SurveyMonkey or Qualaroo. You should also integrate a CRM system so you can sync the information you collect via surveys with your existing data. That way, you can easily reference your visitors’ feedback when designing an A/B experiment.

10. Start and stop your tests on the same day

While this might seem like a no-brainer, you would be shocked by how many people ignore it.

To get reliable results, you should start and stop your test on the same day of the week. This keeps weekly traffic patterns consistent across the test period.
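
A trivial way to enforce this is to schedule the end date a whole number of weeks after the start date, so both fall on the same weekday:

```python
# Schedule an end date on the same weekday as the start date.
from datetime import date, timedelta

start = date(2021, 3, 1)            # a Monday
end = start + timedelta(weeks=4)    # four full weeks later, also a Monday
assert start.weekday() == end.weekday()
print(f"Run the test from {start} to {end}.")
```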

Final Thoughts

The final thing to remember is that you should never stop testing. Optimizing your conversion rate is the most effective way to increase sales or sign-ups on your website. 

As you collect more data over time, ensure that you continue testing. A/B testing is one of the most effective ways to understand your visitors. 

Getting new traffic is expensive and can be difficult. The most logical option is to increase the conversion rate of the visitors you already have. 

The A/B testing best practices above will help you get started. 

About the Author

Marvellous Aham-adi

Marvellous Aham-adi specializes in writing engaging, long-form content that drives traffic from Google, generates leads, and brings in sales. His specialties include content marketing, social media marketing, email marketing, and technical and analytical writing.

FAQs

1. What do you mean by AB testing?

In A/B testing, two or more versions of a variable (a web page, a page element, etc.) are shown to different segments of website visitors simultaneously to determine which version has the greatest impact on your business metrics.

2. Why do we do AB testing?

A/B testing has several benefits: it increases user engagement, reduces bounce rates, improves conversion rates, minimizes risk, and helps you create more effective content. You can significantly improve your site or mobile app by running A/B tests.

3. Where is AB testing used?

A/B testing is a popular experimentation method in digital marketing and web design. To increase e-commerce sales, for example, a marketer may experiment with the location of the “buy now” button on the product page.


