How to calculate the business case of A/B testing?


Written by Ruben de Boer

December 5, 2022

When companies start with CRO, it often begins in the marketing department, with the intention of building a solid business case for A/B testing and marketing campaigns.

Besides being a huge problem when scaling up experimentation (ROI declines once less experienced colleagues start experimenting), this focus has another problem: the ROI is almost impossible to calculate!

With an ROI focus, the CRO specialist and stakeholders want to see the business case for experimentation. Calculators like ABTestguide and Speero will happily produce a business case for every A/B test. However, every experimentation specialist knows, or at least feels, that this business case is too high for most experiments.

Especially when we add up the business cases of all winning tests, we get a huge number and wonder where all that money went.

The main reason is that adding up individual business cases is statistically unsound, which is why the total comes out far too high.
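To make the problem concrete, here is a minimal sketch, in Python and with entirely made-up numbers, of the kind of projection such calculators produce: observed lift × baseline conversions × order value, extrapolated to a year. The traffic, conversion rate, lift, and order value are assumptions for illustration only.

```python
# Naive per-test business case, as A/B test ROI calculators typically project it.
# All inputs are made-up assumptions for illustration.

monthly_visitors = 100_000       # visitors who see the tested page each month
baseline_cr = 0.03               # baseline conversion rate (3%)
observed_lift = 0.08             # observed relative uplift of the winning variant (8%)
average_order_value = 60.0       # revenue per conversion

extra_conversions_per_month = monthly_visitors * baseline_cr * observed_lift
projected_annual_gain = extra_conversions_per_month * average_order_value * 12

print(f"Projected annual gain for this one test: {projected_annual_gain:,.0f}")
# Adding such projections across all winning tests overstates reality,
# because a large share of those "winners" are false positives.
```

Summing projections like this over every winning test treats each observed lift as real, which, as the next section shows, it often is not.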

Why adding business cases does not work

A single A/B test can be prone to several statistical errors. By far, the biggest one is the false discovery rate.

A false positive occurs when an A/B test declares a winner that, in reality, has no impact at all.

You might know this, but do you know how big this problem is?

The following numbers come from a study by Berman & Van den Bulte (2021), based on Optimizely data from 2,766 tests with engagement as the KPI. They were presented by Bart Schutz at the Dutch CRO Awards 2022.

  • When testing at 95% significance, 33% of your winners are false positives.
  • When testing at 90% significance, 46% of your winners are false positives!

This means your number of false positives is huge! Therefore, the total you get by adding up all business cases is way too high.

You can calculate your own false discovery rate here.
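If you want a feel for where such percentages come from, here is a minimal sketch of the standard false discovery rate calculation. The statistical power (80%) and the share of test ideas that genuinely work (10%) are assumptions for illustration, not figures from the Berman & Van den Bulte study.

```python
# Minimal sketch: estimating the false discovery rate among "winning" A/B tests.
# Assumed inputs: statistical power and the share of tested ideas that truly work.

def false_discovery_rate(alpha: float, power: float, true_effect_share: float) -> float:
    """Share of significant winners that are actually false positives."""
    false_positives = alpha * (1 - true_effect_share)
    true_positives = power * true_effect_share
    return false_positives / (false_positives + true_positives)

# Example: 10% of ideas truly work, tests run at 80% power.
for confidence in (0.95, 0.90):
    alpha = 1 - confidence
    fdr = false_discovery_rate(alpha=alpha, power=0.80, true_effect_share=0.10)
    print(f"{confidence:.0%} significance -> ~{fdr:.0%} of winners are false positives")
```

With these assumed inputs, the calculation lands in the same ballpark as the study's numbers: the stricter the threshold and the better your ideas, the smaller the share of false winners.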

The solution to false positives

There are several solutions to this problem.

Raise the significance threshold / Bayesian score
This results in fewer false positives but more false negatives (no winner in the test, even though the change actually improves conversions). False negatives are far worse than false positives because they cost you money: had you implemented the change, you would have had a higher conversion rate and thus more revenue.
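As a rough illustration of this trade-off, the sketch below uses a one-sided normal approximation for a two-proportion test. The baseline conversion rate (3%), the real lift (8%), and the sample size (50,000 visitors per arm) are assumed values.

```python
# Minimal sketch (normal approximation, one-sided two-proportion test) of the
# trade-off: a stricter threshold lowers false positives but raises false negatives.
from statistics import NormalDist

def false_negative_rate(p_base, lift, n_per_arm, confidence):
    """Chance of missing a real uplift at the given confidence threshold."""
    p_var = p_base * (1 + lift)
    se = ((p_base * (1 - p_base) + p_var * (1 - p_var)) / n_per_arm) ** 0.5
    z_crit = NormalDist().inv_cdf(confidence)           # critical z for the threshold
    power = 1 - NormalDist().cdf(z_crit - (p_var - p_base) / se)
    return 1 - power

for confidence in (0.80, 0.90, 0.95, 0.99):
    fnr = false_negative_rate(p_base=0.03, lift=0.08, n_per_arm=50_000, confidence=confidence)
    print(f"threshold {confidence:.0%}: false positive risk {1 - confidence:.0%}, "
          f"miss a real 8% lift {fnr:.0%} of the time")
```

Under these assumptions, moving from an 80% to a 99% threshold shrinks the false positive risk but makes you miss a real 8% lift far more often, which is exactly the money-losing scenario described above.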

Get better ideas
In their paper, Berman & Van den Bulte (2021) show that the best solution is to have better ideas: if your ideas are 10% better, growth increases by 40% (again with engagement as the KPI).

How do we get better ideas?

  • Do proper user, data, and scientific research
  • Run (continuous) meta-analyses (see the sketch below)
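As an illustration of the second point, here is a minimal sketch of a continuous meta-analysis over a hypothetical archive of past experiments: grouping tests by research theme and comparing win rates and average lift shows which sources of ideas have historically produced the better ones.

```python
# Minimal sketch (hypothetical data): a continuous meta-analysis that groups past
# experiments by theme so future ideas can come from the themes that work.
from collections import defaultdict

# (theme, won?, observed relative lift) -- made-up archive of past tests
archive = [
    ("social proof", True, 0.06), ("social proof", False, 0.00),
    ("form friction", True, 0.04), ("form friction", True, 0.05),
    ("banner redesign", False, -0.01), ("banner redesign", False, 0.00),
]

stats = defaultdict(lambda: {"tests": 0, "wins": 0, "lift_sum": 0.0})
for theme, won, lift in archive:
    s = stats[theme]
    s["tests"] += 1
    s["wins"] += int(won)
    s["lift_sum"] += lift

for theme, s in stats.items():
    win_rate = s["wins"] / s["tests"]
    avg_lift = s["lift_sum"] / s["tests"]
    print(f"{theme:15s} win rate {win_rate:.0%}, average lift {avg_lift:+.1%}")
```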

You will learn this in my Complete Conversion Rate Optimization course.

Thresholds for growth and learning

A counter-intuitive solution is to test with a lower significance threshold / Bayesian score.

My teams always test with a Bayesian score of 80%. This means we accept more false positives but get fewer (with sufficient data in your experiment, almost no) false negatives.

Here’s why.

Experimentation is not just about ROI. It ensures you make better decisions and reduce risk. We achieve the highest growth by implementing all winners (plus some false positives) and not implementing losing changes. Therefore, a lower threshold works great: it catches all the winners in your experiments while you still avoid implementing losers.

You could still use a high threshold / Bayesian score for documenting your learnings, because for learning you want only true positives, so you get an accurate picture of what drives your customers and revenue. For example, only document a test as a learning when the Bayesian score is >95%.
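As a minimal sketch of this two-threshold rule, the snippet below uses a Beta-Binomial model with uniform priors and Monte Carlo sampling to estimate the Bayesian probability that the variant beats the control. The visitor and conversion counts are made up, and this illustrates the decision rule, not any specific testing tool's calculation.

```python
# Minimal sketch (Beta-Binomial, uniform priors) of the two-threshold rule:
# implement when P(variant > control) exceeds 80%, document as a learning only
# when it exceeds 95%. Test data is made up.
import random

def prob_variant_beats_control(conv_a, n_a, conv_b, n_b, draws=50_000):
    """Monte Carlo estimate of P(conversion rate B > conversion rate A)."""
    wins = 0
    for _ in range(draws):
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

p = prob_variant_beats_control(conv_a=1500, n_a=50_000, conv_b=1570, n_b=50_000)
print(f"P(variant beats control) = {p:.1%}")
print("implement the change:", p > 0.80)
print("document as a learning:", p > 0.95)
```

With these made-up numbers the probability lands at roughly 90%, so the change would be implemented for growth but not yet documented as a learning.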

Stop looking at business cases for every A/B test

The ROI of a single A/B test is almost impossible to calculate, let alone the sum of business cases across all A/B tests. Therefore, stop looking at business cases and see experimentation as a way to make better decisions and mitigate risk. Run meta-analyses and test with a reasonably low threshold. Implement all winners with a Bayesian score of >80% to grow. Finally, consider documenting experiment learnings only at a Bayesian score of >95%.
