Data Science Approach For Effective A/B Testing

An Introduction to A/B Testing for Data Science

Be it a B2B company with a high volume of sales leads but poor conversion, or an e-commerce retailer with a high rate of shopping cart abandonment, conversion metrics can underperform for a variety of reasons, including unoptimized website design. Modern business enterprises want their online visitors to take action and engage with their website, increasing the overall conversion rate.

Optimizing a business website with new features can help conversion; however, it is difficult to predict beforehand whether a change will boost customer conversions or increase the customer bounce rate.

With an average ROI of over 223%, conversion rate optimization (CRO) is being adopted by companies to improve conversions, and 55.5% of them plan to increase their CRO budgets.

A/B testing has emerged as the most popular form of CRO, with 58% of companies already using this form of testing and another 35% planning to adopt it in the near future.

So, what is A/B testing, and why is it the preferred form of testing in digital marketing? Let's also see how data science can be used to perform it.

What is A/B Testing?

As an effective web analytics technique, A/B testing (also known as split testing) splits web traffic between the existing version of a website (A) and a new or modified version (B), and compares the metrics between the two. In simple terms, it is the practice of displaying two variants of the same website (or webpage) to two equally distributed sets of website visitors and measuring which variant drives more customer conversions. The variant that drives more conversions is the one used to optimize the website for increased returns.

Image Source: https://www.dataquest.io/wp-content/uploads/2018/10/ab-testing.png

A/B testing is very effective in digital marketing because website marketers can base their decisions about website optimization on actual data rather than relying purely on their own instincts. Website changes can be tied to specific short-term goals (for example, the click-through rate of a website button) or long-term goals (for example, conversion steps). At the same time, A/B testing can be used to avoid drastic changes to the website that could harm user engagement.

Use of Landing Page for Conversions

In digital marketing, the landing page for incoming traffic is a valuable page that can improve (or reduce) conversions. 52% of organizations perform A/B testing on their landing pages in order to improve conversions. Here are some landing page-related industry stats on driving conversions:

Around 48% of landing pages contain multiple customer offers.

While the average number of form fields on a landing page is 11, reducing that number from 11 to 4 can improve conversions by 120%.

The optimal number of form fields on a landing page is 3.

Landing pages that do not ask for the visitor's age have higher conversions.

48% of digital marketers design and build a new landing page for every fresh marketing campaign.

Adding engaging videos to landing pages can improve conversions by 86%.

Companies with improved conversion rates on their websites have increased their A/B testing by 50% in order to drive more conversions. All these statistics show the importance of A/B testing in driving digital marketing decisions.

In the following sections, we will discuss how the principles of A/B testing can be grounded in data science techniques. Data science is effective at deriving significant, data-driven results from A/B testing that provide the desired outcome for your business.

Using Data Analytics in A/B Testing

Here are some recommended principles, from a data scientist's perspective, that can maximize the success of the A/B testing process:

Have multiple metrics but measure your testing success with one metric

A/B testing can be conducted for a variety of reasons, from redesigning landing pages to testing a headline or banner. Each of these requires monitoring multiple metrics: the conversion rate, economic metrics like average check and revenue, and behavioural metrics like average session duration, page views, and feature use.

Defining the goal of your A/B test on the basis of many metrics increases the chances of false positives. Before starting the test, define one measurable success metric that will determine the outcome of the test: for example, the conversion rate of a landing page with a product video compared to the same landing page with a product image.
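As an illustration, here is a minimal Python sketch of evaluating that single success metric with a two-proportion z-test from statsmodels; the visitor and conversion counts are hypothetical placeholders, not real data:

from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: variant A (product image) vs. variant B (product video)
conversions = [210, 252]  # conversions per variant
visitors = [4000, 4000]   # visitors per variant

rate_a, rate_b = (c / n for c, n in zip(conversions, visitors))
print(f"Conversion rate A: {rate_a:.2%}, B: {rate_b:.2%}")

# Two-sided z-test for a difference between the two conversion rates
stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

A single pre-registered metric like this keeps the test's verdict unambiguous; any other metrics you monitor remain diagnostic rather than decisive.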

Try to achieve statistical significance or confidence

Aim to achieve statistical significance (and confidence) in your A/B testing; the significance level you choose also helps determine the required sample size. As a data analyst, set your significance level to a small positive number (typically 0.05), which means accepting a 5% probability of detecting a performance difference between the two variants when none actually exists.
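To make this concrete, here is a sketch of estimating the required sample size before the test using statsmodels' power analysis; the baseline conversion rate and the minimum lift worth detecting are assumptions you would replace with your own numbers:

from statsmodels.stats.power import zt_ind_solve_power
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.05  # assumed current conversion rate (5%)
target = 0.06    # smallest improved rate worth detecting (a 1-point lift)
effect = proportion_effectsize(target, baseline)  # Cohen's h

# Visitors needed per variant at a 0.05 significance level and 80% power
n_per_variant = zt_ind_solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"Visitors needed per variant: {round(n_per_variant)}")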

You can achieve this by running the A/B test to completion rather than interrupting it early when the indicators lean towards one side (A or B). Data dredging (or p-hacking), the practice of basing decisions on false positives, should be avoided after the A/B test.

Additionally, confidence intervals can provide better evidence of a significant change than a fixation on p-values.
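A sketch of that idea, reusing the hypothetical counts from above: a 95% confidence interval for the difference in conversion rates under the normal approximation.

import math

conv_a, n_a = 210, 4000
conv_b, n_b = 252, 4000
p_a, p_b = conv_a / n_a, conv_b / n_b

diff = p_b - p_a
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
z = 1.96  # critical value for a two-sided 95% interval

low, high = diff - z * se, diff + z * se
print(f"Lift: {diff:.2%} (95% CI: {low:.2%} to {high:.2%})")

If the interval excludes zero, the change is plausibly real; its width also tells you how precisely the lift has been measured, which a bare p-value does not.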

Randomize your sample split

How do you split your sample into the correct groups for successful A/B testing results? While most sample splits are intended to be random, they can end up correlated with factors such as age, gender, or location.

For successful A/B testing, both groups must be homogeneous. Data science techniques can detect and prevent testing groups that are skewed by such factors. Among the popular methods, verifying that the intraclass correlation coefficient (ICC) is close to 0 is a proven strategy for an effective split.

Besides these factors, the split ratio is also important when forming your testing groups; a 50:50 split is the most popular and effective choice for A/B testing.
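One common way to implement such a split is to hash a stable user ID, which gives assignments that are effectively random with respect to attributes like age, gender, or location, and shows a returning visitor the same variant every time. A minimal sketch (the salt and user IDs are illustrative):

import hashlib

def assign_variant(user_id: str, salt: str = "landing-page-test-1") -> str:
    """Deterministically map a user ID to variant 'A' or 'B'."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Quick balance check on simulated traffic: expect roughly 5000 / 5000
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"user-{i}")] += 1
print(counts)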

Managing hidden assumptions

While A/B testing has provided great results, it ultimately relies on certain hidden assumptions that may (or may not) hold in the real world. Some of these are:

Website visitors behave completely independently of one another (which is usually a reasonable assumption).

Even with accurate sample sizes, the probability of conversion is assumed to be identical for all visitors (which is not always the case in the real world), as conversion depends on various factors like user motivation and engagement, marketing funnels, and referral method.

Due to these assumptions, basing your business decisions on the results of a single A/B test (or a couple of them) can be inaccurate. Running many smaller A/B tests without making major website changes can be a more successful strategy.

Avoid the common pitfalls

Besides the above principles, A/B testing also comes with some common pitfalls that you must avoid in order to generate successful results. These include:

Do not go for A/B testing if you only have a few customers.

Do not stop an A/B test before the predetermined sample size of visitors is reached, or at the first sign of statistical significance (see the simulation after this list).

Do not place all your bets on the winning variation, even if it shows a considerable performance difference.

Stick to two variants in each A/B test; with more than two variants, a statistical effect is harder to detect because each variant receives a smaller sample.

Avoid data noise by including only those online users who will be impacted by your change.
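The stopping-rule pitfall above is worth seeing in numbers. The following simulation sketch (with assumed traffic parameters) checks significance after every batch of visitors and stops at the first p < 0.05, even though both variants convert at exactly the same rate; the false-positive rate it reports lands well above the nominal 5%:

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)
true_rate = 0.05            # both variants share the same conversion rate
batch, n_batches = 500, 20  # peek after every 500 visitors per variant

false_positives = 0
n_experiments = 1000
for _ in range(n_experiments):
    conv = np.zeros(2)
    nobs = np.zeros(2)
    for _ in range(n_batches):
        conv += rng.binomial(batch, true_rate, size=2)
        nobs += batch
        _, p = proportions_ztest(count=conv, nobs=nobs)
        if p < 0.05:  # stopping at the first "significant" peek
            false_positives += 1
            break

print(f"False-positive rate with peeking: {false_positives / n_experiments:.1%}")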

Conclusion

While the importance of conversion rate optimization through website changes cannot be overstated, CRO methods like A/B testing are designed for long-term optimization and must be based on the correct sample size and other parameters to reach statistical significance. Organizations with a consistent, long-term approach to executing A/B tests are the ones that derive the most benefit from this testing method.

And that summarizes how data science can make a major impact on how A/B testing should be implemented. If you have previously executed A/B testing (or are planning to), let us know if you agree with the points we have listed in this article, and tell us in the comments if we have missed anything. You can also check out some of the Data Science and Digital Marketing training courses that we offer.
