If you’ve spent any time running marketing campaigns, you’ll have come across Split Testing at one stage or another. Split testing is a process in which two variations of the same element on a webpage are tested against each other to find the best-performing version.
Split testing is a relatively simple and straightforward process. In the image below, we’re testing two different variations in the color of a CTA button.
This simple form of split testing is often referred to as AB Testing, since we’re only testing two variations of the same element. As in our example, that element can be the color of a button, a headline, or an image. Pretty much any element on a webpage can be the subject of a split test.
Once you’ve decided on the elements to test and their different versions, it’s simply a matter of sending an equal amount of traffic to the two variations and picking a winner based on the conversion stats.
An enhancement on simple AB testing is Multivariate Testing. As the name implies, multivariate testing is a technique for testing multiple variables at the same time. The image below will serve to illustrate this.
In this example, our multivariate test involves two elements on the page: the top image and the color of the call-to-action button. We have two versions of the image and two versions of the call-to-action button, giving us a total number of variations in our multivariate test of four.
Number of image variations x number of CTA button variations = total number of variations
Increase the number of variations of any of the elements, and you increase the total number of variations in the multivariate test.
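That multiplication rule can be sketched in a few lines of Python. This is a minimal illustration, not part of any testing tool; the element names and variation counts are hypothetical, mirroring the two-image, two-button example above.

```python
from math import prod

# Hypothetical test setup: each entry is an element under test
# and how many variations of it we want to try.
element_variations = {"hero_image": 2, "cta_button_color": 2}

# Total combinations = product of every element's variation count.
total_variations = prod(element_variations.values())
print(total_variations)  # 2 x 2 = 4 combinations to test
```

Adding a third button color would change `"cta_button_color"` to 3 and push the total to 6, which is exactly why variation counts grow so quickly in multivariate tests.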
Multivariate testing has some significant benefits over AB testing. Properly structured multivariate testing reduces the number of tests you need to run to find the best-performing combination of your landing page’s elements. With multivariate testing, you get the winning combination of multiple elements in the shortest possible time and with the fewest tests.
Multivariate testing does have some downsides you need to be aware of before you conduct these tests. The most significant of these is the volume of traffic you need to get the data you need to decide on the winning combination.
Traffic in AB testing is a simple 50/50 split. In multivariate testing, the number of variations tested against each other means the testing traffic will split into much smaller segments: quarters, sixths, eighths, and so on, depending on the number of variations.
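To see how quickly those segments shrink, here is a small sketch of an even traffic split. The visitor numbers are made up for illustration; real tests would use your own traffic figures.

```python
def visitors_per_variation(total_variations, daily_visitors):
    """Visitors each variation receives per day under an even split."""
    return daily_visitors / total_variations

# Hypothetical page with 1,000 visitors a day:
print(visitors_per_variation(2, 1000))  # AB test: 500.0 per variation
print(visitors_per_variation(8, 1000))  # 8-way multivariate: 125.0 per variation
```

With eight variations, each one sees only an eighth of your traffic, so reaching a meaningful sample size takes far longer on a low-traffic page.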
The volume of available traffic for your testing will, to a great extent, determine the type of test you should be running. If the page you want to test doesn’t get enough traffic, a simple AB test will be the best way to go. How much traffic is enough? That comes down to your own metrics, based on the conversion sample sizes your tests need.
Another downside of multivariate testing is the “noise” that comes into play in your tests if one of the elements you’re testing doesn’t make enough of a difference to register in your testing statistics. Going back to our example, if the image variations don’t significantly affect your conversion rates, but the variations of the call-to-action button do, it would probably be better to stick to a simple AB test.
If your page does have enough traffic to support multivariate testing, though, the tactic is highly recommended over the AB test solution. You’ll arrive at the overall winning combination much sooner and with fewer tests.
If your conversions rely on the combination of several variables, then multivariate testing is the way to go. Common multivariate tests include testing visual and textual elements, the color and text of call-to-action buttons, and even the number of fields in a registration form.
If you need some help with your multivariate testing, you’re in the right place! Get in touch with us and get your testing strategy into top gear.