Why A/B Testing Beats “Best Practices”
December 11th, 2011 at 8:29 am by Ereika
Getting the best results from your online marketing campaign is anything but straightforward. While online marketing best practices exist for a reason, they are not infallible. What makes online marketing best practices the “best” is not that they are a set of rigid, arbitrary rules guaranteed to get superior results.
What makes these practices the “best” is the fact that they’ve been proven, time and again, to be successful based on research and testing across a broad online market.
Are Best Practices Really the Best for You?
The idea of relying solely on best practices sounds great in theory, but it breaks down in practice because your company is not a broad online market. Your company occupies a specific niche, caters to specific customers and clients, and what you consider to be a successful online marketing campaign will vary accordingly.
Because of this, there are specific circumstances where “best practices” may not be as effective as other ideas that take into account your specific online marketing goals. For example, there is a definite difference between maximizing CTR in email campaigns, and maximizing CTR in email campaigns targeted to young adult mothers who are expecting a new baby within the next 3 to 5 months.
The first question will yield some general ideas for maximizing clicks, but the second is far more detailed: there is more data to consider that will directly affect how you design your emails, right down to the visual design of the content and the CTA you include in the body copy.
While email marketing best practices may advise against using the word “free” in a subject line, you may find that for this demographic the phrase “free samples” actually earns a higher open rate. But you won’t know unless you test.
What Makes A/B Testing So Effective?
One of the best things about A/B testing is that you can put your findings into practice not only in the original context, but in additional avenues as well. An A/B test that results in a lift to conversion rates in one email campaign is worth testing in another email campaign if the two are similar. You can also infer some interesting insights that can be used elsewhere in your online marketing. Consider these scenarios:
- Email body copy A/B testing reveals that users click-through more often when there is a product description and a positive review, versus just a product description. Can you garner more clicks on your product pages by moving positive reviews above the fold?
- Landing page A/B testing reveals that users complete the purchase flow more often when there are images on the landing page that illustrate what they will receive. Can you get more clicks in your email campaigns by including multiple product images versus just one?
- Category page A/B testing reveals that users search by price and brand name more often than color or size. Could you get more click-throughs on the homepage by emphasizing the most popular brands and sale pricing?
Each of these looks like a logical assumption based on what you’ve learned from testing elsewhere. Of course, you won’t know for certain unless you actually run the tests. But the original A/B test is what gives you the insight to test further, in ways that can improve your overall metrics in a regular, predictable fashion.
What other marketing technique can deliver that kind of efficiency?
Of course, there are some areas where it will be impossible to test, either due to coding difficulties or other issues that make the test cost-prohibitive. In those instances, it is best to work at optimizing other areas of the flow as much as possible in order to maximize returns.
Where to Start with A/B Testing
There is really only one place to start when it comes to A/B testing – look at the data. That data can be from your website’s landing pages, email campaigns, PPC ads or some other marketing initiative, but you need to have a reliable starting point. So if you don’t already have analytics in place, that’s step one.
But once you have the data, start your A/B testing with a specific question in mind, one the test is designed to answer. Some examples:
- Will subscribers open emails more often if they are sent from the company or from a person within the company?
- Will I get more downloads of my content if I only ask for an email address versus an email address and a first and last name?
- Can I retain users who click the unsubscribe link if I offer another email subscription with fewer emails per month?
- Will users click more often on a red button or a blue button?
As you can see, these are very specific tests based around design, content, and information. As you gain more data about how your leads and customers interact through various marketing channels, you can conduct additional testing with variables that segment your users based on various characteristics.
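Answering a question like “red button or blue button?” ultimately comes down to comparing two click rates and deciding whether the difference is real or just noise. As a minimal sketch, using entirely hypothetical click counts, here is one common way to make that call: a two-proportion z-test, which tells you how likely it is that the gap between the two variants arose by chance.

```python
import math

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Compare click rates of variant A and variant B.

    Returns (z, p_value): z is how many standard errors apart the two
    rates are; a small p_value (conventionally < 0.05) suggests the
    difference is unlikely to be random noise.
    """
    p_a = clicks_a / n_a
    p_b = clicks_b / n_b
    # Pooled click rate under the null hypothesis of "no difference"
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: red button (A) vs. blue button (B),
# each shown to 2,400 visitors
z, p = two_proportion_z_test(clicks_a=120, n_a=2400, clicks_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In this made-up example the blue button’s 6.5% click rate beats the red button’s 5.0% with a p-value under 0.05, so you would have reasonable grounds to ship blue. With smaller samples the same gap often isn’t significant, which is why deciding your sample size before stopping the test matters.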
Once you start with A/B testing and see the results for yourself, the possibilities for further improvements will be limited only by the data you collect and the changes you can feasibly make in your online marketing initiatives.
Ask us: Is there any particular aspect of A/B testing that you find challenging or complicated?