The Basics of Newsletter A/B Testing

Big Gains From Small Edits

In the world of email marketing, there is always room for improvement. Sometimes, the biggest improvement can come from the smallest of edits.

The most minor adjustments to your email template have the potential to make click-through rates (the percentage of recipients who click a link in the email) and open rates (the percentage of recipients who open and view an email rather than ignoring or deleting it) suddenly skyrocket, and in turn drive large amounts of sales, traffic, sign-ups, or conversions. However, the only way to truly learn from your template adjustments is to implement them one at a time, following classic scientific procedure. By isolating your adjustments, you can learn which factors improve your campaigns and which hurt them.
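
To make those two metrics concrete, here is a quick back-of-the-envelope calculation in Python. The counts are invented for illustration, and the click-through rate is figured against delivered emails, which is one common convention (some services divide by opens instead).

    # Illustrative calculation of the two rates described above.
    # The counts are made-up example figures, not real campaign data.
    delivered = 10_000   # emails successfully delivered
    opened = 2_300       # unique recipients who opened the email
    clicked = 410        # unique recipients who clicked a link

    open_rate = opened / delivered            # 2,300 / 10,000 = 23%
    click_through_rate = clicked / delivered  # 410 / 10,000 = 4.1%

    print(f"Open rate: {open_rate:.1%}")                      # Open rate: 23.0%
    print(f"Click-through rate: {click_through_rate:.1%}")    # Click-through rate: 4.1%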

A/B Split Testing for Email

The best approach to this kind of experimental improvement is commonly referred to as A/B Split Testing: a feature available within most email marketing services.

The actual process for Newsletter A/B Split Testing involves splitting a small, randomly selected subset of your mailing list subscribers into two groups. Each group then receives an email that is nearly identical, save for the single tweak that you are testing. For example, you might test different subject lines to see how that affects open rate. Once the results are measured, you send the “winning” variation to all the subscribers who didn’t receive one of the original test emails.
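
Sketched in code, the whole process looks roughly like this. It is a simplified illustration rather than any particular provider's feature: send() and measure_open_rate() are hypothetical stand-ins for your email service's API, and the 20% test fraction is just an example.

    import random

    # send() and measure_open_rate() are hypothetical stand-ins for whatever
    # your email marketing service actually exposes; they are defined here
    # only so the sketch is self-contained.
    def send(variant, recipients):
        print(f"Sending {variant!r} to {len(recipients)} subscribers")

    def measure_open_rate(recipients):
        return 0.0  # in reality, pulled from your provider's reporting

    def run_ab_split_test(subscribers, variant_a, variant_b, test_fraction=0.2):
        # Randomly pick a small subset of the list and split it into two
        # equal-sized test groups; everyone else waits for the winning email.
        shuffled = random.sample(subscribers, len(subscribers))
        test_size = int(len(shuffled) * test_fraction)
        group_a = shuffled[: test_size // 2]
        group_b = shuffled[test_size // 2 : test_size]
        remainder = shuffled[test_size:]

        # The two variants should be identical except for the one tweak under test.
        send(variant_a, group_a)
        send(variant_b, group_b)

        # Once results have come in, compare the metric being tested
        # (open rate here, since the example tweak is the subject line).
        if measure_open_rate(group_a) >= measure_open_rate(group_b):
            winner = variant_a
        else:
            winner = variant_b

        # The winning variation goes to everyone who wasn't in a test group.
        send(winner, remainder)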

To reiterate, the only difference between the two emails going to the two small test groups should be the single factor that you are testing.

An Example

Acme, Inc. would like to find out which color link attracts the most clicks. In the current version of their newsletter, the link is displayed in a big red font with no special decoration.

In order to find out if this is the optimal presentation for clicks, they set up an A/B test that includes two identical newsletters except for this big red link. Email #1 would include the standard big red link with no special decoration, while Email #2 would include a blue link with no special decoration. While it may be tempting to experiment with decorating the link or creating a special button, that should be saved for the next test in order to keep the color variable completely isolated. The reason for testing only one tweak at a time is so that there is no doubt about what caused any change in results. To continue improving click-through rates, further A/B tests could be run afterward.
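
As a rough sketch of what isolating the variable means in practice, Acme's two variants might be defined like this, with the link color as the only field that differs. The structure and field values here are invented for the example.

    # Hypothetical variant definitions for Acme's link-color test.
    # Every field is identical except the one being tested.
    base_newsletter = {
        "subject": "This Month at Acme",
        "body": "...",
        "link_text": "Shop the sale",
        "link_style": {"font-size": "24px", "text-decoration": "none"},
    }

    variant_a = {**base_newsletter, "link_color": "red"}   # the current design
    variant_b = {**base_newsletter, "link_color": "blue"}  # the single change under test

    # Because subject, copy, styling, and send time all match, any difference
    # in click-through rate can be attributed to the link color alone.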

But links are not the only thing that can be tested in newsletters. Other potential variables include:

  • Subject-Line: Test subject lines to increase open rates.
  • From-Line: Test which “from” address results in the highest open rate.
  • Landing Page: Find out which landing page to link to by seeing which one results in a higher conversion rate.
  • Time of Delivery: The time of day that email is sent could have a significant impact on open rates and recipient activity across all metrics.
  • Presentation of Calls to Action: Not getting enough clicks? Try testing the color and style of your buttons.
  • Just About Every Aspect of a Newsletter: Almost anything in a newsletter can be tested for improvement.

Remember: scientific testing is the only true path to improvement in your campaigns. While there are many preachers of best practices out there, what works for most lists may not work for yours. The only way to be sure is to test, test, test.