Test new email marketing methods and be your own benchmark

There was a time when marketing was purely art. The Don Drapers of the world felt that an angle, a message, a concept would work, and then they went and executed – the “genius artist” was at work.

Of course, this isn’t entirely true. It’s a romantic idea, though, especially for those of us in the current marketing world, where we’re inundated with data, spreadsheets and KPIs all meant to tell us what works and what doesn’t. Numbers without context, though, are just numbers, and worse, they can be misleading. That’s why marketers need to systematically test new email marketing methods to give these numbers some context. And testing, when done right, is just an implementation of the Scientific Method.

The scientific method, of course, is what you learned back in middle school science class:

  1. Ask a question
  2. Form a hypothesis
  3. Test the hypothesis
  4. Analyze the results

Easy enough, right? So what do marketers typically test?

Test new email marketing methods

The insights shared in the above graphic are interesting – nearly everyone tests subject lines and creative, while far fewer marketers test mobile optimizations and “friendly froms.” Even fewer roll everything together into the much more powerful multivariate tests. But what stands out to me isn’t which tests are being run – everything above has a fairly straightforward execution, save multivariate testing – but how differently marketers need to evaluate each test. Evaluating a creative test is wildly different from evaluating a frequency test, yet given the relatively random breakdown of tests being run, my guess is that most marketers aren’t changing their evaluation methods per test.

As an example, let’s break down a test that one of Experian’s clients recently ran. We can group it by the Scientific Method to highlight each step along the way:

  1. Ask a question

A leading retail brand wondered whether there was a place for editorial and content marketing within its email program. Would adding a weekly newsletter improve performance?

  2. Form a hypothesis

Based on industry best practices and research, the brand hypothesized that a newsletter would improve customer value. But this research, while valuable, isn’t enough – the brand needed to see if these best practices were applicable to their unique audience.

A note: the goal behind experimentation is not necessarily to be proven right. Even if the result is negligible – or negative – the act of testing is still valuable. Testing helps narrow down possible techniques as much as find new successful strategies.

  3. Test the hypothesis

The brand pulled two distinct customer groups from its list. One group received the normal promotional mailings, while the other received additional editorial content.

Part of this test’s effectiveness relied on the way the brand chose its sample groups. It’s an intricate process, but at a simple level, both randomly-chosen groups were statistically identical before the experiment. This helped ensure that the resulting change in metrics was due to the experiment and not some pre-existing difference between each group.
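To make the idea concrete, here is a minimal sketch of that kind of group selection. The customer list, the `baseline_opens` field, and the sanity check are all hypothetical illustrations, not the brand’s actual process – real programs would use a proper statistical test (and more than one baseline metric) to confirm the groups match before mailing.

```python
import random
import statistics

def split_test_groups(customers, seed=42):
    """Randomly assign customers to a control and a treatment group.

    `customers` is a list of dicts carrying a pre-experiment metric
    (here, a hypothetical 'baseline_opens' count).
    """
    rng = random.Random(seed)
    shuffled = customers[:]
    rng.shuffle(shuffled)          # random assignment, not self-selection
    mid = len(shuffled) // 2
    control, treatment = shuffled[:mid], shuffled[mid:]

    # Sanity check: the groups should look statistically identical
    # *before* the experiment. Comparing baseline means is a rough proxy.
    mean_c = statistics.mean(c["baseline_opens"] for c in control)
    mean_t = statistics.mean(t["baseline_opens"] for t in treatment)
    return control, treatment, mean_c, mean_t

# Hypothetical customer list with a synthetic baseline metric
customers = [{"id": i, "baseline_opens": random.Random(i).randint(0, 10)}
             for i in range(1000)]
control, treatment, mean_c, mean_t = split_test_groups(customers)
```

If the baseline means (or distributions) differ noticeably, the split should be redrawn before any mail goes out – otherwise a pre-existing difference can masquerade as a test result.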

  4. Analyze the results

This is the fun part! Analyzing the results can be difficult – what are our measures of a successful test? What data do we need? What are we truly trying to answer? What do we do if the test turns out to be inconclusive?

Most of these questions should have been considered during the hypothesis phase of the process, but for the purposes of this example, we will consider more than one evaluation method:

  a. Measure the success of the editorial mailings
  b. Measure the entire group over time, across all mailings

The appropriate method should be obvious (it’s (b)!). After all, we don’t really care whether the editorial content itself was opened or clicked, but whether those who received it interacted more favorably with the brand. However, I’ve run into countless situations in which (a) is either the primary or the only method of analysis.

These two methods are easy to categorize: one evaluates campaigns, while the other evaluates customers. Some tests (notably the email marketer’s favorite – subject lines!) are appropriate to analyze by campaign. But others, such as time of day, frequency, or even a change to a welcome series, are much more important to evaluate from a customer perspective. For example, a change in click rates within a new welcome series doesn’t much matter if the new series fosters improved engagement three months after deployment! Ultimately, to understand how marketing efforts drive real change, we want to know what our changes did for customer health.
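The difference between the two views can be sketched in a few lines. The engagement numbers below are invented for illustration only – each value stands for one customer’s total interactions across all mailings during the measurement window, which is the customer-level view (method b), as opposed to tallying opens and clicks on the newsletter campaign alone (method a).

```python
from statistics import mean

# Hypothetical post-test engagement logs: one number per customer,
# counting interactions with *every* mailing received in the window.
control_engagement = [3, 1, 0, 4, 2, 1, 0, 3]    # promos only
treatment_engagement = [4, 2, 1, 5, 3, 2, 1, 4]  # promos + newsletter

# Customer-level lift: how much more did the treatment group engage
# with the brand overall, regardless of which mailing drove it?
lift = mean(treatment_engagement) - mean(control_engagement)
```

A campaign-level analysis of the same test would only report the newsletter’s own open and click rates, and would miss any halo effect the newsletter had on the promotional mailings.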

And in the case of the test described above – things worked out pretty well!

Results from testing new email marketing methods

Data proliferation and marketing technology are making it easier than ever for brands to test and analyze creative new ideas. Marketers need to start thinking more like scientists if they hope to make smarter decisions in this new world.

Join me next Thursday for our webinar, Be your own benchmark: Testing strategies for marketers. Register today to join live or receive a recorded playback.