TESTING, TESTING, 1 2 3 4

by Marc Michaels on 23/10/2012

During my 26 years in Government communications, one of my favourite conversations with a client went as follows:

ME: Based on my experience of similar activity and the media consumption of the people we’re after, I’d like to ‘borrow’ a tiny part of your massive TV budget to test some inserts in certain magazine titles.

CLIENT: Will it work?

ME: Well, based on previous campaigns we should see a fairly good response.

Some titles may perform better than others but that’s very much part of the test.

CLIENT: Yes, but do you think it will work?

ME: I can’t guarantee that, but it is a small diversion of budget, and in campaign X, which was similar, the two inserts alone out-pulled the whole of the press schedule at the end.

I think it’s worth a small test.

CLIENT: Yes, but will it work?

ME: What is it about the word TEST you don’t really understand?

My apologies for mentioning inserts in a door-drop column, but the principle remains the same.

If we don’t test, we’ll never know whether something could be better – we might have a hunch, but we can’t prove it.

Research will often tell you WHY people do things, but it generally won’t tell you WHAT people really do. Intention or even claimed behaviour will often give a false reading.

Indeed, a meta-analysis of academic studies by Webb and Sheeran (2006) shows that on average only 55% of people who intended to do something actually did it, and that was often on serious medical or safety issues!

Academic findings aside, creative research on a recent door drop test I was involved in had people claiming that they preferred the more straightforward, information-based leaflet route and that the more ‘creative’ die-cut was a bit gimmicky. However, having run the actual test, which one did they respond to more? You guessed it: the die-cut. Not testing encourages staleness and sameness.

Optimisation is fine, but eventually you need to shift the paradigm and see if you can beat the control (whether that is media channel or creative treatment) and move things to a higher level.

Often a big barrier for Government was that failure was not an option. However, testing implies failure is a possibility. There is a RISK that it might not work. This is not a bad thing.

If you test four things, three fail and one works, you should whoop for joy and roll out the one that worked. However, looking at the lack of testing out there, it seems marketers are more prone to agonise about the three that failed and how bad that might look on a report somewhere.

However, real testing isn’t about changing everything, lock, stock and barrel, at once. It is often about trying to ascertain which elements might be changed to yield a better return.

Marketers can have campaigns where the previous block (particularly the creative) is overwritten by the new one and little or nothing survives. This isn’t testing.

Too many factors are changing to yield any useful data. It’s also a high-risk strategy, as you could be throwing out the best way of doing things in favour of the ‘new and shiny’ thing.

Testing should be properly planned and structured

Real testing implies having a CONTROL (usually the best-performing pack, channel, title, mix, journey etc so far), and then trying something else out on a proportion of people large enough to yield a statistically significant result.

This implies of course that you also need mechanisms in place to measure the effect of the test.

You need to plan ahead, so that after the campaign has finished you have a clear idea of how it performed.
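To make that concrete, here is a minimal sketch (in Python, with purely illustrative numbers) of the sort of calculation that tells you whether a test cell has genuinely beaten the control or whether the difference could easily be chance: a standard two-proportion z-test on response rates. The cell sizes and response counts are assumptions for illustration, not figures from any real campaign.

```python
import math

def two_proportion_z_test(control_resp, control_n, test_resp, test_n):
    """Compare the response rates of a control cell and a test cell.

    Returns the z statistic and a two-sided p-value (normal approximation).
    A p-value below ~0.05 is the usual shorthand for 'statistically significant'.
    """
    p1 = control_resp / control_n
    p2 = test_resp / test_n
    pooled = (control_resp + test_resp) / (control_n + test_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / test_n))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Illustrative numbers only: 10,000 control doors at a 1.0% response rate
# versus 10,000 test doors at 1.3%.
z, p = two_proportion_z_test(100, 10_000, 130, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 1.99, p = 0.047
```

The design choice is deliberate: only one thing differs between the control and the test cell, so any significant difference can be attributed to that one change.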

What can you test?

Bob Stone said, “Test the big things.”

Russell Davies said, “Test (and indeed do) lots of small things.”

Richard Benson said, “There are two answers to every problem. Answer no. 1: Test everything. Answer no. 2: Refer to answer no. 1.”

“No-one in that audience would read a Door Drop – would they?”

“DRTV is dead in the water surely?”

“It’s all about social media; why not put all the money there?”

“People won’t send back a coupon, isn’t that old fashioned?”

“Radio – can it work as a response driver?”

“So what’s PR actually achieving?”

“What happens if we cut the TV budget?”

“Does it work better if we vary the message by audience segment?”

“How does that combination of media work – are we seeing an uplift?”

“Inserts are rubbish; they just fall out on the floor – don’t they?”

“Is face to face appropriate for this topic?”

“That’ll never work, or would it?”

These are all reasonable questions that marketers might ask.

How do we know the answers unless we test?

We can make educated guesses, we can look at previous experience of similar activities, and we can even do some research and find out what people say they might do. But until it becomes a real-world situation, we just can’t say for sure (or at least within a defined confidence interval).
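As a rough illustration of what “within a defined confidence interval” means in practice, here is a small Python sketch that puts a 95% confidence interval around an observed door drop response rate using the simple normal approximation. The figures are made up for illustration.

```python
import math

def response_rate_ci(responses, mailed, z=1.96):
    """95% confidence interval for a response rate (normal approximation).

    A rough guide when the response count isn't tiny; illustrative only.
    """
    p = responses / mailed
    half_width = z * math.sqrt(p * (1 - p) / mailed)
    return p - half_width, p + half_width

# Illustrative: 120 responses from 8,000 doors.
low, high = response_rate_ci(120, 8_000)
print(f"observed 1.50%, plausible range {low:.2%} to {high:.2%}")  # ~1.23% to ~1.77%
```

In other words, a single result is an estimate with a margin of error, not a precise truth, which is exactly why we can’t say for sure until the test has been run at a sensible scale.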

As the Marketing GAP report shows each year (and the most recent issue is no different), we as marketers live in a bit of a bubble, or at least in a slightly different world from many of the people we are communicating with.

We tend to over-estimate the power or importance of the things we engage with (e.g. social media) and to underestimate things like print media, prize draws, money-off coupons etc.

So perhaps we should not make too many ‘educated’ guesses? Perhaps we should test?

If, for example, you haven’t been in door drops before or have been out of that media for a while chasing the shiny digital butterflies, then perhaps a ‘back to basics’ approach which identifies the logic of testing in the first place would be best?

Test matrices can be simple or complex, looking at, say:

  • The relative efficiency and effectiveness of the different door drop opportunities – Royal Mail, Newshare, Solus Teams
  • The impact of different creative treatments
  • The impact of applying different targeting mechanisms (e.g. profiling)
  • The use of different response mechanics, e.g. their relative prominence
  • The impact of media synergy – e.g. door drop plus radio versus door drop on its own

Actually – you can test anything that your hunches, research, data sources, or feedback from customers (through websites, contact centres, complaints or research) suggest is likely to have a large impact if it were changed.

It is therefore worth putting a fair degree of effort behind finding out in a structured manner.
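Part of that structure is working out, before anything is booked, how many doors each cell of the matrix needs if the result is to mean anything. The sketch below (Python, normal approximation, illustrative response rates that are assumptions rather than benchmarks) estimates the doors required per cell to detect a given uplift at the usual 5% significance level and 80% power; the real answer depends entirely on your own baseline response rate and the size of uplift you care about.

```python
import math

def doors_per_cell(p_control, p_test, z_alpha=1.96, z_beta=0.84):
    """Rough doors needed in each cell to detect an uplift from p_control to p_test.

    Standard two-proportion sample-size formula: two-sided 5% significance
    (z_alpha = 1.96) and 80% power (z_beta = 0.84), normal approximation.
    """
    p_bar = (p_control + p_test) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_test * (1 - p_test))) ** 2
    return math.ceil(numerator / (p_test - p_control) ** 2)

# Illustrative: to reliably see a lift from a 1.0% to a 1.3% response rate,
# you need roughly 20,000 doors in the control cell and 20,000 in the test cell.
print(doors_per_cell(0.010, 0.013))  # ~19,800
```

The obvious implication: the more cells you add to a matrix, the more doors (and budget) each variant needs, which is one more reason not to change everything at once.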

And, in as small-scale, low-exposure, low-risk a way as you can manage, test any of those niggling new things that everyone (who’s not yet on the latest bandwagon) would like to play with, but isn’t sure would actually make a contribution.

One of the great things about testing is that it limits your exposure and allows you to make really informed decisions with relatively little financial or other risk – with door drops, for example, you can single out a small, defined geographical area.

So why waste time arguing about which media or message will work best when the people you are communicating with can decide for you?

Go on; test out if what I’m saying is true.

Why not test it for yourself?
