The hidden impact of door drops

by Neal Dodd on 20/02/2019

A fairly typical approach to door drops, and to advertising in general, is to run a piece of activity with a response mechanism, collect together all of the responses, and make a judgement about how each channel has performed.

Does that tell you everything you need to know about how the channel has performed? I don’t believe it does.

I wrote recently about the flaws in trying to split response from a multi-media campaign into channel-specific boxes, but the problems with simply gathering together responses apply to pretty much any business.

Why?

It’s quite likely that whatever introduces your business to a new customer is not where the journey ends, regardless of whether you’ve created a unique URL, phone number or discount code.

Customers simply do not behave in such a linear fashion. Whilst it’s understandable that we should want to measure the performance of media channels, we need to consider more than just the responses that were allocated to a specific channel if we’re to get a good idea of whether our advertising is working well.

One way to do that is to look at sales data as a whole in the areas where the campaign took place. Here are two recent examples:

  1. Home delivery food business

With no defined target audience, the door drop brief for this business was simply to cover 100% of the households in the selected towns, the area to which they deliver cooked food.

They wished to understand how well leaflet distribution was working for them, so we created a simple test that split the towns down the middle: half received a leaflet, half didn’t, with no bias in the selections (a sketch of this kind of split follows the results). Other media (typically radio and OOH) remained constant.

Here are the results:

As you can see, there is clear growth in sales in the areas where distribution took place. At the end of the activity, sales return to similar levels for both groups, yet the areas covered by the door drop remain higher than those that were not. This was true of all areas tested.

Direct response showed some activity, but the bulk of purchases came through the app.
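For illustration, here is a minimal sketch of the kind of unbiased split used in this test, assuming you hold a list of the delivery towns’ postcode sectors; the sector codes below are hypothetical placeholders, not the client’s actual areas.

```python
import random

# Hypothetical postcode sectors covering the delivery towns.
sectors = ["AB1 2", "AB1 3", "AB2 1", "AB2 4", "AB3 5", "AB3 7"]

random.seed(42)            # fixed seed so the split can be reproduced
random.shuffle(sectors)

midpoint = len(sectors) // 2
leaflet_sectors = sectors[:midpoint]   # half receive the door drop
control_sectors = sectors[midpoint:]   # half receive nothing; other media unchanged

print("Leaflet:", sorted(leaflet_sectors))
print("Control:", sorted(control_sectors))
```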

  2. Online retailer

Following household profiling activity, the brief for this client was to identify postcode sectors in the London area with a high density of the client’s target audience.

We created two proposals, equal in overall target penetration and equal in households per postcode area. One received the door drop; the other acted as a control for comparison and received nothing.

Here are the results:

The item being promoted was Christmas-related, and in the week of the drop the retailer saw 43.33% more sales from the areas receiving a door drop than from the areas that did not, resulting in over £10k of additional revenue. The following two weeks posted over 25% additional sales and a further £5k per week.

Those differences add up to 281 more orders, with no difference in sectors other than leaflet distribution during w/c 26th November.

Now, let’s look at direct response:

89 orders were recorded as the result of the door drop item, with customers using a code from the leaflet when purchasing.

The big question we’re left with is: what prompted the other 192 additional customers to purchase? Where have they come from, and why are these postcode sectors outperforming the control areas so dramatically?

Of course, it is not quite as simple as allocating all 192 to the door drop, and there will be variances within any data you compare, but don’t you think it’s pretty telling that the areas receiving a leaflet that month produced such an increased response? Particularly as, in both cases, we see sales return to similar trends several weeks after distribution.
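To make that arithmetic concrete, here is a minimal sketch of how the uplift might be split out, assuming you can total orders for the test and control sectors over the same weeks. The 281 extra orders and 89 coded orders come from the example above; the absolute per-area order counts are hypothetical placeholders.

```python
def door_drop_uplift(test_orders: int, control_orders: int, coded_orders: int) -> dict:
    """Split the uplift in leafleted areas into coded and unattributed orders."""
    uplift = test_orders - control_orders        # extra orders vs the control areas
    return {
        "uplift_orders": uplift,
        "coded_orders": coded_orders,            # orders redeeming the leaflet code
        "unattributed_orders": uplift - coded_orders,
        "uplift_pct": round(100 * uplift / control_orders, 1) if control_orders else None,
    }

# Hypothetical per-area totals chosen so the uplift matches the 281 orders above.
print(door_drop_uplift(test_orders=1281, control_orders=1000, coded_orders=89))
# {'uplift_orders': 281, 'coded_orders': 89, 'unattributed_orders': 192, 'uplift_pct': 28.1}
```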

And this is before we even consider the other effects of door drop activity, including brand awareness.

So what does this mean?

My suggestion when reviewing door drop activity would be to consider how you might measure its hidden impact.

If you’re using door drops as a standalone channel, are you able to collect sales data so you can look at overall activity before, during and after the distribution? Are you able to plan some control areas into the distribution that won’t receive a leaflet? Are you set up to measure indirect response through web analytics, call volumes and so on?
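As a rough sketch of that before/during/after comparison, assuming you can export weekly sales totals per area, something like the following would surface the pattern seen in the examples above; the area names and figures are hypothetical.

```python
from statistics import mean

# Hypothetical weekly sales per area: (before, during, after) the distribution.
weekly_sales = {
    "leaflet_area": ([410, 395, 402], [565, 540], [435, 420]),
    "control_area": ([405, 400, 398], [401, 396], [404, 399]),
}

for area, (before, during, after) in weekly_sales.items():
    baseline = mean(before)
    print(
        f"{area}: baseline {baseline:.0f}/wk, "
        f"during drop {mean(during) - baseline:+.0f}/wk, "
        f"after drop {mean(after) - baseline:+.0f}/wk"
    )
```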

If you’re using the distribution as part of a wider media plan, could you ‘rest’ activity in one area and review sales data compared with areas that did receive the leaflet? Has there been a difference in the general response with one combination of media vs another?

How do the results of the activity compare with results against your business objectives?

There’s no silver bullet that will solve the challenge of attribution, but customer journeys are arguably more complex than ever, so our analysis of campaign response surely has to become a little more considered than simply counting direct responses.
