FEATURE 

You think it’s wonderful – but will it work?

What stops us testing: arrogance, ignorance or both? As with circulation modelling, everyone pays lip service to testing, but surprisingly few actually do it – odd, really, when you consider that an intelligent approach to testing is one of the characteristics that separates the best direct marketers from the rest. Drayton Bird looks at what lies behind our unwillingness to test.

By Drayton Bird

You will not find the answer in a conference room or in the depths of your own wisdom. Only a test will tell you. So why don’t people test? Many years ago, at a seminar, I was asked to define, in three words, what made a perfect client. Three words! I was stumped for a moment. But not long. I replied "willingness to test".

Why did I think this - and why do I still think it? Come to that, why are so few clients willing to test? I’ve hardly met one I would call perfect (and I’m sure that’s just as true of agencies) but every time I meet someone who is determined to find out what the market wants rather than what they themselves or their colleagues like, I am impressed.

For instance, I was hugely impressed recently when a young man working in financial services told me he had tested something I have constantly recommended to clients, but which hardly any will accept or even try.

That "something" is using a typeface that looks like a typewriter in your direct mail letters. The face in question is Courier. And few clients will try it, because somebody in authority has commissioned a designer, at great cost, to create a "corporate look" which governs every communication that is sent out.

The young man I refer to tested Courier against the sort of word processing faces that most people use in their direct mail letters and found it produced uplifts of 30-35%.

Now I have no idea how scientific his testing was. For all I know the statistical margin of error was too wide for that result to be repeated exactly. But what I do know is that the mere act of taking the trouble to find out should be applauded.

Three reasons – all stupid

To revert to my first question: why don't clients test more? I will tell you. They fail to test for one of three reasons – and sometimes all three.

* The first is that they think they know what will happen.
* The second is that they think there isn’t enough time.
* The third is that they don’t have enough money.

I will come back to the first in a moment, but as to the second and third, anybody in business will have noticed how often firms find the money and time to try again after getting it wrong. So why not get it right in the first place?

As to whether a client can predict the likely results of a mailing, let me quote one of the best-informed clients I ever met. His name is Axel Andersson and for many years he ran, and I believe owned, Linguaphone in the German market. In that excellent book, Million Dollar Mailings, which he co-wrote, he says he spent at least a hundred million dollars promoting home study courses in that country. For almost 50 years he asked himself before every test or split run which one would be the winner.

"I confess that my batting average in picking winners is not better than 60%; and remember I have done thousands of tests in just one field (home study) in one market (Germany). Still my batting average is no better than 60%. What would it be in a field outside home study? My answer: it would probably be so bad I wouldn’t even consider trying to judge insurance promotions, fundraising mailings or offers of collectibles for instance. Not to mention sweepstake mailings."

Experts don’t know: research misleads

Would that the average client were as modest as Mr Andersson. I have found that, over the 40-odd years I have been writing copy, my ability to pick a winner has been nothing to write home about – and the same applies to everyone I know.

Yet you would think that since I have worked in all kinds of marketing I would have a very good idea. But I don't – any more than Jim Kobs, the author of Profitable Direct Marketing, does. He tells how he once put together a panel session on testing for a conference. The audience had to guess which one of a series of split runs would be the winner.

There were eight tests – just eight. But, "when we finished not a single one of the 400 direct marketing pros in the audience had picked more than six of them. But somebody had correctly picked all eight winners, namely, the consumers who had voted by sending in their respective reply cards and coupons."

I once played the same trick on an audience of 300 of the best marketers in American Express, at a conference in Penang. There were only six tests. Only one person got all six right.

What about research?

Since we all know it is hard to predict what will happen, marketers have for many years relied on research instead. I will not say too much about this, except that many of my clients have done research, and I have found that it consistently comes out with the wrong findings.

The reasons are that nobody can tell you what they are going to do, and that what people like has very little to do with what will persuade them. My old boss, David Ogilvy, was a researcher with Gallup before he went into advertising. He said, "research is often used as a drunk uses a lamp-post: for support rather than illumination".

In their classic Maxi-Marketing Stan Rapp and Tom Collins record how they compared the accuracy of predictive consumer interview research with direct response results.

A panel of 104 respondents was shown eight different ads for a record club, all featuring essentially the same offer. The consumers were all people who had actually bought records or tapes, so they were right for the market. They were asked to rank the eight ads according to "uniqueness", "interest in reading further", "believability" and "interest in responding". The research predicted that the winning ads would be those nicknamed "guarantee" and "no fine print". Two other ads, "headphones" and "cartoons", came fifth and last respectively.

But when the ads were actually run in an equal eight-way split run, "headphones" was the clear winner, and "cartoons" decisively outpulled the ads the research had chosen as most likely to interest and persuade.

I have found repeatedly that consumers not only do not know what is going to influence them but also scornfully reject things that I know from experience work very well. They say, "I don't read long copy". Yet for the most part – I would say nine times out of ten – long copy making the same promise as short copy does better. The reason is extremely simple. As John E Kennedy pointed out almost exactly 100 years ago, "advertising is salesmanship in print".

What would a salesman do?

Direct mail is just advertising to one person at a time, and as Fairfax Cone, one of the founders of Foote, Cone & Belding, observed in 1940, "Advertising is what you do when you can't be there in person".

So what would a salesman do if he or she were there in person? Certainly not be brief. That salesperson would do and say whatever it took to make a sale. That means giving every sensible argument as to why the prospect should do what the seller wants, and overcoming every reasonable objection the prospect might have.

If your product is at all complex (and believe me, publications are very complex), that means you are going to have to write a lot of copy rather than very little. So why don't people run long-copy mailings? The answer is simple and comes in two parts.

The first part is that those who commission direct mail to get subscriptions are too tight-fisted to pay decent money for a good job, either by giving their in-house staff the time and resources to think things through or by getting someone from outside to do it for them.

This is a very short-sighted way to think. In fact, I am constantly astounded at how much more interested clients are in how little they can pay than in how much they can get in exchange for their money.

If something doesn't work there is little comfort, I would imagine, in patting yourself on the back and saying, "Yeah, but we only paid half as much for that mailing as we would have had to pay the other guy – the one who is so well known he can charge decent fees". (Does it occur to them that the man who can charge decent fees got that way because he consistently produced results?)

Now may I give you my three golden rules for testing?

1. Test one element or test all. If you test more than one, you never know what made the difference, so you might just as well change everything.

2. Test big things, not small ones. Piddling about with slightly different pictures won’t make much difference.

3. Read results right. Small differences are often statistically meaningless.
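Rule 3 can be made concrete with a little arithmetic. Below is a minimal sketch in Python of the standard two-proportion z-test often used to read split-run results; the function name and the mailing figures are hypothetical illustrations, not taken from the article.

```python
import math

def split_run_significant(resp_a, n_a, resp_b, n_b, z_crit=1.96):
    """Two-proportion z-test: is the difference between two split-run
    response rates statistically meaningful at roughly 95% confidence?

    resp_a, resp_b: responses in each cell; n_a, n_b: pieces mailed.
    Returns (significant?, z-score)."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    p_pool = (resp_a + resp_b) / (n_a + n_b)   # pooled response rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return abs(z) > z_crit, z

# Hypothetical split run, 10,000 pieces per cell:
# cell A pulls 2.1%, cell B pulls 1.8% - an apparent 17% uplift,
# yet the z-score falls short of 1.96, so it could easily be noise.
significant, z = split_run_significant(210, 10000, 180, 10000)
```

With these numbers the test comes back not significant, which is exactly the point of rule 3: a difference that looks like a healthy uplift can be statistically meaningless at ordinary cell sizes.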