Looking for loyalty in all the wrong places
Every scientist knows this joke:
A cop comes across a drunk on his hands and knees, crawling around in the pool of light thrown by an overhead street lamp.
“What are you doing?” asks the cop.
“I lost my car keys over in those bushes,” says the drunk, gesturing to a hedge a hundred feet away.
“Well, if you lost the keys over there, why are you looking here?” asks the cop.
“The light’s better here,” replies the drunk.
I’m only a wannabe scientist, not a real one, but the joke comes to mind every time I get involved in web metrics discussions that purport to measure the interest and loyalty of email list members by looking at their open and click-through rates.
I’m gonna say this a lot: OPEN AND CLICK-THROUGH RATES ARE BAD MEASURES OF ONLINE DONOR OR LIST MEMBER HAPPINESS. They are easy to measure, so, just like the drunk under the light, we impute great analytic meaning to their ebbs and flows.
It’s a trap, and a dangerous one. If you want to sustain a happy community, you need to measure community members’ satisfaction, and your spreadsheets aren’t telling the tale.
I have been increasingly enamored of a somewhat more difficult but far more probative indicator of community happiness — the net promoter score.
Cooked up by the eggheads at Bain to measure customer loyalty, it’s just as easily applied to donors or other members of your community. It’s based on one devilishly simple question:
“On a scale of 0 to 10, how likely would you be to recommend us to a friend or colleague?”
Here’s the math: take the percentage who gave you a 9 or 10, and subtract the percentage who gave you a 6 or less. That’s your net promoter score. Your 7s and 8s are deemed “passively satisfied” and don’t count one way or the other.
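For the spreadsheet-averse, the whole calculation fits in a few lines of Python. This is a minimal sketch; the function name and the sample responses are mine, invented for illustration:

```python
def net_promoter_score(scores):
    """NPS from 0-10 survey responses: % promoters minus % detractors.

    Promoters score 9 or 10, detractors 6 or less; the 7s and 8s
    (the passively satisfied) drop out of the numerator entirely.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical survey: 50 promoters, 30 passives, 20 detractors
responses = [10] * 50 + [8] * 30 + [4] * 20
print(net_promoter_score(responses))  # 50% - 20% = 30.0
```

Note that the 7s and 8s still count toward the total number of responses, so a sea of lukewarm answers drags the score toward zero even with no outright detractors.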
The Harvard Business Review article that launched the metric, Frederick Reichheld’s “The One Number You Need to Grow,” finds a direct correlation between net promoter scores and profitability in three sectors: airlines, car rentals, and ISPs. By its reckoning, a “good” net promoter score would be in the 60s or 70s.
My clients all have roughly comparable open and click-through rates, but their net promoter scores range from -13 to 82. There is plenty of qualitative and supporting data to suggest those widely diverging scores are telling a meaningful tale.
So what do you do with all that open and CTR data? Open rate data is increasingly meaningless across the board, as Outlook, Gmail, and others routinely block images, including the invisible beacons that report whether an email has been opened. On top of that, we have seen repeatedly that some of the most successful email fundraising appeals have paradoxically low open rates.
Is it time to ignore open rates altogether? It’s at least time to have that conversation.
Click-through rates have one very specific use: as an intermediate measure of whether a direct response email is generating a response. Your monthly newsletter is NOT a direct response piece; it’s a cultivation piece. Your action alerts and email appeals are direct response pieces. Click-through rates are interesting when your email is trying to elicit a SINGLE response or action, whatever that is. But they matter far less than the ultimate response rate, which, bizarrely, often goes untracked and undiscussed, as the sketch below shows.
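To make that distinction concrete, here’s a back-of-the-envelope sketch in Python. Every number in it is invented for illustration:

```python
# Invented numbers for a single hypothetical email appeal.
delivered = 9_600   # emails that actually reached inboxes
clicks = 480        # unique clicks on the appeal's link
actions = 120       # completed responses: gifts, signatures, etc.

ctr = clicks / delivered             # the intermediate measure
response_rate = actions / delivered  # the measure that matters

print(f"CTR: {ctr:.2%}")                      # CTR: 5.00%
print(f"Response rate: {response_rate:.2%}")  # Response rate: 1.25%
```

Two appeals with identical click-through rates can produce wildly different response rates, which is exactly why stopping your analysis at the click hides the real story.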
As a measure of community happiness, CTRs are all but worthless.
As the Web 2.0 juggernaut gains steam, letting your audience talk to you in meaningful ways is only going to get more important. The answers may be harder to get, and harder to hear, than your routine web stats, but it’ll at least be a real conversation.