(By K.M. Richards) As we all know by now, the latest Nielsen PPM fiasco turns out to be a single family with multiple meters that were exposed long-term to an online stream. And Nielsen’s “fix” is to remove that household from the process. I do not believe that’s going to quell the radio industry’s growing suspicion that PPM is not an accurate method of measuring listenership.
And yet we seem to be stuck with it, even though it has never expanded beyond the top 50 markets (okay, top 52 … but four of those are still on diaries), as Arbitron promised it would when it first rolled out the technology over a decade ago. The claim has always been that PPM was developed to give the advertising agencies a better picture of each market’s listening habits, but I find myself wondering if the agencies might also be less than satisfied with the result.
Do they even know the sample size with PPM is lower than with the old diary-keeping method? Do they realize that a percentage of the reported “listenership” could come from a station playing in the background, near enough to a meter to be detected, while the person wearing it isn’t consciously listening at all? Do they know that younger demographics have been shown to switch stations at the first notes of a song they dislike, or at the first hint that a long commercial stopset is starting?
It’s my opinion that PPM provides a lot of irrelevant data and not enough that is relevant, and I believe the irrelevant is skewing the relevant when that data gets processed into the reports that both the stations and the agencies live and die by. And I have a fairly simple argument for going back to the diaries, one that turns some of the so-called “flaws” in that methodology into a compelling positive.
I believe that stations want to know where they stand in their markets in terms of P1, P2, and so on (first-preference listeners, second-preference listeners, etc.). That’s certainly the distinction most programmers focus on when trying to improve (and/or hold on to) their standing. Diary-keepers have been shown to list the stations they listen to the most when filling out their personal reports; they don’t include incidental listening (which shouldn’t be important anyway), and if they dial-switch they tend to apportion their listening among all the stations they actively switch between. Let the book reflect which stations are actually their favorites, and don’t try to calculate listening down to such a small fragment of time that statistical wobble erases the distinction.
A version of that argument applies to the time buyers as well: they shouldn’t … they likely don’t … care about the precise number of minutes people listen to specific radio stations. They want, and need, to know which stations get the highest percentages of the listening for whatever demographic(s) their client wants to reach. If a diary gives them that with a reasonable margin of error (and it still does, in the vast majority of markets), that is enough for them to make their decisions.
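To put rough numbers on that “reasonable margin of error” point: the error band around a reported share is driven almost entirely by sample size. Here is a minimal Python sketch of the standard proportion-based calculation; the panel sizes and the 5.0 share are hypothetical round numbers for illustration, not actual Nielsen figures.

```python
import math

def margin_of_error(share: float, sample_size: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a reported listening share,
    treating the share as a simple proportion from a random sample."""
    return z * math.sqrt(share * (1.0 - share) / sample_size)

# Hypothetical sample sizes, purely for illustration: a larger diary
# sample versus a smaller metered panel, each reporting a 5.0 share.
for n in (2000, 800):
    moe = margin_of_error(0.05, n)
    print(f"n={n}: 5.0 share +/- {100 * moe:.2f} share points")
# n=2000: 5.0 share +/- 0.96 share points
# n=800:  5.0 share +/- 1.51 share points
```

With the smaller panel, stations reporting a 4.5 and a 5.5 share sit inside each other’s error bands and are statistically indistinguishable, which is exactly why chasing minute-by-minute precision misses the point: the ranking is what buyers actually act on.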
Ratings have always been a popularity contest. Beyond that, slicing them into ever-finer granularity is more about showing off the technology than about providing something useful to everyone involved.
Let’s scrap the technology and go back to the method which gives us the result we really wanted all along.
K.M. Richards is the owner of K.M. Richards Programming Services in Los Angeles. He can be reached by e-mail at kmr@kmrichards.com