When building a statistical model, different analysts place different amounts of emphasis on whether their findings are 'significant'. In other words, on whether a set of statistical tests says they've found a real relationship - like advertising causing an increase in sales - or a potentially spurious one.
There are all sorts of tests for all sorts of modelling situations. Academic statisticians can spend ages with them; commercial analysts usually spend far less time, because they're working to tight deadlines. We tend to satisfy ourselves that the model is as good as it needs to be and then move on.
A new piece of work by Decipher and Viacom got me thinking about this recently, because it had a sample size of just 15 households. A big, long-term focus group.
In case you don't want to look at the link: Decipher fitted out 15 homes with as much digital entertainment kit as they could possibly want and then left them to it for 6 months to see what would happen.
Great idea. What does happen to TV viewing in a truly digital home? If you've got the internet, Sky+, a collection of DVDs and music stored on the main TV in your lounge, how much broadcast TV do you end up watching? It turns out, quite a bit.
The number that grabbed me most was that over 6 months about 1/3 of the people in the test (not the households - the people) tried the BBC iPlayer. About 1/3 of those that tried it used it a lot.
This is in a home that has been deliberately set up to put the web on the main TV in the house.
Why all the preamble about significance testing? Well, because despite this being a survey of only 15 homes, 1/3 sounds about right to me. Unless iPlayer or 4oD are so well integrated into the TV that the user can't tell the difference between broadcast and streaming, I'm going with it: around 1/3 of people will try streaming, and about 10% of the total population will be very regular users.
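The back-of-envelope sums behind that can be sketched in a few lines. This is my own rough check, not anything from the study: it treats the trial share as a simple binomial proportion and uses n = 15 (the household count) as the sample size, even though the stat was quoted per person.

```python
import math

# Assumptions (mine, not the study's): binomial proportion, n = 15.
n = 15
p_try = 1 / 3            # share who tried iPlayer
p_heavy = p_try * (1 / 3)  # a third of triers became very regular users

# 1/3 of 1/3 is where the "about 10% of the population" figure comes from
print(f"implied regular-user share: {p_heavy:.0%}")

# A normal-approximation 95% margin of error at n = 15 shows why a
# formal significance test would struggle here: the interval is huge.
moe = 1.96 * math.sqrt(p_try * (1 - p_try) / n)
print(f"trial share: 1/3 +/- {moe:.0%}")
```

Run it and the implied regular-user share comes out around 11%, while the margin of error on the trial figure is roughly plus or minus 24 points - which is exactly why this kind of number rests on judgement rather than on a test.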
It's a stat that passes the common sense test and - sometimes - that's good enough.