Okay, I give. You’ve got a study that actually uses probability sampling. By some magical sleight of hand, you’ve identified every person in your desired population. Perhaps your population is the immediate members of your family, the cancer doctors in your medical clinic, or the members of a survey panel.
You’ve managed to apply a random sampling method that gives each person an equal opportunity to participate. Maybe you picked numbers out of a hat. Maybe you used one of those books with 500 pages of random numbers.
You’ve managed to apply a process that gives every person an independent opportunity to participate. For this argument, let’s just assume that survey panels don’t kick certain people out because a housemate has also been selected.
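For the sake of the hypothetical, the selection process described above (every person has an equal chance, and no one’s inclusion depends on anyone else’s) can be sketched in a few lines of Python. This is purely illustrative; the population names and sizes are invented, and the genuinely hard part, enumerating the entire population in the first place, is assumed away:

```python
import random

# Hypothetical sampling frame: the entire target population enumerated
# in advance -- the step that is rarely achievable in real research.
population = [f"panelist_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed so this sketch is reproducible

# Simple random sampling without replacement: every member has the
# same probability of selection, and no one is excluded because a
# housemate was also drawn.
sample = random.sample(population, k=50)

print(len(sample))       # 50 respondents selected
print(len(set(sample)))  # 50 -- no duplicates
```

Of course, drawing the sample is the easy part; nothing in this sketch touches who actually responds, which is where the non-random error below creeps in.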
Fine. You have a probability sample. You have covered off random error.
But folks, we aren’t in the business of hypothetical research. We make money from actual marketing research: real people, real studies, real everyday work. In my world, we just don’t do many of those one-in-a-million studies that are capable of employing a reasonable semblance of probability sampling. Random error is not the whole picture.
Why does it seem like we always forget about non-random error? What about the vast majority of research that has 90% opt-out rates? Do we decide that those people weren’t part of the population to begin with? Does the lack of random error make non-random error ok?
I’m just having a hard time understanding the ongoing push to prove we are using probability samples when there remain other uneaten slices of the error pie.