How much is too much? How much is too little? There are lots of things in market research that require a healthy balance between doing the right thing and conducting business. Deciding how many screeners to offer to potential survey responders is one of them.
Most survey panels recognize that screening people out of surveys, for any reason, is bad for two reasons. First, and completely justifiably, it ticks off panelists who feel their time has been wasted and their opinions ignored. Second, it wastes an invitation: the panel has just used up one contact under its data quality rule of “one survey invite per week” and didn’t even get a complete in return.
For both of these reasons, many panels try to soften the problem by offering panelists a number of surveys in a row. Panelists receive an invite and then proceed through one or more consecutive screeners until they qualify for a survey. (Let’s not consider what this means for probability sampling.)
I just spoke with someone who said their company takes people through up to five screeners before they say enough is enough. Panelists are even compensated for each screener they complete. I worry that, compensation or not, this is annoying to panelists. Screeners are obviously not surveys, and panelists can tell when they’ve been rejected once, twice, three times, four times, five times. Imagine being rejected by five screeners every time you try to participate. It’s just one more source of rejection, something none of us need now or ever.
In fact, I even wonder if there is a rejection effect, for which I have a two-tailed hypothesis: does increased rejection decrease survey scores because of the annoyance, or does increased rejection increase survey scores because of the satisfaction of finally getting a survey to answer? I’d love an answer to that!
So what does your experience tell you? Are responders keeners for screeners?
I completely disagree with my own title!
My world is online. I started out writing online surveys in 1996 when I bugged the computer helpdesk at my graduate school to set me up with an online database. No one else at the university had ever done such a thing and I confused the heck out of them. I wrote my own HTML code, which allowed me to specify font sizes, font colours, page colours, radio buttons, check boxes and text boxes. ooooooo….. so sophisticated. I’d be embarrassed to tell a scripter now that “I write my own code.”
Online research has never claimed to use probability sampling, but other methods of research have. There has been a debate over the last year directed specifically at online panels. Well, not really a debate. Some folks have been outraged that online panels do not use probability sampling and therefore, they argue, do not qualify to use statistics. To go even further, they suggest that telephone samples do use probability sampling and so results from that type of research are the most valid.
Let me offer up some ideas…
Telephone research – Do you always answer your phone? Is your phone number unlisted? Do you return phone calls? Do you politely tell telephone interviewers that you are busy when in fact you are nursing a bag of Cheetos?
Mail research – Do you just throw out all the junk you get in the mail? Do you fill out surveys AND mail them?
Online research – Are you signed up for an online survey panel? Do you click on the survey banners that appear after you run a search and then finish every survey?
It seems to me that no matter how hard you try to use probability sampling, human beings just cannot cooperate. We’re not worms or mice or molecules. People choose when they wish to pay attention or participate. It’s not online panels. It’s research with human participants.
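For the statistically inclined, here’s a toy sketch of the point. The sample below is a genuine probability sample (every person has an equal chance of selection), but once people choose whether to respond, the estimate drifts away from the truth. All the numbers — the 30% true rate and the 60%/20% response rates — are made up purely for illustration.

```python
import random

random.seed(42)

# Hypothetical population: 100,000 people, 30% of whom hold opinion "yes".
population = [1] * 30_000 + [0] * 70_000
random.shuffle(population)

# A true probability sample: every person has an equal chance of selection.
sample = random.sample(population, 5_000)

# But people decide whether to cooperate. Suppose (hypothetically) that
# "yes" holders respond 60% of the time and "no" holders only 20%.
respondents = [p for p in sample
               if random.random() < (0.6 if p == 1 else 0.2)]

true_rate = sum(population) / len(population)
observed_rate = sum(respondents) / len(respondents)

print(f"true 'yes' rate:     {true_rate:.1%}")      # exactly 30.0%
print(f"observed 'yes' rate: {observed_rate:.1%}")  # noticeably higher
```

The selection was perfectly random; the humans weren’t. That gap between who is invited and who actually answers is the whole argument.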
Probability sampling of people? No such thing.