You’ve seen the commercials on TV where the host or actor discussing the fantastic properties of the amazing product wears clothes and accessories that match the product’s packaging and branding perfectly. Sometimes this makes for creepy over-branding; other times it makes the commercial feel calm and focused. In either case, the intent is to unconsciously teach you the brand colour so that when you are in the store, the familiar colour will draw you in, consciously or unconsciously.
However, the world of research is different. Using brand colours as part of questionnaire design can significantly affect the outcome of research, and whether that results in increased or decreased scores, the impact is negative. Results from surveys should reflect in-market experiences, not unconscious associations with brand colours. If you plan to measure brand recall, awareness, purchase, attitudes, or perceptions within the general population or within category users, particularly if you want to compare with other brands, never brand your questionnaires with brand colours, text styles, or formats. Questionnaire formatting should be neutral in all ways so that unconscious recollections won’t be created.
So when is it appropriate for questionnaires to use brand features in the design? When can you use your brand’s colours and fonts and styles to pretty up what can be generic, boring pages?
When you’re contacting existing clients or customers to ask about a specific purchase experience or brand experience. That’s about it.
In such cases, the bulk of the questionnaire will focus on the specific experience with the specific brand. There may be a couple of generic introductory questions, but 90% of the questionnaire will focus heavily on your brand, your employees, your shelves, your website, your selection, etc. There is no point in creating a sense of blind review or uncontaminated response because the brand must be revealed early and significantly.
If you’re not sure which way to go, there is a very simple solution. Never brand your questionnaires unless there is no way around it. Better safe than sorry.
Like many other Canadians, I received a card in the mail from the Government of Canada promoting a website named MyDemocracy.ca. Just a day before, I’d also come across a link for it on Twitter, so with two hints at hand, I decided to read the documentation and find out what it was all about. Along the way, I noticed a lot of controversy about the survey, so I thought I’d share a few of my own comments here. I have no vested interest in either party. I am simply a fan of surveys and have some experience in that regard.
First, let’s recognize that one of the main reasons researchers conduct surveys is to generate results which can be generalized to a specific population, for example the population of Canada. With numerous important elections around the world recently, we’ve become attuned to polling research which attempts to predict election winners. The polling industry has taken a lot of heat lately over perceived low accuracy, and people are paying close attention.
Sometimes, however, the purpose of a survey is not to generalize to a population, but rather to gather information so as to be more informed about a population. Thus, you may not intend to learn whether 10% of people believe A and 30% believe B, but rather that there is a significant proportion of people who believe A or B or C or D. These types of surveys don’t necessarily focus on probability or random sampling, but rather on gathering a broad spectrum of opinions and understanding how they relate to each other. In other cases, the purpose of a survey is to generate discussion and engagement, to allow people to better understand themselves and other people, and to think about important issues using a fair and balanced baseline that everyone can relate to.
The FAQ associated with MyDemocracy.ca explains the purpose of the survey in just this manner – to foster engagement. It explains that the experimental portion of the survey used a census balanced sample of Canadians, and that the current intention of the survey is to help Canadians understand where they sit in relation to their fellow citizens. I didn’t see any intention for the online results to be used in a predictive way.
I saw some complaints that the questions are biased or unfair. Having completed the survey two and a half times myself, I do see that the questions are pointed and controversial. Some of the choices are extremely difficult to make. To me, however, the questions seem no different than what a constituent might actually be asked to consider, and there are no easy answers in politics. Every decision comes with side effects, some bad, some horrid. So while I didn’t like the content of some of the questions and I didn’t like the bad outcomes associated with them, I could understand the complexity and the reasoning behind them. In fact, I even noticed a number of question design practices that could be used in analysis for data quality purposes. In my personal opinion, the questions are reasonable.
I’m positive you noticed that I answered the survey more than twice. Most surveys do not allow this, but if the survey was launched purely for engagement and discussion rather than for prediction purposes, then response duplication is not an issue. From what I see, the survey (assuming it was developed with psychometric precision, as the FAQ and methodology describe) is a tool similar to any psychological tool, whether a personality test, intelligence test, reading test, or otherwise. You can respond to the questions as often as you wish and see whether your opinions or skills change over time. Given what is stated in the FAQ, duplication has little bearing on the intent of the survey.
One researcher’s opinion.
Since you’re here, let me plug my new book on questionnaire design! It makes a great gift for toddlers and grandmas who want to work with better survey data!
People Aren’t Robots: A practical guide to the psychology and technique of questionnaire design