Ways to get better survey results
Surveys are arguably the most overused tool in an organization. They are thrown around like McDonald’s burgers simply because they are so convenient. Everyone from the CEO to the Product Manager wants to run a quick-and-dirty Google Form or Typeform survey on the entire user base.
But UX researchers have known for a while now that surveys rarely tell you the truth, because they are rarely done right. A survey remains a cheap way to reach users, so it gets used as a shortcut for quick-and-dirty feedback and is seldom designed well from the start. In short, a survey is often like a McDonald’s burger: cheap, everywhere, and disastrous for both the company and the user it gets sent to.
Here are six quick ways to run surveys with intent and reason, so you can get them right even on your most time-constrained projects:
Do not run a survey just to massage someone’s ego. It is easy to get lost in the world of research and experimentation during high-speed product development. Unfortunately, UX designers and researchers sometimes get caught up in the momentum and start encouraging a “let’s do it all” mentality. That mentality leads them to use every tool in their arsenal to uncover new data insights, but in the end they dump all of that data on stakeholders during research walkthroughs, and that’s the last anyone sees of it. Instead, only run surveys and conduct interviews if you genuinely intend to use the data you get out of them.
If you ask the wrong questions, you will get the wrong answers. It’s that simple. Asking the right questions is always the trickiest part of any interview or survey, and because surveys are unmoderated, there is no chance to course-correct mid-session. One way to get your survey questions right is to first conduct generative interviews with 5–10 users from the same cohort; the responses from these interviews can then be turned into prompts for a draft list of survey questions. It is always good to plan a large set of candidate questions and then pick only the most important ones for the final questionnaire.
It is always a good idea to get your survey questions checked by a fresh set of users. A pilot test is not only crucial for a product feature; it is essential for just about everything in research. Treat the survey itself as a product feature: show your survey questions to at least five people, or send the survey to 3–4 people and ask them to complete it so you can assess the range of responses you get. Running a pilot version of your survey will improve its overall quality and help you gauge what the reactions will look like when you send it to end-users.
If you want to read more, this short article outlines how to run a pretest of your survey before broadcasting it to a large set of users.
Open-ended questions reduce blind spots. Closed-ended or leading questions give you yes/no-style responses, which are rarely helpful on their own. When sending out surveys, prepare a kit of open-ended questions to get descriptive, long-form answers from your users. Open-ended questions open up unknown doorways because users have to describe a situation in their own words to answer them.
Closed-ended questions, on the other hand, give limited insight and are typically used for quantitative data. For example, one common question collects the Net Promoter Score (NPS) by asking people, “How likely are you to recommend this product/service on a scale from 0 to 10?” and uses the numerical answers to calculate overall score trends.
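To make the NPS mechanics concrete, here is a minimal sketch of the standard calculation — the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). The function name and sample responses are hypothetical:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of ten survey responses
responses = [10, 9, 9, 8, 7, 6, 3, 10, 2, 9]
print(nps(responses))  # 5 promoters, 3 detractors -> NPS of 20
```

Note that passives (scores 7–8) dilute the score without counting for either side, which is exactly why a single NPS number tells you little without the open-ended “why” that should accompany it.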
Here are a few example open-ended and closed-ended questions, shown in the graphics below, which I picked up from a Hotjar article.
Since only a handful of people ever answer surveys, the results should be treated as early signals, never as proof. Survey results are rarely representative of your entire user base because response rates are often abysmal; on any given day, the only respondents may be the people who felt like answering for a discount coupon or a complimentary gift. If your team wants to treat survey answers as proof, you must first confirm that responses came from a meaningful percentage of users across each cohort and region. Only when the responses form a statistically significant sample do they hold any fundamental importance.
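A quick way to gauge how much weight a response count actually carries is the textbook margin-of-error formula for a proportion. This is a sketch, and the numbers below are purely illustrative:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the ~95% confidence interval for a proportion.

    p: observed proportion (e.g. 0.2 if 20% chose an answer)
    n: number of respondents
    z: z-score; 1.96 corresponds to a 95% confidence level
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: 40 of 200 respondents (20%) picked an option.
# The true share in the full user base is roughly 20% +/- 5.5 points.
print(round(100 * margin_of_error(0.2, 200), 1))
```

A few dozen responses can easily carry a double-digit margin of error, which is exactly why survey results should be read as signals to investigate rather than conclusions to act on.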
Don’t spend an hour writing survey questions and then blast your entire user base with them the very next hour. Surveys, when designed and used correctly, can be an asset in a researcher’s arsenal. Dedicate time and energy to planning your survey properly: segment your user base, talk to customer success teams to target it well, and pretest it before exposing it to end-users. Spend at least a day writing just the questions you want answered. Designing, testing, and sending out a properly designed survey can take anywhere from a week to two to get right. Give it the time it needs, and you’ll receive the depth of results you were hoping for from the beginning.