When I first started my consulting career, one of the mantras of our client work was to always begin an engagement by asking the client open-ended questions: “Why do you think sales are slowing?” or “What do you think we can do about it?” This was one of the first skills I learned, and it still serves me well. The issue, subtle as it may be, is that a closed-ended question already contains an answer within it, which can inherently introduce bias: “How likely is it that we missed an important product fix that led to a sales drop?” or “Would you agree that we need to get engineering on board to fix our sales problem?” Asking these questions immediately puts the respondent into a frame of mind where the product is the likely problem and better engineering a likely solution.
A good researcher knows that the way a question is asked will shape the resulting data, and keeps that in mind when interpreting results. Sometimes, however, perhaps feeling the pressure to move from the design phase into the field to get results faster, we end up focusing on other biases: Does the data contain outliers? Did any bots answer our survey? Are the demographics representative? If the data passes all the “checks”, it is tempting to declare the results bias-free and move on to building the report. The issue, of course, is that we may still be suffering from the closed-ended-question bias we built into the study when we put pen to paper.
This is why I always encourage my colleagues and clients to start building customer questionnaires with open-ended questions before following, as needed, with multiple-choice items. Asking the key, top-of-mind open-ended questions first gives us confidence that we are charting the entire space of possible answers before we dive into hypothesis testing with multiple choice. Analyzing open-ended free-text answers helps us avoid the surprises that come from missing an important attribute of the problem we are researching. Oftentimes, the answers we get from the open ends provide unique and unexpected insights that we would have completely missed had we centered our research on multiple-choice questions.
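In practice, a first pass over open-ended answers often starts with a simple keyword tally before any deeper qualitative coding. A minimal sketch of that first pass, assuming invented sample answers and a toy stop-word list (a real project would use a fuller stop-word set and human review):

```python
import re
from collections import Counter

# Invented free-text survey answers, for illustration only.
answers = [
    "The bright colors feel too loud for a relaxation product",
    "Colors are too bright, I prefer the softer original look",
    "Reminds me of an energy drink brand, not relaxing at all",
]

# Toy stop-word list; real analyses use a much fuller set.
STOP_WORDS = {"the", "a", "an", "of", "for", "are", "i", "at", "all", "me", "not", "too"}

def top_keywords(texts, n=5):
    """Return the n most frequent non-stop-word tokens across all answers."""
    tokens = []
    for text in texts:
        tokens += [w for w in re.findall(r"[a-z']+", text.lower())
                   if w not in STOP_WORDS]
    return Counter(tokens).most_common(n)

print(top_keywords(answers))
```

Even this crude tally surfaces recurring themes (here, “bright” and “colors”) that can then be read in context and, if warranted, turned into closed-ended items for quantitative follow-up.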
Let me share an example. Recently, I worked with a manufacturer that was redesigning its logo. The original internal hypothesis was that one of the three new redesigns had the original logo beat. After we conducted a randomized test assessing attributes such as attractiveness, contemporary look, and fit with the brand, we found that the original logo handily outperformed the three redesigned options. The question was why: it was not clear from the quantitative data alone what drove this counterintuitive result. Fortunately, we did ask the “why” question and found that the new color scheme (bright colors) was a turn-off for respondents, because the product itself was intended for relaxation. The original logo was rendered in more subtle hues, which drove respondents to like it more. We also discovered a few design elements in the redesigned logos that reminded customers of other brands, ones not consistent with what our client’s brand stood for. These insights would never have emerged from a multiple-choice survey alone.
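The mechanics of a randomized test like this are straightforward: each respondent is randomly assigned one logo to rate, and mean attribute scores are then compared across logos. A rough sketch, with simulated respondents and an invented 1–5 attractiveness rating (the logo names and sample size are placeholders, not the client's actual study):

```python
import random
import statistics

# Hypothetical options: the original logo plus three redesigns.
LOGOS = ["original", "redesign_a", "redesign_b", "redesign_c"]

def assign_logo(rng):
    """Randomly assign a respondent exactly one logo to rate."""
    return rng.choice(LOGOS)

def summarize(responses):
    """Mean attribute rating per logo from (logo, rating) pairs."""
    by_logo = {logo: [] for logo in LOGOS}
    for logo, rating in responses:
        by_logo[logo].append(rating)
    return {logo: statistics.mean(r) for logo, r in by_logo.items() if r}

# Simulated fielding: 400 respondents with invented 1-5 ratings.
rng = random.Random(42)
responses = [(assign_logo(rng), rng.randint(1, 5)) for _ in range(400)]
print(summarize(responses))
```

Random assignment is what licenses the comparison of means; but as the logo study showed, the resulting table tells you which option won, not why, which is exactly the gap the open-ended “why” question fills.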
Having taken the customer feedback from the open-ended questions into consideration, the client redesigned and retested updated logos. Suffice it to say that the refreshed logo performed much better in our subsequent test, and the client now has much higher confidence that it will do well in the real world.