Basing your investments, decisions, and particularly your job on the results of studies is very risky, because you are led to believe that an outcome is certain. But if a study is done improperly, or if you don't know the limitations of even a properly done study, you'll likely get a really bad surprise. Since I did much of my undergraduate and graduate work in this area and used to run studies as part of my job, I'd like to offer a brief primer on what to look for, and on why I haven't seen a study I'd trust in an impressively long time.
What triggered this thinking was a report from a firm called Tractica, forecasting 23,000 customer service robots in use worldwide by 2022. Given that the trend toward real-time drone delivery is also expected to peak at about that time, I was left wondering where these things would be deployed. Since we don't yet have a validated trend, any prediction would be little more than a guess anyway. This study's prediction sucks, but it is far from alone. I should add, however, that the geographic breakdown of current customer service robot sales by region does indicate that a market is emerging. It is just the five-year prediction that seems to lack adequate foundation. This speaks to some of the common strengths and weaknesses of studies. One of them is that they generally suck at making predictions.
Focus Groups vs. Studies
Focus groups are a subset of a class of things we generically refer to as studies, but I’ve found that people think of studies in terms of accurate samples of a population and believe they represent that population. Focus groups do not do that. Focus groups are designed to help you understand why something is the way it is. They can help anticipate problems. For instance, a focus group that is demographically identical to a jury can give a lawyer an idea of what works or does not before the trial, but it isn’t a crystal ball.
To make a focus group a true predictor of a future event, you must put that group in the same situation. For instance, I was in one myself a few decades back. They showed me a car that Chrysler was going to bring to market. I loved that car and responded that I’d buy it for sure if they brought it out. But, in the three years it took for them to bring it to market, I liked other cars more and I never bought that model. I wasn’t lying in the focus group. The conditions of the decision had changed, which rendered the focus group almost worthless.
Another example is the iPhone and the iPod. Given that people weren't buying screen phones before the iPhone was released, and that MP3 players cost well under half what the initial iPod did, focus groups would have indicated that neither product was attractive. That might have led you to conclude that both would fail, and that wasn't the case. However, in this instance, you could have tested messaging and marketing approaches nearer to each product's release to see which changed minds most effectively. That would have undoubtedly helped refine the marketing deliverables.
One other thing focus groups are good for is understanding why someone did something in the past. One of the most powerful studies I ever did looked at why people weren't buying our products, and it accurately determined that the sales team wasn't selling them properly. However, to do one of these correctly, you must talk to people shortly after they made the decision, and you must do it out from under the company brand you represent. Folks forget why they did things, and they tend to tell you what you want to hear, particularly if they think they'll get a nasty call from their sales rep later.
So focus groups are best to understand the present or the past; they suck at predicting the future.
Surveys
Often what people think of when they hear the word "study" is a survey. Most surveys aren't worth the paper they are printed on, because it is far easier to compromise a survey than to do one right. When I did my graduate paper in market research, I intentionally compromised the survey and then wrote up how I'd compromised it, because that was a ton easier than trying to do an uncompromised survey as an individual.
Surveys also aren't good predictors of behavior, for much the same reason focus groups aren't, but if done close to the event they can be pretty accurate. To do a survey right, you first must understand the population you are attempting to survey (you need to know how much diversity you have to deal with), then select a representative sample (this is more than just random; both size and makeup matter), and then use a survey methodology that doesn't introduce bias.
Here is what doesn't work. A sample isn't representative unless it is chosen by the entity doing the survey and nearly all of those asked actually respond. An undersized sample, a sample whose makeup is distinctly different from the population's, a result reported without a confidence interval, or a sample compromised by the news, by the surveying entity, or by the survey process itself (long surveys often have people answering questions randomly just to finish) are all problematic.
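The size and confidence-interval requirements above can be made concrete with the standard formulas for a proportion estimate from a simple random sample. This is a minimal sketch (function names are my own, not from any cited study), and it assumes the sample is truly random and representative, which, as argued above, is exactly what most surveys fail to achieve:

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion.

    Uses the textbook formula n = z^2 * p(1-p) / e^2, with p = 0.5
    as the worst case (maximum variance). z = 1.96 corresponds to
    95 percent confidence.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin_of_error**2)

def margin_of_error(n, confidence_z=1.96, p=0.5):
    """Margin of error for a simple random sample of size n."""
    return confidence_z * math.sqrt(p * (1 - p) / n)

# A +/-3 percent margin at 95 percent confidence needs ~1,068 respondents.
print(sample_size(0.03))                # 1068
print(round(margin_of_error(400), 3))   # 0.049, i.e. about +/-4.9%
```

Note what the math does not capture: a huge but self-selected sample is still biased, which is why the formulas only matter after the representativeness problems above are solved.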
Now, if this article from NPR is accurate, survey response rates have dropped below 10 percent because folks have stopped answering calls from survey companies. That means that even surveys from reputable entities are likely no longer representative of their populations.
That’s right. That means you can’t rely on most published surveys, period, until they can get more people to respond.
Customer Behavior Modeling
As we move to analytics, perhaps a better approach is customer behavior modeling. This is where you take the information you have about a group, in this case customers, and use it to build a virtual sample. I expect this will eventually evolve to the point where the system can create an amalgam of a population as an avatar. You could then not only keep the avatar updated from an event feed, but also query it to get a sense of priorities and preferences, and even model responses to planned activities. Tied into one of the emerging AI systems on the query side, like IBM's Watson, you could also likely get real-time recommendations and warnings based on changes in the virtualized population. You could even alter the inputs, drill down on various groups within the population, and highlight differences in behavior and approach.
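The avatar idea above can be sketched in miniature: aggregate per-customer records into one queryable profile, fold in new events as they arrive, and drill down on sub-populations. This is a toy illustration under my own assumptions; the class, field names, and sample data are hypothetical, not from any real product or from the systems the article mentions:

```python
from collections import Counter
from statistics import mean

class CustomerAvatar:
    """Toy amalgam of a customer population (hypothetical sketch)."""

    def __init__(self, records):
        self.records = list(records)

    def update(self, new_records):
        # Keep the avatar current by folding in an event feed.
        self.records.extend(new_records)

    def top_priorities(self, n=3):
        # Query the population's aggregate priorities.
        prefs = Counter()
        for r in self.records:
            prefs.update(r.get("priorities", []))
        return [p for p, _ in prefs.most_common(n)]

    def segment(self, predicate):
        # Drill down on a sub-population to compare behavior.
        return CustomerAvatar(r for r in self.records if predicate(r))

# Hypothetical customer records.
records = [
    {"region": "NA", "spend": 120, "priorities": ["price", "support"]},
    {"region": "EU", "spend": 200, "priorities": ["quality", "price"]},
    {"region": "NA", "spend": 90,  "priorities": ["price"]},
]
avatar = CustomerAvatar(records)
print(avatar.top_priorities())                      # ['price', 'support', 'quality']
na = avatar.segment(lambda r: r["region"] == "NA")
print(mean(r["spend"] for r in na.records))         # 105
```

A real system would replace these hand-built dictionaries with a live data pipeline and a statistical or machine-learned model, but the shape of the idea, an updatable, queryable stand-in for the population, is the same.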
But this is still pretty new, and we'll likely have to revisit this approach more than once before it is proven successful.
Wrapping Up: Making Bad Decisions with Studies
The clear majority of studies, both focus groups and surveys, aren't valid. They don't predict what they attempt to predict, and they don't represent the groups they claim to represent. Those that do work are rare, done with a focus on accuracy, and tend to be backward-looking rather than forward-looking. I know we all love numbers, but taking an invalid survey and using it to make a decision is incredibly foolhardy; you might actually get better, and cheaper, results by flipping coins.
Rob Enderle is President and Principal Analyst of the Enderle Group, a forward-looking emerging technology advisory firm. With over 30 years’ experience in emerging technologies, he has provided regional and global companies with guidance in how to better target customer needs; create new business opportunities; anticipate technology changes; select vendors and products; and present their products in the best possible light. Rob covers the technology industry broadly. Before founding the Enderle Group, Rob was the Senior Research Fellow for Forrester Research and the Giga Information Group, and held senior positions at IBM and ROLM. Follow Rob on Twitter @enderle, on Facebook and on Google+