So, you heard about data-driven design somewhere, or you're already practicing it? This, too, is a relatively new practice, much like UX design itself, and has evolved mostly over the last five years. I, for one, don't rely on it entirely; I use many other research methods alongside it. Put another way, I don't place my bet entirely on it, because doing so can be dangerous. Here's why.
Before that, a little about myself, and the part that is relevant to this post: among other things, I teach research methodologies at NID, and I have practiced research in many forms over a career spanning 15 years. I think that should be enough of an introduction. At least, it's not a bad start.
There are plenty of research methods to choose from, and those who aren't familiar with the full range usually talk about only a few. I'm sure many of you have heard of focus groups, user tests, UX/usability audits, and so on; and if you are reading this, you are also aware of heat maps, usage analytics, and other data collection methods. However, the world of research is vast: there are more than 100 methods to choose from, and Think Design finds about 50 of them relevant across many contexts; we may have practiced almost all of them. If you are interested in knowing more about research methods and Think Design's research framework, this is your link.
Coming to the subject: why do I not bet entirely on data-driven design?
Let's start with this: I don't find it adequate to answer my questions, because it doesn't tell me why something is the way it is. Sometimes we really want to know the reason why, and our strategies are hidden in those answers. Without them, you won't walk away with rich insights; all you'll have is a set of charts, and charts don't qualify as insights. If a heat map shows me that users have clicked in a particular area or hovered on a link repeatedly, I want to know why, and the heat map will not answer that. This is the reason Think Design developed its research framework. Fundamentally, heat maps, usage analytics, and other such usage metrics tell me the "reality": what exactly happened. We need this, of course, but there is a big missing element called "perception". As design practitioners, we want to know both reality and perception, because sometimes they are very different; and to know perceptions, we have to engage with users and ask them questions. Take a live example:
A few years back, a friend of mine who was heading design at a prominent internet company faced a problem: when they redesigned their housing portal and introduced a map browser for the user's benefit, acceptance was very low. Usage data clearly showed that users were skipping the feature, whereas the design team felt it benefited the user. The team then had to engage with users to find out why they were behaving that way. They formed three hypotheses and went to users for validation:
- That users didn't understand the benefits of using maps and hence ignored them.
- That the icon indicating this feature wasn't clear, so users didn't understand what it meant.
- That users didn't want to use maps because maps would consume high bandwidth and cost them time.
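The gap this example illustrates can be made concrete. Here is a minimal sketch (the event names and log entries are entirely hypothetical, invented for illustration) of how usage analytics surfaces the "what" of the story, a low adoption rate for the map feature, while saying nothing about which of the three hypotheses explains it:

```python
# Hypothetical event log: (user_id, action) pairs as an analytics
# pipeline might export them. All names and data are illustrative.
events = [
    ("u1", "list_view"), ("u1", "list_view"),
    ("u2", "list_view"), ("u2", "map_view"),
    ("u3", "list_view"), ("u4", "list_view"),
    ("u5", "list_view"), ("u5", "list_view"),
]

def adoption_rate(events, feature):
    """Share of users who used `feature` at least once."""
    users = {user for user, _ in events}
    adopters = {user for user, action in events if action == feature}
    return len(adopters) / len(users)

# The data answers "what happened": only 1 of 5 users opened the map.
print(f"Map adoption: {adoption_rate(events, 'map_view'):.0%}")
```

The number that comes out is the reality; nothing in the log distinguishes "didn't see the benefit" from "unclear icon" from "feared the bandwidth cost". Only talking to users can do that.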
In the above example, there was no way to understand users' perceptions without engaging with them; the data alone wouldn't give that answer.
There are many other instances where I couldn't derive the right insights from data-driven design alone. Data is essential to understanding a situation and is an essential constituent of research, but it is not so all-encompassing that I would rely entirely on it. In the research practice at Think Design, we pair data with at least one primary research method so that our research ends in insights.
How have your experiences been when dealing with similar situations?