The goal with any UX research is to avoid waste. You achieve this by creating a quickly referenceable library of insights and empathy tools, rather than formal documentation and assets that, once completed, often go unused by an organization. Our UX team at SPR teaches and works with client product teams on organizing their user research findings into a library of general-to-specific buckets: Behavior Patterns, Shared Perceptions, Opportunity Space, Guiding Principles, and Empathy Maps. An insights library can also include cross-sections by user segment and customer type, depending on your needs. It will often collate previously conducted research alongside secondary research in the form of publicly published surveys, market data, and case studies, creating a “bank” of research. The library becomes a periodically updated stockpile of user data, available for quick reference when guiding design direction or answering product questions, and a meaningful resource for rapid product design.
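As a rough sketch, a library like this could be modeled as a small data structure with a quick-reference query helper. The bucket names below come from the list above; everything else (the `Insight` fields, the `InsightsLibrary` class, its methods) is an illustrative assumption, not SPR's actual tooling.

```python
from dataclasses import dataclass
from typing import List, Optional

# General-to-specific buckets, as named in the article.
BUCKETS = {
    "Behavior Patterns",
    "Shared Perceptions",
    "Opportunity Space",
    "Guiding Principles",
    "Empathy Maps",
}

@dataclass
class Insight:
    bucket: str             # one of BUCKETS
    summary: str            # the finding itself, in a sentence or two
    segment: str = ""       # optional cross-section: user segment
    customer_type: str = "" # optional cross-section: customer type
    source: str = ""        # primary study, survey, market data, case study...

class InsightsLibrary:
    """A quickly referenceable 'bank' of research findings."""

    def __init__(self) -> None:
        self._insights: List[Insight] = []

    def add(self, insight: Insight) -> None:
        if insight.bucket not in BUCKETS:
            raise ValueError(f"Unknown bucket: {insight.bucket}")
        self._insights.append(insight)

    def query(
        self,
        bucket: Optional[str] = None,
        segment: Optional[str] = None,
    ) -> List[Insight]:
        """Quick reference: filter by bucket and/or user segment."""
        return [
            i for i in self._insights
            if (bucket is None or i.bucket == bucket)
            and (segment is None or i.segment == segment)
        ]
```

Even a toy structure like this makes the "quick reference" property concrete: a product question becomes a one-line query against the bank rather than a dig through old reports.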
Imagine, if you would, all of the above: a rich, referenceable library of user research data organized and available at your fingertips in a system like ChatGPT. Because the bot is trained to know your users and customers, as well as on secondary sets of publicly available data, it would essentially become an empathy agent that UX and product teams could engage with quick queries, questions, and perhaps even thematic analysis. But let’s not stop there. What if, over time, the system could also be trained on data from your marketing team, customer service, call center, and your actual digital product usage?
Hang on just a minute. The crazy design guy in the room is waving his hands around and getting passionate about the universe of possibilities. To ground this more in reality, I followed up with SPR’s Chief Architect, Greg Chambers, on the validity of the idea and just how feasible it might be to build. After walking Greg through the concept and the types of data sets, he commented, “One of the biggest challenges with machine learning is getting clean data. If you could convert UX data into graph data and continue feeding in those data sets, then… well… you’ve seen that somewhat controversial UX meme with the sidewalk and then the path that’s been cut through the grass by people walking? If you could get the machine to look at UX as graph data, it’d be like that, but in real time.”
This is why I love talking with technical architects. If you could take all of the flat data you get from a product analytics platform, combine it with your observational data, convert it to graph data, and train the machine on it, you’d have a very powerful empathy agent you could continue to grow and engage over time. It would not only save the time spent digging through and mining tons of data, it would also give UX, product, and development teams a single source of insights at their fingertips.