Using Data to Inform an Experience


Editor’s Note: Brian Dennett is the CEO of Enable.AI, a startup that empowers marketers with cutting-edge machine learning and text analytics.

When it comes to customer experience data, my favorite source of insight into what is and isn’t working for the user is review data.

Product reviews are one of the great, untapped resources of the marketing world. They provide a candid, unfiltered view into the customer experience at scale. Even better, they provide that view for both your products and your competitors’ products. Applied properly, machine learning can provide clear, concise answers to questions like:

“What drives good customer experiences?” 

When most people think of reviews, they look at the average rating and the total count, and maybe they’ll read the first few negative reviews. Depending on the site, there might even be a few recommended reviews flagged as “helpful.”

But that’s not where the value in reviews lies. Average rating is too one-dimensional and not nearly as representative as you might think. Total count is useful as a proxy for market traction, but not much else. And paging through a handful of reviews will only give you anecdotal insights.

To properly extract the value from reviews and get to the meaningful CX insights, you have to dig deep and go wide. You need to analyze the language of the reviews themselves, and you need to analyze as many as you can to really see the patterns emerge in the data. Reviews are one of those places where “big data” starts to materialize. Hundreds of thousands of reviews about thousands of competing products is exponentially better than hundreds of reviews about one product.

Natural Language Processing

Reviews become a profound source of insight when you apply machine learning and text analysis. The language of a review is way more insightful than a value between 1 and 5. Understanding the most prevalent terms in positive reviews and, conversely, the most prevalent terms in negative reviews provides a much better picture of your strengths and weaknesses.
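To make that concrete, here’s a minimal sketch in Python of the simplest version of this comparison. It assumes the reviews have already been pulled into a list of (rating, text) pairs; the sample rows, rating thresholds, and tiny stopword list are all illustrative, not a real dataset.

```python
# Compare the most prevalent terms in positive vs. negative reviews.
# The sample data, the 1-5 rating scale, and the stopword list are
# illustrative assumptions, not a real dataset or a full stopword set.
from collections import Counter
import re

reviews = [
    (5, "So comfortable, great support on long walks"),
    (5, "Comfortable and true to size"),
    (2, "The sole wore out in two months"),
    (1, "Uncomfortable and the stitching came apart"),
]

STOPWORDS = {"the", "a", "an", "and", "or", "on", "in", "to", "of", "is", "was"}

def top_terms(texts, n=10):
    """Most frequent non-stopword terms across a set of review texts."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return counts.most_common(n)

positive = [text for rating, text in reviews if rating >= 4]
negative = [text for rating, text in reviews if rating <= 2]

print("Prevalent terms in positive reviews:", top_terms(positive))
print("Prevalent terms in negative reviews:", top_terms(negative))
```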

Analyzing text, at scale, is a niche but growing use case in the business world. In the data science and machine learning world, this use case falls under the category of “Natural Language Processing.” 

NLP encompasses the full bag of tricks developed by machine learning researchers, linguistic experts, and statisticians to algorithmically understand human language. NLP drives text analytics and it’s the key to unlocking the value in review data.

With NLP, you might start with something as simple as trying to extract the most common words and then scale in sophistication until you’re extracting keywords, phrases, sentiment, prevalent topics and classifying custom key indicators.
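As a rough illustration of that progression, the sketch below uses scikit-learn to pull TF-IDF keywords from a handful of made-up review texts and then fits a small topic model over the same corpus. The texts, the vocabulary limit, and the topic count are stand-in assumptions, not recommended settings.

```python
# Step up from raw word counts to TF-IDF keywords and LDA topics with scikit-learn.
# The review texts and the parameters (max_features, n_components) are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

texts = [
    "Very comfortable shoe, great arch support for long runs",
    "The sole wore out after two months, disappointing durability",
    "Stylish and true to size, but the laces feel cheap",
    "Comfortable out of the box, no break-in needed",
]

# 1) Keyword extraction: the highest-weighted TF-IDF terms per review.
tfidf = TfidfVectorizer(stop_words="english", max_features=1000)
weights = tfidf.fit_transform(texts).toarray()
terms = tfidf.get_feature_names_out()
for i, row in enumerate(weights):
    top = sorted(zip(terms, row), key=lambda x: -x[1])[:3]
    print(f"Review {i} keywords:", [t for t, w in top if w > 0])

# 2) Topic modeling: surface a couple of prevalent topics across the corpus.
counts = CountVectorizer(stop_words="english")
doc_term = counts.fit_transform(texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)
vocab = counts.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top_words = [vocab[j] for j in component.argsort()[-5:]]
    print(f"Topic {k}:", top_words)
```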

Combining Data and NLP

So we’ve got our data source (reviews) and we’ve got our bag of tricks (NLP). Now what are we actually analyzing? Picking the right set of reviews is the first interesting problem when analyzing reviews for CX data.

Let’s work through a hypothetical where we’re looking to understand the customer experience of a specific shoe. 

We start with our shoe, but we’ve already said more data is better, so let’s expand to the rest of the shoes we produce in that category.

From there, perhaps we expand into some interrelated categories. Then we look for the competing brands and identify the shoes they have in those same categories.

Next, we might start asking questions about what other shoes compete for the same use case, even if they aren’t direct competitors, and add those to the list.

Depending on the scenario, you could find yourself at the end of this process with a thousand or more products and hundreds of thousands or millions of reviews to parse through. If you’re starting to see those kinds of numbers, you’re doing it right.

Now that the data set is collected and the typical data prep has been done, it’s time to start doing actual data science: collecting the summary metrics, understanding the shape of the data, capturing the key dimensions, looking for obvious patterns and finally digging in to identify the outliers.
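Here’s a minimal sketch of that first pass, assuming the reviews have landed in a pandas DataFrame with illustrative brand, category, rating, and text columns. The column names and sample rows are assumptions, not a required schema.

```python
# First-pass exploration: shape, summary metrics, key dimensions, and a crude
# outlier check. The DataFrame and its columns are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "brand":    ["Acme", "Acme", "Stride", "Stride", "Stride", "Vela"],
    "category": ["running", "trail", "running", "running", "casual", "casual"],
    "rating":   [5, 2, 4, 1, 5, 3],
    "text":     ["so comfortable", "sole fell apart", "comfortable and light",
                 "runs narrow", "love the look", "decent for the price"],
})

# Shape and summary metrics.
print(df.shape)
print(df["rating"].describe())

# Key dimensions: how ratings distribute by brand and by category.
print(df.groupby("brand")["rating"].agg(["count", "mean"]))
print(df.groupby("category")["rating"].agg(["count", "mean"]))

# A crude outlier check: brands whose average rating sits far from the overall mean.
overall_mean = df["rating"].mean()
deviation = (df.groupby("brand")["rating"].mean() - overall_mean).abs()
print(deviation.sort_values(ascending=False))
```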

Peeling back the layers to understand what constitutes an outlier is as much an art as it is a science. 

If we’re looking at shoes and comfort is one of the biggest drivers, a whole host of follow-up questions become valuable to answer.

– Are there specific brands driving discussions about comfort?

– Are there specific subcategories of shoes that are driving up comfort mentions?

– Is comfort prevalent across the dataset?

– Is comfort prevalent in negative reviews as well as positive reviews? Does comfort co-occur with some other major driver?

As you answer the stream of “whys” that pop up, you get closer and closer to the insights that matter.
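As one example, here’s a hedged pandas sketch of chasing the comfort questions above: what share of reviews mention comfort, broken out by brand and by rating bucket. The sample data and the naive substring match are deliberately simple stand-ins.

```python
# Which brands drive the comfort conversation, and does comfort show up in
# negative reviews too? Sample data and the substring match are illustrative
# (note the naive match also catches "uncomfortable", fine for a rough cut).
import pandas as pd

df = pd.DataFrame({
    "brand":  ["Acme", "Acme", "Stride", "Stride", "Vela", "Vela"],
    "rating": [5, 2, 4, 1, 5, 2],
    "text":   ["so comfortable", "sole fell apart", "comfortable and light",
               "uncomfortable and narrow", "love the look", "blisters after a mile"],
})

df["mentions_comfort"] = df["text"].str.contains("comfort", case=False)

# Are there specific brands driving discussions about comfort?
print(df.groupby("brand")["mentions_comfort"].mean())

# Is comfort prevalent in negative reviews as well as positive ones?
df["bucket"] = pd.cut(df["rating"], bins=[0, 2, 3, 5],
                      labels=["negative", "neutral", "positive"])
print(df.groupby("bucket", observed=True)["mentions_comfort"].mean())
```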

At the end, we can easily identify which products and brands excel or lag at specific features, like which brand tends to fit the best or which shoe has the most durability issues.

Outliers can pop up about intangibles like brand loyalty or unexpected lifestyle use cases. Patterns emerge about which features drive positive reviews, providing insight into what you need to focus on to create the best customer experience. Just as importantly, patterns emerge about what things really drive poor reviews, indicating the areas brands can’t afford to get wrong if they want to provide a good experience. 

Done correctly, at the end of a review analysis you’ll learn plenty about what customers value the most, how they use your product and what they won’t tolerate being done poorly. 

Through this effort, you’ll arrive at an understanding of what the customer experience is really like for the majority of consumers in your space. And not just your customers but all the customers that are relevant to you. With that data in hand, you can craft product briefs, improve customer service, develop new marketing strategies, refine collateral and sharpen sales narratives.

Not bad for looking a little more closely at a few product reviews.


Enable.AI’s Process:

Define the set of relevant products/reviews.

Execute an NLP-based text analysis:

  1. Apply various techniques to extract keywords and topics.
  2. Do a quick analysis of the distribution of those findings and refine accordingly.
  3. Train specific models where necessary.

Check for strong correlations in the data.

Explore those correlations first to understand where co-occurrences might produce misleading conclusions.

View the dataset from different dimensions, looking for outliers and significant shifts in data distribution.

  1. Example: View the dataset by brand, by category, by review score.
  2. Example: Find “comfort” disproportionately represented in 5-star reviews.
  3. Example: Verify that comfort is relatively evenly distributed by category and brand.

Be open to refining your data definition as you go through the process.

  1. Example: Material of the shoe seems like a possible factor. Add a new dimension for material and include leather, traditional, and knit.

Document findings.

  1. Example: Comfort is a primary driver of positive reviews.

Identify ways to put these findings into action.

  1. Example: Ensure comfort becomes a major focus in product development.
  2. Example: Make comfort a bigger selling point in marketing collateral.
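
As a footnote to the 5-star example above, here’s a sketch of how you might test whether comfort mentions actually skew toward 5-star reviews, using a chi-square test from SciPy. The counts in the table are invented for illustration; in practice they would come from the keyword extraction step run over the full review dataset.

```python
# Is "comfort" disproportionately represented in 5-star reviews?
# The contingency counts below are invented for illustration only.
from scipy.stats import chi2_contingency

# Rows: mentions comfort / does not. Columns: 5-star reviews / all other reviews.
contingency = [
    [1200,  900],
    [3800, 7100],
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.1f}, p={p_value:.3g}")
# A very small p-value suggests comfort shows up in 5-star reviews more often
# than chance alone would predict; that is the kind of outlier worth digging into.
```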