What’s Wrong with In-Store Tracking Measurement
By Gary Angel
April 19, 2017
I didn’t start Digital Mortar because I was impressed with the quality of the reporting and analytics platforms in the in-store customer tracking space. I didn’t look at this industry and say to myself, “Wow – here’s a bunch of great platforms that are meeting the fundamental needs in the space at an enterprise level.” Building good analytics software is hard. And while I’ve seen great examples of SaaS analytics platforms in the digital space, solutions like Adobe and Google Analytics took many years to reach a mature and satisfying form. Ten years ago, GA was a toy and Adobe (Omniture SiteCatalyst at the time) managed to be both confusing and deeply under-powered analytically. In our previous life as consultants, we had the opportunity to use the current generation of in-store customer journey measurement tools. That hands-on experience convinced me that this data is invaluable. But it also revealed deep problems with the way in-store measurement is done.
When we started building a new SaaS in-store measurement solution here at Digital Mortar, these are the problems in the technology that we wanted to solve:
Lack of Journey Measurement
Most of today’s in-store measurement systems are set up as, in essence, fancy door counters. They start by having you draw zones in the store. Then they track how many people enter each zone and how long they spend there (dwell time).
This just sucks.
It’s like the early days of digital analytics when all of our tracking was focused on the page view. We kept counting pages and thinking it meant something. Till we finally realized that it’s customers we need to understand, not pages. With zone counting, you can’t answer the questions that matter. What did customers look at first? What else did customers look at when they shopped for something specific? Did customers interact with associates? Did those interactions drive sales? Did customer engagement in an area actually drive sales? Which parts of the store were most and least efficient? Does that efficiency vary by customer type?
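To make the contrast concrete, here’s a minimal sketch in Python (with made-up visitor IDs, zone names, and dwell times, not data from any real system) of the difference between zone counts and visitor-level journeys. With journey records, questions like “what did customers look at first?” and “what else did they shop?” become simple aggregations; with zone counts alone, they can’t be answered at all.

```python
from collections import Counter

# Hypothetical visitor-level journey data: an ordered list of zone visits
# (zone name, dwell time in seconds) per tracked shopper. Zone names are
# illustrative, not from any real store layout.
journeys = {
    "visitor_001": [("entrance", 5), ("womens_apparel", 240), ("shoes", 90), ("checkout", 60)],
    "visitor_002": [("entrance", 4), ("electronics", 300), ("checkout", 45)],
    "visitor_003": [("entrance", 6), ("womens_apparel", 120), ("electronics", 80)],
}

# Zone-counter view: how many people entered each zone. This is all a
# "fancy door counter" system gives you.
zone_entries = Counter(zone for visits in journeys.values() for zone, _ in visits)

# Journey view: what did customers look at FIRST (after the entrance)?
first_zone = Counter(
    next(zone for zone, _ in visits if zone != "entrance")
    for visits in journeys.values()
)

# Journey view: what ELSE did womens_apparel shoppers visit?
cross_shopped = Counter(
    zone
    for visits in journeys.values()
    if any(z == "womens_apparel" for z, _ in visits)
    for zone, _ in visits
    if zone not in ("womens_apparel", "entrance")
)

print(zone_entries)   # counts only -- the door-counter view
print(first_zone)     # answerable only with journey data
print(cross_shopped)  # answerable only with journey data
```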
If you’re not asking and answering questions about customers, you’re not doing serious measurement. Measurement that can’t track the customer journey across zones just doesn’t cut it. Which brings me to…
Lack of Segmentation
My book, Measuring the Digital World, is an extended argument for the central role of behavioral segmentation in doing customer analytics. Customer demographics and relationship variables are useful. But behavior – what customers care about right now – will nearly always be more important. If you’re trying to craft better omni-channel experiences, drive integrated marketing, or optimize associate interactions, you must focus on behavioral segmentation. The whole point of in-store customer tracking is to open up a new set of critically important customer behaviors for analysis and use. It’s all about segmentation.
Unfortunately, if you can’t track the customer journey (as per my point above), you can’t segment. It’s just that simple. When a customer is nothing more than a blip in a zone, you have no data for behavioral segmentation. Of course, even if you do track the customer journey, segmentation may still be deeply limited in the analytics tools themselves. You could map the improvement of Adobe or Google Analytics by charting their gradually improving segmentation capabilities: from limited filtering on pre-defined variables, to more complex, query-based segmentation, to the gradual incorporation of sophisticated segmentation capabilities into the analyst’s workbench.
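Purely as an illustration (the visitor records, dwell threshold, and purchase flags are all invented), here’s a minimal sketch of the kind of behavioral segment journey data makes possible: define a segment by what shoppers actually did in the store, then compare an outcome across segments.

```python
# Hypothetical journey records: ordered (zone, dwell_seconds) visits plus a
# purchase flag. All names and numbers are illustrative.
journeys = {
    "visitor_001": {"visits": [("womens_apparel", 240), ("shoes", 90)], "purchased": True},
    "visitor_002": {"visits": [("electronics", 300)], "purchased": False},
    "visitor_003": {"visits": [("womens_apparel", 45), ("electronics", 80)], "purchased": True},
    "visitor_004": {"visits": [("shoes", 30)], "purchased": False},
}

# A behavioral segment: shoppers who dwelled in womens_apparel for at least
# two minutes -- something a zone counter can never tell you.
def in_segment(record, zone="womens_apparel", min_dwell=120):
    return any(z == zone and dwell >= min_dwell for z, dwell in record["visits"])

segment = {v: r for v, r in journeys.items() if in_segment(r)}
rest = {v: r for v, r in journeys.items() if not in_segment(r)}

def conversion(group):
    return sum(r["purchased"] for r in group.values()) / max(len(group), 1)

print(f"Engaged womens_apparel shoppers: {conversion(segment):.0%} converted")
print(f"Everyone else:                   {conversion(rest):.0%} converted")
```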
You can have all the fancy charts and visualizations in the world, but without robust segmentation, customer analytics is crippled.
Lack of Store Context
When I introduce audiences to in-store customer tracking, I often use a slide like this:
The key point is that the basic location data about the customer journey is only meaningful when it’s mapped to the actual store. If you don’t know WHAT’S THERE, you don’t have interesting data. The failure to incorporate “what’s there” into their reporting isn’t entirely the fault of in-store tracking software. Far too many retailers still rely on poor, paper-based planograms to track store setups. But “what’s there” needs to be a fundamental part of both the collection and the reporting. If data isn’t stored, aggregated, trended and reported based on “what’s there”, it just won’t be usable. Which brings me to…
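One way to picture “what’s there”: a small sketch, assuming an entirely hypothetical zone map and coordinate system, of joining raw positions to the store layout at collection time, so every measurement is stored against the merchandise actually in that spot.

```python
# Hypothetical store layout: zone name -> bounding box (x1, y1, x2, y2) in
# store coordinates, plus what merchandise is actually there. In practice
# this would come from a planogram or store-layout system.
zone_map = {
    "front_table":   {"bbox": (0, 0, 10, 5),   "category": "seasonal_promo"},
    "denim_wall":    {"bbox": (10, 0, 20, 5),  "category": "womens_denim"},
    "fitting_rooms": {"bbox": (0, 5, 10, 15),  "category": "fitting_rooms"},
}

def locate(x, y):
    """Map a raw (x, y) position to the zone -- and the merchandise -- at that spot."""
    for zone, info in zone_map.items():
        x1, y1, x2, y2 = info["bbox"]
        if x1 <= x < x2 and y1 <= y < y2:
            return zone, info["category"]
    return None, None

# Raw positioning pings (visitor, x, y) -- meaningless on their own.
pings = [("visitor_001", 3.2, 1.1), ("visitor_001", 12.6, 2.4), ("visitor_002", 4.0, 8.7)]

# Enriched at collection time with "what's there".
enriched = [(v, *locate(x, y)) for v, x, y in pings]
print(enriched)
```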
Use of Heatmaps
Heatmaps sure look cool. And, let’s face it, they are specifically designed to tackle the problem of “Store Context” I just talked about. Unfortunately, they don’t work. If you’ve ever tried to describe (or just figure out) how two heatmaps differ, you can understand the problem. Dialogue like “You can see there’s a little more yellow here and this area is a little less red after our test” isn’t going to cut it in a Board presentation. Because heatmaps are continuous, not discrete, you can’t trend them meaningfully. You can’t use them to document specific amounts of change. And you can’t use them to compare customer segments or changed journeys. In fact, as an analyst who’s tried firsthand to use them, I can pretty much attest that you can’t actually use heatmaps for much of anything. They are the prettiest and most useless part of in-store customer measurement systems. If heatmaps are the only tool you have to solve the problem of store context, you’re doomed.
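By contrast, discrete zone-level metrics can be trended, tested, and put in a Board deck. A minimal sketch, with invented numbers, of the before/after comparison a heatmap can’t support:

```python
# Hypothetical zone-level metrics for the same store before and after a
# layout test. All numbers are invented for illustration.
before = {"front_table": {"visits": 1200, "sales": 300},
          "denim_wall":  {"visits": 800,  "sales": 260}}
after  = {"front_table": {"visits": 1100, "sales": 340},
          "denim_wall":  {"visits": 950,  "sales": 285}}

for zone in before:
    b, a = before[zone], after[zone]
    conv_b = b["sales"] / b["visits"]   # discrete, trendable per-zone rate
    conv_a = a["sales"] / a["visits"]
    print(f"{zone}: conversion {conv_b:.1%} -> {conv_a:.1%} "
          f"({(conv_a - conv_b) * 100:+.1f} pts)")
```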
These four problems cripple most in-store customer journey solutions. It’s incredibly difficult to do good retail analytics when you can’t measure journeys, segment customers, or map your data effectively onto the store. And the ubiquity of heatmaps just makes these problems worse.
But the problems with in-store tracking solutions don’t end here. In my next post, I’ll detail several more critical shortcomings in the way most in-store tracking solutions are designed. Shortcomings that ensure not only that the analyst can’t effectively solve real-world business problems with the tool, but that they can’t get AT THE DATA with any tools that might do better!