Making sense of behavioral data is always a challenge. Suppose I tell you that a shopper visited your store, spent fourteen minutes, lingered twice, and had a single Associate interaction. That’s a lot of data, but in most respects it’s deeply uninformative. What did the shopper care about? What were they interested in? What did they pass by? What worked? What didn’t? You can’t even begin to formulate answers to those questions from the data I described. I know, because I spent years trying, and largely failing, to do interesting analytics with metrics like these in the digital world. What’s missing from these metrics is context. If you don’t know what was in the store at the place where the shopper lingered, you can’t attach meaning to the action. For in-store analytics, the basic context that gives meaning to the data is the store itself: what was there, at the place the visitor lingered.

Context transforms the data in Table 1 to the data in Table 2:

In-store measurement data


We know a lot more (and a lot more interesting stuff) about Shopper A and about store performance when the metrics are contextualized to the store. With the second table, we can make a likely guess that the shopper is a woman. That she’s interested in Jackets and Backpacks. That she entered the store shopping for Jackets. That the sales interaction wasn’t successful and likely concerned jackets.

This idea of contextualizing behavior to understand the customer better is incredibly simple and seemingly obvious. But it’s hugely powerful, and when you’re suffering from a deluge of aggregated consumption metrics that lack this context, it can be surprisingly difficult to figure out what’s missing.
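To make the idea concrete, here’s a minimal sketch of what “adding context” means mechanically: joining raw positional events against a planogram so each dwell maps to a place in the store. The zone names, coordinates, and events below are all invented for illustration; any real measurement platform would do this with far richer data.

```python
# Hypothetical sketch: attach store context to raw lingering events.
# Zone names, coordinates, and events are invented for illustration.

# A toy planogram: zone name -> bounding box (x1, y1, x2, y2) in store coordinates
planogram = {
    "Women's Jackets": (0, 0, 10, 10),
    "Backpacks": (10, 0, 20, 10),
    "Footwear": (0, 10, 20, 20),
}

def zone_for(x, y):
    """Map a raw (x, y) position to the planogram zone that contains it."""
    for zone, (x1, y1, x2, y2) in planogram.items():
        if x1 <= x < x2 and y1 <= y < y2:
            return zone
    return "Unknown"

# Raw events: (x, y, seconds lingered) -- meaningless on their own
events = [(4, 3, 160), (12, 6, 95)]

# Contextualized events: each dwell is now attached to a place in the store
contextualized = [(zone_for(x, y), secs) for x, y, secs in events]
print(contextualized)  # [("Women's Jackets", 160), ('Backpacks', 95)]
```

The data itself hasn’t changed; only the join against the store layout turns anonymous coordinates into something you can reason about.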

So when you see a report like this:


Day         Hour   Shoppers Counted   Total Time   Avg. Time
5/14/2017   10     125                26,750       214
5/14/2017   11     152                32,072       211
5/14/2017   12     191                34,571       181
5/14/2017   1      185                34,040       184
5/14/2017   2      187                31,229       167
5/14/2017   3      215                41,065       191
5/14/2017   4      152                30,400       200
5/14/2017   5      87                 17,574       202
5/14/2017   6      92                 12,972       141
5/15/2017   10     133                27,797       209
5/15/2017   11     145                30,015       207
5/15/2017   12     212                44,732       211
5/15/2017   1      210                41,370       197
5/15/2017   2      242                46,222       191
5/15/2017   3      206                40,170       195
5/15/2017   4      187                34,969       187
5/15/2017   5      161                27,209       169
5/15/2017   6      163                25,265       155
5/16/2017   10     118                23,718       201
5/16/2017   11     145                29,725       205
5/16/2017   12     186                38,130       205
5/16/2017   1      211                45,154       214
5/16/2017   2      244                50,508       207
5/16/2017   3      259                54,649       211
5/16/2017   4      206                38,110       185
5/16/2017   5      200                37,800       189
5/16/2017   6      169                28,899       171


It can sure seem like the data ought to be useful. And there are some things you can do with this kind of data. Just don’t try to use it to answer any questions about customers, their journey, or store performance.
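(For what it’s worth, the Avg. Time column is just Total Time divided by Shoppers Counted, in whatever time unit the report uses. A quick check against the first few rows:)

```python
# Recompute the Avg. Time column from the first few rows of the report above.
# Avg. Time = Total Time / Shoppers Counted (units as reported in the table).
rows = [
    ("5/14/2017", 10, 125, 26750),
    ("5/14/2017", 11, 152, 32072),
    ("5/14/2017", 12, 191, 34571),
]
for day, hour, shoppers, total_time in rows:
    avg = round(total_time / shoppers)
    print(f"{day} {hour}: {avg}")  # 214, 211, 181 -- matching the report
```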

Now, we’re not the first people, and Digital Mortar isn’t the first company, to recognize that store context is vital to understanding the data created by in-store measurement. That’s why, if you’re looking at in-store measurement platforms, you’ll almost certainly see some version of the store heatmap:

store analytics heatmap

(from BusinessInsider.com)

This type of store heatmap is certainly an attempt to contextualize behavior in terms of the store. And there’s no denying that heatmaps look cool. But if you’ve ever tried to use a heatmap like this for analytics, you’ve probably been massively frustrated.

The first problem with this type of heatmap is simple: it doesn’t actually make it easy to know what’s in the store. Sure, if I’ve memorized a store layout, I might be able to map those colors onto actual store sections and products. But chances are, I’m going to have to constantly flip back and forth between the heatmap and a planogram to make sure I know what I’m looking at.

The second big problem with heatmaps is that they don’t really provide a means of analysis. Look at these two heatmaps:

Visualizing Store Data


Can you tell that the yellow band in the upper left corner grew in size? That the red smear in the middle got a little less pink and a little more red?

Neither could I.

And even if you could pick out the differences, how could you possibly communicate the nature and extent of the changes to anyone else?

When you do analysis, you need to find important differences in the data and you need to be able to communicate them. Heatmaps like this suck at both tasks.

And let’s say you made a change in the store. After all, that’s the reason you’re doing store measurement, right? What happens with the planogram? Which view do you see? And how do you compare before and after and quantify the changes?
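Once behavior is contextualized to store zones, that before/after question stops being a visual guessing game and becomes a straightforward computation. A hypothetical sketch, with invented zones and dwell numbers:

```python
# Illustrative sketch (zone names and numbers are invented): instead of
# eyeballing two heatmaps, aggregate dwell time by zone for each period
# and diff them, so every change is quantified and communicable.
before = {"Jackets": 4200, "Backpacks": 1800, "Footwear": 2600}  # dwell seconds
after  = {"Jackets": 5100, "Backpacks": 1650, "Footwear": 2600}

for zone in before:
    delta = after[zone] - before[zone]
    pct = 100 * delta / before[zone]
    print(f"{zone}: {delta:+d} sec ({pct:+.1f}%)")
```

“Jackets dwell time up 21.4% after the move” is something an analyst can actually report; “the red smear got redder” is not.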

When we set out to build DM1, the single biggest problem on our minds was how to visualize store behavioral data effectively. We wanted a tool that fixed the problems with heatmaps and that could make in-store analytics come alive.

From our perspective, the store visualization capability in DM1 had to:

  1. Show the data in the context of the store
  2. Make it easy to understand what was there without having to resort to a paper planogram or external source
  3. Handle changes to the store seamlessly
  4. Be able to visualize the store at different levels – from departments down to tables – to support different kinds of analysis
  5. Provide quantifiable analytics and measurements so that an analyst could look at complex flows and be able to say EXACTLY what changed and by how much
  6. Support a variety of different metrics
  7. Provide a means to trend metrics over time no matter how often the store layout changed

In other words, we wanted to make store visualization useful.

In my next post, I’ll show you what we built in DM1 and how it fulfills all of these requirements.