Lidar Data Quality

By Gary Angel

May 7, 2024

An Introduction to Lidar Data Cleaning


In the last couple of years, lidar has become a foundational technology in people measurement. Along with camera and electronics, it’s one of the three basic technologies used to understand what’s happening in a physical space. There are plenty of other options (heat, radar, pressure, beam, etc.), but camera, electronics, and lidar are the big three. And of those, lidar is the technology most broadly applicable to the class of problems we tackle (full journey analytics).


Camera remains the default technology for point and threshold measurement. It’s inexpensive, very reliable, and very accurate when measuring people crossing a threshold (door counting) or in a small space. Electronic (smart-device) tracking is mostly used for tracking long journeys over very large spaces with lots of spatial transitions. If you want to measure a passenger’s journey time from an airport’s access road to a gate, electronics is your only real option. But for measuring the detailed journey of people or things in most spaces (e.g., stores, airport terminals, factory floors, train cars, football fields, public squares, amusement park lines, etc.), neither camera nor electronics is ideal. Indoors, camera can do the job, but for any space larger than about 10,000 square feet the number of cameras necessary becomes daunting. Electronics captures only a small subset of the population, has terrible positional accuracy, and measures only occasional snapshots of the journey.


Lidar, on the other hand, has great coverage and can be used for almost any space. It works indoors or outdoors and with any kind of ceiling. It’s very accurate at identifying people (and other moving objects), and it tracks them at a very high rate (10 frames per second). This makes it the best choice for most full-journey applications where we want detailed and accurate coverage of a location.


Yet lidar isn’t perfect. No people-measurement technology comes close to perfection. We spend a large chunk of our time cleaning data (as does almost everyone focused on analytics), and it’s work that’s never quite done. If you’re using our DM1 people-measurement platform, you may not have to know a lot about these issues (though a good analyst will always benefit from understanding potential data quality problems), but a lot of people ingest data directly from lidar Perception software. There’s nothing wrong with that. Not everybody needs a full-on people-measurement platform. But if you do that, you’ll almost certainly find large and potentially crippling problems in the raw data. In this blog series, I’m going to explain what those data quality issues are and how they can best be tackled.


Understanding Lidar Systems


To understand lidar data quality problems, you need to understand a little about how lidar systems work and the three pieces that make up a typical system:

[Figure: Lidar Data Quality – the three layers of a typical lidar system: sensor, Perception software, and the People-Measurement/Analytics layer]


Lidar sensors use beams of light to build a point-cloud image of a physical space. This point cloud is typically collected with each revolution of the sensor (or the equivalent for solid-state sensors).

The point cloud is ingested by Perception software. This software typically runs on a local processor directly connected to the sensor via ethernet. One or more lidar sensors may be connected to a single processor running the Perception software. When multiple sensors are connected, the Perception software must also blend objects across sensors. This process is called fusion, and it’s critical to ensuring that an object is tracked consistently across the entire field-of-view. The output from the Perception layer is a timestamped list of identified objects and positions. Every record generated by the Perception layer has five basic components: a timestamp, an object id, an object position, an object classification, and the object dimensions.

This data is then ingested by a people-measurement platform. That might be something as robust and comprehensive as our DM1 platform, but you might also be ingesting the data directly into a data lake or a general-purpose analytics package like Tableau. I’ll refer to this as the People-Measurement-Platform or Analytics layer.
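
To make that concrete, here’s a minimal sketch (in Python, with illustrative field names – the actual schema varies from one Perception vendor to another) of what a single Perception-layer record looks like:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

@dataclass
class PerceptionRecord:
    """One row of Perception-layer output (field names are illustrative)."""
    timestamp: datetime            # when the frame was captured
    object_id: int                 # tracking id assigned by the Perception software
    position: Tuple[float, float]  # x/y location in the site's coordinate system (meters)
    classification: str            # e.g. "person", "vehicle", "bicycle", "unknown"
    dimensions: Tuple[float, float, float]  # bounding-box length, width, height (meters)

# A 10 fps sensor emits one record per tracked object per frame:
record = PerceptionRecord(
    timestamp=datetime(2024, 5, 7, 9, 15, 0, 100000),
    object_id=4521,
    position=(12.4, 3.8),
    classification="person",
    dimensions=(0.5, 0.6, 1.7),
)
```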


Each of these three components is critical to overall data quality. Lidar sensors differ in very material ways, including beam density, beam pattern, frame rate, and range. The Perception software is responsible for – and may vary in – four foundational aspects of data quality: identifying discrete objects, classifying those objects correctly, fusing them across multiple sensors, and positioning them in the space. Of these, the first two are the most error-prone, though fusion is not without challenges. Finally, the people-measurement/analytics layer isn’t necessarily responsible for any data quality issues – but it’s often where you need to clean up what happens below.


Lidar Data Quality Problems


So what can happen? Broadly, data quality problems come in four basic flavors that map pretty closely to what lidar Perception software does: object identification, object classification, object fusion, and object starting and stopping behavior.


Object identification problems occur when the Perception software layer incorrectly identifies an object in the space or misses an object moving in the space. There is some cross-over with object classification issues here, but I think it’s intuitive to restrict object classification to the choice between supported objects (such as person, vehicle, bicycle, etc.). If Perception software picks up a swinging door or a reflection on glass as an object, that’s a failure of identification, not classification. Similarly, if the Perception software identifies a suitcase as a person or identifies two people as a single person (a particularly common lidar problem), that’s really a failure to properly identify the objects moving in the environment.
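
One common downstream clean-up for identification failures is to screen out “ghost” tracks. A swinging door or a glass reflection tends to produce an object that lives for a while but never really goes anywhere. Here’s a minimal sketch of that heuristic; the thresholds and the track format are assumptions, and in practice you’d also want to exempt genuinely stationary visitors (people waiting in line, for example) or restrict the filter to known problem locations:

```python
import math

def looks_like_ghost(track, min_net_displacement=1.0, min_duration=2.0):
    """Heuristic filter for identification failures such as swinging doors
    or glass reflections: real visitors move; ghosts jitter in place.
    `track` is a time-ordered list of (timestamp_seconds, x, y) tuples."""
    if len(track) < 2:
        return True  # single-frame blips are almost never real people
    t0, x0, y0 = track[0]
    t1, x1, y1 = track[-1]
    net_displacement = math.hypot(x1 - x0, y1 - y0)  # straight-line distance (meters)
    duration = t1 - t0
    # A long-lived object whose net movement stays under ~1 meter is a
    # candidate artifact; a real filter would also check location context.
    return duration >= min_duration and net_displacement < min_net_displacement
```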


Object classification issues happen next – when the Perception software decides what label to attach to the object. Since classification happens in real time, a single tracked object will often have multiple object classifications attached to it over time. It’s not unusual, for example, for an object to be classified as unknown when it first pops into view. From a distance, the point cloud may be small or heavily occluded, and it may be difficult or impossible to determine the size and shape of the object. As it comes closer to one or more sensors, however, the point-cloud density will (usually) grow, and the Perception software will classify the object from among the set of types it is built to distinguish. Out of the box, most lidar Perception software systems have a default set of objects they identify, the most common being person, vehicle, large vehicle, and bicycle. In theory, however, almost any object can be identified from a point cloud, and there is no reason why Perception software could not identify wheelchairs, deer, roller suitcases, or boat types. If a shape is reasonably distinctive, then lidar should be able to classify it. That being said, the full detail of the point cloud is available only at the Perception software level. Once the Perception software has classified something, all that detail is lost. The Analytics or People-Measurement layer doesn’t have any information about the shape of the point cloud and cannot be used for custom classifications.
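
Because the per-frame labels can change over a track’s life, analytics-side consumers usually need to collapse them into a single label per object id. A simple (hypothetical) approach is a majority vote that ignores the early unknown frames:

```python
from collections import Counter

def resolve_classification(frame_labels):
    """Collapse the per-frame labels of one tracked object into a single
    class. 'unknown' frames (typically early, distant, or occluded) are
    ignored unless they are all we have."""
    counts = Counter(label for label in frame_labels if label != "unknown")
    if not counts:
        return "unknown"
    return counts.most_common(1)[0][0]

# e.g. a track that started as unknown and firmed up as it approached a sensor:
print(resolve_classification(["unknown", "unknown", "person", "person", "person"]))  # person
```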


Fusion problems commonly occur when the lidar system loses track of an object. This can happen for many reasons, but often it’s as simple as the object being briefly blocked (occluded) from view. Sometimes a measurement system will have gaps in its coverage where all objects are missed. But even when coverage is complete, every lidar system is vulnerable to occlusion. Think about a small child surrounded by a family. Unless the lidar is directly above the child, its line-of-sight will be blocked. In crowds, occlusion can and will happen ALL the time, at random points throughout the scene and in very unpredictable ways. When that happens, a lidar system will typically issue a new object identifier when the object becomes visible again. What started out as Person X is now Person Y. This track breakage is endemic in both camera and lidar people-measurement systems, and it has a devastating impact on full-journey metrics. If a track breaks two or three times during a visit, the reported number of visits will be three or four times the actual, the average time spent will be a third or a quarter of the actual, and it will be impossible to map the success of behaviors in the early segments to outcomes in the final one.
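
At the analytics layer, the usual remedy is track stitching: when one object id ends and another starts nearby shortly afterward, treat them as candidates for the same person. Here’s a minimal sketch of that gating logic; the track format, the two-second gap, and the walking-speed limit are all assumptions you’d tune per site:

```python
import math

def stitch_candidates(ended_track, new_tracks, max_gap_s=2.0, max_speed_mps=2.5):
    """Find plausible continuations of a track that broke (e.g., from occlusion).
    A new object id that appears shortly after the break, close enough that a
    walking person could have covered the gap, is a stitch candidate.
    Tracks are dicts: {'id': ..., 'end_t'/'start_t': seconds,
    'end_xy'/'start_xy': (x, y) in meters}."""
    t_end, (x_end, y_end) = ended_track["end_t"], ended_track["end_xy"]
    candidates = []
    for nt in new_tracks:
        dt = nt["start_t"] - t_end
        if not (0 < dt <= max_gap_s):
            continue  # appeared too late (or before the break) to be the same person
        dist = math.hypot(nt["start_xy"][0] - x_end, nt["start_xy"][1] - y_end)
        if dist <= max_speed_mps * dt:  # reachable at a plausible walking speed
            candidates.append((dist, nt["id"]))
    # Nearest candidate first; the caller decides how to break ties.
    return [obj_id for _, obj_id in sorted(candidates)]
```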


Finally, lidar systems sometimes have issues with the start or end of a track. In particular, most systems will take 1-2 seconds to identify a new object in a space. That may not seem like a big deal, but the delay can be impactful. Suppose you’re counting ins and outs at an entrance by measuring the number of tracks that start on one side of the door and move to the other side. If the lidar is outside the door, it will have plenty of time to identify and track every object that starts outside the door and goes through it. Your IN count should be close to perfect. But objects emerging from the door might not be identified for a second or two after they appear. That means your count of people coming out of the door may be too low or even zero. It will look as if people just appeared in the area with the lidar – perhaps a foot or two beyond the door. This asymmetry between track starts and stops can plague implementations that aren’t used to lidar data. Lidar systems may also have difficulty deciding when to stop tracking an object. When tracking curbside, we see many lidar systems track a person into a car and then continue tracking them as the car motors away – our software showing a person and a car in the same space, with the person moving very fast for a few seconds! That’s not always a big deal, but it’s the kind of mistake that can mess up metrics, near-miss detection, safety triggers, and even basic counting.
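
The person-into-a-car problem, at least, is easy to screen for after the fact: a track classified as a person shouldn’t sustain vehicle speeds. Here’s a minimal sketch that truncates a person track at the first implausible frame; the 4 m/s cutoff (roughly a fast jog) and the track format are assumptions:

```python
import math

def truncate_implausible_person_track(track, max_person_speed=4.0):
    """Cut a 'person' track at the first frame where its speed exceeds what
    a person can plausibly sustain (e.g., the track has jumped onto a
    departing car). `track` is a time-ordered list of (timestamp_seconds, x, y)."""
    for i in range(1, len(track)):
        t0, x0, y0 = track[i - 1]
        t1, x1, y1 = track[i]
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order frames
        speed = math.hypot(x1 - x0, y1 - y0) / dt  # meters per second
        if speed > max_person_speed:
            return track[:i]  # keep only the believable portion
    return track
```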


Additional Considerations


There’s no such thing as data quality in the abstract – data quality is always relative to use. And in terms of lidar data quality, there are two basic use-case distinctions that make a profound difference to what matters and to the types of improvements you can make. The first is whether you are using the data in real time or historically. It is MUCH easier to clean data for analytic purposes than to do it in real time. Not only is it easier, but a number of cleaning techniques simply cannot be applied in real time. You should expect that real-time data cleaning will be less complete and harder to do than data cleaning for historical analysis.


The distinction between public-safety, operational, and purely analytic use-cases is similar. Lidar is often used in scenarios where public safety is the paramount objective, and in those use-cases you cannot afford to miss something important. That means you have to tune lidar measurement to make sure you never make an error of omission. Unfortunately, it is the way of the world that if you cannot afford to miss anything, you’re likely to create false positives – and that, too, can be a problem. But in public safety you’re likely to accept that trade-off. In operations, you’re likely striving for the most efficient balance of data quality vs. effort vs. impact. For pure analytics, you usually just need to make sure that your data is consistently “good enough to use.” The difference in the way you approach lidar data quality in each of these scenarios can be profound.


Perception Layer vs Analytics Layer


Not every data quality problem can (or should) be addressed at each layer of the system. The better your sensors, the better your data quality at every level. But no sensor can fix problems of occlusion or coverage. And, as already noted in the discussion of object classification, the Perception and Analytics layers have different data available to them, and that determines their role in data quality. My assumption throughout this series of posts will be that – like Digital Mortar – you’re working above the Perception layer, and that’s where I’ll focus most of the attention. However, some problems can only be tackled at the Perception level, and when that’s the case, I’ll at least try to note what kind of tuning or vendor options might be available.


Any discussion of data quality in people measurement has to be sensitive to two conflicting aims. People sometimes assume that any data quality issues preclude data use. That’s just not true. And while vendors are notoriously loath to discuss data quality frankly, the simple fact is that every people-measurement system (and pretty much every system) has data quality problems. The flip side of this issue is the equally blithe and delusional idea that data quality issues will “come out in the wash” as long as you don’t press the data too hard. Not only is that fatally untrue if you’re using people-measurement for public safety purposes, but even when your purposes are solely analytic, there’s no reason to think that ANY interpretation of the data is possible until your data has achieved a certain level of accuracy. That accuracy, as I pointed out above, is entirely dependent on use. I often see RFPs from companies that request 90% or 95% accuracy – which is when I know whoever wrote the RFP doesn’t understand how lidar data quality works. Is that 90% supposed to be about object identification? Classification? Track breakage? What exactly could it mean when applied to fundamentally different aspects of a measurement system?


In general, we’ve found that lidar can be very accurate and can nearly always be made accurate enough for all but the most demanding applications (where it can only sometimes be accurate enough). Over this series of posts, I hope to explain why lidar people-measurement data can rarely be used as-is from the Perception layer, yet can nearly always be made fit for use.
