Lidar Perception Software and Crowd Analytics


By Gary Angel


July 10, 2023

Perception Software and Lidar

Lidar is becoming the go-to technology for people-measurement applications, combining accuracy, precision, privacy, and excellent real-time performance. Those measurement capabilities, combined with lidar’s broad coverage and environmental flexibility, make it suitable for almost any people-measurement application that goes beyond door counting. It’s terrific for queue measurement, perimeter monitoring, flow analytics and – especially – full journey tracking.


Yet if you buy a lidar sensor, you get a mechanical device. Every tenth of a second, it will produce and send a point cloud (a binary mapping of the space with millions of data points). Perception software ingests that point cloud and identifies and classifies the objects in the location. In many respects, buying a lidar sensor is like buying a smartphone. The specs and capabilities of the hardware matter: beams, point-cloud density, field of view, and field reliability are all important. But the software running on top of that sensor also makes a huge difference. People buy iPhones for iOS, not for their processors’ core counts and clock speeds, and in the world of lidar, the perception software is what turns the raw capabilities of the sensor into something useful.


How Perception Software Works

The point-cloud generated by a lidar typically contains millions of points. Each point represents the return of a pulsed beam of light sent in a specific direction. By measuring the time-of-flight, the device can say exactly how far away the object that reflected that pulse was. By accumulating all those points, the perception software can cluster points that are spatially close and moving in tandem (and potentially of similar reflectivity). You’ll often see very high-resolution images of lidar that make this seem like it would be trivial:


[Image: a high-resolution lidar point-cloud rendering]


But as objects get closer to the edge of the lidar’s field of view, or where crowds form, it can be much harder to do.


This image is a more realistic look, and even here, the person probably has more than 100 points defining their point cloud.


[Image: a sparser, more realistic lidar point cloud of a person]


At the edge of the field of view, or if another person were blocking the lower part of this person’s body, the point cloud might be more like 20-30 points. Confidently classifying an object from 20-30 points isn’t easy and often isn’t possible.


The gray area between sharp, complete point-cloud differentiation and having too few points for any sort of reliable identification is where the art and science of perception software come together. It’s where sharp differences in performance between software packages will emerge. Suffice it to say that in most deployments, this gray area is large and important.
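To make the clustering step concrete, here is a minimal sketch of grouping spatially close points into candidate objects. The distance threshold and minimum-point cutoff are illustrative values, not any vendor’s defaults, and real perception software also weighs motion and reflectivity:

```python
from collections import deque

def cluster_points(points, max_gap=0.25, min_points=5):
    """Group 3-D points into clusters: any two points within `max_gap`
    meters of each other (directly or via a chain of neighbors) share a
    cluster. Clusters smaller than `min_points` are discarded as noise --
    the sparse-point gray area the text describes."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], deque([seed])
        while frontier:
            i = frontier.popleft()
            near = [j for j in unvisited
                    if sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                    <= max_gap ** 2]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        if len(cluster) >= min_points:
            clusters.append(sorted(cluster))
    return clusters
```

With a dense cloud this cleanly separates people; with 20-30 points per person, the `min_points` threshold is exactly the knob that decides whether a sparse cluster is kept or dropped.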


Correct object identification and classification isn’t the only area where perception software matters. Continuous tracking of objects is a key part of journey analytics. It isn’t enough to identify a cloud of points as a person, the software must be able to follow that cloud as it moves. This, too, is trivial for following a single person in a big open space but can be nearly impossible in a crowded train station.
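The continuous-tracking problem can be sketched as frame-to-frame association. This is a greedy nearest-neighbor version with an assumed maximum per-frame movement; production trackers also use velocity prediction, box size, and point counts, which is what makes crowded stations tractable at all:

```python
import math

def associate(tracks, detections, max_jump=1.0):
    """Match existing tracks to new-frame detections by nearest neighbor.
    `tracks` maps track_id -> (x, y) centroid; `detections` is a list of
    (x, y) centroids from the current frame. Unmatched detections become
    new tracks. A sketch of the core idea, not a full tracker."""
    updated, used = {}, set()
    next_id = max(tracks, default=-1) + 1
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_jump
        for i, (dx, dy) in enumerate(detections):
            if i in used:
                continue
            d = math.hypot(dx - tx, dy - ty)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            updated[tid] = detections[best]
    for i, det in enumerate(detections):
        if i not in used:
            updated[next_id] = det
            next_id += 1
    return updated
```

In an empty atrium this works almost perfectly; in a crowd, many detections fall within `max_jump` of many tracks, and that ambiguity is where tracker quality diverges.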


Finally, performance matters. That’s true for most kinds of software, but it’s especially true for perception software. Many people-measurement use cases require real-time data. If the perception software can’t keep up, then lidar isn’t a viable solution. In crowded environments, perception software performance is likely the single most important factor in product selection.


Naturally, there is also a cost element tied to overall software performance. The more performant the perception software, the less hardware it requires to run a given number of sensors. On larger deployments, better efficiency means you save money on licensing, hardware or both.


Incidentally, you can’t compare performance between systems simply by using a vendor’s estimate of the number of sensors the perception software can handle. Such estimates are fraught with ambiguity and marketing hype. They are also not apples-to-apples unless you’re looking at the same or similar sensors and servers. A 128-beam puck lidar will create a far denser point cloud than an eight-beam lidar. All that density is great, but it does mean that the perception software must do a lot more work. That’s also why third-party perception software providers will sometimes scale pricing by sensor type.



Under most circumstances, price wouldn’t need its own discussion. We all know that price matters, and most people can read a price quote perfectly well. But there’s a lot of flux in the perception software market and its pricing strategies. Many manufacturers are moving from perpetual to annual licensing, since recurring revenue is so beloved in the investment community. Pricing models also vary between per-server and per-sensor. This can be maddening, since it means the software you’re deploying on “server 1” may cost 2-3x the exact same software you’re deploying on “server 2”. It also means that the cheapest system configuration may depend on a tricky balance of sensor cost, coverage, and price combined with per-sensor software licensing. Nor does the complexity end there. Both manufacturers and third-party perception software providers will often position the software as part of a full solution, meaning that they are tacking on their compute hardware – sometimes with overpriced or unnecessary line items. All of this makes apples-to-apples comparison of anything except the final TOTAL price of a system nearly impossible.



Perception Software Key Capabilities


Matrixing (Fusing) Sensors

Though lidar sensors provide great coverage, most lidar deployments will require multiple sensors. Certainly, the vast majority of the full journey tracking deployments that we do require more than one sensor. If you need multiple sensors to cover an area, then the ability of the perception software to matrix the sensors together to provide a single view and continuous object tracking is essential. This isn’t just a journey-tracking problem (the way it is for matrixed cameras). With lidar, the perception software should use all the beams from every relevant sensor to define the point cloud. That’s the only way to get optimal object identification and tracking.


The ability to do this is mostly an under-the-hood kind of thing, but when you first install lidar sensors, you need to set up this initial calibration. The process is at least somewhat manual and ranges from quite convoluted to pretty straightforward depending on the package you’re working with. If you’re evaluating perception software, don’t neglect to try this process yourself and see how easy/hard it is.


Because this is a one-time operation, it would probably be a mistake to throw out a perception software package if it was a bit clumsy here, but this process will give you a good sense of how polished and well designed the software is.


Object Classification

The only way to test the accuracy of object classification is with side-by-side tests in the field. And often, what you’re testing is the combination of sensor and software. That’s okay – that’s usually what you’ll be buying.


However, even before you get into the field, you should ascertain whether the perception software automatically provides the object classifications you care about. Sure, everything will identify people. But if you care about bicycles or wheelchairs or carts or trucks, you need to make sure this is supported out of the box.


Tuning & Flexibility

Almost every perception software package provides a range of configuration options that can help tune tracking performance. These options may span different lidar models or may be specific to a single lidar make/model. Common configuration options include the minimum number of points before classifying an object, the “stillness” time before an object is counted as background, the frame rate, and the minimum and maximum box size of tracked objects. It’s worth determining the extent to which these settings are exposed for users to adjust. Mature solutions will have a user-friendly GUI for adjusting settings; in less mature products, these settings might only be editable by the software vendor’s own staff.
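As a concrete picture of these tuning levers, here is a hypothetical configuration object with a box-size filter. The field names and default values are invented for illustration and are not any vendor’s actual settings:

```python
from dataclasses import dataclass

@dataclass
class TrackerConfig:
    """Illustrative tuning knobs of the kind described above; names and
    defaults are hypothetical, not taken from any real product."""
    min_cluster_points: int = 20          # fewer points -> not classified
    background_still_secs: float = 30.0   # motionless this long -> background
    frame_rate_hz: float = 10.0
    min_box_m: tuple = (0.2, 0.2, 0.5)    # width, depth, height minimums
    max_box_m: tuple = (1.5, 1.5, 2.2)    # width, depth, height maximums

def passes_box_filter(cfg, box):
    """Reject tracked objects whose bounding box falls outside the
    configured size range (e.g. a pigeon below the minimum, a cart or
    truck above the maximum)."""
    return all(lo <= dim <= hi
               for lo, dim, hi in zip(cfg.min_box_m, box, cfg.max_box_m))
```

The value of exposing such settings is that a train station, a store, and a plaza each need different box limits and stillness thresholds.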


This is just a small sample of what can be available. The process of object identification and tracking is very complex and different packages provide many different levers to adapt it to different environments.


Are these important? They can be vital. The hard part is that you often won’t know what you need (if anything) until you start measuring your environment. That’s why starting with a smaller PoC in a space is often advisable. Real-world conditions not only make for better product comparison, they expose the tuning levers you need.


Feed Structures

The perception software needs to send the identified object data somewhere – usually to the cloud. Different packages have different strategies for doing this. Some use MQTT, some use raw sockets, and others POST the data over HTTPS.


There’s no right answer to what’s best, but there’s a decent chance your engineers will have a preference based on what they’ve done, the tools they use, and your broader IT software stack. We generally prefer that the perception software communicates via MQTT but since we’ve integrated with multiple packages this is definitely not a hard-and-fast requirement for us.
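As a sketch of what an MQTT-style feed might look like, here is a function that builds a topic and JSON payload for one tracked object. The topic scheme and field names are illustrative assumptions, not a standard:

```python
import json

def build_message(site, sensor_group, obj):
    """Build a hypothetical MQTT topic and JSON payload for one tracked
    object. Field names ('ts', 'id', 'class', 'box') are illustrative."""
    topic = f"perception/{site}/{sensor_group}/objects"
    payload = json.dumps({
        "ts": obj["ts"],          # epoch milliseconds
        "id": obj["id"],          # persistent track id
        "class": obj["class"],    # e.g. "person", "bicycle"
        "box": obj["box"],        # [x, y, z, width, depth, height]
    })
    return topic, payload

# Publishing is then one call with an MQTT client library, e.g. paho-mqtt:
#   client.publish(topic, payload, qos=1)
```

Whichever transport a package uses, what matters for integration is that the payload schema is documented and stable.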


It’s also important to see how configurable the perception software’s outbound feed is. At minimum, you’ll get the timestamp, object identifier, object classification, and X-Y-Z bounding box. You can’t live without any of this data.


Optionally, you may get velocity, points in the object-cloud, and classification confidence. Each can add value. Velocity is derivable from the underlying timestamp and positional data but getting it from the perception software can save you a bit of trouble. Getting points in the object-cloud and classification confidence are mainly useful for your people-measurement platform to use when doing stitching (putting broken tracks together) and other data cleansing (getting rid of ghosts). These are in the nice-to-have but not essential category.
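The velocity derivation mentioned above is straightforward from two consecutive feed records; this sketch assumes records carrying an epoch-millisecond timestamp and metre positions:

```python
def velocity(prev, curr):
    """Derive planar velocity (m/s) for a track from two consecutive
    records, each a dict with epoch-millisecond 'ts' and metre 'x', 'y'.
    This is the computation you do yourself when the feed omits velocity."""
    dt = (curr["ts"] - prev["ts"]) / 1000.0
    if dt <= 0:
        return (0.0, 0.0)   # guard against duplicate/out-of-order frames
    return ((curr["x"] - prev["x"]) / dt, (curr["y"] - prev["y"]) / dt)
```

Getting it precomputed from the perception software mainly saves you from handling frame gaps and out-of-order delivery yourself.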


Device Management

If you’ve ever worked with IoT devices in the field, you won’t underestimate the importance of good device management. The perception software is traditionally the control route to the sensors. It’s not going to power-cycle sensors for you, but it should be able to alert you when there’s a problem with a sensor.


That’s particularly important in matrixed systems since the loss of a single sensor may not be obvious in the downstream data but can have terrible consequences for data quality.
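The core of that kind of sensor health monitoring is simple heartbeat tracking; a minimal sketch, with an assumed per-sensor last-frame timestamp and an illustrative timeout:

```python
import time

def stale_sensors(last_seen, now=None, timeout_secs=5.0):
    """Flag sensors whose most recent point cloud arrived more than
    `timeout_secs` ago. `last_seen` maps sensor_id -> epoch seconds of the
    last received frame. In a matrixed deployment, even one stale sensor
    should raise an alert, because the downstream data degrades silently."""
    now = time.time() if now is None else now
    return sorted(sid for sid, ts in last_seen.items()
                  if now - ts > timeout_secs)
```

Whether the perception software exposes this itself or just emits per-sensor heartbeats you can monitor, some equivalent of this check needs to exist.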


Ideally, the perception software should be enterprise ready on its own account (providing access control and change logging), and the more device monitoring and management capabilities it provides, the better.



The Last Mile (Going Beyond Perception Software)

The perception software is an essential piece in any lidar software stack. But the software stack usually doesn’t end there. In almost all cases, the perception software then passes the refined data to a people measurement platform that cleans and contextualizes it against a digital mapping of the space and provides real-time and historical analytics. Don’t think of this as dashboarding (though it may include dashboarding). The foundational elements of a people-measurement platform are data cleansing and stitching (essential for data quality), mapping and contextualization, and specialized KPI construction (for things like line stations open and wait times). In most cases, you’ll need this kind of platform even if you’re planning to pipe the data into a powerful analytics tool like Tableau. In fact, having the people measurement platform will greatly facilitate that.
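To illustrate the stitching step a people-measurement platform performs, here is a greedy sketch that joins broken tracks when one ends and another begins shortly after, close by in space. The gap and distance thresholds are illustrative:

```python
import math

def stitch(tracks, max_gap_secs=2.0, max_jump_m=1.5):
    """Join broken tracks: each track is a time-ordered list of
    (t_secs, x, y) samples. If a track starts within `max_gap_secs` of
    another track's end and within `max_jump_m` of its last position,
    treat them as the same person and merge them."""
    tracks = sorted(tracks, key=lambda tr: tr[0][0])  # by start time
    out = []
    for tr in tracks:
        for merged in out:
            t0, x0, y0 = tr[0]        # candidate track's first sample
            t1, x1, y1 = merged[-1]   # existing track's last sample
            if (0 <= t0 - t1 <= max_gap_secs
                    and math.hypot(x0 - x1, y0 - y1) <= max_jump_m):
                merged.extend(tr)
                break
        else:
            out.append(list(tr))
    return out
```

Real platforms also weigh classification, box size, and motion continuity before joining tracks, but this is the shape of the problem that sits between the perception feed and trustworthy journey analytics.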
