Robust 3D Object Detection using Probabilistic Point Clouds

Under Review

University of Wisconsin-Madison




Abstract

LiDAR-based 3D cameras output point clouds, a canonical 3D scene representation used in a wide range of 3D scene understanding applications. Although modern LiDARs provide high-fidelity geometric information under nominal conditions, they often perform poorly in non-ideal real-world scenarios, producing erroneous point clouds. These errors, rooted in noisy raw LiDAR measurements, propagate to downstream vision models, resulting in severe loss of accuracy. This is because the conventional 3D processing pipelines used to construct point clouds from raw LiDAR sensor measurements do not retain the noise and uncertainty information available in the raw sensor data.

We propose a novel 3D scene representation called Probabilistic Point Clouds (PPC), where each point is augmented with a probability attribute that encapsulates the measurement uncertainty (confidence) in the raw sensor data. We further introduce inference approaches that leverage PPC for robust 3D object detection; these methods are versatile and can be used as computationally lightweight drop-in modules in 3D inference pipelines. We demonstrate, via both simulations and real captures, that PPC-based 3D processing methods outperform several baselines with both LiDAR-only and Camera-LiDAR fusion models, across challenging indoor and outdoor scenarios involving small, distant, and low-albedo objects, as well as strong ambient light.



Our LiDAR Setups

We use raw LiDAR sensor data to derive a per-point probability measure, which we use to construct the Probabilistic Point Cloud.
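As an illustration of the representation, the sketch below attaches a probability attribute to each 3D point from raw photon-count statistics. The inputs `peak_counts` and `ambient_counts` are hypothetical names for per-point signal and ambient measurements, and the confidence heuristic is a minimal assumption, not the paper's exact formulation.

```python
import numpy as np

def build_ppc(points, peak_counts, ambient_counts):
    """Construct a Probabilistic Point Cloud (PPC): an (N, 4) array
    of [x, y, z, probability] rows.

    points         : (N, 3) array of 3D point coordinates.
    peak_counts    : (N,) photon counts at each point's signal peak
                     (hypothetical raw-sensor quantity).
    ambient_counts : (N,) estimated ambient photon counts
                     (hypothetical raw-sensor quantity).
    """
    # Illustrative confidence heuristic: the fraction of detected
    # photons attributable to the laser return rather than ambient
    # light. Points dominated by ambient light get low probability.
    signal = np.maximum(peak_counts - ambient_counts, 0.0)
    prob = signal / np.maximum(peak_counts, 1e-9)
    # Append the probability attribute as a fourth column.
    return np.concatenate([points, prob[:, None]], axis=1)
```

A downstream model can then consume the fourth channel directly, or threshold/weight points by it, instead of discarding the uncertainty during point-cloud construction.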


3D Object Detection using Probabilistic Point Clouds

Inference with PPC is more robust for small, distant objects.
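One way such a lightweight drop-in module could look: probability-weighted voxel aggregation, where low-confidence points contribute less to each voxel's feature. This is a minimal sketch under assumed names (`ppc` as an (N, 4) array of [x, y, z, probability]); it illustrates the idea of propagating per-point uncertainty into a detector's input, not the paper's exact module.

```python
import numpy as np

def voxelize_weighted(ppc, voxel_size=0.2):
    """Probability-weighted voxelization of a Probabilistic Point Cloud.

    Returns per-voxel centroids (probability-weighted mean position)
    and per-voxel total probability mass, which a detector could use
    as a confidence feature.
    """
    xyz, p = ppc[:, :3], ppc[:, 3]
    # Assign each point to a voxel by integer grid coordinates.
    keys = np.floor(xyz / voxel_size).astype(np.int64)
    uniq, inv = np.unique(keys, axis=0, return_inverse=True)
    centroids = np.zeros((len(uniq), 3))
    weights = np.zeros(len(uniq))
    # Accumulate probability mass and probability-weighted coordinates.
    np.add.at(weights, inv, p)
    for d in range(3):
        np.add.at(centroids[:, d], inv, p * xyz[:, d])
    centroids /= np.maximum(weights[:, None], 1e-9)
    return centroids, weights
```

With this weighting, a spurious point with near-zero probability barely shifts its voxel's centroid, which is one mechanism by which uncertainty-aware inference stays robust on small, distant, or low-albedo objects.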