CBILR: Camera Bi-Directional LiDAR-Radar Fusion for Robust Perception in Autonomous Driving

EasyChair Preprint no. 13660

7 pages · Date: June 14, 2024

Abstract

Safe and reliable autonomous driving hinges on robust perception in challenging environments. Multi-sensor fusion, particularly camera-LiDAR-Radar integration, plays a pivotal role in achieving this goal, since the sensors complement one another: cameras provide dense semantics but are sensitive to lighting and weather, LiDAR provides precise geometry but degrades in rain and fog, and Radar remains reliable in poor visibility but yields sparse returns. Existing pipelines are often constrained by adverse weather conditions, where cameras and LiDAR suffer significant degradation. This paper introduces the Camera Bi-directional LiDAR-Radar (CBILR) fusion pipeline, which leverages these complementary strengths to enhance the LiDAR and Radar point clouds. CBILR's key innovation is a bi-directional prefusion step between LiDAR and Radar that yields richer feature representations. First, prefusion combines LiDAR and Radar points to compensate for each sensor's individual weaknesses. Next, the pipeline fuses the prefused features with camera features in bird's-eye-view (BEV) space, producing a comprehensive multi-modal representation. Experiments demonstrate that CBILR outperforms state-of-the-art pipelines, achieving superior robustness in challenging weather scenarios. The code is available at https://github.com/Artimipt/CBILR.
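The two-stage structure described above can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the authors' actual architecture: the class names, the mean-pooled cross-modal context used for the bi-directional exchange, and the 1x1 convolution for BEV fusion are placeholders; the real implementation is in the linked repository.

import torch
import torch.nn as nn

class BiDirectionalPrefusion(nn.Module):
    """Stage 1 (sketch): each point-cloud branch is enriched with pooled
    context from the other modality, one simple form of bi-directional
    exchange. Hypothetical, not the paper's exact design."""
    def __init__(self, lidar_dim, radar_dim, hidden):
        super().__init__()
        self.radar_ctx = nn.Linear(radar_dim, hidden)  # Radar -> LiDAR branch
        self.lidar_ctx = nn.Linear(lidar_dim, hidden)  # LiDAR -> Radar branch
        self.lidar_out = nn.Linear(lidar_dim + hidden, hidden)
        self.radar_out = nn.Linear(radar_dim + hidden, hidden)

    def forward(self, lidar, radar):
        # lidar: (B, N, lidar_dim), radar: (B, M, radar_dim)
        r_ctx = self.radar_ctx(radar.mean(dim=1, keepdim=True))  # (B, 1, hidden)
        l_ctx = self.lidar_ctx(lidar.mean(dim=1, keepdim=True))  # (B, 1, hidden)
        lidar_fused = self.lidar_out(
            torch.cat([lidar, r_ctx.expand(-1, lidar.size(1), -1)], dim=-1))
        radar_fused = self.radar_out(
            torch.cat([radar, l_ctx.expand(-1, radar.size(1), -1)], dim=-1))
        return lidar_fused, radar_fused

class CameraBEVFusion(nn.Module):
    """Stage 2 (sketch): concatenate prefused point features with camera
    features on a shared BEV grid. The projection/rasterization step that
    puts both modalities onto that grid is omitted here."""
    def __init__(self, point_dim, cam_dim, out_dim):
        super().__init__()
        self.fuse = nn.Conv2d(point_dim + cam_dim, out_dim, kernel_size=1)

    def forward(self, point_bev, cam_bev):
        # point_bev: (B, point_dim, H, W), cam_bev: (B, cam_dim, H, W)
        return self.fuse(torch.cat([point_bev, cam_bev], dim=1))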

Keyphrases: autonomous vehicle, camera, fusion, LiDAR, radar, self-driving

BibTeX entry
BibTeX has no dedicated entry type for preprints; the following entry is a workaround that produces the correct reference:
@booklet{EasyChair:13660,
  author = {Arthur Nigmatzyanov and Gonzalo Ferrer and Dzmitry Tsetserukou},
  title = {{CBILR}: Camera Bi-Directional {LiDAR}-{Radar} Fusion for Robust Perception in Autonomous Driving},
  howpublished = {EasyChair Preprint no. 13660},
  year = {EasyChair, 2024}}