4D RADAR-CAMERA SENSOR FUSION FOR ROBUST VEHICLE POSE ESTIMATION IN FOGGY ENVIRONMENTS

The integration of cameras and millimeter-wave radar into sensor fusion algorithms is essential to ensure robustness and cost-effectiveness in vehicle pose estimation. Due to the low resolution of traditional radar, several studies have investigated 4D imaging radar, which provides range, Doppler, azimuth, and elevation information at high resolution. This paper presents a method for robustly estimating vehicle pose through 4D radar and camera fusion, exploiting the complementary characteristics of each sensor.
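To make the four radar quantities concrete, the sketch below converts a single 4D radar return (range, azimuth, elevation, Doppler) into a Cartesian point plus its radial closing speed. The function name and axis conventions are illustrative assumptions, not from the paper; real radar drivers differ in their coordinate frames.

```python
import numpy as np

def radar_point_to_cartesian(r, azimuth, elevation, doppler):
    """Convert one 4D radar return into a Cartesian point (x, y, z)
    plus its Doppler (radial) velocity.

    Assumed convention (illustrative): x forward, y left, z up,
    angles in radians, negative Doppler = target approaching.
    """
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.array([x, y, z]), doppler

# Example: a target 10 m straight ahead, closing at 2 m/s
point, v_radial = radar_point_to_cartesian(10.0, 0.0, 0.0, -2.0)
```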

Leveraging the single-view geometry of the detected vehicle bounding box, the lateral position is derived from the camera images, and the yaw rate is computed through feature matching between consecutive images. The high-resolution 4D radar data are used to estimate the heading angle and forward velocity of the target vehicle from the position and Doppler velocity information. Finally, an extended Kalman filter (EKF) fuses the physical quantities obtained by each sensor, yielding a more robust pose estimate.
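The fusion step above can be sketched as a minimal EKF over a 2D state [x, y, yaw, v]: the camera contributes a lateral-position measurement and drives the yaw-rate term of the motion model, while the radar contributes heading and forward-velocity measurements. The state layout, motion model, and noise values here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

class PoseEKF:
    """Minimal EKF sketch: state = [x, y, yaw, v].
    Camera supplies lateral position (and yaw rate for prediction);
    radar supplies heading angle and forward velocity.
    All noise magnitudes are assumed for illustration."""

    def __init__(self):
        self.x = np.zeros(4)                      # state estimate
        self.P = np.eye(4)                        # state covariance
        self.Q = np.diag([0.1, 0.1, 0.01, 0.1])  # process noise (assumed)

    def predict(self, dt, yaw_rate):
        x, y, yaw, v = self.x
        # Constant-velocity / constant-turn-rate motion model
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + yaw_rate * dt,
                           v])
        # Jacobian of the motion model w.r.t. the state
        F = np.array([[1, 0, -v * np.sin(yaw) * dt, np.cos(yaw) * dt],
                      [0, 1,  v * np.cos(yaw) * dt, np.sin(yaw) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, H, R):
        """Standard linear update for a measurement z = H x + noise."""
        innov = z - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ H) @ self.P

ekf = PoseEKF()
ekf.predict(dt=0.1, yaw_rate=0.0)                  # yaw rate from image matching
ekf.update(np.array([0.2]),                        # camera: lateral position y
           np.array([[0, 1, 0, 0]]), np.eye(1) * 0.5)
ekf.update(np.array([0.05, 5.0]),                  # radar: heading angle, speed
           np.array([[0, 0, 1, 0],
                     [0, 0, 0, 1]]), np.eye(2) * 0.1)
```

Because each sensor's measurement enters through its own H and R, a dropped camera frame simply skips the camera update while the radar update still constrains the state, which is the robustness mechanism the fusion relies on.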

To validate the proposed method, experiments were conducted in foggy environments covering both straight and curved driving scenarios. The results indicate that the camera-based method degrades due to frame loss in visually challenging conditions such as fog, whereas the proposed method maintains superior accuracy and robustness.
