SCIENTIFIC PUBLICATION #3 | IMKD: Intensity-Aware Multi-Level Knowledge Distillation for Camera-Radar Fusion
A new contribution from DFKI showcases advances in multimodal perception with the paper “IMKD: Intensity‑Aware Multi‑Level Knowledge Distillation for Camera–Radar Fusion,” authored by Shashank Mishra, Karan Patil, Didier Stricker and Jason Rambach. Accepted for presentation at IEEE/CVF WACV 2026 and currently available as a preprint, the work introduces an enhanced strategy for combining radar and camera data in 3D object detection.
What the Paper Brings Forward
IMKD proposes a three‑stage intensity‑aware distillation framework designed to improve radar–camera fusion without compromising the unique strengths of each sensor. Instead of forcing alignment between modalities, the method uses intensity cues to guide feature transfer at multiple levels of the architecture, resulting in richer, more stable fused representations.
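The intensity-guided feature transfer described above could be sketched, under assumptions, as a per-level feature-mimicking loss weighted by a normalized radar intensity map. This is a minimal illustrative sketch, not the paper's implementation: the function name, the L2 mimicking objective, and the exact normalization and weighting scheme are all hypothetical.

```python
import numpy as np

def intensity_weighted_distill_loss(student_feats, teacher_feats,
                                    intensity_maps, level_weights=None):
    """Hypothetical sketch of intensity-aware multi-level distillation.

    student_feats, teacher_feats: lists of (C, H, W) feature maps,
        one per architecture level.
    intensity_maps: list of (H, W) radar intensity maps matching each level.
    level_weights: optional per-level scalar weights.
    """
    if level_weights is None:
        level_weights = [1.0] * len(student_feats)
    total = 0.0
    for s, t, inten, w in zip(student_feats, teacher_feats,
                              intensity_maps, level_weights):
        # Normalize intensity to [0, 1] so high-return regions
        # contribute more to the transfer (assumed design choice).
        norm = (inten - inten.min()) / (inten.max() - inten.min() + 1e-6)
        # Per-location squared feature error, averaged over channels.
        err = ((s - t) ** 2).mean(axis=0)  # shape (H, W)
        total += w * (norm * err).mean()
    return total / len(student_feats)
```

A loss of this shape lets the student emphasize regions where the radar return is strong, rather than forcing uniform alignment with the teacher everywhere.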
Performance and Key Results
Evaluated on the nuScenes benchmark, the method achieves 67.0% NDS and 61.0% mAP, surpassing previous distillation-based fusion approaches without requiring LiDAR at inference time. This positions IMKD as a competitive, cost-efficient option for large-scale automated-driving perception pipelines.
Why It Matters
IMKD advances the state of the art in camera–radar fusion by demonstrating that high‑quality 3D perception can be achieved using lightweight, scalable sensing setups. The approach aligns with BERTHA’s broader goals by reinforcing the modelling foundations required for trustworthy, efficient and human‑centred automated‑driving systems.
Read the article here.
Acknowledgment: Research conducted under the BERTHA project (GA101076360), funded by the European Union. Views expressed are those of the authors and do not necessarily reflect those of the EU or CINEA.