Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps
LiDAR point clouds are a rich source of information for autonomous vehicles and ADAS systems. However, they can be challenging to segment for moving objects as - among other things - finding correspondences between sparse point clouds of consecutive frames is difficult. Traditional methods rely on a...
Main Authors: | Zoltan Rozsa, Akos Madaras, Tamas Sziranyi |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Open Journal of Signal Processing |
Subjects: | LiDAR; point clouds; moving object segmentation; knowledge transfer; autonomous driving |
Online Access: | https://ieeexplore.ieee.org/document/10848132/ |
_version_ | 1823859590807158784 |
---|---|
author | Zoltan Rozsa, Akos Madaras, Tamas Sziranyi |
author_facet | Zoltan Rozsa, Akos Madaras, Tamas Sziranyi |
author_sort | Zoltan Rozsa |
collection | DOAJ |
description | LiDAR point clouds are a rich source of information for autonomous vehicles and ADAS systems. However, they can be challenging to segment for moving objects as - among other things - finding correspondences between sparse point clouds of consecutive frames is difficult. Traditional methods rely on a (global or local) map of the environment, which can be demanding to acquire and maintain in real-world conditions and in the presence of the moving objects themselves. This paper proposes a novel approach using as few sweeps as possible to decrease the computational burden and achieve mapless moving object segmentation (MOS) in LiDAR point clouds. Our approach is based on a multimodal learning model with single-modal inference. The model is trained on a dataset of LiDAR point clouds and related camera images. The model learns to associate features from the two modalities, allowing it to predict dynamic objects even in the absence of a map and the camera modality. We propose using semantic information for multi-frame instance segmentation in order to enhance performance measures. We evaluate our approach on the SemanticKITTI and Apollo real-world autonomous driving datasets. Our results show that our approach can achieve state-of-the-art performance on moving object segmentation while utilizing only a few (even one) LiDAR frames. |
format | Article |
id | doaj-art-08fd418063324c4d987f30f017c1093a |
institution | Kabale University |
issn | 2644-1322 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of Signal Processing |
spelling | doaj-art-08fd418063324c4d987f30f017c1093a; 2025-02-11T00:01:49Z; eng; IEEE; IEEE Open Journal of Signal Processing; ISSN 2644-1322; 2025-01-01; vol. 6, pp. 118-128; DOI 10.1109/OJSP.2025.3532199; IEEE document 10848132; Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps; Zoltan Rozsa (https://orcid.org/0000-0002-3699-6669), Akos Madaras, Tamas Sziranyi; affiliation (all authors): Machine Perception Research Laboratory of HUN-REN Institute for Computer Science and Control (HUN-REN SZTAKI), Budapest, Hungary; abstract: see the description field above; https://ieeexplore.ieee.org/document/10848132/; LiDAR; point clouds; moving object segmentation; knowledge transfer; autonomous driving |
spellingShingle | Zoltan Rozsa; Akos Madaras; Tamas Sziranyi; Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps; IEEE Open Journal of Signal Processing; LiDAR; point clouds; moving object segmentation; knowledge transfer; autonomous driving |
title | Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps |
title_full | Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps |
title_fullStr | Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps |
title_full_unstemmed | Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps |
title_short | Efficient Moving Object Segmentation in LiDAR Point Clouds Using Minimal Number of Sweeps |
title_sort | efficient moving object segmentation in lidar point clouds using minimal number of sweeps |
topic | LiDAR; point clouds; moving object segmentation; knowledge transfer; autonomous driving |
url | https://ieeexplore.ieee.org/document/10848132/ |
work_keys_str_mv | AT zoltanrozsa efficientmovingobjectsegmentationinlidarpointcloudsusingminimalnumberofsweeps AT akosmadaras efficientmovingobjectsegmentationinlidarpointcloudsusingminimalnumberofsweeps AT tamassziranyi efficientmovingobjectsegmentationinlidarpointcloudsusingminimalnumberofsweeps |
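The central idea in the abstract (train with camera and LiDAR together, then infer from LiDAR alone) can be illustrated as a toy cross-modal distillation. This is a minimal sketch only: the array shapes, the linear "teacher" and "student" maps, and the noise level are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N samples, camera feature dim, LiDAR feature dim,
# and a shared embedding dimension (all made up for this sketch).
N, D_CAM, D_LIDAR, D_EMB = 200, 8, 4, 3

cam = rng.normal(size=(N, D_CAM))            # camera-derived features
W_teacher = rng.normal(size=(D_CAM, D_EMB))
W_teacher[D_LIDAR:] *= 0.1                   # teacher mostly uses structure LiDAR also sees
teacher_emb = cam @ W_teacher                # "teacher" embedding from the image branch

# LiDAR observes a noisy subset of the same scene structure.
lidar = cam[:, :D_LIDAR] + 0.05 * rng.normal(size=(N, D_LIDAR))

# Multimodal training: fit a linear "student" that maps LiDAR features
# onto the camera teacher's embedding (least-squares distillation).
W_student, *_ = np.linalg.lstsq(lidar, teacher_emb, rcond=None)

# Single-modal inference: only the LiDAR branch is needed at test time.
student_emb = lidar @ W_student
rel_err = np.mean((student_emb - teacher_emb) ** 2) / np.mean(teacher_emb ** 2)
print(f"relative alignment error: {rel_err:.3f}")
```

The point of the sketch is that the camera branch is used only to define the training target; at inference the student runs on LiDAR features alone, which is the "multimodal training, single-modal inference" pattern the abstract describes.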