View adaptive multi-object tracking method based on depth relationship cues

Abstract Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data-distribution characteristics, but current MOT methods do not consider these differences and adopt a single, unified association strategy for all occlusion situations. This paper proposes a view adaptive multi-object tracking method based on depth relationship cues (ViewTrack) that enables MOT to adapt to dynamic scene changes. First, by exploiting the depth relationships between objects from the position information of their bounding boxes, a view-type recognition method based on depth relationship cues (VTRM) is proposed to perceive changes of depth and view within the dynamic scene. Second, by adjusting the interval partitioning strategy to match the changing view characteristics, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed to achieve sparse decomposition in occluded scenes. Then, combining pedestrian displacement with Intersection over Union (IoU), a displacement modulated Intersection over Union method (DMIoU) is proposed to improve the association accuracy between detection and tracklet boxes. Finally, comparisons with 12 representative methods demonstrate that ViewTrack outperforms them on multiple metrics across the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.
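The abstract describes the VTRM component only at a high level (inferring depth relationships and the view type from bounding-box positions). As a hedged illustration of that idea, the minimal Python sketch below uses the bottom edge of each box as a depth proxy and the relative spread of box heights as a front-view versus top-view signal; the heuristics, the 0.15 threshold, and the function names are illustrative assumptions, not the paper's actual VTRM formulation.

```python
import numpy as np

def depth_order(boxes):
    """Order detections from near to far using a front-view heuristic.

    boxes: (N, 4) array-like of [x1, y1, x2, y2] in image coordinates.
    Assumption (not from the paper): in a front view, a larger bottom-edge
    y2 usually means the pedestrian is closer to the camera.
    """
    boxes = np.asarray(boxes, dtype=float)
    return np.argsort(-boxes[:, 3])  # largest bottom edge (nearest) first

def guess_view_type(boxes, rel_spread_thresh=0.15):
    """Crude front-view / top-view guess from bounding-box statistics.

    Idea (hypothetical stand-in for VTRM): in a front view, box heights vary
    strongly with distance to the camera; in a top view they stay much more
    uniform because every pedestrian is seen from roughly the same height.
    """
    boxes = np.asarray(boxes, dtype=float)
    heights = boxes[:, 3] - boxes[:, 1]
    rel_spread = heights.std() / (heights.mean() + 1e-6)
    return "front-view" if rel_spread > rel_spread_thresh else "top-view"

# Example: three detections at visibly different apparent depths.
dets = [[100, 50, 140, 150], [300, 80, 360, 260], [500, 60, 530, 120]]
print(guess_view_type(dets), depth_order(dets))
```

The published VTRM presumably uses richer depth-relationship statistics than this single spread measure; the sketch only shows how bounding-box geometry can stand in for explicit depth estimation.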

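VAPM is likewise described only as adjusting an interval partitioning strategy to obtain a sparse decomposition in occluded scenes. The sketch below shows one plausible reading under stated assumptions: boxes are binned into depth intervals (bottom-edge y in a front view, a horizontal binning in a top view, using a view label such as the one guessed above), so that association can run per interval on small cost matrices. The binning rule, interval count, and function name are hypothetical, not the paper's partitioning strategy.

```python
import numpy as np
from collections import defaultdict

def partition_by_depth_interval(boxes, view_type, n_intervals=3):
    """Split boxes into depth intervals so association can run per subset.

    Hypothetical stand-in for VAPM: in a front view the bottom-edge y2 is
    binned as a depth proxy; in a top view, where depth cues are weak, the
    image is simply binned along x instead.
    """
    boxes = np.asarray(boxes, dtype=float)
    keys = boxes[:, 3] if view_type == "front-view" else boxes[:, 0]
    edges = np.linspace(keys.min(), keys.max() + 1e-6, n_intervals + 1)
    groups = defaultdict(list)
    for idx, k in enumerate(keys):
        groups[int(np.searchsorted(edges, k, side="right") - 1)].append(idx)
    return dict(groups)

# Associating tracklets and detections interval-by-interval keeps each cost
# matrix small, i.e. a block-wise (sparse) decomposition of the full
# association problem.
tracklets = [[100, 50, 140, 150], [300, 80, 360, 260], [500, 60, 530, 120]]
print(partition_by_depth_interval(tracklets, "front-view"))
```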

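Finally, DMIoU is characterized only as combining pedestrian displacement with Intersection over Union. The sketch below shows one plausible form of such a cost, assuming the center displacement between a tracklet box and a detection, normalized by the tracklet's diagonal, down-weights the plain IoU; the exponential weighting and the lam parameter are assumptions for illustration, not the published definition.

```python
import numpy as np

def iou(box_a, box_b):
    """Standard Intersection over Union of two [x1, y1, x2, y2] boxes."""
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    return inter / union if union > 0 else 0.0

def displacement_modulated_iou(track_box, det_box, lam=1.0):
    """Hypothetical displacement-modulated IoU similarity.

    Down-weights IoU by the center displacement between the tracklet box and
    the detection, normalized by the tracklet's diagonal, so two detections
    with equal overlap but different motion consistency score differently.
    """
    tb, db = np.asarray(track_box, float), np.asarray(det_box, float)
    t_center = np.array([(tb[0] + tb[2]) / 2, (tb[1] + tb[3]) / 2])
    d_center = np.array([(db[0] + db[2]) / 2, (db[1] + db[3]) / 2])
    diag = np.hypot(tb[2] - tb[0], tb[3] - tb[1]) + 1e-6
    displacement = np.linalg.norm(t_center - d_center) / diag
    return iou(track_box, det_box) * np.exp(-lam * displacement)

# A detection that drifted far from the tracklet box scores lower than one
# with the same overlap but a smaller displacement.
print(displacement_modulated_iou([0, 0, 50, 100], [5, 5, 55, 105]))
```

The exponential down-weighting is one arbitrary modulation choice; the published DMIoU may combine displacement and IoU differently.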
Bibliographic Details
Main Authors: Haoran Sun, Yang Li, Guanci Yang, Zhidong Su, Kexin Luo
Format: Article
Language: English
Published: Springer 2025-01-01
Series: Complex & Intelligent Systems
Subjects: Multi-object tracking, Tracking-by-detection, View adaptive, Depth relationship, Data association
Online Access: https://doi.org/10.1007/s40747-024-01776-7
_version_ 1823861509904662528
author Haoran Sun
Yang Li
Guanci Yang
Zhidong Su
Kexin Luo
author_facet Haoran Sun
Yang Li
Guanci Yang
Zhidong Su
Kexin Luo
author_sort Haoran Sun
collection DOAJ
description Abstract Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data-distribution characteristics, but current MOT methods do not consider these differences and adopt a single, unified association strategy for all occlusion situations. This paper proposes a view adaptive multi-object tracking method based on depth relationship cues (ViewTrack) that enables MOT to adapt to dynamic scene changes. First, by exploiting the depth relationships between objects from the position information of their bounding boxes, a view-type recognition method based on depth relationship cues (VTRM) is proposed to perceive changes of depth and view within the dynamic scene. Second, by adjusting the interval partitioning strategy to match the changing view characteristics, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed to achieve sparse decomposition in occluded scenes. Then, combining pedestrian displacement with Intersection over Union (IoU), a displacement modulated Intersection over Union method (DMIoU) is proposed to improve the association accuracy between detection and tracklet boxes. Finally, comparisons with 12 representative methods demonstrate that ViewTrack outperforms them on multiple metrics across the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.
format Article
id doaj-art-dcc6ad246a354edab8edd79961d00860
institution Kabale University
issn 2199-4536
2198-6053
language English
publishDate 2025-01-01
publisher Springer
record_format Article
series Complex & Intelligent Systems
spelling doaj-art-dcc6ad246a354edab8edd79961d00860
2025-02-09T13:01:00Z
eng
Springer
Complex & Intelligent Systems
2199-4536
2198-6053
2025-01-01
11 2 1 21
10.1007/s40747-024-01776-7
View adaptive multi-object tracking method based on depth relationship cues
Haoran Sun 0: Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University
Yang Li 1: Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University
Guanci Yang 2: Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University
Zhidong Su 3: School of Engineering, Colorado State University Pueblo
Kexin Luo 4: Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University
Abstract Multi-object tracking (MOT) tasks face challenges from multiple perception views due to the diversity of application scenarios. Different views (front-view and top-view) have different imaging and data-distribution characteristics, but current MOT methods do not consider these differences and adopt a single, unified association strategy for all occlusion situations. This paper proposes a view adaptive multi-object tracking method based on depth relationship cues (ViewTrack) that enables MOT to adapt to dynamic scene changes. First, by exploiting the depth relationships between objects from the position information of their bounding boxes, a view-type recognition method based on depth relationship cues (VTRM) is proposed to perceive changes of depth and view within the dynamic scene. Second, by adjusting the interval partitioning strategy to match the changing view characteristics, a view adaptive partitioning method for tracklet sets and detection sets (VAPM) is proposed to achieve sparse decomposition in occluded scenes. Then, combining pedestrian displacement with Intersection over Union (IoU), a displacement modulated Intersection over Union method (DMIoU) is proposed to improve the association accuracy between detection and tracklet boxes. Finally, comparisons with 12 representative methods demonstrate that ViewTrack outperforms them on multiple metrics across the benchmark datasets. The code is available at https://github.com/Hamor404/ViewTrack.
https://doi.org/10.1007/s40747-024-01776-7
Multi-object tracking
Tracking-by-detection
View adaptive
Depth relationship
Data association
spellingShingle Haoran Sun
Yang Li
Guanci Yang
Zhidong Su
Kexin Luo
View adaptive multi-object tracking method based on depth relationship cues
Complex & Intelligent Systems
Multi-object tracking
Tracking-by-detection
View adaptive
Depth relationship
Data association
title View adaptive multi-object tracking method based on depth relationship cues
title_full View adaptive multi-object tracking method based on depth relationship cues
title_fullStr View adaptive multi-object tracking method based on depth relationship cues
title_full_unstemmed View adaptive multi-object tracking method based on depth relationship cues
title_short View adaptive multi-object tracking method based on depth relationship cues
title_sort view adaptive multi object tracking method based on depth relationship cues
topic Multi-object tracking
Tracking-by-detection
View adaptive
Depth relationship
Data association
url https://doi.org/10.1007/s40747-024-01776-7
work_keys_str_mv AT haoransun viewadaptivemultiobjecttrackingmethodbasedondepthrelationshipcues
AT yangli viewadaptivemultiobjecttrackingmethodbasedondepthrelationshipcues
AT guanciyang viewadaptivemultiobjecttrackingmethodbasedondepthrelationshipcues
AT zhidongsu viewadaptivemultiobjecttrackingmethodbasedondepthrelationshipcues
AT kexinluo viewadaptivemultiobjecttrackingmethodbasedondepthrelationshipcues