Robot Dynamic Path Planning Based on Prioritized Experience Replay and LSTM Network

To address the slow convergence, poor dynamic adaptability, and path redundancy of the Double Deep Q-Network (DDQN) in complex obstacle environments, this paper proposes an enhanced algorithm within the deep reinforcement learning framework. The algorithm, termed LPDDQN, improves on DDQN by integrating Prioritized Experience Replay (PER) and a Long Short-Term Memory (LSTM) network. First, PER ranks experience data by priority and optimizes storage and sampling through a SumTree structure instead of the conventional experience queue. Second, an LSTM network is introduced to strengthen the dynamic adaptability of DDQN; because the LSTM consumes sequences, the experience samples must be sliced and padded. The proposed LPDDQN is compared with five other path planning algorithms in both static and dynamic environments. Simulation analysis shows that in a static environment, LPDDQN improves on traditional DDQN in convergence, number of moving steps, success rate, and number of turns by 24.07%, 17.49%, 37.73%, and 61.54%, respectively. In dynamic, complex environments, the success rates of all algorithms except TLD3 and LPDDQN decrease significantly, and LPDDQN outperforms TLD3 by 18.87%, 2.41%, and 39.02% in moving steps, success rate, and number of turns, respectively.
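The record contains no code, but the SumTree mentioned in the abstract is a concrete data structure: a binary tree whose leaves hold transition priorities and whose internal nodes hold the sums of their children, so that sampling a transition with probability proportional to its priority, and updating a priority after training, both cost O(log n) rather than the O(n) scan a flat experience queue would need. The minimal Python sketch below illustrates that idea under standard proportional-PER assumptions; the class and method names are illustrative choices, not identifiers from the paper.

```python
import random

class SumTree:
    """Binary tree whose leaves store transition priorities; each internal
    node stores the sum of its children, so proportional sampling and
    priority updates both take O(log n)."""

    def __init__(self, capacity):
        self.capacity = capacity                 # max number of stored transitions
        self.tree = [0.0] * (2 * capacity - 1)   # internal nodes followed by leaves
        self.data = [None] * capacity            # transitions, parallel to the leaves
        self.write = 0                           # next leaf to overwrite (ring buffer)
        self.size = 0

    def add(self, priority, transition):
        leaf = self.write + self.capacity - 1    # leaf index in the tree array
        self.data[self.write] = transition
        self.update(leaf, priority)
        self.write = (self.write + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def update(self, leaf, priority):
        change = priority - self.tree[leaf]
        self.tree[leaf] = priority
        while leaf != 0:                         # propagate the change up to the root
            leaf = (leaf - 1) // 2
            self.tree[leaf] += change

    def sample(self):
        """Draw one transition with probability proportional to its priority."""
        s = random.uniform(0, self.tree[0])      # tree[0] holds the total priority
        idx = 0
        while idx < self.capacity - 1:           # descend until a leaf is reached
            left = 2 * idx + 1
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]
```

In standard PER (Schaul et al.), the stored priority is typically the absolute TD error raised to a power, plus a small constant, with importance-sampling weights correcting the induced bias; the paper's exact prioritization scheme may differ from this sketch.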

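The abstract also notes that introducing the LSTM forces the experience samples to be sliced and padded: an LSTM consumes fixed-length sequences, so each stored trajectory must be cut into windows and any short remainder filled out to the window length. The sketch below is one plausible reading of that step; the window length, padding value, and function name are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def slice_and_pad(episode, seq_len=8, pad_value=0.0):
    """Cut one episode (a list of state vectors) into fixed-length windows
    for LSTM input, zero-padding the final window if it is too short.
    Returns an array of shape (num_windows, seq_len, state_dim)."""
    states = np.asarray(episode, dtype=np.float32)
    state_dim = states.shape[1]
    windows = []
    for start in range(0, len(states), seq_len):
        chunk = states[start:start + seq_len]
        if len(chunk) < seq_len:                 # final partial window: pad it
            pad = np.full((seq_len - len(chunk), state_dim),
                          pad_value, dtype=np.float32)
            chunk = np.vstack([chunk, pad])
        windows.append(chunk)
    return np.stack(windows)

# Example: a 19-step episode with 4-dimensional states becomes three
# windows of shape (8, 4); the last window is zero-padded.
episode = [np.random.rand(4) for _ in range(19)]
batch = slice_and_pad(episode)
print(batch.shape)  # (3, 8, 4)
```

Zero-padding the tail window keeps tensor shapes uniform for batched LSTM input; in practice a mask or sequence-length argument would tell the network to ignore the padded steps.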

Bibliographic Details
Main Authors: Hongqi Li, Peisi Zhong, Li Liu, Xiao Wang, Mei Liu, Jie Yuan
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: DDQN; LSTM network; mobile robot; path planning; prioritized experience replay
Online Access: https://ieeexplore.ieee.org/document/10848077/
Collection: DOAJ
Record ID: doaj-art-fad13380bd144696b1b53a7da01b2d0f
Institution: Kabale University
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2025.3532449
Volume: 13, pp. 22283-22299 (2025)
Author details:
Hongqi Li (ORCID: https://orcid.org/0009-0005-6776-2039), College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, China
Peisi Zhong, College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, China
Li Liu, College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, China
Xiao Wang (ORCID: https://orcid.org/0000-0002-3201-236X), College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, China
Mei Liu, College of Energy Storage Technology, Shandong University of Science and Technology, Qingdao, China
Jie Yuan, College of Mechanical and Electronic Engineering, Shandong University of Science and Technology, Qingdao, China