Cloud-edge hybrid deep learning framework for scalable IoT resource optimization
Abstract In the dynamic environment of the Internet of Things (IoT), edge and cloud computing play critical roles in analysing and storing data from numerous connected devices to produce valuable insights. Efficient resource allocation and workload distribution are vital to ensuring continuous and reliable service in growing IoT ecosystems with increasing data volumes and changing application demands.
Saved in:
Main Authors: | Umesh Kumar Lilhore, Sarita Simaiya, Yogesh Kumar Sharma, Anjani Kumar Rai, S. M. Padmaja, Khan Vajid Nabilal, Vimal Kumar, Roobaea Alroobaea, Hamed Alsufyani |
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2025-02-01 |
Series: | Journal of Cloud Computing: Advances, Systems and Applications |
Online Access: | https://doi.org/10.1186/s13677-025-00729-w |
_version_ | 1823861660820963328 |
author | Umesh Kumar Lilhore, Sarita Simaiya, Yogesh Kumar Sharma, Anjani Kumar Rai, S. M. Padmaja, Khan Vajid Nabilal, Vimal Kumar, Roobaea Alroobaea, Hamed Alsufyani |
author_facet | Umesh Kumar Lilhore, Sarita Simaiya, Yogesh Kumar Sharma, Anjani Kumar Rai, S. M. Padmaja, Khan Vajid Nabilal, Vimal Kumar, Roobaea Alroobaea, Hamed Alsufyani |
author_sort | Umesh Kumar Lilhore |
collection | DOAJ |
description | Abstract In the dynamic environment of the Internet of Things (IoT), edge and cloud computing play critical roles in analysing and storing data from numerous connected devices to produce valuable insights. Efficient resource allocation and workload distribution are vital to ensuring continuous and reliable service in growing IoT ecosystems with increasing data volumes and changing application demands. This study proposes a novel optimisation approach utilising deep learning to tackle these challenges. The integration of Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO) offers a practical approach to addressing the dynamic characteristics of IoT applications. The hybrid algorithm's primary characteristic is its capacity to simultaneously fulfil multiple objectives, including reducing response times, enhancing resource efficiency, and decreasing operational costs. DQN facilitates the formulation of optimal resource allocation strategies in intricate and unpredictable environments. PPO enhances policies in continuous action spaces to guarantee reliable performance in real-time, dynamic IoT settings. This method achieves an optimal equilibrium between policy learning and optimisation, rendering it suitable for contemporary IoT systems. This method improves numerous IoT applications, including smart cities, industrial automation, and healthcare. The hybrid DQN-PPO-GNN-RL model addresses bottlenecks by dynamically managing computing and network resources, allowing for efficient operations in low-latency, high-demand environments such as autonomous systems, sensor networks, and real-time monitoring. The use of Graph Neural Networks (GNNs) improves the accuracy of resource representation, while reinforcement learning-based scheduling allows for seamless adaptation to changing workloads. 
Simulations using real-world IoT data on the iFogSim platform showed significant improvements: task scheduling time was reduced by 21%, operational costs by 17%, and energy consumption by 22%. The method reliably provided equitable resource distribution, with fairness values between 0.93 and 0.99, guaranteeing efficient allocation throughout the network. This hybrid methodology establishes a novel benchmark for scalable, real-time resource management in extensive, data-centric IoT ecosystems, consequently enhancing system performance and operational efficiency. |
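The "equitable resource distribution, with values between 0.93 and 0.99" reported above reads like a per-node fairness score on a 0-to-1 scale. The abstract does not name the metric, so the following is a minimal sketch under the assumption that a standard measure such as Jain's fairness index is meant:

```python
def jains_fairness(allocations):
    """Jain's fairness index over per-node resource shares.

    Returns 1.0 when every node receives an equal share and
    1/n when a single node receives everything.
    """
    n = len(allocations)
    total = sum(allocations)
    sum_of_squares = sum(x * x for x in allocations)
    return (total * total) / (n * sum_of_squares)

# Near-equal shares land in the high range reported above.
print(jains_fairness([10, 11, 9, 10]))  # ≈ 0.995
print(jains_fairness([10, 1, 1, 1]))    # ≈ 0.410, clearly skewed
```

On this scale, sustained values of 0.93-0.99 would mean no node is starved of resources even as the scheduler differentiates allocations by workload demand.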
format | Article |
id | doaj-art-93b4e6d782554741bbfc4ffa1f63b87d |
institution | Kabale University |
issn | 2192-113X |
language | English |
publishDate | 2025-02-01 |
publisher | SpringerOpen |
record_format | Article |
series | Journal of Cloud Computing: Advances, Systems and Applications |
spelling | doaj-art-93b4e6d782554741bbfc4ffa1f63b87d 2025-02-09T12:54:27Z eng SpringerOpen Journal of Cloud Computing: Advances, Systems and Applications 2192-113X 2025-02-01 141127 10.1186/s13677-025-00729-w Cloud-edge hybrid deep learning framework for scalable IoT resource optimization. Authors and affiliations: Umesh Kumar Lilhore (Department of Computer Science and Engineering, Galgotias University); Sarita Simaiya (Department of Computer Science and Engineering, Galgotias University); Yogesh Kumar Sharma (Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation); Anjani Kumar Rai (Department of CEA, GLA University); S. M. Padmaja (Department of Electrical and Electronics Engineering, Shri Vishnu Engineering College for Women); Khan Vajid Nabilal (Genba Sopanrao Moze College of Engineering); Vimal Kumar (Department of Computer Science, College of Computers and Information Technology, Taif University); Roobaea Alroobaea (Department of Computer Science, College of Computing and Informatics, Saudi Electronic University); Hamed Alsufyani (Department of Computer Science, College of Computing and Informatics, Saudi Electronic University). Keywords: Cloud load balancing; IoT edge networks; Deep Q-Networks; Proximal policy optimization; GNN; Reinforcement learning. https://doi.org/10.1186/s13677-025-00729-w |
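The abstract pairs DQN's discrete decision-making with PPO's bounded updates in continuous action spaces. The paper's exact formulation is not reproduced in this record, so the sketch below only illustrates the two standard building blocks it names, epsilon-greedy selection over Q-values (DQN) and PPO's clipped surrogate objective; the function names and the mapping of "node choice" to the discrete action and "CPU fraction" to the continuous action are illustrative assumptions:

```python
import random

def dqn_select_node(q_values, epsilon=0.1):
    """Epsilon-greedy choice over discrete placements
    (which edge/cloud node should run the task)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))  # explore a random node
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate for the continuous part of the action
    (e.g. the CPU fraction granted to the task).
    ratio = pi_new(a|s) / pi_old(a|s)."""
    clipped_ratio = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped_ratio * advantage)

# With a positive advantage, the clip stops rewarding policy moves
# once the probability ratio exceeds 1 + eps.
print(ppo_clipped_objective(1.5, 1.0))  # 1.2
print(dqn_select_node([0.1, 0.9, 0.3], epsilon=0.0))  # 1 (greedy pick)
```

The clip is what makes PPO updates "reliable" in the sense the abstract uses: no single gradient step can push the policy far from its predecessor, which matters when workloads shift between updates.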
spellingShingle | Umesh Kumar Lilhore; Sarita Simaiya; Yogesh Kumar Sharma; Anjani Kumar Rai; S. M. Padmaja; Khan Vajid Nabilal; Vimal Kumar; Roobaea Alroobaea; Hamed Alsufyani; Cloud-edge hybrid deep learning framework for scalable IoT resource optimization; Journal of Cloud Computing: Advances, Systems and Applications; Cloud load balancing; IoT edge networks; Deep Q-Networks; Proximal policy optimization; GNN; Reinforcement learning |
title | Cloud-edge hybrid deep learning framework for scalable IoT resource optimization |
title_full | Cloud-edge hybrid deep learning framework for scalable IoT resource optimization |
title_fullStr | Cloud-edge hybrid deep learning framework for scalable IoT resource optimization |
title_full_unstemmed | Cloud-edge hybrid deep learning framework for scalable IoT resource optimization |
title_short | Cloud-edge hybrid deep learning framework for scalable IoT resource optimization |
title_sort | cloud edge hybrid deep learning framework for scalable iot resource optimization |
topic | Cloud load balancing; IoT edge networks; Deep Q-Networks; Proximal policy optimization; GNN; Reinforcement learning |
url | https://doi.org/10.1186/s13677-025-00729-w |
work_keys_str_mv | AT umeshkumarlilhore cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT saritasimaiya cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT yogeshkumarsharma cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT anjanikumarrai cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT smpadmaja cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT khanvajidnabilal cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT vimalkumar cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT roobaeaalroobaea cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization AT hamedalsufyani cloudedgehybriddeeplearningframeworkforscalableiotresourceoptimization |