MUFFNet: lightweight dynamic underwater image enhancement network based on multi-scale frequency



Bibliographic Details
Main Authors: Dechuan Kong, Yandi Zhang, Xiaohu Zhao, Yanqiang Wang, Lei Cai
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-02-01
Series: Frontiers in Marine Science
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/fmars.2025.1541265/full
Description
Summary:
Introduction: The advancement of Underwater Human-Robot Interaction technology has significantly driven marine exploration, conservation, and resource utilization. However, challenges persist because underwater robots equipped with basic cameras struggle to handle complex underwater environments, producing blurry images that severely hinder automated systems.
Methods: We propose MUFFNet, an underwater image enhancement network that leverages multi-scale frequency analysis to address this challenge. The network introduces a frequency-domain convolutional attention mechanism to extract spatial information effectively. A Multi-Scale Enhancement Prior algorithm enhances high- and low-frequency features, while an Information Flow Interaction module mitigates information stratification and blockage. A Multi-Scale Joint Loss framework enables dynamic network optimization.
Results: Experimental results demonstrate that MUFFNet outperforms existing state-of-the-art models while consuming fewer computational resources.
Discussion: The enhanced images generated by MUFFNet align more closely with human visual perception, making MUFFNet a promising solution for improving underwater robotic vision systems.
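The abstract's core idea, separating an image into low- and high-frequency components and enhancing them independently, can be illustrated with a minimal NumPy sketch. This is not MUFFNet's learned architecture: the cutoff radius, the ideal low-pass mask, and the fixed gain factors are all assumptions chosen for illustration, standing in for the paper's learned multi-scale enhancement prior.

```python
import numpy as np

def frequency_split(img, cutoff=0.1):
    """Split a 2-D image into low- and high-frequency components
    using an ideal low-pass mask in the Fourier domain.
    (Illustrative only; MUFFNet's actual decomposition is learned.)"""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    radius = cutoff * min(h, w)
    # Keep only frequencies inside the circular low-pass mask.
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real
    high = img - low  # residual carries edges and fine texture
    return low, high

def enhance(img, low_gain=1.0, high_gain=1.5, cutoff=0.1):
    """Boost high-frequency detail while preserving the
    low-frequency base -- a crude, fixed-gain stand-in for a
    learned multi-scale enhancement prior."""
    low, high = frequency_split(img, cutoff)
    return low_gain * low + high_gain * high

# Usage: sharpen a synthetic blurry gradient image.
img = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
out = enhance(img)
```

In a trained network, the per-band gains would be replaced by learned attention weights computed from the frequency-domain features themselves.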
ISSN:2296-7745