Class-aware feature attention-based semantic segmentation on hyperspectral images.
This research explores an innovative approach to segmenting hyperspectral images. A class-aware feature attention approach is combined with an enhanced attention-based network, FAttNet, proposed to segment hyperspectral images semantically. It is introduced to address challenges associated w...
Saved in:
Main Authors: | Prabu Sevugan, Venkatesan Rudhrakoti, Tai-Hoon Kim, Megala Gunasekaran, Swarnalatha Purushotham, Ravikumar Chinthaginjala, Irfan Ahmad, Kumar A |
---|---|
Format: | Article |
Language: | English |
Published: | Public Library of Science (PLoS), 2025-01-01 |
Series: | PLoS ONE |
Online Access: | https://doi.org/10.1371/journal.pone.0309997 |
_version_ | 1823864065737359360 |
---|---|
author | Prabu Sevugan Venkatesan Rudhrakoti Tai-Hoon Kim Megala Gunasekaran Swarnalatha Purushotham Ravikumar Chinthaginjala Irfan Ahmad Kumar A |
author_facet | Prabu Sevugan Venkatesan Rudhrakoti Tai-Hoon Kim Megala Gunasekaran Swarnalatha Purushotham Ravikumar Chinthaginjala Irfan Ahmad Kumar A |
author_sort | Prabu Sevugan |
collection | DOAJ |
description | This research explores an innovative approach to segmenting hyperspectral images. A class-aware feature attention approach is combined with an enhanced attention-based network, FAttNet, which is proposed to segment hyperspectral images semantically. It is introduced to address challenges associated with inaccurate edge segmentation, diverse forms of target inconsistency, and suboptimal predictive efficacy encountered in traditional segmentation networks when applied to semantic segmentation tasks on hyperspectral images. First, the class-aware feature attention procedure is used to improve the extraction and processing of distinct types of semantic information. Subsequently, a spatial attention pyramid is employed in parallel to improve spatial correlation and extract contextual information from images at different scales. Finally, the segmentation results are refined using an encoder-decoder structure, which enhances precision in delineating distinct land-cover patterns. The experimental findings demonstrate that FAttNet outperforms commonly used semantic segmentation networks. Specifically, on the GaoFen image dataset, FAttNet achieves a higher mean intersection over union (MIoU) of 77.03% and a segmentation accuracy of 87.26%, surpassing the performance of the existing networks. |
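The abstract above describes a three-stage architecture: a class-aware feature attention step, a parallel spatial attention pyramid, and encoder-decoder refinement. The PyTorch snippet below is a minimal, hedged sketch of one possible reading of that description, not the authors' implementation: the module names (`ClassAwareFeatureAttention`, `SpatialAttentionPyramid`, `FAttNetSketch`), the per-class squeeze-and-excitation gates, the dilation rates, and the toy band/class counts (103 bands, 9 classes) are all illustrative assumptions.

```python
# Hedged sketch (NOT the paper's code): one plausible reading of the FAttNet
# description. Module structure, layer sizes, and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassAwareFeatureAttention(nn.Module):
    """Re-weights feature channels with one attention gate per class (assumption)."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One squeeze-and-excitation style gate per class; gates are averaged.
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Linear(channels, channels // 4),
                          nn.ReLU(inplace=True),
                          nn.Linear(channels // 4, channels),
                          nn.Sigmoid())
            for _ in range(num_classes)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)
        weights = torch.stack([g(squeezed) for g in self.gates], dim=0).mean(0)
        return x * weights.view(b, c, 1, 1)


class SpatialAttentionPyramid(nn.Module):
    """Parallel spatial attention maps at several dilation rates (assumed scales)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        maps = [torch.sigmoid(branch(x)) for branch in self.branches]
        return x * torch.stack(maps, dim=0).mean(0)


class FAttNetSketch(nn.Module):
    """Encoder -> class-aware attention -> spatial pyramid -> decoder (illustrative only)."""
    def __init__(self, in_bands=103, num_classes=9, width=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, width, 3, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
        )
        self.cafa = ClassAwareFeatureAttention(width, num_classes)
        self.sap = SpatialAttentionPyramid(width)
        self.decoder = nn.Conv2d(width, num_classes, kernel_size=1)

    def forward(self, x):
        h = self.encoder(x)
        h = self.sap(self.cafa(h))
        logits = self.decoder(h)
        # Upsample back to the input resolution for per-pixel class scores.
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    # Toy forward pass on a random hyperspectral patch (103 bands, 64x64 pixels).
    model = FAttNetSketch(in_bands=103, num_classes=9)
    out = model(torch.randn(2, 103, 64, 64))
    print(out.shape)  # torch.Size([2, 9, 64, 64])
```

The toy forward pass at the end only checks tensor shapes for this assumed layout; it does not reproduce, and says nothing about, the MIoU and accuracy figures reported in the abstract.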
format | Article |
id | doaj-art-359623316c8b418ea0dcc5e8f1648fcd |
institution | Kabale University |
issn | 1932-6203 |
language | English |
publishDate | 2025-01-01 |
publisher | Public Library of Science (PLoS) |
record_format | Article |
series | PLoS ONE |
spelling | doaj-art-359623316c8b418ea0dcc5e8f1648fcd2025-02-09T05:30:36ZengPublic Library of Science (PLoS)PLoS ONE1932-62032025-01-01202e030999710.1371/journal.pone.0309997Class-aware feature attention-based semantic segmentation on hyperspectral images.Prabu SevuganVenkatesan RudhrakotiTai-Hoon KimMegala GunasekaranSwarnalatha PurushothamRavikumar ChinthaginjalaIrfan AhmadKumar AThis research explores an innovative approach to segment hyperspectral images. Aclass-aware feature-based attention approach is combined with an enhanced attention-based network, FAttNet is proposed to segment the hyperspectral images semantically. It is introduced to address challenges associated with inaccurate edge segmentation, diverse forms of target inconsistency, and suboptimal predictive efficacy encountered in traditional segmentation networks when applied to semantic segmentation tasks in hyperspectral images. First, the class-aware feature attention procedure is used to improve the extraction and processing of distinct types of semantic information. Subsequently, the spatial attention pyramid is employed in a parallel fashion to improve the correlation between spaces and extract context information from images at different scales. Finally, the segmentation results are refined using the encoder-decoder structure. It enhances precision in delineating distinct land cover patterns. The findings from the experiments demonstrate that FAttNet exhibits superior performance compared to established semantic segmentation networks commonly used. Specifically, on the GaoFen image dataset, FAttNet achieves a higher mean intersection over union (MIoU) of 77.03% and a segmentation accuracy of 87.26% surpassing the performance of the existing network.https://doi.org/10.1371/journal.pone.0309997 |
spellingShingle | Prabu Sevugan Venkatesan Rudhrakoti Tai-Hoon Kim Megala Gunasekaran Swarnalatha Purushotham Ravikumar Chinthaginjala Irfan Ahmad Kumar A Class-aware feature attention-based semantic segmentation on hyperspectral images. PLoS ONE |
title | Class-aware feature attention-based semantic segmentation on hyperspectral images. |
title_full | Class-aware feature attention-based semantic segmentation on hyperspectral images. |
title_fullStr | Class-aware feature attention-based semantic segmentation on hyperspectral images. |
title_full_unstemmed | Class-aware feature attention-based semantic segmentation on hyperspectral images. |
title_short | Class-aware feature attention-based semantic segmentation on hyperspectral images. |
title_sort | class aware feature attention based semantic segmentation on hyperspectral images |
url | https://doi.org/10.1371/journal.pone.0309997 |
work_keys_str_mv | AT prabusevugan classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT venkatesanrudhrakoti classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT taihoonkim classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT megalagunasekaran classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT swarnalathapurushotham classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT ravikumarchinthaginjala classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT irfanahmad classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages AT kumara classawarefeatureattentionbasedsemanticsegmentationonhyperspectralimages |