Cross-Modal Augmented Transformer for Automated Medical Report Generation
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Journal of Translational Engineering in Health and Medicine
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10857391/
Summary: In clinical practice, interpreting medical images and composing diagnostic reports typically involves a significant manual workload. An automated report generation framework that mimics a doctor’s diagnostic process is therefore better suited to medical scenarios. Prior investigations often overlook this requirement, relying primarily on traditional image captioning frameworks originally designed for general-domain images and sentences. Despite some advancements, these methodologies face two primary challenges. First, strong noise in blurred medical images often hinders the model from capturing the lesion region. Second, when writing reports, doctors typically rely on medical terminology for diagnosis, a crucial aspect neglected in prior frameworks. In this paper, we present a novel approach called Cross-modal Augmented Transformer (CAT) for medical report generation. Unlike previous methods that rely on coarse-grained features without human intervention, our method introduces a “locate then generate” pattern, thereby improving the interpretability of the generated reports. During the locate stage, CAT captures crucial representations by pre-aligning significant patches with their corresponding medical terminologies. This pre-alignment reduces visual noise by discarding low-ranking content, ensuring that only relevant information is considered during report generation. During the generation stage, CAT uses a multi-modality encoder to reinforce the correlation between generated keywords, retrieved terminologies, and image regions. Furthermore, CAT employs a dual-stream decoder that dynamically determines whether the predicted word should be influenced by the retrieved terminology or the preceding sentence. Experimental results demonstrate the effectiveness of the proposed method on two datasets.

Clinical Impact: This work aims to design an automated framework for explaining medical images to evaluate the health status of individuals, thereby facilitating broader application of such frameworks in clinical settings.

Clinical and Translational Impact Statement: In our preclinical research, we develop an automated system for generating diagnostic reports. This system mimics manual diagnostic methods by combining fine-grained semantic alignment with a dual-stream decoder.
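The abstract describes a dual-stream decoder that, at each step, decides whether the next word should be driven by retrieved medical terminology or by the preceding sentence. The sketch below is not the authors’ code; it is a minimal PyTorch illustration of one plausible gating formulation, with all module names, dimensions, and the sigmoid-gate design being assumptions introduced here for clarity.

```python
# Minimal sketch (not the paper's implementation) of a dual-stream gating idea:
# a learned gate blends a terminology-conditioned context with a
# sentence-history context before predicting the next word.
import torch
import torch.nn as nn


class DualStreamGate(nn.Module):
    """Blends terminology-attended and history-attended context vectors."""

    def __init__(self, d_model: int = 512, num_heads: int = 8):
        super().__init__()
        # Cross-attention over retrieved terminology embeddings.
        self.term_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Cross-attention over tokens of the preceding sentence.
        self.hist_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        # Gate computed from both context vectors, values in [0, 1].
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, query, term_feats, hist_feats):
        # query:      (B, T, d) decoder hidden states
        # term_feats: (B, K, d) embeddings of retrieved terminologies
        # hist_feats: (B, S, d) embeddings of the preceding sentence
        term_ctx, _ = self.term_attn(query, term_feats, term_feats)
        hist_ctx, _ = self.hist_attn(query, hist_feats, hist_feats)
        g = self.gate(torch.cat([term_ctx, hist_ctx], dim=-1))
        # Convex combination: g favors terminology, (1 - g) favors history.
        return g * term_ctx + (1.0 - g) * hist_ctx


if __name__ == "__main__":
    B, T, K, S, d = 2, 1, 10, 20, 512
    gate = DualStreamGate(d)
    fused = gate(torch.randn(B, T, d), torch.randn(B, K, d), torch.randn(B, S, d))
    print(fused.shape)  # torch.Size([2, 1, 512])
```

In this reading, the gate plays the role the abstract attributes to the dual-stream decoder: when the gate saturates toward 1 the prediction leans on retrieved terminology, and toward 0 it leans on the preceding sentence.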
ISSN: 2168-2372