MDCKE: Multimodal deep-context knowledge extractor that integrates contextual information
Extraction of comprehensive information from diverse data sources remains a significant challenge in contemporary research. Although multimodal Named Entity Recognition (NER) and Relation Extraction (RE) tasks have garnered significant attention, existing methods often focus on surface-level informa...
| Main Authors: | Hyojin Ko, Joon Yoo, Ok-Ran Jeong |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-04-01 |
| Series: | Alexandria Engineering Journal |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1110016825001474 |
Similar Items
- TMFN: a text-based multimodal fusion network with multi-scale feature extraction and unsupervised contrastive learning for multimodal sentiment analysis
  by: Junsong Fu, et al.
  Published: (2025-01-01)
- Precise Recognition and Feature Depth Analysis of Tennis Training Actions Based on Multimodal Data Integration and Key Action Classification
  by: Weichao Yang
  Published: (2025-01-01)
- Instruction and demonstration-based secure service attribute generation mechanism for textual data
  by: LI Chenhao, et al.
  Published: (2024-12-01)
- Instance-level semantic segmentation of nuclei based on multimodal structure encoding
  by: Bo Guan, et al.
  Published: (2025-02-01)
- AN ENHANCED MULTIMODAL BIOMETRIC SYSTEM BASED ON CONVOLUTIONAL NEURAL NETWORK
  by: LAWRENCE OMOTOSHO, et al.
  Published: (2021-10-01)