Advancing Computational Humor: LLaMa-3 Based Generation with DistilBert Evaluation Framework
Humor generation presents significant challenges in the field of natural language processing, primarily due to its reliance on cultural backgrounds and subjective interpretations. These factors contribute to the variability of human-generated humor, necessitating computational models capable of mastering diverse comedic styles with minimal subjectivity and maximal generalizability. This study introduces a novel approach to humor generation by fine-tuning the LLaMA-3 language model with Low-Rank Adaptation (LoRA). The study developed a comprehensive dataset sourced from diverse online platforms, supplemented by non-humorous content from scientific literature and press conferences to enhance the model's discriminative capabilities. Utilizing DistilBERT for efficient evaluation, the fine-tuned LLaMA-3 achieved an accuracy of 95.6% and an F1-score of 97.75%, surpassing larger models such as GPT-4o and Gemini. These results demonstrate the model's capability in generating humor, offering a more efficient and scalable solution for applications such as conversational agents and entertainment platforms. This research advances the field by showcasing the benefits of comprehensive dataset preparation and targeted fine-tuning, providing a foundation for future developments in humor-related artificial intelligence applications.
Main Authors: He Jinliang, Mei Aohan
Format: Article
Language: English
Published: EDP Sciences, 2025-01-01
Series: ITM Web of Conferences
Online Access: https://www.itm-conferences.org/articles/itmconf/pdf/2025/01/itmconf_dai2024_03024.pdf
ISSN: 2271-2097
DOI: 10.1051/itmconf/20257003024
Author affiliations: He Jinliang, Department of Computer Science, The University of Hong Kong; Mei Aohan, School of Data Science, The Chinese University of Hong Kong