Leveraging Advanced Prompting Strategies in LLaMA3-8B for Enhanced Hyperpartisan News Detection

Michele Maggini, Pablo Gamallo Otero


Abstract
This paper explores advanced prompting strategies for hyperpartisan news detection using the LLaMA3-8B-Instruct model, an open-source LLM developed by Meta AI. We evaluate zero-shot, few-shot, and Chain-of-Thought (CoT) techniques on two datasets: SemEval-2019 Task 4 and a headline-specific corpus. Collaborating with a political science expert, we incorporate domain-specific knowledge and structured reasoning steps into our prompts, particularly for the CoT approach. Our findings reveal that zero-shot prompting, especially with general prompts, consistently outperforms the other techniques across both datasets. This unexpected result challenges assumptions about the superiority of few-shot and CoT methods in specialized tasks. We discuss the implications of these findings for in-context learning (ICL) in political text analysis and suggest directions for future research in leveraging large language models for nuanced content classification tasks.
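For illustration only, the sketch below shows what the three prompting setups (zero-shot, few-shot, CoT) could look like with the Hugging Face transformers pipeline and a Meta-Llama-3-8B-Instruct checkpoint. The model ID, prompt wording, example texts, and decoding settings are assumptions for this sketch, not the authors' actual prompts or code.

# Minimal sketch (assumptions noted above), not the authors' implementation.
from transformers import pipeline

# Assumed checkpoint name; requires access to the gated Meta Llama 3 weights.
generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")

ARTICLE = "Example headline or article text goes here."

# Zero-shot: a single instruction, no examples.
ZERO_SHOT = (
    "Classify the following news text as 'hyperpartisan' or 'not hyperpartisan'.\n"
    f"Text: {ARTICLE}\nAnswer:"
)

# Few-shot: two illustrative labeled examples before the target text.
FEW_SHOT = (
    "Text: 'The radical elites are destroying our country!'\nLabel: hyperpartisan\n"
    "Text: 'The city council approved the new budget on Tuesday.'\nLabel: not hyperpartisan\n"
    f"Text: {ARTICLE}\nLabel:"
)

# CoT: ask for explicit reasoning steps before the final label.
COT = (
    "Decide whether the text is hyperpartisan. Reason step by step: "
    "(1) identify emotionally charged or one-sided language, "
    "(2) check whether opposing views are acknowledged, "
    "(3) give a final label 'hyperpartisan' or 'not hyperpartisan'.\n"
    f"Text: {ARTICLE}"
)

for name, prompt in [("zero-shot", ZERO_SHOT), ("few-shot", FEW_SHOT), ("CoT", COT)]:
    out = generator(prompt, max_new_tokens=128, do_sample=False)
    # The pipeline returns the prompt plus the model's continuation.
    print(name, "->", out[0]["generated_text"][len(prompt):].strip())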
Anthology ID:
2024.clicit-1.63
Volume:
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
Month:
December
Year:
2024
Address:
Pisa, Italy
Editors:
Felice Dell'Orletta, Alessandro Lenci, Simonetta Montemagni, Rachele Sprugnoli
Venue:
CLiC-it
Publisher:
CEUR Workshop Proceedings
Pages:
531–539
URL:
https://aclanthology.org/2024.clicit-1.63/
Cite (ACL):
Michele Maggini and Pablo Gamallo Otero. 2024. Leveraging Advanced Prompting Strategies in LLaMA3-8B for Enhanced Hyperpartisan News Detection. In Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024), pages 531–539, Pisa, Italy. CEUR Workshop Proceedings.
Cite (Informal):
Leveraging Advanced Prompting Strategies in LLaMA3-8B for Enhanced Hyperpartisan News Detection (Maggini & Gamallo Otero, CLiC-it 2024)
PDF:
https://aclanthology.org/2024.clicit-1.63.pdf