Barbara Lewandowska-Tomaszczyk


2024

Navigating Opinion Space: A Study of Explicit and Implicit Opinion Generation in Language Models
Chaya Liebeskind | Barbara Lewandowska-Tomaszczyk
Proceedings of the First LUHME Workshop

The paper focuses on testing the use of conversational Large Language Models (LLMs), in particular ChatGPT and Google models, instructed to assume the role of linguistics experts and produce opinionated texts, defined as subjective statements about animates, things, events, or properties, in contrast to knowledge- or evidence-based objective factual statements. The taxonomy differentiates between Explicit (Direct or Indirect) and Implicit opinionated texts, further distinguishing positive, negative, ambiguous, or balanced opinions. A series of prompts provided examples of opinionated texts and instances of explicit opinion-marking discourse markers (words and phrases) we identified, as well as instances of opinion-marking mental verbs, evaluative and emotion phraseology, and expressive lexis. The models demonstrated accurate identification of Direct and Indirect Explicit opinionated utterances, successfully classifying them according to language-specific properties, but performed less effectively on prompts requesting illustrations of Implicitly opinionated texts. To tackle this obstacle, the Chain-of-Thought methodology was used. When requested to convert erroneously recognized opinion instances into factual knowledge sentences, the LLMs effectively transformed texts containing explicit markers of opinion; however, they lacked the ability to transform Explicit Indirect and Implicit opinionated texts into factual statements. This finding is interesting because, while an LLM is expected to produce a linguistic statement conveying factual information, it may be unaware of implicit opinionated content. Our experiment with LLMs presents novel prospects for the field of linguistics.