LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models

We propose LLM-Eval, a unified multi-dimensional automatic evaluation method for open-domain conversations with large language models (LLMs). Existing evaluation methods often rely on human annotations, ground-truth responses, or multiple LLM prompts, which can be expensive and time-consuming. To address these issues, we design a single prompt-based evaluation method that leverages a unified evaluation schema to cover multiple dimensions of conversation quality in a single model call. We extensively evaluate the performance of LLM-Eval on various benchmark datasets, demonstrating its effectiveness, efficiency, and adaptability compared to state-of-the-art evaluation methods. Our analysis also highlights the importance of choosing suitable LLMs and decoding strategies for accurate evaluation results. LLM-Eval offers a versatile and robust solution for evaluating open-domain conversation systems, streamlining the evaluation process and providing consistent performance across diverse scenarios.


Introduction
Effective evaluation of open-domain conversation systems is a critical yet challenging problem in natural language processing research (Smith et al., 2022). Accurate and consistent evaluation methods are essential for understanding and improving the performance of dialogue systems. Traditional automatic evaluation metrics, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), are insufficient for capturing the nuances of natural language conversations (Liu et al., 2016; Deriu et al., 2021), leading to the development of various advanced metrics (Tao et al., 2018; Ghazarian et al., 2019; Sai et al., 2020; Huang et al., 2020; Mehri and Eskenazi, 2020b; Phy et al., 2020; Zhang et al., 2021a; Li et al., 2021; Fu et al., 2023; Liu et al., 2023). However, most existing methods require annotation data, human references, or multiple prompts, which can be expensive, time-consuming, or prone to errors.
In this paper, we address the problem of evaluating open-domain conversation systems with a focus on large language models (LLMs) (Figure 1). Our goal is to develop an efficient and accurate evaluation method that covers multiple dimensions of conversation quality, such as content, grammar, relevance, and appropriateness, without requiring human references or multiple prompts. We build upon recent advances in LLMs (Brown et al., 2020; Bai et al., 2022; OpenAI, 2023), and propose a unified multi-dimensional evaluation method called LLM-EVAL.
Existing evaluation methods have demonstrated promising results in various aspects of dialogue evaluation. However, they often rely on human annotations (Mehri and Eskenazi, 2020b; Phy et al., 2020), ground-truth responses (Ghazarian et al., 2020; Zhang et al., 2020a), or multiple LLM inferences (Fu et al., 2023; Liu et al., 2023), limiting their efficiency and adaptability in practical scenarios. We aim to bridge this gap by proposing LLM-EVAL, a single-prompt-based evaluation method that leverages a unified evaluation schema to cover multiple dimensions of conversation quality in a single model call.
In LLM-EVAL, we design a natural language instruction that defines the evaluation task and desired criteria, as well as a format instruction that specifies the structure and range of scores for each dimension. The single prompt is created by concatenating the dialogue context, reference (if available), and generated response, and then fed to a large language model, which outputs scores for each dimension based on the defined schema.
We extensively evaluate the performance of LLM-EVAL on a variety of benchmark datasets, covering diverse dialogue systems and evaluation dimensions. Our experiments demonstrate that LLM-EVAL consistently outperforms most baselines and state-of-the-art evaluation methods in terms of correlation with human judgments. The proposed method is also robust and versatile, adapting to different scoring ranges and evaluation scenarios.
In summary, our main contributions are as follows:
• We propose LLM-EVAL, a unified multi-dimensional automatic evaluation method for open-domain conversations with large language models, which streamlines the evaluation process by using a single prompt and a unified evaluation schema.
• We extensively evaluate the performance of LLM-EVAL on a variety of benchmark datasets, demonstrating its effectiveness and efficiency in comparison with state-of-the-art evaluation methods.
• We provide an in-depth analysis of the impact of different LLMs and decoding methods on the performance of LLM-EVAL, highlighting the importance of choosing suitable LLMs and decoding strategies for accurate evaluation results.

Related Work
Multi-Dimensional Metrics Multi-dimensional evaluation metrics have been proposed to assess various aspects of dialogue quality, such as content, grammar, relevance, and appropriateness. Examples include USR (Mehri and Eskenazi, 2020b), which trains multiple models to measure qualities like fluency, relevance, and knowledge conditioning, and GRADE (Huang et al., 2020), which models topic transition dynamics in dialogue history using a graph representation. FlowScore (Li et al., 2021) leverages dynamic information flow in dialog history to measure dialogue quality. Unlike these approaches, LLM-EVAL employs a single prompt-based evaluation method that leverages a unified evaluation schema, streamlining the evaluation process and providing a more efficient and adaptable solution.
Unsupervised Metrics Unsupervised evaluation metrics aim to assess the quality of dialogue responses without requiring human annotations. Notable unsupervised methods include DEB (Sai et al., 2020), which fine-tunes BERT with an NSP objective on a dataset with relevant and adversarial irrelevant responses, and FED (Mehri and Eskenazi, 2020a), an unsupervised method that measures dialogue quality using features derived from response embeddings and language model probabilities. In contrast, LLM-EVAL leverages the power of large language models to provide a unified multi-dimensional evaluation, achieving better performance and adaptability compared to existing unsupervised methods.
Large Language Models for Evaluation Recent works have explored using large language models for dialogue evaluation. GPTScore (Fu et al., 2023) employs models like GPT-3 to assign higher probabilities to quality content, using multiple prompts for a multi-dimensional assessment. Chen et al. (2023) explore using ChatGPT and InstructGPT to evaluate text quality without references, and compare different paradigms of using LLMs, including generating explicit scores, using model confidence to determine implicit scores, and directly comparing pairs of texts. G-EVAL (Liu et al., 2023) is a framework that leverages LLMs with chain-of-thought (CoT) (Wei et al., 2022) and a form-filling paradigm. G-EVAL with GPT-4 as the backbone model achieves a high correlation with human judgments on a summarization task. However, both GPTScore and G-EVAL require multiple prompts or complex scoring functions that use probabilities of output tokens and their weighted summation as the final score, which can be inefficient or time-consuming. LLM-EVAL addresses these issues by using a single prompt and a unified evaluation schema, offering a more efficient and adaptable evaluation method for open-domain conversations. Additionally, LLM-EVAL provides multi-dimensional evaluation scores in a single model call, further streamlining the evaluation process.

Methodology
LLM-EVAL is an efficient prompt-based evaluator tailored for open-domain conversations with large language models. It encompasses a single prompt that addresses the evaluation task, the desired evaluation criteria, and a unified multi-dimensional evaluation schema. This method eliminates the need for multiple LLM inferences or intricate scoring functions (Fu et al., 2023; Liu et al., 2023), while still delivering a comprehensive assessment of the generated text.

Unified Evaluation Schema
The evaluation schema is a natural language instruction that defines the task and the desired evaluation criteria.
It is designed to cover multiple dimensions of the evaluation, such as content, grammar, relevance, and appropriateness. The schema is provided as a format instruction, which specifies the structure and the range of the scores for each dimension. For example, the evaluation schema can be: Human: The output should be formatted as a JSON instance that conforms to the JSON schema below. ... Here is the output schema: {"properties": {"content": {"title": "Content", "description": "content score in the range of 0 to 100", "type": "integer"}, "grammar": ...}
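As a concrete illustration, the format instruction above can be generated programmatically from a dimension list. The sketch below is a hypothetical reconstruction (the function names and exact schema fields are our own, not the paper's released code), assuming one integer score per dimension:

```python
import json

# Hypothetical reconstruction of the unified evaluation schema: a JSON Schema
# object listing every dimension and its score range, rendered into the format
# instruction that accompanies the evaluation prompt.
DIMENSIONS = ["content", "grammar", "relevance", "appropriateness"]

def build_schema(low: int = 0, high: int = 100) -> dict:
    """Return a JSON Schema describing one integer score per dimension."""
    return {
        "properties": {
            dim: {
                "title": dim.capitalize(),
                "description": f"{dim} score in the range of {low} to {high}",
                "type": "integer",
            }
            for dim in DIMENSIONS
        },
        "required": DIMENSIONS,
    }

def format_instruction(schema: dict) -> str:
    """Render the format instruction that is shown to the model."""
    return (
        "The output should be formatted as a JSON instance that conforms "
        "to the JSON schema below.\nHere is the output schema:\n"
        + json.dumps(schema)
    )
```

Switching between the 0-5 and 0-100 configurations described later then only changes the `low`/`high` arguments, leaving the rest of the prompt untouched.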

Single Prompt for Evaluation
The single prompt is designed to include the necessary dialogue context and the target response that needs to be evaluated, along with the evaluation schema.
The prompt is concatenated with the dialogue context, the reference (if available), and the generated response, and then fed to the large language model to output a score for each evaluation dimension, based on the defined schema. For example, the prompt for evaluating a dialogue response with a human reference can be: Context: {context} Reference: {reference} Dialogue response: {response}

Efficient Evaluation By using a single prompt with a unified evaluation schema, LLM-EVAL can efficiently obtain multi-dimensional scores for the responses without the need for multiple prompts.
The large language model is called only once, and it directly provides the evaluation scores for each dimension based on the defined schema. For instance, given a dialogue context, reference, and generated response, the LLM-EVAL method would produce an example output that looks like this: Output: {"appropriateness": 3.0, "content": 2.5, "grammar": 4.0, "relevance": 2.0} This output showcases the multi-dimensional evaluation of the generated response, with each dimension receiving a score based on the predefined schema. The scores help in understanding the quality of the response in terms of appropriateness, content, grammar, and relevance, while still maintaining the efficiency of the evaluation process by requiring just a single call to the large language model. For a detailed description of the prompt templates used in our experiments with LLM-EVAL, please refer to Appendix A.
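The single-call pipeline above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: `llm` stands in for any text-completion callable (e.g. a wrapper around a hosted model API), and the helper names are our own:

```python
import json

def build_prompt(context, response, schema_instruction, reference=None):
    """Concatenate the format instruction, dialogue context, optional human
    reference, and generated response into the single evaluation prompt."""
    parts = [schema_instruction, f"Context: {context}"]
    if reference is not None:
        parts.append(f"Reference: {reference}")
    parts.append(f"Dialogue response: {response}")
    return "\n".join(parts)

def llm_eval(llm, context, response, schema_instruction, reference=None):
    """One model call; the reply is parsed as a JSON dict mapping each
    evaluation dimension to its score."""
    raw = llm(build_prompt(context, response, schema_instruction, reference))
    return json.loads(raw)
```

Because the schema instruction asks for a JSON instance, the multi-dimensional scores come back from a single `json.loads` on one model response, rather than from one prompt per dimension.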

Datasets and Benchmarks
Our proposed LLM-EVAL method is assessed on an array of datasets spanning diverse dialogue systems and evaluation dimensions. We provide a concise overview of the datasets and their features in this section. The datasets include human annotations, where each entry comprises a dialogue context, a generated response, and associated scores. A ground-truth human reference may also be present. For data lacking a human reference, we only evaluate reference-free metrics.

Overall Scores with Human Reference
TopicalChat-USR evaluates response quality in knowledge-grounded dialogues, emphasizing topical understanding. PersonaChat-USR measures response quality in personalized conversations, highlighting the incorporation of speaker personas (Mehri and Eskenazi, 2020b). ConvAI2-GRADE examines the quality of chit-chat dialogue systems, focusing on engaging and contextually relevant responses. DailyDialog-GRADE investigates response quality in everyday conversational contexts. EmpatheticDialogue-GRADE assesses the quality of empathetic responses in dialogue systems (Huang et al., 2020). DSTC6 evaluates end-to-end conversation modeling with human-generated responses (Hori and Hori, 2017).

Overall Scores without Human Reference
DailyDialog-PredictiveEngagement evaluates engagement in dialogue systems without relying on human references (Ghazarian et al., 2020). FED is an unsupervised method that measures the quality of dialogue responses without using human references (Mehri and Eskenazi, 2020a). DSTC9 focuses on the end-to-end evaluation of context-aware dialogue systems without human references (Mehri et al., 2022).
We compare the performance of LLM-EVAL with existing evaluation methods on these datasets to demonstrate its effectiveness and efficiency in evaluating open-domain conversations. The evaluation results are presented in terms of correlation with human judgments, using Pearson's correlation coefficient (r) and Spearman's correlation coefficient (ρ).

LLM-EVAL Configurations
We evaluate LLM-EVAL under different settings to demonstrate its effectiveness and adaptability. The configurations are as follows:
LLM-EVAL 0-5 The evaluation scores for each dimension are in the range of 0 to 5 with one decimal place, which is closer to the common 1-5 Likert scale used in human evaluation.
LLM-EVAL 0-100 The evaluation scores for each dimension are in the range of 0 to 100 as integers, providing a finer-grained scale for evaluation.
The evaluation schema prompt for both configurations remains the same, with only the range of scores differing between them. We test the LLM-EVAL method with and without human references for each configuration, where applicable.
Unless specified otherwise, throughout our experiments and evaluations, we employ the Anthropic Claude API with the claude-v1.3 model and use greedy decoding, which selects the token with the highest probability at each time step during the generation process.

Baseline Evaluation Metrics
We compare LLM-EVAL with several state-of-the-art evaluation metrics, including both traditional and LLM-based approaches.
• Deep-AM-FM measures dialog quality with Adequacy Metric (AM) and Fluency Metric (FM), utilizing BERT embeddings and language model probabilities (Zhang et al., 2020a).
• DSTC10 Team 1 boosted DynaEval's (Zhang et al., 2021a) turn-level evaluation performance by integrating auxiliary objectives and combining USL-H (Phy et al., 2020), DEB (Sai et al., 2020), and an improved DynaEval, with weights based on input dialogue data characteristics (Zhang et al., 2021b).
• MME-CRS introduces the Multi-Metric Evaluation, consisting of 5 parallel sub-metrics to assess dialogue quality across fluency, relevance, engagement, specificity, and topic coherence. The approach utilizes Correlation Re-Scaling to model sub-metric relationships (Zhang et al., 2022).
• BERTScore computes the F1 score by matching token embeddings in human references and system responses (Zhang et al., 2020b).
• DEB constructs a dialog dataset with relevant and adversarial irrelevant responses, then fine-tunes BERT with an NSP objective (Sai et al., 2020).
• GRADE models topic transition dynamics in dialog using a graph representation of the dialog history (Huang et al., 2020).
• USR trains several models to measure different qualities of dialogs, including fluency, relevance, and knowledge conditioning (Mehri and Eskenazi, 2020b).
• USL-H combines three models trained with different objectives (VUP, NSP, MLM) to evaluate response validity, sensibleness, and likelihood (Phy et al., 2020).
• DynaEval leverages a graph structure to model dialog-level interactions between user and system (Zhang et al., 2021a).
• FlowScore models dynamic information flow in dialog history and measures dialog quality using DialoFlow representations (Li et al., 2021).
• GPTScore evaluates text using models like GPT-3, assigning higher probabilities to quality content through multiple prompts for a multi-dimensional assessment. However, it may not be as effective as LLM-EVAL, which only requires a single prompt (Fu et al., 2023).
• Traditional Metrics: We also include classic metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which have known limitations in dialogue evaluation.

Results of DSTC10 Hidden Set
The results of our proposed LLM-EVAL method on the DSTC10 hidden set are presented in Table 1. We compare the performance of LLM-EVAL with other participating teams and baselines in the DSTC10 challenge. The evaluation is performed in terms of Spearman correlation coefficients between human ratings and automatic metrics across multiple dimensions, including Appropriateness (APP), Content (CON), Grammar (GRA), and Relevance (REL).
The results show that LLM-EVAL consistently outperforms most of the baselines and even the best-performing team in DSTC10 across different dimensions and datasets. In particular, LLM-EVAL with a 0-5 score range achieves the highest average Spearman correlation coefficient of 0.378 among all the methods without human reference.
When comparing the two LLM-EVAL configurations, both the 0-5 and 0-100 settings demonstrate competitive performance, with the 0-5 configuration slightly outperforming the 0-100 configuration both with and without human reference. This indicates that the LLM-EVAL method is robust and versatile in evaluating open-domain conversations, as it can adapt to different scoring ranges and consistently outperform all baselines and the best-performing team in DSTC10 across various dimensions and datasets.

Overall Scores with Human Reference
The results of LLM-EVAL on datasets with overall scores and human references are presented in Table 2. We compare the performance of LLM-EVAL with other top-performing evaluation methods (Yeh et al., 2021), such as BLEU, ROUGE, BERTScore, DEB, GRADE, USR, and USL-H. The meta-evaluation is performed in terms of Pearson correlation coefficient (r) and Spearman correlation coefficient (ρ) between human ratings and automatic metrics.
For the DailyDialog-GRADE, ConvAI2-GRADE, and EmpatheticDialogue-GRADE datasets, we use the "Relevance" dimension for evaluation, while for the DSTC6 dataset, we use the "Overall" score. For TopicalChat-USR and PersonaChat-USR, we predict all of the "Engaging, Maintains Context, Natural, Overall, Understandable, Uses Knowledge" dimensions in the original annotations but only use the "Overall" score for meta-evaluation.
LLM-EVAL consistently outperforms most of the baselines across the datasets and correlation coefficients, with the LLM-EVAL 0-100 configuration achieving the highest average correlation coefficient across all datasets.
The consistent performance of both configurations across different datasets and dimensions indicates that LLM-EVAL is a reliable and effective evaluation tool for open-domain conversations with human references. Its ability to adapt to different scoring ranges while maintaining competitive performance against state-of-the-art evaluation methods showcases the versatility and robustness of the LLM-EVAL approach.

Overall Scores without Human Reference
Table 3 presents the performance of LLM-EVAL on datasets without human references, comparing it with other high-performing evaluation methods such as DynaEval, USL-H, and FlowScore.
For the evaluation of the DailyDialog-PredictiveEngagement and DSTC9 datasets, we utilize the "Overall" score. In the FED dataset, we predict the "Correctness, Engagement, Fluency, Interestingness, Overall, Relevance, Semantically Appropriateness, Specificity, and Understandability" dimensions for turn-based evaluation, and the "Coherence, Consistency, Topic Depth, Diversity, Error Recovery, Flexibility, Informativeness, Inquisitiveness, Likability, Overall, and Understandability" dimensions for dialogue-based evaluation. Nonetheless, only the "Overall" score is used for meta-evaluation in each scenario.
Both LLM-EVAL configurations, 0-5 and 0-100, consistently display strong performance across the datasets, highlighting their resilience and flexibility. The method's capacity to accommodate different scoring ranges while maintaining competitiveness against state-of-the-art evaluation techniques demonstrates LLM-EVAL's adaptability and robustness. This establishes its value as an efficient and versatile evaluation solution in reference-free settings.

Different LLMs
In this section, we analyze the performance of LLM-EVAL when using different large language models for evaluation. Table 4 presents the Spearman correlation coefficients between human ratings and LLM-EVAL with various model configurations and scoring ranges for the Topical-DSTC10 and Persona-DSTC10 datasets. We compare the performance of LLM-EVAL when using different LLMs, such as Anthropic Claude, OpenAI ChatGPT, Anthropic Claude-instant, and OpenAI GPT-3.5.
Among these models, Claude and ChatGPT are optimized for chat applications, while GPT-3.5 is not. We observe that both Claude and ChatGPT generally achieve better performance across all dimensions when compared to GPT-3.5. This suggests that using dialogue-optimized LLMs in the LLM-EVAL method leads to more accurate evaluation results in the context of open-domain conversations.
Moreover, when comparing the Claude and ChatGPT models, both models demonstrate competitive performance across different evaluation dimensions, with Claude slightly outperforming ChatGPT in certain configurations.
We also analyze the performance of Claude-instant, a smaller version of Claude. Although it is not as competitive as its larger counterpart, it still achieves reasonable performance in some cases. This implies that smaller models, while not optimal, can still be employed for LLM-EVAL to a certain extent, possibly providing a more resource-efficient option in specific scenarios.
In conclusion, our analysis demonstrates that dialogue-optimized LLMs, such as Claude and ChatGPT, yield better performance in the LLM-EVAL method for open-domain conversation evaluation. Although smaller models like Anthropic Claude-instant may not achieve the best performance, they can still be considered for resource-limited scenarios. Overall, the choice of LLMs in LLM-EVAL plays a crucial role in obtaining accurate evaluation results.

Decoding Methods
In our experiments, we employ greedy decoding for generating responses using the Anthropic API with the claude-v1.3 model. Greedy decoding selects the token with the highest probability at each time step during the generation process. However, other decoding methods, such as nucleus sampling, could be employed in the LLM-EVAL method to explore their impact on the evaluation results.
Nucleus sampling, also known as top-p sampling, restricts sampling at each time step to the smallest set of most probable tokens whose cumulative probability exceeds a predefined threshold p. This method introduces some randomness into the generation process and could lead to more diverse and creative responses.
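The mechanics of nucleus sampling can be made concrete with a short sketch. This is a generic illustration over an explicit token distribution, not the Anthropic API's internal implementation:

```python
import random

def nucleus_sample(probs, p=0.9, rng=random):
    """Top-p (nucleus) sampling: keep the smallest set of highest-probability
    tokens whose cumulative probability reaches p, renormalise, then sample.
    `probs` maps token -> probability. Greedy decoding is the limiting case
    where only the single most probable token survives.
    """
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, total = [], 0.0
    for tok, pr in items:
        nucleus.append((tok, pr))
        total += pr
        if total >= p:  # nucleus is now large enough
            break
    # Sample from the truncated, renormalised distribution.
    r = rng.random() * total
    for tok, pr in nucleus:
        r -= pr
        if r <= 0:
            return tok
    return nucleus[-1][0]
```

With a sharply peaked distribution a small p collapses the nucleus to one token and the sampler behaves exactly like greedy decoding; larger p admits lower-probability tokens and hence the output variability discussed below.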
Comparing the performance of Claude and Claude top_p = 0.9 in Table 4, we observe that greedy decoding generally achieves better performance across all evaluation dimensions. This finding suggests that using greedy decoding with the LLM-EVAL method provides more accurate and consistent evaluation results compared to nucleus sampling.
One possible reason for this difference in performance is that greedy decoding tends to generate more coherent and focused responses due to its deterministic nature. In contrast, nucleus sampling introduces randomness into the generation process, which may result in less focused or less relevant responses, affecting the evaluation scores. Consequently, greedy decoding appears to be a more suitable choice for the LLM-EVAL method.

Conclusion
In this paper, we introduced LLM-EVAL, a unified multi-dimensional automatic evaluation method for open-domain conversations with large language models. The proposed method employs a single prompt along with a unified evaluation schema that covers multiple dimensions of evaluation, such as content, grammar, relevance, and appropriateness. This approach streamlines the evaluation process and eliminates the need for multiple prompts. Experiments on various datasets demonstrated the effectiveness and efficiency of LLM-EVAL, consistently outperforming most baselines and state-of-the-art evaluation methods.
As future work, we plan to explore reinforcement learning from LLM feedback and investigate LLM-in-the-loop evaluation strategies as an alternative to human-in-the-loop methods. This will further enhance the applicability and performance of the LLM-EVAL method in various dialogue system evaluation scenarios.

Limitations
Although LLM-EVAL has shown promising results in assessing open-domain conversations, it is crucial to acknowledge its limitations.
Firstly, the performance of our method relies heavily on the large language models underlying it, which may exhibit biases or generate unexpected outputs. If the language model misinterprets the evaluation schema or prompt instructions, it could lead to inaccurate evaluation scores.
Secondly, the choice of LLM significantly influences the evaluation results, as demonstrated in our analysis. While dialogue-optimized LLMs produce better performance, this selection may limit LLM-EVAL's applicability for particular tasks or dialogue systems.
Thirdly, our approach employs single-number scoring for each evaluation dimension, which may fail to capture the subtleties of human judgments, particularly for subjective aspects like engagement, creativity, or humor.
Lastly, the effectiveness of LLM-EVAL hinges on the quality and clarity of the prompts and evaluation schemas. Creating such prompts and schemas may require domain expertise and knowledge of LLM behavior, posing challenges for non-experts.
To overcome these limitations, future research can focus on exploring alternative prompt designs, refining evaluation schemas, and expanding the method to cover a wider range of evaluation dimensions and dialogue system types.

Ethics Statement
We acknowledge that there are potential ethical concerns associated with the use of large language models in our evaluation method.
A primary concern is the biases present in large language models. These biases are introduced during the training process, as the models learn from textual data that may contain biased information, stereotypes, or misinformation. When using these biased models for evaluation, it is possible that the evaluation scores produced by LLM-EVAL may reflect and perpetuate these biases, potentially leading to biased evaluations of dialogue system outputs. This could, in turn, affect the development of future dialogue systems by encouraging biased behavior.
To mitigate this concern, researchers and developers should be cautious when interpreting the evaluation results obtained through LLM-EVAL and consider potential biases in the large language models used. Moreover, future work could explore techniques to debias language models or employ alternative evaluation schemas that actively account for biases in the evaluation process.

Figure 1 :
Figure 1: An illustration of our proposed LLM-EVAL framework, which leverages a unified multi-dimensional evaluation schema and a single prompt to efficiently evaluate open-domain conversations with large language models.

Table 1 :
Spearman correlation coefficients between human ratings and automatic metrics across multiple dimensions (APP for Appropriateness, CON for Content, GRA for Grammar, and REL for Relevance) for DSTC10 hidden test datasets with human reference. Each team is represented by its best submission on the 5 test datasets. The best score for each column is highlighted in bold. The second best is underlined. Note that the last column is averaged over 11 dimension-wise correlation scores of all five datasets.

Table 2 :
Correlation coefficients (Pearson r and Spearman ρ) between human ratings and automatic metrics in terms of overall scores for datasets with human reference. We use the following abbreviations: TopicalChat (TopicalChat-USR), PersonaChat (PersonaChat-USR), ConvAI2 (ConvAI2-GRADE), DD (DailyDialog-GRADE), ED (EmpatheticDialogue-GRADE). The best score for each column is highlighted in bold. The second best is underlined.

Table 3 :
Correlation coefficients (Pearson r and Spearman ρ) between human ratings and automatic metrics in terms of overall scores for datasets without human reference. The best score for each column is highlighted in bold. The second best is underlined.

Table 4 :
Spearman correlation coefficients between human ratings and LLM-EVAL with different configurations across multiple dimensions (APP for Appropriateness, CON for Content, GRA for Grammar, and REL for Relevance) for Topical-DSTC10 and Persona-DSTC10. The best score for each column is highlighted in bold. The second best is underlined.