SubmissionNumber#=%=#285 FinalPaperTitle#=%=#Edinburgh Clinical NLP at SemEval-2024 Task 2: Fine-tune your model unless you have access to GPT-4 ShortPaperTitle#=%=# NumberOfPages#=%=#11 CopyrightSigned#=%=#Aryo Pradipta Gema JobTitle#==# Organization#==#University of Edinburgh 10 Crichton St, Newington, Edinburgh EH8 9AB Abstract#==#The NLI4CT task assesses Natural Language Inference systems in predicting whether hypotheses entail or contradict evidence from Clinical Trial Reports. In this study, we evaluate various Large Language Models (LLMs) with multiple strategies, including Chain-of-Thought, In-Context Learning, and Parameter-Efficient Fine-Tuning (PEFT). We propose a PEFT method to improve the consistency of LLMs by merging adapters that were fine-tuned separately using triplet and language modelling objectives. We found that merging the two PEFT adapters improves the F1 score (+0.0346) and consistency (+0.152) of the LLMs. However, our novel methods did not produce more accurate results than GPT-4 in terms of faithfulness and consistency. Averaging the three metrics, GPT-4 ranks joint-first in the competition with 0.8328. Finally, our contamination analysis with GPT-4 indicates that there was no test data leakage. Our code is available at https://github.com/EdinburghClinicalNLP/semeval_nli4ct. 
Author{1}{Firstname}#=%=#Aryo Pradipta Author{1}{Lastname}#=%=#Gema Author{1}{Username}#=%=#aryo.gema Author{1}{Email}#=%=#aryo.gema@ed.ac.uk Author{1}{Affiliation}#=%=#University of Edinburgh Author{2}{Firstname}#=%=#Giwon Author{2}{Lastname}#=%=#Hong Author{2}{Username}#=%=#gch02518 Author{2}{Email}#=%=#gch02518@kaist.ac.kr Author{2}{Affiliation}#=%=#KAIST School of Computing Author{3}{Firstname}#=%=#Pasquale Author{3}{Lastname}#=%=#Minervini Author{3}{Username}#=%=#pminervini Author{3}{Email}#=%=#p.minervini@gmail.com Author{3}{Affiliation}#=%=#UCL Author{4}{Firstname}#=%=#Luke Author{4}{Lastname}#=%=#Daines Author{4}{Email}#=%=#Luke.Daines@ed.ac.uk Author{4}{Affiliation}#=%=#Usher Institute, University of Edinburgh Author{5}{Firstname}#=%=#Beatrice Author{5}{Lastname}#=%=#Alex Author{5}{Username}#=%=#balex Author{5}{Email}#=%=#balex@ed.ac.uk Author{5}{Affiliation}#=%=#University of Edinburgh, Edinburgh Futures Institute, School of Literatures, Languages and Cultures, School of Informatics ==========