Exploiting Positional Bias for Query-Agnostic Generative Content in Search

Andrew Parry, Sean MacAvaney, Debasis Ganguly


Abstract
In recent years, research has shown that neural ranking models (NRMs) substantially outperform their lexical counterparts in text retrieval. In traditional search pipelines, a combination of features leads to well-defined behaviour. However, as neural approaches become increasingly prevalent as the final scoring component of search engines or as standalone systems, their robustness to malicious text, and more generally to semantic perturbation, needs to be better understood. We posit that the transformer attention mechanism can induce exploitable defects in search models through sensitivity to token position within a sequence, leading to an attack that could generalise beyond a single query or topic. We demonstrate such defects by showing that non-relevant text, such as promotional content, can easily be injected into a document without adversely affecting its position in search results. Unlike previous gradient-based attacks, we demonstrate the existence of these biases in a query-agnostic fashion. In doing so, even without knowledge of topicality, the negative effects of non-relevant content injection can be reduced by controlling the injection position. Our experiments use simulated on-topic promotional text generated automatically by prompting LLMs with topical context from target documents. We find that contextualising the non-relevant text further reduces its negative effects, while likely circumventing existing content filtering mechanisms. In contrast, lexical models are found to be more resilient to such content injection attacks. We then investigate a simple yet effective compensation for these weaknesses of NRMs in search, validating our hypotheses regarding transformer bias.
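The probe described in the abstract, injecting the same non-relevant span at different positions in a document and observing how a neural ranker's relevance score changes, can be illustrated with a minimal sketch. The example below uses a public cross-encoder re-ranker from the sentence-transformers library; the model name, query, and texts are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch of a position-controlled injection probe.
# Assumes: pip install sentence-transformers
# The model, query, document, and promotional span are hypothetical
# stand-ins chosen for illustration.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "symptoms of vitamin D deficiency"
document = (
    "Vitamin D deficiency can cause fatigue, bone pain, and muscle "
    "weakness. It is typically diagnosed with a simple blood test."
)
promo = "Visit ExampleStore today for the best supplement deals!"

# Inject the identical non-relevant span at different positions and
# compare the scores the ranker assigns to each variant.
variants = {
    "clean": document,
    "injected_start": f"{promo} {document}",
    "injected_end": f"{document} {promo}",
}

for name, text in variants.items():
    score = model.predict([(query, text)])[0]
    print(f"{name:15s} score={score:.4f}")
```

If the paper's positional-bias hypothesis holds for the chosen ranker, the score penalty for the injected span should differ depending on where it is placed, even though the query is never consulted when constructing the injection.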
Anthology ID:
2024.findings-acl.656
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11030–11047
URL:
https://aclanthology.org/2024.findings-acl.656
DOI:
10.18653/v1/2024.findings-acl.656
Cite (ACL):
Andrew Parry, Sean MacAvaney, and Debasis Ganguly. 2024. Exploiting Positional Bias for Query-Agnostic Generative Content in Search. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11030–11047, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Exploiting Positional Bias for Query-Agnostic Generative Content in Search (Parry et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.656.pdf