Which Stereotypes Are Moderated and Under-Moderated in Search Engine Autocompletion?
| Authors | Alina Leidinger, Richard Rogers |
|---|---|
| Publication date | 06-2023 |
| Book title | FAccT '23 |
| Book subtitle | Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency |
| ISBN (electronic) | |
| Event | FAccT '23: the 2023 ACM Conference on Fairness, Accountability, and Transparency |
| Pages (from-to) | 1049–1061 |
| Publisher | New York: Association for Computing Machinery |
| Organisations | |
| Abstract | Language technologies that perpetuate stereotypes actively cement social hierarchies. This study enquires into the moderation of stereotypes in the autocompletion results of Google, DuckDuckGo and Yahoo! We investigate the moderation of derogatory stereotypes for social groups, examining the content and sentiment of the autocompletions. We thereby demonstrate which categories are highly moderated (i.e., sexual orientation, religious affiliation, political groups and communities or peoples) and which less so (age and gender), both overall and per engine. We find that under-moderated categories contain results with negative sentiment and derogatory stereotypes. We also identify distinctive moderation strategies per engine, with Google and DuckDuckGo moderating heavily and Yahoo! being more permissive. The research has implications both for the moderation of stereotypes in commercial autocompletion tools and for large language models in NLP, particularly concerning the question of which content deserves moderation. |
| Document type | Conference contribution |
| Note | With supplementary files |
| Language | English |
| Related dataset | Stereotype elicitation in Google, DuckDuckGo and Yahoo! autocompletion |
| Published at | https://doi.org/10.1145/3593013.3594062 |
| Downloads | 3593013.3594062 (Final published version) |
| Supplementary materials | |
