The transparency dilemma: An experiment on how AI disclosures affect credibility perceptions and engagement across topics

Open Access
Authors
Publication date 2025
Host editors
  • E. Burton
  • N. Mattei
  • A. Páez
Book title Proceedings of the Eighth AAAI/ACM Conference on AI, Ethics, and Society
Book subtitle IE University Tower, Madrid, Spain : October 20-22, 2025
ISBN (electronic)
  • 9781577359029
Event 8th AAAI/ACM Conference on AI, Ethics, and Society
Volume | Issue number 2
Pages (from-to) 1748-1757
Publisher Washington, DC: AAAI Press
Organisations
  • Faculty of Law (FdR) - Institute for Information Law (IViR)
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam School of Communication Research (ASCoR)
Abstract
The media sector's credibility has come under significant scrutiny due to a rise in misinformation and the advent of generative AI (Artificial Intelligence) technologies, which pose a further threat to that credibility. Amid these challenges, transparency about the use of AI has been championed as a key means of restoring and promoting credibility. Research reveals mixed findings regarding transparency about AI-generated content: while some studies indicate that AI-written news may be perceived as more credible than its human-produced counterparts, others find that labelling content as AI-generated negatively affects credibility. This study clarifies the impact of AI labels on individuals' credibility perceptions by examining the influence of different news topics. Do AI transparency labels raise more concern about news credibility in the context of political news than in non-political news? The effectiveness of transparency cues may differ between more serious topics, such as political news, and less consequential non-political topics, such as culture. We conducted a 2×2 survey experiment (N = 207) to investigate the impact of AI disclosures on individuals' perceptions of source credibility, perceived manipulation, and sharing intentions in political versus non-political news. Overall, AI as a news source is considered less credible regardless of topic, yet the AI label does not increase feelings of manipulation. Sharing intention, by contrast, is negatively affected by the AI label, but only for political news. These findings can help news organisations and policymakers develop meaningful transparency labels.
Document type Conference contribution
Language English
Published at https://doi.org/10.1609/aies.v8i2.36671