Feeling iffy about generative AI When journalists disclose AI use, trust in news is lower

Open Access
Publication date 24-04-2025
Number of pages 26
Publisher OSF Preprints
Organisations
  • Faculty of Law (FdR) - Institute for Information Law (IViR)
  • Faculty of Law (FdR)
  • Faculty of Social and Behavioural Sciences (FMG) - Amsterdam School of Communication Research (ASCoR)
Abstract
News organisations are experimenting with how to best integrate generative AI into their journalistic workflows. This brings up important questions about how to disclose this, as well as what effects such AI disclosures have on readers. Prior research shows predominantly negative effects on perceived trustworthiness and credibility, but says little about how different use cases compare to each other. In this study, we report the results of a conjoint experiment (N = 683) on the effects of nuanced AI disclosures on the perceived trustworthiness of news. Our results confirm prior research in that we find negative effects for all kinds of AI disclosures. However, moderation and cluster analysis suggest that these effects are not universal, but depend on individual-level characteristics that co-determine AI disclosure effects. By (1) highlighting important individual-level moderators such as respondents’ political position as well as their attitudes towards and knowledge of AI, and (2) by describing five distinctive preference profiles and their predictors, our results inform future research and help practitioners cater AI disclosures to particular groups of readers.
Document type Preprint
Language English
Published at https://doi.org/10.31219/osf.io/tmzq4_v1
Other links https://osf.io/76n2d
Downloads
Feeling iffy about generative AI (Submitted manuscript)