Reinforcing Stereotypes in Health Care Through Artificial Intelligence–Generated Images: A Call for Regulation

Open Access
Publication date September 2024
Journal Mayo Clinic Proceedings: Digital Health
Volume 2 | Issue 3
Pages 335-341
Organisations
  • Faculty of Law (FdR)
Abstract
This paper presents a small-scale exploratory study of how GenAI systems can produce negative medical stereotypes in AI-generated images; our findings inspired this commentary. Explicating the medical stereotypes in AI-generated images serves 2 important objectives. First, as the dissemination of such images is growing, it is essential to raise awareness of the proneness of GenAI to embed harmful stereotypes in health-themed images. The impact of stereotypical health-themed images can be particularly detrimental, as they can influence (1) patients’ behavior toward health care professionals and their decisions concerning accessing health care and sharing information, and (2) health care professionals’ behavior toward certain patient groups and the health outcomes of these groups.10 Second, visualizing biases in images is an effective way to help people understand AI-produced biases in general, especially because most GenAI uses the same LLMs for text and images. Our aim was to stimulate broader discourse on how to mitigate the risks of biases and medical stereotypes produced by GenAI, specifically image-generative AI. We conclude that unless the harmful effects of GenAI on discrimination in health care are mitigated, the protection of fundamental rights and health is at risk.
Document type Article
Language English
Published at https://doi.org/10.1016/j.mcpdig.2024.05.004