A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms

Open Access
Authors
  • A. Lucic
  • M. Srikumar
  • U. Bhatt
  • A. Xiang
Publication date May 2021
Event HCXAI 2021: ACM CHI Workshop on Human-Centered Perspectives in Explainable AI
Number of pages 6
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Given the variety of stakeholders involved in, and affected by, decisions from machine learning (ML) models, it is important to consider that different stakeholders have different transparency needs. Previous work found that the majority of deployed transparency mechanisms primarily serve technical stakeholders. In our work, we investigate how well transparency mechanisms might work in practice for a more diverse set of stakeholders by conducting a large-scale, mixed-methods user study across a range of organizations within a particular industry, such as health care, criminal justice, or content moderation. In this paper, we outline the setup for our study.
Document type Paper
Language English
Published at
  • https://arxiv.org/abs/2103.14976
  • https://www.dropbox.com/s/xliq08b70swlzpm/HCXAI2021_paper_20.pdf?dl=0
Other links https://hcxai.jimdosite.com/hcxai-21-papers-and-videos/