Moral Responsibility for AI Systems
| Authors | |
|---|---|
| Publication date | 2023 |
| Host editors | |
| Book title | 37th Conference on Neural Information Processing Systems (NeurIPS 2023) |
| Book subtitle | 10-16 December 2023, New Orleans, Louisiana, USA |
| ISBN (electronic) | |
| Series | Advances in Neural Information Processing Systems |
| Event | 37th Conference on Neural Information Processing Systems (NeurIPS 2023) |
| Number of pages | 14 |
| Publisher | Neural Information Processing Systems Foundation |
| Organisations | |
| Abstract | As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of moral responsibility that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition: the action should cause the outcome, and the agent should have been aware -- in some form or other -- of the possible moral consequences of their action. This paper presents a formal definition of both conditions within the framework of causal models. I compare my approach to the existing approaches of Braham and van Hees (BvH) and of Halpern and Kleiman-Weiner (HK). I then generalize my definition into a degree of responsibility. |
| Document type | Conference contribution |
| Language | English |
| Published at | https://doi.org/10.48550/arXiv.2310.18040 |
| Published at | https://papers.nips.cc/paper_files/paper/2023/hash/0d5b7fd8c669fac58d6702188ed63afa-Abstract-Conference.html |
| Other links | https://doi.org/10.52202/075280 |
| Downloads | NeurIPS-2023-moral-responsibility-for-ai-systems-Paper-Conference (Accepted author manuscript) |
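The abstract's causal condition can be illustrated with a minimal sketch. This is not the paper's actual formal definition, only a hypothetical example of the simplest causal test inside a structural causal model: a "but-for" check, asking whether the outcome would still have occurred had the agent not acted. All variable names and the scenario are illustrative assumptions.

```python
# Illustrative sketch only: a tiny structural causal model with two
# binary variables, used to check a simple but-for causal condition.
# The paper's definition is more general; this merely shows the shape
# of a causal condition expressed over structural equations.

def outcome(action: bool, background: bool) -> bool:
    """Structural equation: the outcome occurs iff the agent acts
    and the background condition holds (hypothetical scenario)."""
    return action and background

def but_for_cause(action: bool, background: bool) -> bool:
    """The action is a but-for cause of the outcome if the outcome
    holds in the actual setting but fails under the intervention
    that flips the action while holding the background fixed."""
    actual = outcome(action, background)
    counterfactual = outcome(not action, background)
    return actual and not counterfactual

print(but_for_cause(True, True))   # True: flipping the action removes the outcome
print(but_for_cause(True, False))  # False: no outcome occurs in the first place
```

A degree of responsibility, as mentioned at the end of the abstract, would then grade this binary verdict rather than replace it, but that generalization is beyond this sketch.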
