Finding regions of counterfactual explanations via robust optimization
| Authors | |
|---|---|
| Publication date | 2024 |
| Journal | INFORMS Journal on Computing |
| Volume | 36 |
| Issue number | 5 |
| Pages (from-to) | 1316–1334 |
| Organisations | |
| Abstract | Counterfactual explanations (CEs) play an important role in detecting bias and improving the explainability of data-driven classification models. A CE is a minimally perturbed data point for which the model's decision changes. Most existing methods provide only a single CE, which may not be achievable for the user. In this work, we derive an iterative method to calculate robust CEs, i.e., CEs that remain valid even after the features are slightly perturbed. To this end, our method provides a whole region of CEs, allowing the user to choose a suitable recourse to obtain a desired outcome. We use algorithmic ideas from robust optimization and prove convergence results for the most common machine learning methods, including decision trees, tree ensembles, and neural networks. Our experiments show that our method can efficiently generate globally optimal robust CEs for a variety of common data sets and classification models. |
| Document type | Article |
| Note | With supplementary file |
| Language | English |
| Published at | https://doi.org/10.1287/ijoc.2023.0153 |
| Downloads | Finding regions of counterfactual explanations via robust optimization (final published version) |
| Supplementary materials | |
| Permalink to this page | |
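
The robustness notion from the abstract can be illustrated with a minimal sketch for a linear classifier. This is an illustrative assumption, not the paper's method: the paper derives an iterative robust-optimization algorithm that also covers decision trees, tree ensembles, and neural networks, whereas for a linear model `sign(w·x + b)` a robust CE has a simple closed form, since the flipped label survives any feature perturbation of norm at most `eps` exactly when the score clears the margin `eps * ||w||`. The function name below is hypothetical.

```python
import numpy as np

def robust_ce_linear(x0, w, b, eps):
    """Illustrative robust CE for a linear classifier sign(w.x + b).

    Moves x0 (classified negative) to the nearest point whose score
    reaches eps * ||w||, so the positive label remains valid under any
    perturbation delta with ||delta|| <= eps, because
    w.(x + delta) + b >= eps*||w|| - eps*||w|| = 0.
    (Sketch only; the paper's iterative method handles nonlinear models.)
    """
    x0 = np.asarray(x0, dtype=float)
    w = np.asarray(w, dtype=float)
    norm = np.linalg.norm(w)
    score = w @ x0 + b
    target = eps * norm            # margin required for robustness
    if score >= target:
        return x0.copy()           # already robustly on the positive side
    step = (target - score) / norm**2
    return x0 + step * w           # minimal-norm move along w
```

For example, with `x0 = [0, 0]`, `w = [1, 0]`, `b = -1`, and `eps = 0.1`, the robust CE is `[1.1, 0]`: the point sits distance 0.1 beyond the decision boundary, so perturbing any feature by up to 0.1 cannot flip the label back.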
