CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
| Authors | |
|---|---|
| Publication date | 05-02-2021 |
| Edition | v1 |
| Number of pages | 10 |
| Publisher | Ithaca, NY: ArXiv |
| Organisations | |
| Abstract | Given the increasing promise of Graph Neural Networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. So far, these methods have primarily focused on generating subgraphs that are especially relevant for a particular prediction. However, such methods do not provide a clear opportunity for recourse: given a prediction, we want to understand how the prediction can be changed in order to achieve a more desirable outcome. In this work, we propose a method for generating counterfactual (CF) explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method, CF-GNNExplainer, can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing fewer than 3 edges on average, with at least 94% accuracy. This indicates that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations. |
| Document type | Preprint |
| Note | Versions v2 and v3 (2021) and v4 (2022) also available at ArXiv. |
| Language | English |
| Related publication | CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks |
| Published at | https://doi.org/10.48550/arXiv.2102.03322 |
| Downloads | 2102.03322v1 (Submitted manuscript) |
| Permalink to this page | |
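The counterfactual objective described in the abstract — remove as few edges as possible so that the model's prediction for a node changes — can be sketched with a simple greedy search. This is not the authors' actual method (CF-GNNExplainer learns a differentiable perturbation mask over the adjacency matrix); the `predict` function below is a hypothetical stand-in for a trained GNN node classifier, used purely for illustration.

```python
def predict(adj, node):
    # Toy stand-in for a GNN node classifier (hypothetical model):
    # predict class 1 if the node's degree is at least 2, else class 0.
    return 1 if sum(adj[node]) >= 2 else 0


def counterfactual_edges(adj, node):
    """Greedily delete edges incident to `node` until the prediction flips.

    Returns the list of deleted edges, i.e. the counterfactual explanation,
    or an empty list if no flip is found.
    """
    adj = [row[:] for row in adj]              # work on a copy of the graph
    original = predict(adj, node)
    deleted = []
    for j, connected in enumerate(adj[node]):
        if connected:
            adj[node][j] = adj[j][node] = 0    # delete the undirected edge
            deleted.append((node, j))
            if predict(adj, node) != original:
                return deleted                 # prediction flipped: CF found
    return []


# Node 0 has degree 3, so the toy model predicts class 1; deleting two of
# its edges drops the degree below 2 and flips the prediction to class 0.
adj = [
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
]
print(counterfactual_edges(adj, 0))  # → [(0, 1), (0, 2)]
```

The greedy loop mirrors the spirit of the abstract's claim: a good counterfactual deletes only edges that are crucial to the original prediction, keeping the explanation minimal.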
