A Sparsity Principle for Partially Observable Causal Representation Learning
| Authors | |
|---|---|
| Publication date | 2024 |
| Journal | Proceedings of Machine Learning Research |
| Event | 41st International Conference on Machine Learning, ICML 2024 |
| Volume | 235 |
| Issue number | |
| Pages (from-to) | 55389-55433 |
| Number of pages | 45 |
| Organisations | |
| Abstract | Causal representation learning aims at identifying high-level causal variables from perceptual data. Most methods assume that all latent causal variables are captured in the high-dimensional observations. We instead consider a partially observed setting, in which each measurement only provides information about a subset of the underlying causal state. Prior work has studied this setting with multiple domains or views, each depending on a fixed subset of latents. Here we focus on learning from unpaired observations from a dataset with an instance-dependent partial observability pattern. Our main contribution is to establish two identifiability results for this setting: one for linear mixing functions without parametric assumptions on the underlying causal model, and one for piecewise linear mixing functions with Gaussian latent causal variables. Based on these insights, we propose two methods for estimating the underlying causal variables by enforcing sparsity in the inferred representation. Experiments on different simulated datasets and established benchmarks highlight the effectiveness of our approach in recovering the ground-truth latents. |
| Document type | Article |
| Note | Proceedings of the 41st International Conference on Machine Learning, 21-27 July 2024, Vienna, Austria |
| Language | English |
| Published at | https://proceedings.mlr.press/v235/xu24ac.html |
| Other links | https://www.scopus.com/pages/publications/85203804859 |
| Downloads | xu24ac (Final published version) |
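The core idea summarized in the abstract, estimating the latent causal variables by enforcing sparsity in the inferred representation when each observation reflects only a subset of the causal state, can be illustrated with a minimal sketch. The model architecture, loss weights, and variable names below are illustrative assumptions, not the authors' implementation; the paper's theory concerns linear and piecewise linear mixing functions.

```python
# Minimal illustrative sketch (not the paper's method): an autoencoder whose
# inferred latents are encouraged to be sparse via an L1 penalty, mirroring the
# idea of enforcing sparsity in the representation under partial observability.
import torch
import torch.nn as nn


class SparseEncoderDecoder(nn.Module):
    def __init__(self, obs_dim: int = 20, latent_dim: int = 5, hidden: int = 64):
        super().__init__()
        # Generic MLP encoder/decoder; dimensions are arbitrary for illustration.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, obs_dim)
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return self.decoder(z), z


def loss_fn(x, x_hat, z, sparsity_weight: float = 0.1):
    # Reconstruction term plus an L1 sparsity penalty on the inferred latents.
    recon = ((x - x_hat) ** 2).mean()
    sparsity = z.abs().mean()
    return recon + sparsity_weight * sparsity


if __name__ == "__main__":
    model = SparseEncoderDecoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(128, 20)  # stand-in for unpaired observations
    x_hat, z = model(x)
    loss_fn(x, x_hat, z).backward()
    opt.step()
```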