Probabilistic Test-Time Generalization by Variational Neighbor-Labeling
| Authors | |
|---|---|
| Publication date | 2024 |
| Journal | Proceedings of Machine Learning Research |
| Event | 3rd Conference on Lifelong Learning Agents |
| Volume | 274 |
| Pages (from-to) | 832-851 |
| Number of pages | 20 |
| Organisations | |
| Abstract | This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains. We follow the strict separation of source training and target testing, but exploit the value of the unlabeled target data itself during inference. We make three contributions. First, we propose probabilistic pseudo-labeling of target samples to generalize the source-trained model to the target domain at test time. We formulate the generalization at test time as a variational inference problem, by modeling pseudo labels as distributions, to consider the uncertainty during generalization and alleviate the misleading signal of inaccurate pseudo labels. Second, we learn variational neighbor labels that incorporate the information of neighboring target samples to generate more robust pseudo labels. Third, to learn the ability to incorporate more representative target information and generate more precise and robust variational neighbor labels, we introduce a meta-generalization stage during training to simulate the generalization procedure. Experiments on seven widely used datasets demonstrate the benefits, abilities, and effectiveness of our proposal. |
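The neighbor-labeling idea in the abstract can be illustrated with a deliberately simplified sketch: given a source-trained classifier's logits on a batch of unlabeled target samples, treat each sample's softmax output as a soft pseudo-label distribution and smooth it by averaging with the distributions of its nearest neighbors in feature space. The function name `neighbor_soft_labels` and the plain k-nearest-neighbor averaging are illustrative assumptions, not the paper's variational formulation (which models pseudo labels as latent distributions and meta-learns the procedure).

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def neighbor_soft_labels(features, logits, k=3):
    """Illustrative stand-in for neighbor labeling (NOT the paper's exact
    variational method): smooth each target sample's soft pseudo label by
    averaging it with the soft labels of its k nearest feature-space neighbors,
    damping the effect of individually inaccurate pseudo labels."""
    probs = softmax(logits)                              # (n, classes) soft labels
    # pairwise squared Euclidean distances between target features
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                         # exclude self-matches
    idx = np.argsort(d2, axis=1)[:, :k]                  # k nearest neighbors per sample
    # average own distribution with the neighbors'; rows still sum to 1
    return (probs + probs[idx].sum(axis=1)) / (k + 1)
```

In a test-time generalization loop, these smoothed distributions would serve as training targets for adapting the source model to the target batch, with their entropy acting as a rough proxy for the uncertainty the variational treatment models explicitly.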
| Document type | Article |
| Note | Proceedings of The 3rd Conference on Lifelong Learning Agents, 29 July - 1 August 2024, University of Pisa, Pisa, Italy. |
| Language | English |
| Published at | https://proceedings.mlr.press/v274/ambekar25a.html |
| Downloads | Probabilistic Test-Time Generalization by Variational Neighbor-Labeling (Final published version) |
