Factor analysis models via I-divergence optimization
| Authors | L. Finesso, P. Spreij |
|---|---|
| Publication date | 2016 |
| Journal | Psychometrika |
| Volume | 81 |
| Issue number | 3 |
| Pages (from-to) | 702-726 |
| Number of pages | 25 |
| Organisations | |
| Abstract | Given a positive definite covariance matrix Σ̂ of dimension n, we approximate it with a covariance of the form HH⊤ + D, where H has a prescribed number k < n of columns and D > 0 is diagonal. The quality of the approximation is gauged by the I-divergence between the zero-mean normal laws with covariances Σ̂ and HH⊤ + D, respectively. To determine a pair (H, D) that minimizes the I-divergence we construct, by lifting the minimization into a larger space, an iterative alternating minimization algorithm (AML) à la Csiszár-Tusnády. As it turns out, the proper choice of the enlarged space is crucial for optimization. The convergence of the algorithm is studied, with special attention given to the case where D is singular. The theoretical properties of the AML are compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by ECME (a Newton-Raphson variation on EM), we develop a similar variant of AML, called ACML, and in a few numerical experiments we compare the performance of the four algorithms. |
| Document type | Article |
| Language | English |
| DOI | https://doi.org/10.1007/s11336-015-9486-5 |
| Downloads | Spreij_Finesso_Psychometrika_81-3_2016 (Final published version) |
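The criterion described in the abstract can be made concrete numerically. The sketch below (not the paper's AML/ACML code; all names and the test matrix are ours) evaluates the I-divergence between the two zero-mean normal laws, I(Σ̂ ‖ Σ) = ½[tr(Σ⁻¹Σ̂) − log det(Σ⁻¹Σ̂) − n], and runs the standard EM update for exploratory factor analysis, which the paper uses as a comparison baseline; each EM step decreases the divergence.

```python
import numpy as np

def i_divergence(sigma_hat, sigma):
    # I-divergence between the zero-mean normal laws N(0, sigma_hat) and
    # N(0, sigma): 0.5 * (tr(M) - log det(M) - n) with M = sigma^{-1} sigma_hat.
    n = sigma_hat.shape[0]
    m = np.linalg.solve(sigma, sigma_hat)
    _, logdet = np.linalg.slogdet(m)
    return 0.5 * (np.trace(m) - logdet - n)

def em_step(S, H, d):
    # One standard EM update for the factor model S ~ H H^T + diag(d)
    # (the benchmark algorithm mentioned in the abstract, not AML itself).
    Sigma = H @ H.T + np.diag(d)
    beta = np.linalg.solve(Sigma, H).T            # H^T Sigma^{-1}, shape (k, n)
    Delta = np.eye(H.shape[1]) - beta @ H + beta @ S @ beta.T
    H_new = S @ beta.T @ np.linalg.inv(Delta)     # M-step for the loading matrix
    d_new = np.diag(S - H_new @ beta @ S).copy()  # M-step for the diagonal part
    return H_new, d_new

# Synthetic positive definite matrix standing in for the given Sigma-hat.
rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)

H, d = rng.standard_normal((n, k)), np.ones(n)
divs = [i_divergence(S, H @ H.T + np.diag(d))]
for _ in range(100):
    H, d = em_step(S, H, d)
    divs.append(i_divergence(S, H @ H.T + np.diag(d)))
```

Since maximizing the zero-mean Gaussian likelihood with sample covariance S is equivalent to minimizing this I-divergence, the recorded values `divs` are nonnegative and nonincreasing along the iterations.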
