From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing
| Authors | |
|---|---|
| Publication date | 2018 |
| Book title | CIKM'18 |
| Book subtitle | Proceedings of the 2018 ACM International Conference on Information and Knowledge Management: October 22-26, 2018, Torino, Italy |
| ISBN (electronic) | |
| Event | 27th ACM International Conference on Information and Knowledge Management |
| Pages (from-to) | 497-506 |
| Publisher | New York, NY: The Association for Computing Machinery |
| Organisations | |
| Abstract | The availability of massive data and computing power, allowing for effective data-driven neural approaches, is having a major impact on machine learning and information retrieval research, but these models have a basic problem with efficiency. Current neural ranking models are implemented as multistage rankers: for efficiency reasons, the neural model only re-ranks the top documents retrieved by an efficient first-stage ranker in response to a given query. Neural ranking models learn dense representations, causing essentially every query term to match every document term, which makes ranking the whole collection highly inefficient or even intractable. The reliance on a first-stage ranker creates a dual problem: first, the interaction and combination effects are not well understood; second, the first-stage ranker acts as a "gate-keeper" or filter, effectively blocking the potential of neural models to uncover new relevant documents. In this work, we propose a standalone neural ranking model (SNRM) that introduces a sparsity property to learn a latent sparse representation for each query and document. This representation captures the semantic relationship between queries and documents, yet is sparse enough to enable constructing an inverted index for the whole collection. We parameterize the sparsity of the model to yield a retrieval model as efficient as conventional term-based models. Our model gains efficiency without loss of effectiveness: it not only outperforms existing term-matching baselines, but also performs similarly to recent re-ranking-based neural models with dense representations. Our model can also take advantage of pseudo-relevance feedback for further improvements. More generally, our results demonstrate the importance of sparsity in neural IR models and show that dense representations can be pruned effectively, giving new insights about essential semantic features and their distributions. |
| Document type | Conference contribution |
| Language | English |
| Related publication | From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing |
| Published at | https://doi.org/10.1145/3269206.3271800 |
| Downloads | p497-zamani (Final published version) |
| Permalink to this page | |
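The abstract's core idea — that a sparse latent representation lets a neural ranker reuse the classic inverted-index machinery of term-based retrieval — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the sparse query and document vectors have already been learned, and all names and example values are hypothetical.

```python
from collections import defaultdict

def build_inverted_index(doc_vectors):
    """Map each latent dimension to a postings list of (doc_id, weight),
    keeping only nonzero entries -- sparsity keeps the index small."""
    index = defaultdict(list)
    for doc_id, vec in doc_vectors.items():
        for dim, weight in enumerate(vec):
            if weight > 0:
                index[dim].append((doc_id, weight))
    return index

def retrieve(index, query_vec):
    """Score documents by dot product, touching only the postings for the
    query's nonzero latent dimensions -- the same access pattern as
    conventional term-based retrieval over an inverted index."""
    scores = defaultdict(float)
    for dim, q_weight in enumerate(query_vec):
        if q_weight > 0:
            for doc_id, d_weight in index.get(dim, []):
                scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical sparse latent vectors over 5 latent dimensions.
docs = {
    "d1": [0.0, 0.8, 0.0, 0.3, 0.0],
    "d2": [0.5, 0.0, 0.0, 0.9, 0.0],
    "d3": [0.0, 0.0, 0.7, 0.0, 0.2],
}
index = build_inverted_index(docs)
ranking = retrieve(index, [0.0, 0.6, 0.0, 0.4, 0.0])
```

Documents whose sparse vectors share no nonzero dimension with the query (here `d3`) are never scored at all, which is what makes ranking the whole collection tractable.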
