Parsimonious language models for a terabyte of text

Open Access
Authors
Publication date 2008
Host editors
  • E.M. Voorhees
  • L.P. Buckland
Book title The sixteenth Text REtrieval Conference (TREC 2007) proceedings
Event Sixteenth Text REtrieval Conference (TREC 2007), Gaithersburg, MD, USA
Pages (from-to) 1-7
Publisher National Institute of Standards and Technology (NIST)
Organisations
  • Interfacultary Research - Institute for Logic, Language and Computation (ILLC)
Abstract
The aims of this paper are twofold. Our first aim is to compare results of the earlier Terabyte tracks to the Million Query track. We submitted a number of runs using different document representations (such as full text, title fields, or incoming anchor texts) to increase pool diversity. The initial results show broad agreement in system rankings over various measures on topic sets judged at both the Terabyte and Million Query tracks, with runs using the full-text index giving superior results on all measures, but also some noteworthy upsets. Our second aim is to explore the use of parsimonious language models for retrieval on terabyte-scale collections. These models are smaller, and thus more efficient, than standard language models when used at indexing time, and they may also improve retrieval performance. We have conducted initial experiments using parsimonious models in combination with pseudo-relevance feedback, for both the Terabyte and Million Query track topic sets, and obtained promising initial results.
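Note on the technique: the abstract refers to parsimonious language models without spelling out their estimation. For orientation, the standard parsimonious model (Hiemstra, Robertson & Zaragoza, SIGIR 2004, on which this line of work builds) is fitted with an EM algorithm that shifts probability mass away from terms already well explained by the background corpus and onto terms specific to the document, pruning low-probability terms so the resulting model stays small. The following is a minimal Python sketch of that EM loop, not code from the paper; the function name parsimonious_lm, the mixing weight lam = 0.1, the iteration count, and the pruning threshold are illustrative assumptions.

    from collections import Counter

    def parsimonious_lm(doc_tf, corpus_p, lam=0.1, iters=50, threshold=1e-4):
        """EM estimation of a parsimonious document model (sketch).

        doc_tf:   term -> raw frequency in the document (or feedback set)
        corpus_p: term -> background probability P(t|C)
        lam:      mixing weight of the document model (illustrative value)
        """
        # Initialise with the maximum-likelihood document model.
        total = sum(doc_tf.values())
        p = {t: tf / total for t, tf in doc_tf.items()}

        for _ in range(iters):
            # E-step: expected number of occurrences of each term that are
            # "explained" by the document model rather than the background.
            e = {}
            for t, tf in doc_tf.items():
                if t not in p:
                    continue  # term was pruned in an earlier iteration
                num = lam * p[t]
                e[t] = tf * num / (num + (1 - lam) * corpus_p.get(t, 1e-12))
            # M-step: renormalise and prune terms whose probability falls
            # below the threshold; the pruning is what keeps the model small.
            norm = sum(e.values())
            p = {t: v / norm for t, v in e.items() if v / norm >= threshold}
            # Renormalise the surviving terms to a proper distribution.
            z = sum(p.values())
            p = {t: v / z for t, v in p.items()}
        return p

    # Toy usage: frequent function words lose their mass to the background
    # model and are pruned; content terms keep (almost) all of theirs.
    corpus_p = {"the": 0.05, "is": 0.03, "a": 0.04,
                "language": 0.001, "model": 0.002, "parsimonious": 1e-5}
    doc = Counter("the parsimonious language model is a language model".split())
    print(parsimonious_lm(doc, corpus_p))

On the toy input above, the mass of "the", "is", and "a" shrinks toward zero over the iterations until those terms are pruned, leaving a compact model concentrated on "parsimonious", "language", and "model"; this illustrates why such models are cheaper to use at indexing time, as the abstract notes.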
Document type Conference contribution
Published at http://trec.nist.gov/pubs/trec16/papers/uamsterdam-derijke.mq.final.pdf