Hybrid compression of inverted lists for reordered document collections

D. Arroyuelo, M. Oyarzún, V. Sepulveda. Hybrid compression of inverted lists for reordered document collections. Information Processing & Management, volume 54, number 6, pages 1308–1324, November 2018. DOI: 10.1016/j.ipm.2018.05.007.

Authors
  • Diego Arroyuelo
  • Mauricio Oyarzún
  • Victor Sepulveda
Type: Article
Journal: Information Processing & Management
Number: 6
Volume: 54
DOI: 10.1016/j.ipm.2018.05.007
Month: November
Year: 2018
Pages: 1308–1324
Abstract

Text search engines are a fundamental tool nowadays. Their efficiency relies on a popular and simple data structure: inverted indexes. They store an inverted list per term of the vocabulary. The inverted list of a given term stores, among other things, the document identifiers (docIDs) of the documents that contain the term. Currently, inverted indexes can be stored efficiently using integer compression schemes. Previous research also studied how an optimized document ordering can be used to assign docIDs to the document database. This yields important improvements in index compression and query processing time. In this paper we show that using a hybrid compression approach on the inverted lists is more effective in this scenario, with two main contributions:

  • First, we introduce a document reordering approach that aims at generating runs of consecutive docIDs in a properly-selected subset of inverted lists of the index.

  • Second, we introduce hybrid compression approaches that combine gap and run-length encodings within inverted lists, in order to take advantage not only of small gaps, but also of long runs of consecutive docIDs generated by our document reordering approach.

Our experimental results indicate a reduction of about 10%–30% in the space usage of the whole index (regarding docIDs only), compared with the most efficient state-of-the-art results. Also, decompression speed is up to 1.22 times faster if the runs of consecutive docIDs must be explicitly decompressed, and up to 4.58 times faster if implicit decompression of these runs is allowed (e.g., representing the runs as intervals in the output). Finally, we also improve the query processing time of AND queries (by up to 12%), WAND queries (by up to 23%), and full (non-ranked) OR queries (by up to 86%), outperforming the best existing approaches.
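To make the hybrid gap/run-length idea concrete, the following is a minimal sketch (an illustration of the general technique, not the paper's exact encoding format): maximal runs of consecutive docIDs are emitted as a single (gap, run-length) pair, so long runs produced by document reordering cost only one token instead of many unit gaps.

```python
def hybrid_encode(doc_ids):
    """Encode a sorted docID list as (gap, run_length) pairs.

    Illustrative sketch only: each pair stores the gap from the
    previous docID to the start of a maximal run of consecutive
    docIDs, plus the length of that run.
    """
    pairs = []
    prev = 0
    i = 0
    while i < len(doc_ids):
        j = i
        # Extend the run while docIDs remain consecutive.
        while j + 1 < len(doc_ids) and doc_ids[j + 1] == doc_ids[j] + 1:
            j += 1
        pairs.append((doc_ids[i] - prev, j - i + 1))
        prev = doc_ids[j]
        i = j + 1
    return pairs


def hybrid_decode(pairs):
    """Invert hybrid_encode back to the original docID list."""
    out = []
    prev = 0
    for gap, length in pairs:
        start = prev + gap
        out.extend(range(start, start + length))
        prev = start + length - 1
    return out
```

For example, the list [1, 2, 3, 7, 10, 11, 12, 13] becomes three pairs instead of eight gaps. In a real index, the resulting integers would still be compressed with a standard scheme (e.g., a byte- or word-aligned code), and "implicit decompression" corresponds to returning the runs as intervals rather than materializing them with `range`.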