We consider the image retrieval problem of finding the images in a dataset that
are most similar to a query image. Our goal is to reduce the number of vector
operations and the memory required to perform a search without sacrificing the
accuracy of the returned images. We adopt a group testing formulation and design the
the returned images. We adopt a group testing formulation and design the
decoding architecture using either dictionary learning or eigendecomposition.
The latter is a plausible option for small- to medium-sized problems with
high-dimensional global image descriptors, whereas dictionary learning is
applicable in large-scale scenarios. We evaluate our approach for global
descriptors obtained from both SIFT and CNN features. Experiments with standard
image search benchmarks, including the Yahoo100M dataset comprising 100 million
images, show that our method achieves accuracy comparable to (and sometimes
better than) exhaustive search while requiring only 10% of the vector
operations and memory. Moreover, at the same search complexity, our method is
significantly more accurate than approaches based on dimensionality reduction
or locality-sensitive hashing.
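To illustrate the group testing idea in the simplest possible form, the sketch below builds group vectors by plain summation of member descriptors and scores a query against groups before ranking individual images. This is a toy illustration only: the dataset, group sizes, and the `search` helper are hypothetical, and the paper's actual decoder uses dictionary learning or eigendecomposition rather than this naive sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n unit-normalized d-dimensional global descriptors
# split into m groups (all sizes here are illustrative).
n, d, m = 200, 128, 20
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Encoding: each group vector is the sum of its members' descriptors
# (a plain "memory vector"; a learned decoder would replace this step).
groups = np.array_split(np.arange(n), m)
Y = np.stack([X[g].sum(axis=0) for g in groups])  # m x d group vectors

def search(q, k=5, probe=5):
    """Group-testing search: score the m group vectors first, then
    rank individual images only inside the top `probe` groups."""
    group_scores = Y @ q                      # m dot products instead of n
    top_groups = np.argsort(-group_scores)[:probe]
    cand = np.concatenate([groups[i] for i in top_groups])
    sims = X[cand] @ q                        # exact scores on candidates only
    return cand[np.argsort(-sims)[:k]]

# Querying with an image that is in the dataset should return it first.
q = X[7]
print(search(q))
```

The search touches `m + probe * (n / m)` vectors instead of `n`, which is the source of the memory and vector-operation savings; the decoding architectures in the paper improve on the naive sum's ability to separate a group member's contribution from its groupmates' noise.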