Image matching across wide baselines: From paper to practice
Y. Jin, D. Mishkin, A. Mishchuk, J. Matas, P. Fua, K. Moo Yi, E. Trulls. Image matching across wide baselines: From paper to practice. International Journal of Computer Vision, 2020. DOI: 10.1007/s11263-020-01385-0.
We introduce a comprehensive benchmark for local features and robust estimation algorithms, focusing on the downstream task, the accuracy of the reconstructed camera pose, as our primary metric. Our pipeline's modular structure allows easy integration, configuration, and combination of different methods and heuristics. We demonstrate this by embedding and evaluating dozens of popular algorithms, from seminal works to the cutting edge of machine learning research, and show that with proper settings, classical solutions may still outperform the perceived state of the art. Besides establishing the actual state of the art, our experiments reveal unexpected properties of structure-from-motion pipelines that can help improve their performance, for both algorithmic and learned methods. Data and code are online (https://github.com/ubc-vision/image-matching-benchmark), providing an easy-to-use and flexible framework for benchmarking local features and robust estimation methods, both alongside and against top-performing methods. This work provides a basis for the Image Matching Challenge (https://image-matching-challenge.github.io).
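The primary metric named above is the accuracy of the recovered camera pose. A common way to score a relative pose between two views, and a reasonable sketch of what such a benchmark measures (the function names and exact formulation here are illustrative, not taken from the paper's code), is the angular error of the rotation and of the translation direction, since translation scale is unrecoverable from two views alone:

```python
import numpy as np

def rotation_angle_deg(R_est, R_gt):
    """Angular error between two 3x3 rotation matrices, in degrees."""
    # trace(R_est^T R_gt) = 1 + 2*cos(theta) for the residual rotation.
    cos_theta = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def translation_angle_deg(t_est, t_gt):
    """Angle between two translation directions, in degrees."""
    cos_theta = np.dot(t_est, t_gt) / (
        np.linalg.norm(t_est) * np.linalg.norm(t_gt)
    )
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: a 30-degree rotation about the z-axis versus identity.
c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(rotation_angle_deg(np.eye(3), R))                              # ~30.0
print(translation_angle_deg(np.array([1.0, 0.0, 0.0]),
                            np.array([0.0, 1.0, 0.0])))              # ~90.0
```

A downstream benchmark would typically aggregate such per-pair errors, for example by the fraction of image pairs whose pose error falls below a set of angular thresholds.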