Results of the SIXD Challenge 2017 are included in the BOP ECCV'18 paper. See the BOP website for the latest challenges.

Introduction

A pose of a rigid object has six degrees of freedom (three for rotation and three for translation), and many robotic and augmented reality applications require its full knowledge. The goal of the challenge is to evaluate methods for 6D object pose estimation from RGB or RGB-D images and to establish the state of the art.
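For concreteness, the following minimal NumPy sketch (illustrative only, not part of any challenge code) represents a 6D pose as a 3x3 rotation matrix R and a translation vector t, and maps 3D model points into the camera coordinate frame:

    import numpy as np

    def transform_pts(pts, R, t):
        # Map Nx3 model points to the camera frame: x_cam = R * x_model + t.
        return pts @ R.T + t.reshape(1, 3)

    R = np.eye(3)                      # rotation (here: identity)
    t = np.array([0.0, 0.0, 500.0])    # translation, e.g. in millimeters
    pts_cam = transform_pts(np.random.rand(100, 3), R, t)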

Challenge task

The methods are evaluated on the task of 6D localization of a single instance of a single object (SiSo). All test images are used for the evaluation, even those with multiple instances of the object of interest.

This task, a special variant of the 6D localization task described in [1], makes it possible to evaluate most state-of-the-art methods out of the box. The task is also relevant for industry, e.g. for an assembly robot that needs to find a bolt to complete an assembly step: even if there are multiple bolts in its workspace, the robot needs the pose of only a single, arbitrarily chosen bolt.

The difficulty of "multiple instances are present, localize any one of them" is close to that of "localize the instance in the most favorable pose", i.e. the least occluded and least ambiguously viewed one. Methods are therefore expected, but not required, to report the pose of the most favorable instance and to treat the remaining instances as clutter.
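As an illustration (a hypothetical sketch, not prescribed by the challenge): if a method internally generates several candidate poses of the object, only the single most confident one is reported for evaluation.

    def select_siso_estimate(candidates):
        # candidates: list of (score, R, t) tuples produced by the method;
        # keep only the estimate with the highest confidence score.
        return max(candidates, key=lambda c: c[0])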

Datasets

All datasets selected for the challenge contain 3D object models and training and test RGB-D images. The training images show individual objects from different viewpoints and were either captured by a Kinect-like sensor or obtained by rendering the 3D object models. The test images were captured in scenes of varying complexity, often with clutter and occlusion.

The datasets can be downloaded from the BOP website.

Hinterstoisser et al. [3], with extra ground truth from Brachmann et al. [4]
T-LESS [2] (use the Primesense images)
TUD Light
Toyota Light
Rutgers APC [7] (reduced version)
Tejani et al. [5] (reduced version)
Doumanoglou et al. [6] (reduced version)

How to participate

To have your method evaluated, run it on all of the datasets and submit the results in this format to hodantom@cmp.felk.cvut.cz. The submission deadline is 8 October 2017.
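The linked format specification is authoritative; purely as an illustration of what a submission contains, each pose estimate couples scene/image/object identifiers with a confidence score, a 3x3 rotation R and a translation t (the field names below are hypothetical):

    import numpy as np

    # Hypothetical record of one pose estimate; see the linked format
    # specification for the actual field names and file layout.
    estimate = {
        'scene_id': 1, 'im_id': 0, 'obj_id': 5,
        'score': 0.83,                       # confidence of the estimate
        'R': np.eye(3).flatten().tolist(),   # rotation, row-major 3x3
        't': [0.0, 0.0, 500.0],              # translation in millimeters
    }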

Challenge rules:

  1. For training, you can use the provided object models and training images (both real and rendered). You can also render extra training images using the object models.
  2. Not a single pixel of the test images, nor the ground-truth poses provided for them, may be used for training.
  3. For each submission, keep the parameters of your method constant across all objects and datasets.
  4. If you want your results to be included in a publication about the challenge, documentation of the results is required. Without the documentation, your results will be listed on this website but not included in the publication.
  5. To be eligible to win, the authors must also provide an implementation of the method (source code or a binary file) that will be validated. Besides the absolute winner, we will announce an open-source winner, selected from the methods whose source code is publicly available.

The error of 6D object pose estimates will be measured as described in this document. In short, a slightly modified version of the Visible Surface Discrepancy [1] will be used as the main pose-error function. For legacy reasons, we will also use the Average Distance of Hinterstoisser et al. [3].
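As an illustration of these metrics, here is our own NumPy sketch (not the official evaluation code; the linked document defines the exact variants used in the challenge). The Average Distance averages the distances between corresponding model points under the ground-truth and estimated poses, with a closest-point variant for symmetric objects; the VSD measures the fraction of mismatching pixels over the visible object surface, here assuming precomputed visibility masks and rendered depth maps:

    import numpy as np
    from scipy.spatial import cKDTree

    def transform(pts, R, t):
        # Map Nx3 model points by rotation R (3x3) and translation t (3,).
        return pts @ R.T + t.reshape(1, 3)

    def add_err(pts, R_gt, t_gt, R_est, t_est):
        # ADD: average distance over corresponding model points.
        diff = transform(pts, R_gt, t_gt) - transform(pts, R_est, t_est)
        return np.linalg.norm(diff, axis=1).mean()

    def adi_err(pts, R_gt, t_gt, R_est, t_est):
        # ADI: average closest-point distance, used for symmetric objects.
        dists, _ = cKDTree(transform(pts, R_est, t_est)).query(
            transform(pts, R_gt, t_gt), k=1)
        return dists.mean()

    def vsd_err(depth_est, depth_gt, visib_est, visib_gt, tau):
        # VSD: fraction of pixels in the union of the visibility masks where
        # the rendered object depths do not agree within tolerance tau.
        union = visib_est | visib_gt
        if not union.any():
            return 1.0
        ok = visib_est & visib_gt & (np.abs(depth_est - depth_gt) < tau)
        return 1.0 - ok.sum() / union.sum()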

We provide the SIXD toolkit, a set of Python scripts for reading the standard dataset format, rendering, evaluation, etc.

Organizers

Tomáš Hodaň1, Frank Michel2, Caner Sahin3, Tae-Kyun Kim3, Jiří Matas1, Carsten Rother4

1 Center for Machine Perception, Czech Technical University in Prague, Czech Republic
2 Computer Vision Lab, TU Dresden, Germany
3 Imperial Computer Vision & Learning Lab, Imperial College London, United Kingdom
4 Visual Learning Lab, Uni Heidelberg, Germany

References

[1] Hodaň, Matas, Obdržálek: On Evaluation of 6D Object Pose Estimation. ECCV Workshops 2016.
[2] Hodaň, Haluza, Obdržálek, Matas, Lourakis, Zabulis: T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-less Objects. WACV 2017.
[3] Hinterstoisser, Lepetit, Ilic, Holzer, Bradski, Konolige, Navab: Model Based Training, Detection and Pose Estimation of Texture-less 3D Objects in Heavily Cluttered Scenes. ACCV 2012.
[4] Brachmann, Krull, Michel, Gumhold, Shotton, Rother: Learning 6D Object Pose Estimation Using 3D Object Coordinates. ECCV 2014.
[5] Tejani, Tang, Kouskouridas, Kim: Latent-Class Hough Forests for 3D Object Detection and Pose Estimation. ECCV 2014.
[6] Doumanoglou, Kouskouridas, Malassiotis, Kim: Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd. CVPR 2016.
[7] Rennie, Shome, Bekris, De Souza: A Dataset for Improved RGBD-Based Object Detection and Pose Estimation for Warehouse Pick-and-Place. IEEE Robotics and Automation Letters 2016.