6D object pose estimation

Program - October 23 (PM), 2022, UTC+3 time zone

14:00 Opening: Martin Sundermeyer (DLR, TU Munich), VIDEO
14:10 Invited talk 1: Vincent Lepetit (ENPC ParisTech): A brief history of 6D tracking, VIDEO
14:40 Invited talk 2: Tolga Birdal (Imperial College): Neural 3D object priors for reconstruction and pose estimation, VIDEO
15:10 Oral presentations of workshop papers:
  • Learning to Estimate Multi-view Pose from Object Silhouettes, VIDEO
  • CenDerNet: Center and Curvature Representations for Render-and-Compare 6D Pose Estimation, VIDEO
  • Trans6D: Transformer-based 6D Object Pose Estimation and Refinement, VIDEO
  • TransNet: Category-Level Transparent Object Pose Estimation, VIDEO
15:40 Invited talk 3: Yisheng He (HKUST): Towards accurate, generalizable and self-supervised object pose estimation, VIDEO
16:10 Invited talk 4: Yen-Chen Lin (MIT CSAIL): Neural fields for robotic perception, VIDEO
16:40 Results of the BOP Challenge 2022: Tomáš Hodaň (Meta), Martin Sundermeyer (DLR, TU Munich), VIDEO 1, VIDEO 2, SLIDES
17:00 Presentations of the BOP Challenge 2022 winners
17:10 Discussion and closing
17:20 Poster session (posters of workshop papers and invited conference papers)
18:00 End of workshop

Introduction

Accurate estimation of the 6D pose of an object (3D translation and 3D rotation) from RGB/RGB-D images is of great importance to many higher-level tasks such as robotic manipulation, augmented reality, and autonomous driving. The introduction of RGB-D sensors, the advent of deep learning, and novel data-generation pipelines have led to substantial improvements in 6D object pose estimation. Yet challenges remain, such as robustness against severe occlusion and clutter, scalability to multiple objects, fast and reliable object learning/modeling, and effective synthetic-to-real domain transfer. Besides 6D pose estimation of specific rigid objects, the workshop covers related topics such as pose estimation of articulated and deformable objects, pose estimation of object categories, and pose estimation without 3D models. The workshop brings together people working on these topics in academia and industry to share up-to-date advances and identify open problems. It features four invited talks, presentations of accepted workshop papers and of related papers invited from the main conference, and the results of the BOP Challenge 2022 on 6D object pose estimation.
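For readers new to the topic, the "6D pose" mentioned above is commonly packed into a single 4x4 homogeneous transform that maps points from the object's model frame into the camera frame. A minimal, method-agnostic sketch in NumPy (the function names here are illustrative, not from any particular library):

```python
import numpy as np

def pose_matrix(R, t):
    """Pack a 3x3 rotation R and a 3-vector translation t
    (the "6D pose") into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, pts):
    """Apply a 4x4 pose to an (N, 3) array of model points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (T @ pts_h.T).T[:, :3]

# Example: rotate 90 degrees about Z, then translate 0.5 along X.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = pose_matrix(Rz, np.array([0.5, 0.0, 0.0]))
p = transform_points(T, np.array([[1.0, 0.0, 0.0]]))
# p is approximately [[0.5, 1.0, 0.0]]
```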

Previous workshop editions: 1st edition (ICCV 2015), 2nd edition (ECCV 2016), 3rd edition (ICCV 2017), 4th edition (ECCV 2018), 5th edition (ICCV 2019), 6th edition (ECCV 2020).

BOP Challenge 2022

We invite submissions to the BOP Challenge 2022 on 6D object localization and on 2D object detection and segmentation. While the 6D object localization task follows the same evaluation methodology as the 2019 and 2020 editions of the challenge, this year we additionally evaluate the 2D object detection and segmentation tasks, which form the first stage of many recent 6D pose estimation methods. Evaluating the detection/segmentation stage and the pose estimation stage separately will help us better understand advances in each stage.
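The BOP evaluation methodology scores 6D localization with symmetry-aware pose-error functions (VSD, MSSD, MSPD). As an illustration of the symmetry-aware idea, here is a minimal sketch of an MSSD-style error; the point sampling and the way symmetries are listed are simplified assumptions, not the official BOP toolkit implementation:

```python
import numpy as np

def mssd(R_est, t_est, R_gt, t_gt, pts, symmetries):
    """Maximum Symmetry-aware Surface Distance (simplified sketch):
    the max distance between corresponding model points, minimized
    over the object's symmetry transformations.

    pts: (N, 3) model vertices; symmetries: list of 3x3 rotations
    that map the model onto itself (include the identity).
    """
    est = pts @ R_est.T + t_est
    errs = []
    for S in symmetries:
        gt = (pts @ S.T) @ R_gt.T + t_gt
        errs.append(np.linalg.norm(est - gt, axis=1).max())
    return min(errs)

# Example: a square that is symmetric under a 180-degree rotation about Z.
pts = np.array([[1, 1, 0], [1, -1, 0], [-1, -1, 0], [-1, 1, 0]], float)
I = np.eye(3)
S180 = np.diag([-1.0, -1.0, 1.0])
t = np.zeros(3)
# The estimate differs from the ground truth only by a model symmetry,
# so the symmetry-aware error is zero.
err = mssd(S180, t, I, t, pts, [I, S180])
# err == 0.0
```

Without the minimization over symmetries, the same estimate would be penalized heavily even though the rendered object looks identical, which is why symmetry handling is central to the BOP metrics.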

The challenge was sponsored by Reality Labs at Meta and by Niantic, which together donated $4,000 (each $2,000, before tax) in award money. The awards were presented at the workshop in Tel Aviv.

Accepted Workshop Papers

Call for Papers

We invite paper submissions presenting unpublished work. Accepted papers will be published in the ECCV workshop proceedings and presented at the workshop. Papers must follow the format of the main conference (LaTeX template) and be submitted via CMT.

The covered topics include but are not limited to:

Dates

Paper submission deadline: August 8, 2022 (extended from August 1) (11:59PM PST)
Paper acceptance notification: August 15, 2022
Paper camera-ready version: August 22, 2022 (11:59PM PST)
Deadline for submissions to the BOP Challenge 2022: October 16, 2022 (11:59PM UTC)
Workshop date: October 23 (PM), 2022

Organizers

Martin Sundermeyer, DLR German Aerospace Center, martin.sundermeyer@dlr.de
Tomáš Hodaň, Reality Labs at Meta, tomhodan@fb.com
Yann Labbé, Inria Paris
Gu Wang, Tsinghua University
Lingni Ma, Reality Labs at Meta
Eric Brachmann, Niantic
Bertram Drost, MVTec
Sindi Shkodrani, Reality Labs at Meta
Rigas Kouskouridas, Scape Technologies
Ales Leonardis, University of Birmingham
Carsten Steger, Technical University of Munich, MVTec
Vincent Lepetit, ENPC ParisTech, Technical University Graz
Jiří Matas, Czech Technical University in Prague

Contact

r6d.workshop@gmail.com