
CVWW 2018

Computer Vision Winter Workshop
Český Krumlov, Czech Republic
5-7th February 2018

Co-chairs

Zuzana Kukelova, FEE CTU Prague

Julia Skovierova, CIIRC, CTU Prague

Local organisation

Eva Matyskova, FEE CTU Prague

Václav Hlaváč, CIIRC, CTU Prague

Center for Machine Perception,
Dept. of Cybernetics, CTU
Karlovo namesti 13
121 35 Prague 2
Czech Republic

phone: +420 224 357 637
fax: +420 224 357 385

Invited speakers

Dr. François Brémond

Research Director at INRIA Sophia Antipolis
INRIA Sophia-Antipolis Méditerranée

Title:
Scene Understanding for Activity Monitoring


Abstract:
As the population of older persons grows rapidly, improving their quality of life at home is of great importance. This can be achieved through the development of technologies for monitoring their activities at home. In this context, we propose activity monitoring approaches which aim at analysing the behaviours of older persons by combining heterogeneous sensor data to recognize critical activities at home. In particular, these approaches combine data provided by video cameras with data provided by environmental sensors attached to house furnishings.

There are 3 categories of critical human activities:

  • Activities which can be well described or modeled by users
  • Activities which can be specified by users and illustrated by positive/negative samples representative of the targeted activities
  • Rare activities which are unknown to the users and which can be defined only with respect to frequent activities, which requires large datasets
In this talk, we will present several techniques for detecting people and recognizing human activities, in particular using 2D and 3D video cameras. More specifically, there are 3 categories of algorithms for recognizing human activities:
  • Recognition engine using hand-crafted ontologies based on a priori knowledge (e.g. rules) predefined by users. This activity recognition engine is easily extendable and allows later integration of additional sensor information when needed [König 2015, Crispim 2016, Crispim 2017].
  • Supervised learning methods based on positive/negative samples representative of the targeted activities which have to be specified by users. These methods are usually based on Bag-of-Words and CNNs computing a large variety of spatio-temporal descriptors [Bilinski 2015, Das 2017].
  • Unsupervised (fully automated) learning methods based on clustering of frequent activity patterns in large datasets, which can generate/discover new activity models [Negin 2015].
We will briefly discuss people detection and tracking, and then focus on activity recognition, the latest advances in machine learning, and how these new technologies have changed these topics. In particular, we will address the impact of deep learning on activity recognition compared with classical machine learning (i.e. machine learning without deep learning), especially in terms of performance.
We will illustrate the proposed activity monitoring approaches through several home care application datasets:
http://www-sop.inria.fr/members/Francois.Bremond/topicsText/activityLanguage.html
http://www-sop.inria.fr/members/Francois.Bremond/topicsText/gerhomeProject.html
http://www-sop.inria.fr/members/Piotr.Bilinski/Demos
http://cmrr-nice.fr/sweethome/
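
To make the first, rule-based category above concrete, here is a minimal sketch of a hand-crafted activity recognition rule that fuses a camera-based zone detection with an environmental sensor, in the spirit of the heterogeneous-sensor combination described in the abstract. The sensor names, event format, and time window are illustrative assumptions, not the actual API of the STARS recognition engine.

    from dataclasses import dataclass

    @dataclass
    class Event:
        sensor: str   # hypothetical source, e.g. "camera_zone" or "stove_contact"
        value: str    # e.g. "kitchen", "on"
        t: float      # timestamp in seconds

    def recognize_cooking(events, window=300.0):
        """Fire the 'cooking' activity when the camera places the person
        in the kitchen zone while the stove contact sensor reports 'on'
        within the same time window (names/threshold are assumptions)."""
        kitchen_times = [e.t for e in events
                         if e.sensor == "camera_zone" and e.value == "kitchen"]
        stove_times = [e.t for e in events
                       if e.sensor == "stove_contact" and e.value == "on"]
        return any(abs(tk - ts) <= window
                   for tk in kitchen_times for ts in stove_times)

    if __name__ == "__main__":
        stream = [Event("camera_zone", "kitchen", 100.0),
                  Event("stove_contact", "on", 130.0)]
        print(recognize_cooking(stream))  # True: both cues within 5 minutes

A real engine of this kind would evaluate many such rules over streaming events; the appeal, as the abstract notes, is that new sensors or rules can be added later without retraining.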

Bio:
François Brémond is leading the STARS team at INRIA Sophia Antipolis. He designs and develops generic systems for dynamic scene interpretation. The targeted class of applications is the automatic interpretation of indoor and outdoor scenes observed with various sensors, in particular static cameras. These systems detect and track mobile objects, which can be either humans or vehicles, and recognize their behaviours. He is particularly interested in filling the gap between sensor information (pixel level) and recognized activities (semantic level). In 1997 he obtained his PhD degree at INRIA in video understanding, and he pursued his research as a postdoctoral researcher at USC on the interpretation of videos taken from UAVs (Unmanned Aerial Vehicles). He has also participated in many European and industrial research projects in activity monitoring. François Brémond is the author or co-author of more than 200 scientific papers on video understanding published in international journals and conferences. He is a co-founder of Keeneo, Ekinnox and Neosensys, three companies in intelligent video monitoring and business intelligence.

Prof. Davide Scaramuzza

Professor and Director of the Robotics and Perception Group
University of Zurich

Title:
Vision-controlled Micro Flying Robots: from Frame-based to Event-based Vision


Abstract:
Autonomous quadrotors will soon play a major role in search-and-rescue and remote-inspection missions, where a fast response is crucial. Quadrotors have the potential to navigate quickly through unstructured environments, enter and exit buildings through narrow gaps, and fly through collapsed buildings. However, their speed and maneuverability are still far from those of birds. Indeed, agile navigation through unknown, indoor environments poses a number of challenges for robotics research in terms of perception, state estimation, planning, and control. In this talk, I will give an overview of my research activities on visual navigation of quadrotors, from slow navigation (using standard frame-based cameras) to agile flight (using event-based cameras).
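
As a rough illustration of the frame-based versus event-based distinction in the title, the sketch below contrasts the two sensor outputs as data structures: a frame camera delivers full images at a fixed rate, while an event camera delivers an asynchronous stream of per-pixel brightness changes with microsecond timestamps. All names and values are illustrative assumptions, not any particular camera's API.

    from collections import namedtuple

    # Frame-based: full images at a fixed rate (e.g. ~30 Hz, one frame each ~33 ms).
    Frame = namedtuple("Frame", ["t", "pixels"])  # t in seconds, pixels: 2D intensity array
    frames = [Frame(t=0.000, pixels=[[0] * 4] * 3),
              Frame(t=0.033, pixels=[[0] * 4] * 3)]

    # Event-based: asynchronous per-pixel brightness changes; events fire
    # only where intensity changes, with very low latency.
    DVSEvent = namedtuple("DVSEvent", ["x", "y", "t", "polarity"])  # polarity: +1 brighter, -1 darker
    events = [DVSEvent(x=1, y=2, t=0.000400, polarity=+1),
              DVSEvent(x=1, y=2, t=0.000700, polarity=-1)]

The low latency and sparsity of the event stream are what make event cameras attractive for the agile flight regime discussed in the talk.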

Bio:
Davide Scaramuzza (born in 1980, Italian) is Professor of Robotics and Perception in both the Department of Neuroinformatics (ETH and University of Zurich) and the Department of Informatics (University of Zurich), where he does research at the intersection of robotics, computer vision, and neuroscience. Specifically, he investigates the use of standard and neuromorphic cameras to enable autonomous, agile navigation of micro drones in search-and-rescue scenarios. He did his PhD in robotics and computer vision at ETH Zurich (with Roland Siegwart) and a postdoc at the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis). From 2009 to 2012, he led the European project sFly, which introduced the PX4 autopilot and pioneered visual-SLAM-based autonomous navigation of micro drones. For his research contributions in vision-based navigation with standard and neuromorphic cameras, he was awarded the IEEE Robotics and Automation Society Early Career Award, the SNSF-ERC Starting Grant, a Google Research Award, KUKA, Qualcomm, and Intel awards, the European Young Researcher Award, the Misha Mahowald Neuromorphic Engineering Award, and several conference paper awards. He coauthored the book "Introduction to Autonomous Mobile Robots" (published by MIT Press; 10,000 copies sold) and more than 100 papers on robotics and perception published in top-ranked journals (TRO, PAMI, IJCV, IJRR) and conferences (RSS, ICRA, CVPR, ICCV). In 2015, he co-founded a venture called Zurich-Eye, dedicated to the commercialization of visual-inertial navigation solutions for mobile robots, which later became Facebook-Oculus Switzerland and Oculus' European research hub. He was also the strategic advisor of Dacuda, an ETH spinoff dedicated to inside-out VR solutions, which later became Magic Leap Zurich.

Important dates

  • Paper Submission

    22nd December, 2017, 23:59 CET
  • Acceptance Notification

    15th January, 2018
  • Camera-ready

    21st January, 2018, 23:59 CET
  • Registration

    24th January, 2018, 23:59 CET
  • Workshop

    5-7th February, 2018
