
Humanoid and cognitive robotics

We are a research group working primarily in cognitive developmental robotics and neurorobotics.

Our full affiliation is Vision for Robotics and Autonomous Systems, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague.

The goal of our research is two-fold:

  1. Understanding biological systems by modeling them with robots.
  2. Development of new technology geared at making robots more autonomous, robust, and safe.

This is exemplified in our current project, which focuses specifically on body representations: “Robot self-calibration and safe physical human-robot interaction inspired by body representations in primate brains” (Czech Science Foundation, GA17-15697Y).

For more details about our research, see the corresponding section below.


Frontiers Research Topic: Body Representations, Peripersonal Space, and the Self: Humans, Animals, Robots


See our YouTube channel.



Matej Hoffmann (Assistant Professor, coordinator)
Google Scholar profile
Tomas Svoboda (Associate Professor)
Google Scholar profile
Karla Stepanova (Postdoc)
Google Scholar profile
Hagen Lehmann (Visiting Researcher)
Google Scholar profile
Zdenek Straka (PhD Student)
Google Scholar profile
Petr Svarny (PhD Student)
Google Scholar profile
Filipe Gama (PhD Student)


Student topics

We offer interesting topics for student theses and projects as well as paid internships. Have a look HERE and feel free to contact Matej Hoffmann for more information.

List of currently open topics at the Department of Cybernetics (mostly in Czech): LINK.

Other topics can be defined upon request – simply drop by at KN-E211 or write an email to matej.hoffmann [guess-what]


For our publications, please see the Google Scholar profiles of individual group members.

Models of body representations

How do babies learn about their bodies? Newborns probably do not have a holistic perception of their body; instead, they start to pick up correlations in the streams of individual sensory modalities (in particular visual, tactile, and proprioceptive). The structure in these streams allows them to learn the first models of their bodies. The mechanisms behind these processes are largely unclear. In collaboration with developmental and cognitive psychologists, we want to shed more light on this topic by developing robotic models.
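As a toy illustration of this correlation-based learning (not one of our actual models), the sketch below lets a hypothetical two-joint planar arm perform random "motor babbling" and learn to predict the visual position of its hand from proprioception alone, using least squares on trigonometric features. All link lengths and noise levels are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-joint planar arm: proprioception = joint angles,
# vision = observed hand position. Link lengths are made up.
L1, L2 = 0.3, 0.25

def hand_position(q):
    """Hand (x, y) from joint angles q (N x 2); stands in for the visual stream."""
    x = L1 * np.cos(q[:, 0]) + L2 * np.cos(q[:, 0] + q[:, 1])
    y = L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

def features(q):
    """Trigonometric features of the joint angles."""
    s = q.sum(axis=1, keepdims=True)
    return np.hstack([np.cos(q), np.sin(q), np.cos(s), np.sin(s)])

# "Motor babbling": random configurations plus noisy visual observations.
q = rng.uniform(-np.pi / 2, np.pi / 2, size=(2000, 2))
vision = hand_position(q) + rng.normal(0, 0.005, size=(2000, 2))

# Learn the proprioceptive-to-visual mapping by linear least squares --
# a crude stand-in for picking up cross-modal correlations.
W, *_ = np.linalg.lstsq(features(q), vision, rcond=None)

# The learned model now predicts where the hand is from proprioception alone.
q_test = np.array([[0.4, -0.3]])
pred = features(q_test) @ W
err = np.linalg.norm(pred - hand_position(q_test))
```

Because the toy forward kinematics is exactly linear in these features, the prediction error shrinks to roughly the observation-noise level; real visuo-proprioceptive learning is of course far less tidy.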

Automatic robot self-calibration

Standard robot calibration procedures require prior knowledge of a number of quantities from the robot’s environment, and these conditions must be recreated whenever recalibration is needed. This has motivated alternative, more “self-contained” solutions that the robot can perform automatically, typically by observing specific points on its body with its own camera(s). The advent of robotic skin technologies opens up completely new approaches. In particular, the kinematic chain can be closed and the necessary redundant information obtained through self-touch, broadening sample collection from the end-effector to the whole body surface. Furthermore, it opens the possibility of truly multimodal calibration, combining visual, proprioceptive, tactile, and inertial information.
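The self-touch idea can be sketched in a deliberately simplified setting: a planar two-link arm whose link lengths are unknown touches skin taxels whose positions are known from the skin layout, and the closed kinematic chain yields constraints that a least-squares fit can solve. Every number and name here is hypothetical, and real calibration involves many more (nonlinear) parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# True link lengths of the toy arm -- unknown to the "robot".
true_lengths = np.array([0.32, 0.27])

def fingertip(lengths, q):
    """Planar forward kinematics of the fingertip for joint angles q (N x 2)."""
    l1, l2 = lengths
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

# Self-touch data: configurations in which the fingertip presses a skin
# taxel whose position is known (simulated here with small sensor noise).
q_touch = rng.uniform(-1.2, 1.2, size=(50, 2))
taxel_pos = fingertip(true_lengths, q_touch) + rng.normal(0, 1e-3, size=(50, 2))

# Closing the chain: fingertip(lengths, q) must equal the taxel position.
# For this toy model the fingertip is linear in the link lengths, so the
# calibration reduces to ordinary linear least squares.
A = np.concatenate([
    np.stack([np.cos(q_touch[:, 0]), np.cos(q_touch.sum(axis=1))], axis=1),
    np.stack([np.sin(q_touch[:, 0]), np.sin(q_touch.sum(axis=1))], axis=1),
])
b = np.concatenate([taxel_pos[:, 0], taxel_pos[:, 1]])
estimated_lengths, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With 50 self-touch contacts the estimated link lengths recover the true values to well under a millimeter in this noise-free-enough toy; in practice the parameters enter the kinematics nonlinearly and an iterative solver is used instead.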


Safe physical human-robot interaction

Robots are leaving the factory, entering domains that are far less structured, and starting to share living spaces with humans. As a consequence, they need to dynamically adapt to unpredictable interactions with people and guarantee safety at every moment. “Body awareness” acquired through artificial skin can be used not only to improve reactions to collisions; coupled with vision, it can be extended to a margin around the body (the so-called peripersonal space), facilitating collision avoidance and contact anticipation and eventually leading to safer and more natural interaction of the robot with its environment, including humans.
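One simple way such a peripersonal-space representation can be exploited for safety is to slow the robot down as a person approaches, before any contact occurs. The sketch below is a hypothetical distance-based velocity-scaling scheme, not a controller from our work; the zone radii are invented for illustration:

```python
import numpy as np

def peripersonal_activation(distance, margin=0.4, body=0.05):
    """Activation in [0, 1]: 0 outside the peripersonal zone, 1 at the body.

    `margin` is the outer radius of the zone and `body` the distance at
    which contact is imminent (both values hypothetical, in meters).
    """
    return float(np.clip((margin - distance) / (margin - body), 0.0, 1.0))

def scaled_velocity(v_cmd, distance):
    """Reduce commanded speed as an obstacle penetrates the zone."""
    return (1.0 - peripersonal_activation(distance)) * v_cmd

v = 0.5  # commanded speed, m/s (hypothetical)
far = scaled_velocity(v, 1.0)       # person far away: full speed
near = scaled_velocity(v, 0.2)      # inside the zone: reduced speed
contact = scaled_velocity(v, 0.05)  # at the body surface: stop
```

The linear ramp is the simplest choice; anticipation of contact, as described above, would additionally take the approach velocity into account rather than distance alone.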