Project C4 – How Neuronal Representations of Space-Time Lead to Action
The information gathered by different sensory systems varies continuously in space and time and is tainted by uncertainty; nevertheless, living creatures successfully base their decisions on how to act on this information. Multisensory integration, though performed highly efficiently in nature, remains effectively terra incognita in robotics; we will therefore approach this problem using a combination of mathematical theory and applied engineering. An explicit mathematical theory for analysing sensory processing, map formation [1,2], and actuation [3] makes implementation in hardware straightforward. The Cheng lab is dedicated to exploiting ideas from neuroscience to find novel solutions in robotics [4,5]. Buss and co-workers have explored the interface of robotics and human perception through haptics [6], a key ingredient of multisensory integration [7].
Objectives and description of the project
The challenging problem we want to solve is how to implement the multimodal integration observed in the neuronal systems of birds and cats in autonomous robotic hardware. This task becomes even more challenging when several modalities of varying accuracy and different delays need to be combined, such as vision and audition. In addition to developing algorithms for multimodal decision making in time-varying domains, we plan to deepen our understanding of multimodal integration by validating the resulting concepts through technical systems that can generate multisensory, real-world vestibular-oculomotor responses to complex scenes, while also taking feedback instabilities into account (TUM Biomathematics).
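To illustrate the core computational idea, a standard starting point for combining modalities of varying accuracy is maximum-likelihood (inverse-variance weighted) cue fusion, in which each modality's estimate is weighted by its reliability. The sketch below is an illustrative simplification, not the project's actual algorithm; the function name and the example numbers (a precise visual and a noisy auditory azimuth estimate) are assumptions for demonstration only.

```python
import numpy as np

def fuse_estimates(means, variances):
    """Maximum-likelihood fusion of independent Gaussian cues.

    Each cue is weighted by its inverse variance (its reliability),
    so more accurate modalities dominate the combined estimate.
    """
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    weights /= weights.sum()
    fused_mean = float(np.dot(weights, means))
    fused_var = 1.0 / float(np.sum(1.0 / variances))  # fused estimate is more precise than any single cue
    return fused_mean, fused_var

# Hypothetical example: visual azimuth estimate (precise) vs. auditory (noisy)
vision_mean, vision_var = 10.0, 1.0      # degrees, variance in deg^2
audition_mean, audition_var = 20.0, 9.0
mu, var = fuse_estimates([vision_mean, audition_mean], [vision_var, audition_var])
# The fused estimate lies closer to the more reliable visual cue,
# and its variance is smaller than either individual variance.
```

Handling the different sensory delays mentioned above would additionally require predicting each modality's estimate forward to a common time point before fusion, which this static sketch omits.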
[1] Friedel & van Hemmen, Biol Cybern 2008.
[2] van Hemmen, 2001.
[3] van Hemmen & Schwartz, Biol Cybern 2008.
[4] Cheng et al., Advanced Robotics 2007.
[5] Cheng et al., Robotics and Autonomous Systems 2001.
[6] Peer & Buss, IEEE/ASME Trans Mechatr 2008.
[7] Ernst et al. 2000.