Smile Like You Mean It:
Driving Animatronic Robotic Face with Learned Models
The ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots. At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans. In order to adapt robot behavior in real time to the different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts. We address this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry. Our algorithm requires no knowledge of the robot's kinematic model, camera calibration, or a predefined expression set. By decomposing the learning process into a generative model and an inverse model, our framework can be trained from a single motor babbling dataset. Comprehensive evaluations show that our method enables accurate and diverse facial mimicry across a wide range of human subjects.
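The sketch below illustrates the generative/inverse decomposition described above in schematic form; it is not the authors' implementation. It assumes the generative model maps motor commands to predicted facial landmarks and the inverse model maps target landmarks back to motor commands, with both networks trained on the same motor babbling dataset. The motor count, landmark dimensionality, architectures, and training loop are illustrative assumptions, and the "observed" landmarks are random placeholders so the example runs standalone.

```python
# Minimal sketch (not the authors' code) of the two-model decomposition:
# a generative model predicts the robot's facial landmarks from motor
# commands, and an inverse model recovers motor commands from target
# landmarks. Both are trained on one motor-babbling dataset of random
# commands paired with the landmarks observed by the camera.
import torch
import torch.nn as nn

N_MOTORS = 12          # assumed number of facial actuators
N_LANDMARKS = 68 * 2   # assumed flattened 2-D facial landmarks

def mlp(in_dim, out_dim, hidden=256):
    """Small fully connected network used for both models in this sketch."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

generative = mlp(N_MOTORS, N_LANDMARKS)   # motor commands -> predicted landmarks
inverse = mlp(N_LANDMARKS, N_MOTORS)      # target landmarks -> motor commands

opt = torch.optim.Adam(
    list(generative.parameters()) + list(inverse.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

# Motor-babbling dataset: random motor commands and the landmarks that were
# observed for each. Random placeholders stand in for real observations here.
commands = torch.rand(1024, N_MOTORS)
observed = torch.randn(1024, N_LANDMARKS)

for epoch in range(10):
    pred_landmarks = generative(commands)   # forward (generative) prediction
    pred_commands = inverse(observed)       # inverse prediction
    loss = loss_fn(pred_landmarks, observed) + loss_fn(pred_commands, commands)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At run time, landmarks extracted from a human face would be fed to `inverse`
# to obtain motor commands that drive the robot to mimic the expression.
```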
Video
Overview with Narrations and Subtitles
Hardware Description
Demo Videos
Watch the random data collection in action!
Paper (ICRA 2021)
Latest version: arXiv:2105.12724 [cs.CV].
Team
Columbia University
Related Projects
Faraj, Zanwar, Mert Selamet, Carlos Morales, Patricio Torres, Maimuna Hossain, Boyuan Chen, and Hod Lipson. "Facially expressive humanoid robotic face." HardwareX 9 (2021): e00117.
Acknowledgements
This research is supported by NSF NRI 1925157 and DARPA MTO L2M Program grant HR0011-18-2-0020.
Contact
If you have any questions, please feel free to contact Boyuan Chen.