Speaker: Gerard Pons-Moll, Max Planck Institute
UCL Contact: Gabriel Brostow (Visitors from outside UCL please email in advance).
Date/Time: 17 Apr 18, 11:00 - 12:00
Venue: Room 405, 66-72 Gower Street
For human-machine interaction, it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as computer vision, medicine, psychology, virtual and augmented reality, and special effects in movies.
Currently, digital models typically lack realistic soft tissue and clothing, or require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better, more realistic models of humans and clothing can be learned directly by capturing real people with 4D scans, images, and depth and inertial sensors. By combining statistical machine learning techniques with geometric optimization, we create realistic models from the captured data.
We then leverage the learned digital models to extract information from incomplete and noisy sensor data coming from monocular video, depth cameras, or a small number of IMUs.
I will give an overview of a selection of projects whose goal is to build realistic models of human pose, shape, soft tissue, and clothing. I will also present some of our recent work on 3D reconstruction of people models from monocular video, and on real-time joint reconstruction of surface geometry and human body shape from depth data. I will conclude the talk by outlining the next challenges in building digital humans and perceiving them from sensory data.
Gerard Pons-Moll obtained his degree in Telecommunications Engineering from the Technical University of Catalonia (UPC) in 2008. From 2007 to 2008 he was at Northeastern University in Boston, USA, with a fellowship from the Vodafone Foundation, conducting research on medical image analysis. He received his Ph.D. degree (with distinction) from the Leibniz University of Hannover in 2014. In 2012 he was a visiting researcher in the vision group at the University of Toronto, and he also worked as an intern in the computer vision group at Microsoft Research Cambridge. From 11/2013 until 09/2015 he was a postdoc, and from 10/2015 to 08/2017 a Research Scientist, at Perceiving Systems, Max Planck Institute for Intelligent Systems. Since 09/2017 he has been heading the Real Virtual Humans group at the Max Planck Institute for Informatics.
His work has been published at the major computer vision and computer graphics conferences and journals, including Siggraph, Siggraph Asia, CVPR, ICCV, BMVC (Best Paper), Eurographics (Best Paper), IJCV, and TPAMI. He serves regularly as a reviewer for TPAMI, IJCV, Siggraph, Siggraph Asia, CVPR, ICCV, ECCV, ACCV, SCA, ICML, and others. He has co-organized one workshop and three tutorials: a tutorial at ICCV 2011 on Looking at People: Model Based Pose Estimation, tutorials at ICCV 2015 and Siggraph 2016 on Modeling Human Bodies in Motion, and the PeopleCap workshop at ICCV'17.
His research interests include 3D modeling of humans and clothing in motion, and the use of machine learning and graphics models to solve vision problems.