About me

I am currently a research scientist and group manager at Google, where I lead a large perception team working on cutting-edge AR research and development. We develop state-of-the-art technologies for scene and human reconstruction and understanding, and publish at leading conferences. We have shipped multiple successful technologies, e.g., depth reconstruction for the Pixel 4 depth sensor, the ARCore Depth API, Pixel 5 portrait relighting, and TensorFlow Graphics. Prior to Google, I was a principal research scientist at Apple, where I designed, developed, and productized the real-time face tracking algorithm powering the iPhone X Animoji, also available to third-party developers through ARKit. My research interests include machine learning, computer vision, and computer graphics. You can find some of my work on Google Scholar.

I completed my PhD in 2015 in the Computer Graphics and Geometry Laboratory (LGG) at the Swiss Federal Institute of Technology in Lausanne (EPFL). My thesis on real-time face tracking and animation received an honorable mention for the 2016 SIGGRAPH Outstanding Doctoral Dissertation Award, the 2015 ETHZ Fritz Kutter PhD thesis award, and an honorable mention for the 2015 EPFL Patrick Denantes PhD award. I was also fortunate to receive the 2018 Eurographics Young Researcher Award. In 2012, based on my research, I co-founded faceshift AG, an EPFL spin-off that brought high-quality markerless facial motion capture to the consumer market; faceshift was acquired by Apple Inc. in 2015. Over the years I have gained extensive research and engineering experience at several other companies, including Adobe Research, Mitsubishi Electric Research Laboratories (MERL), E-on Software, and Eugen Systems. Besides my passion for science, I like to play the piano and guitar, and enjoy composing music (SoundCloud).