CreamVR, in partnership with Sheridan's SIRT program, has been developing a unique avatar creation process using blendshape animation and video projection. We set out to solve an issue in VR: high-poly realistic avatars push consumer GPUs beyond their limits, so the further you push across the uncanny valley, the more your VR experience bogs down. To begin work on volumetric productions, and to plan ahead for mobile VR platforms, we aimed to create a better avatar.
Using head scans to create low-poly blendshapes, and recent innovations in machine learning to retarget mocap data from a facial performance without tracking markers, we capture facial mocap data from a video to drive the blendshape animation, then repurpose the same video of the facial performance by projecting it onto the face. These initial results have been very promising and open up the possibility of creating highly efficient avatars that are well optimised for VR and mobile VR/AR platforms.
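At its core, the blendshape step is a linear combination of scanned target shapes weighted per frame by the mocap data. A minimal sketch of that blending, assuming per-frame weights have already been solved from the markerless facial capture (the function and array names here are illustrative, not CreamVR's actual pipeline):

```python
import numpy as np

def blend(base, targets, weights):
    """Linearly blend a low-poly base mesh toward scanned target shapes.

    base:    (V, 3) array of base-mesh vertex positions
    targets: (B, V, 3) array of blendshape targets (e.g. from head scans)
    weights: (B,) per-frame weights, e.g. solved from markerless mocap
    """
    deltas = targets - base                       # each target's offset from the base
    return base + np.tensordot(weights, deltas, axes=1)

# toy example: a three-vertex patch with one "smile" target at half strength
base = np.zeros((3, 3))
smile = np.array([[[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]]])             # shape (1, 3, 3)
frame = blend(base, smile, np.array([0.5]))       # vertices move halfway to the target
```

Because the heavy lifting is a single weighted sum over pre-baked vertex offsets, the per-frame cost stays low even on mobile GPUs, which is what makes the low-poly blendshape approach attractive for VR.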
In this talk we will show a 7-minute video of the process and discuss its development and pitfalls, followed by a brief Q&A.
Andrew has been building Cream Production's VR department for the past four years. With work shown at Cannes and VIFF, CreamVR has emerged as a leading Canadian VR studio bridging the gap between factual entertainment and virtual reality. CreamVR's work can be found on DiscoveryVR, Jaunt, HuluVR, and the studio's own app. CreamVR began by building cameras and creating 360 video content for networks such as Discovery, Travel, and Hulu, and has now moved into volumetric productions and the development of avatar creation techniques.