Virtual YouTubers (or Vtubers) have taken the internet by storm. The Virtual YouTuber is the latest digital media trend to come out of Japan. Many of them have large followings, some with millions of subscribers, which makes them attractive for merchandise such as Nendoroids, official promotional material, and collaborations with other YouTubers.
But what is the technology behind these virtual anime girls?
The setup of a Virtual YouTuber mostly involves facial recognition, gesture recognition, and animation software. Combining these technologies can be tricky: the best-known mishap is the accidental reveal of Noracat's true identity during a live broadcast.
The Perfect Virtual YouTuber Setup:
2. The full-body Xsens motion capture system
In this setup you see the full-body motion capture system from Xsens, including the 'MVN Link' hardware (suit). The live motion capture data can be streamed into Unity using Xsens' MVN Animate software, giving you the best live-quality data available.
4. Unity 3D animation software
The software that pulls it all together is Unity 3D. It is straightforward to stream Xsens' motion capture data into Unity 3D.
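To give a feel for what "streaming mocap data" involves under the hood, here is a minimal sketch of a UDP listener that receives and decodes per-joint pose data. Everything here is an assumption for illustration: the packet layout (7 floats per joint: position plus quaternion), the port number, and the joint framing are hypothetical and do not reflect Xsens' actual network streaming protocol, which has its own documented format. In practice you would use Xsens' official Unity plugin rather than hand-rolling a receiver.

```python
import socket
import struct

# Hypothetical packet layout for illustration only: each joint is
# 7 little-endian floats (x, y, z position + x, y, z, w quaternion).
# The real Xsens MVN network stream uses its own documented protocol.
JOINT_FORMAT = "<7f"
JOINT_SIZE = struct.calcsize(JOINT_FORMAT)  # 28 bytes per joint


def parse_joints(payload: bytes):
    """Split a datagram payload into (position, rotation) tuples per joint."""
    joints = []
    for offset in range(0, len(payload) - JOINT_SIZE + 1, JOINT_SIZE):
        values = struct.unpack_from(JOINT_FORMAT, payload, offset)
        joints.append((values[:3], values[3:]))  # (x, y, z), (qx, qy, qz, qw)
    return joints


def listen(port: int = 9763):
    """Receive mocap datagrams over UDP (the port number is an assumption)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(4096)
        for position, rotation in parse_joints(data):
            # In a real setup, apply these to the avatar's skeleton here.
            print(position, rotation)
```

The point of the sketch is the overall shape of a live pipeline: pose data arrives as a stream of small binary packets many times per second, is decoded into joint transforms, and is applied to the avatar's rig on every frame.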
There is of course more technology available to get your YouTuber avatar live on screen, but we recommend this proven combination.
Kite & Lightning setup with an iPhone X, Unreal Engine, IKINEMA and Xsens
Kite & Lightning's live full-body performance setup uses Xsens mocap in tandem with an iPhone X, live-streamed via IKINEMA LiveAction to Unreal Engine in real time.
One Piece Vtuber
One Piece voice actors Mayumi Tanaka and Kappei Yamaguchi wore the Xsens MVN motion capture system for a live Vtuber show as their characters Luffy and Usopp.
More information about Cygames and Captureroid can be found in the special Vtuber edition of CGWorld.jp.