Virtual YouTubers (or VTubers) have taken the internet by storm. The Virtual YouTuber is the latest digital media trend to come out of Japan. Many of them have a big following, and some have millions of subscribers, which makes them very attractive for merchandise such as Nendoroids, official promotional material, and collaborations with other YouTubers.
But what is the technology behind these virtual anime girls?
The setup of a Virtual YouTuber mostly involves facial recognition, gesture recognition, and animation software. Combining these technologies can be tricky: the best-known mishap is the accidental reveal of Noracat's true identity during a live broadcast.
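To give a feel for the facial-recognition side, here is a minimal sketch of how tracked facial landmarks can drive an avatar's expression. The landmark positions, the 25% scaling assumption, and the function name are illustrative, not part of any particular tracking product.

```python
# Minimal sketch: turning tracked 2D facial landmarks into an avatar
# "blendshape" weight. Real trackers expose hundreds of landmarks;
# the two-landmark mouth model here is a simplified, hypothetical example.

def mouth_open_weight(upper_lip, lower_lip, face_height):
    """Map the gap between the lips to a 0..1 blendshape weight.

    upper_lip / lower_lip: (x, y) landmark positions in pixels.
    face_height: pixel height of the detected face, used to normalise.
    """
    gap = abs(lower_lip[1] - upper_lip[1])
    # Assumption: a fully open mouth spans roughly 25% of the face height.
    weight = gap / (0.25 * face_height)
    return max(0.0, min(1.0, weight))

# Nearly closed mouth -> small weight; wide open -> clamped to 1.0.
print(mouth_open_weight((100, 200), (100, 205), 400))  # 0.05
print(mouth_open_weight((100, 200), (100, 320), 400))  # 1.0
```

The same idea, repeated per expression (eye blinks, brow raises, and so on), is what lets a webcam feed animate an anime-style face in real time.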
The Perfect Virtual YouTuber Setup:
2. The full-body Xsens motion capture system
This setup uses the full-body motion capture system from Xsens, including the 'MVN Link' hardware (suit). The live motion capture data can be streamed into Unity using Xsens' MVN Animate software, giving you the best live-quality data you can get.
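Live streaming like this usually means the capture software sends pose packets over the network to the engine. As a hedged illustration, here is a sketch of decoding such a packet; the layout (a joint count followed by x, y, z floats per joint) is a made-up example, not the actual MVN network streaming format.

```python
# Hypothetical example of decoding a live motion-capture pose packet
# received over UDP. The packet layout is invented for illustration.
import struct

def parse_pose_packet(data: bytes):
    """Decode a packet of <joint count><x, y, z per joint> into tuples."""
    (count,) = struct.unpack_from("<I", data, 0)  # little-endian uint32
    joints = []
    for i in range(count):
        # Each joint is three little-endian 32-bit floats (12 bytes).
        joints.append(struct.unpack_from("<3f", data, 4 + 12 * i))
    return joints

# Build a fake two-joint packet and decode it again.
packet = struct.pack("<I6f", 2, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0)
print(parse_pose_packet(packet))  # [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]

# In a live setup you would bind a UDP socket (port is just an example)
# and decode each incoming datagram:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 9000))
#   joints = parse_pose_packet(sock.recv(2048))
```

The receiving engine then maps each decoded joint onto the avatar's skeleton every frame.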
4. Unity 3D animation software
The software that pulls it all together is Unity 3D. Streaming Xsens' motion capture data into Unity 3D is straightforward.
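One detail worth knowing when driving an avatar from live tracking data: raw samples are noisy, so engines typically smooth them before applying them to the rig. Below is a minimal sketch of exponential smoothing on a stream of joint positions; the data and the `alpha` value are illustrative assumptions, not values from Unity or MVN Animate.

```python
# Minimal sketch of exponential smoothing for a stream of noisy joint
# positions. alpha controls responsiveness: lower = smoother but laggier.

def smooth(prev, new, alpha=0.3):
    """Blend the previous filtered position toward the new sample."""
    return tuple(p + alpha * (n - p) for p, n in zip(prev, new))

# Feed a short (made-up) stream of (x, y, z) samples through the filter.
stream = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
state = stream[0]
for sample in stream[1:]:
    state = smooth(state, sample)
print(state)  # a position lagging slightly behind the raw samples
```

Running the same filter per joint, per frame, is enough to remove most visible jitter from a live performance.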
There is of course more technology available to get your YouTuber avatar live on screen, but we recommend this available and proven combination.
Group Dance performance
Also check out this video of the girl group i☆Ris, in which you can see how they prepare for an upcoming (virtual) show using the Xsens MVN Animate system.
More information about Cygames and Captureroid can be found in the special Vtuber edition of CGWorld.jp.