NVIDIA has unveiled new AI, simulation, and creative tools for developers and creators of virtual worlds, a set of tools that should also benefit metaverse designers.
At SIGGRAPH, the world's largest gathering of computer graphics experts, NVIDIA announced significant updates for developers and creators using NVIDIA Omniverse, an interactive 3D design collaboration and world simulation platform that enables real-time collaboration between designers and developers.
What Are the New Tools?
The next wave of Omniverse worlds is moving to the cloud. New features in the Omniverse Kit, Nucleus, Audio2Face, and Machinima apps make it easier for users to create physically accurate digital twins and avatars, redefining how virtual worlds can be created and experienced.
Omniverse has been shown to significantly accelerate complex 3D graphics workflows across industries. Users around the globe, whether engineers, researchers, animators, or designers, have used the platform's advanced rendering, physics, and artificial intelligence technologies to create an explosion of virtual worlds and realistic simulations.
With Omniverse moving to the cloud, users can now work from non-RTX systems such as Macs and Chromebooks, as well as from NVIDIA RTX-enabled Studio laptops, workstations, and OVX servers, making Omniverse a broadly accessible platform for developers.
The NVIDIA Omniverse Avatar Cloud Engine (ACE) and new Omniverse Connectors and applications are also available today, enabling users to create and customize virtual assistants and digital humans quickly and intuitively.
Creators can use the Omniverse Audio2Face app to match virtually any voice-over track to any 3D character, simplifying the animation process. Audio2Face is built on an AI model that generates facial animation from an audio recording of a voice alone.
With the latest update, users can also direct the emotions of their avatars over time, choosing from a wide range of emotions including joy, amazement, anger, and sadness.
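To make the idea of directing emotions over time concrete, here is a minimal sketch of how emotion keyframes could be blended into per-frame weights. The function name, data layout, and the emotion labels are illustrative assumptions for this example, not NVIDIA's actual Audio2Face API.

```python
# Hypothetical sketch: keyframed emotion weights interpolated over time,
# in the spirit of Audio2Face's emotion controls. Not NVIDIA's real API.
from bisect import bisect_right

def interpolate_emotions(keyframes, t):
    """Linearly interpolate emotion-weight dicts between keyframes.

    keyframes: time-sorted list of (time_seconds, {emotion: weight}) pairs.
    t: query time in seconds.
    Returns a {emotion: weight} dict for time t.
    """
    times = [time for time, _ in keyframes]
    if t <= times[0]:
        return dict(keyframes[0][1])
    if t >= times[-1]:
        return dict(keyframes[-1][1])
    i = bisect_right(times, t) - 1          # segment containing t
    (t0, w0), (t1, w1) = keyframes[i], keyframes[i + 1]
    alpha = (t - t0) / (t1 - t0)            # blend factor within the segment
    emotions = set(w0) | set(w1)
    return {e: (1 - alpha) * w0.get(e, 0.0) + alpha * w1.get(e, 0.0)
            for e in emotions}

# Direct an avatar from joy to amazement over two seconds.
track = [
    (0.0, {"joy": 1.0}),
    (2.0, {"amazement": 1.0}),
]
print(interpolate_emotions(track, 1.0))  # joy and amazement blended 50/50
```

Linear interpolation is the simplest choice here; a production tool would likely use smoother easing curves between keyframes.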
Last but not least, Omniverse Enterprise customers now have access to Omniverse DeepSearch. Using AI, teams can intuitively and quickly filter through huge databases of untagged assets.
In addition to delivering accurate results, DeepSearch lets users search with qualitative or vague inputs, helping them develop the perfect look and lighting for their virtual scenes. These tools should also make metaverse projects more appealing to mainstream users, which will help metaverse designers as well.
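The core idea behind AI search over untagged assets, as in DeepSearch, can be sketched as embedding both the query and the asset descriptions into a shared vector space and ranking by similarity. The toy bag-of-words embedding below is purely illustrative; a real system would use a learned multimodal model, and none of these names come from NVIDIA's product.

```python
# Illustrative sketch of semantic asset search: rank assets by cosine
# similarity between a query vector and asset vectors. Toy embedding only.
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, assets):
    """Rank asset descriptions by similarity to a vague, qualitative query."""
    q = embed(query)
    return sorted(assets, key=lambda asset: cosine(q, embed(asset)), reverse=True)

assets = [
    "rusty metal barrel prop",
    "warm sunset lighting rig",
    "chrome sci-fi corridor",
]
print(search("moody warm lighting for a sunset scene", assets)[0])
```

With a vague query like "moody warm lighting for a sunset scene", the lighting rig ranks first because it shares the most semantic overlap, which is the behavior qualitative search aims for.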