Over 20 NVIDIA Papers Headed to SIGGRAPH! Generative AI Becomes a "Godsend" for Simulation Modeling

NVIDIA will showcase multiple advancements in rendering, simulation, and generative AI at the prestigious SIGGRAPH 2024 conference, held in Denver, USA, from July 28 to August 1. The presentations will cover how AI research can enhance image quality, advance 3D rendering techniques, and create more realistic simulation models.

NVIDIA Research will contribute more than 20 papers, sharing innovations in synthetic data generation and inverse rendering tools. These advances apply to diffusion models for visual generative AI, physics-based simulation, and increasingly realistic AI-powered rendering, and they will help train the next generation of models.

Among these contributions, two papers received Best Technical Paper awards, and several were co-authored with researchers at universities in the United States, Canada, China, Israel, and Japan, as well as at companies including Adobe and Roblox.

In practical terms, this research will help developers and enterprises create tools for generating complex virtual objects, characters, and environments; help scientists understand natural phenomena; and support simulation-based training for robots and autonomous vehicles.

At this year's SIGGRAPH conference, NVIDIA founder and CEO Jensen Huang will engage in a fireside chat with Wired senior writer Lauren Goode to discuss how robots and AI are influencing industrial digitization.

1. Improving Texture Painting with Diffusion Models: Consistent Subject Images Generated in 30 Seconds

Diffusion models are widely used for text-to-image generation, allowing creators to quickly produce visuals for scripts or artwork and shortening the time from idea to result. NVIDIA is presenting two papers in this area.

In collaboration with researchers from Tel Aviv University, NVIDIA developed ConsiStory, which introduces a technique called "subject-driven shared attention." This innovation reduces the time required to generate consistent subject images from approximately 13 minutes to around 30 seconds, making it easier to create multiple images featuring the same main character.

This research is particularly beneficial for narrative applications such as comic book illustration or script development.
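The paper's exact mechanism isn't reproduced here, but the gist of sharing attention across a batch can be sketched in PyTorch. Everything in the toy function below (the tensor shapes, the `subject_masks` argument, and the function name) is an illustrative assumption rather than ConsiStory's actual API: each image's self-attention is extended with subject tokens pooled from the whole batch, so every generation attends to the same subject features.

```python
import torch
import torch.nn.functional as F

def shared_self_attention(x, w_q, w_k, w_v, subject_masks):
    """Toy sketch of batch-shared self-attention (not ConsiStory's actual code).

    x:             (batch, tokens, dim) latent features, one row per image
    w_q, w_k, w_v: (dim, dim) projection weights
    subject_masks: (batch, tokens) bool, True where a token shows the subject
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v

    # Pool subject tokens from the whole batch and expose them to every image,
    # so each generation can attend to one shared set of subject features.
    subj_k = k[subject_masks].unsqueeze(0).expand(x.shape[0], -1, -1)
    subj_v = v[subject_masks].unsqueeze(0).expand(x.shape[0], -1, -1)
    k_ext = torch.cat([k, subj_k], dim=1)
    v_ext = torch.cat([v, subj_v], dim=1)

    attn = F.softmax(q @ k_ext.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v_ext

# Four 64-token latents whose first 8 tokens stand in for the shared subject:
x = torch.randn(4, 64, 128)
w = torch.randn(3, 128, 128) * 128 ** -0.5
masks = torch.zeros(4, 64, dtype=torch.bool)
masks[:, :8] = True
out = shared_self_attention(x, w[0], w[1], w[2], masks)  # (4, 64, 128)
```

Because the shared features are injected at attention time rather than through fine-tuning, consistency comes essentially for free at inference, which is what enables the minutes-to-seconds speedup the paper reports.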

Last year, NVIDIA researchers won the Best Presentation Award at SIGGRAPH's Real-Time Live event for an AI model that turns text or image prompts into custom texture materials. This year, the team is presenting a follow-up paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, letting artists paint complex textures in real time from any reference image.


2. Research on Physics-Based Simulation: Accelerating the Simulation of Real-World Motion

Physics-based simulation helps bridge the gap between physical objects and their virtual representations, allowing digital objects and characters to move as if they were in the real world. Several NVIDIA Research papers have discussed breakthrough advancements in this area, including a more efficient hair modeling technique and a workflow that accelerates fluid simulation by ten times.

One paper, co-authored with researchers from Carnegie Mellon University, is among the five honored as Best Technical Papers at this year's SIGGRAPH. It introduces a new kind of renderer designed not for modeling physical light but for thermal, electrostatic, and fluid dynamics analysis. The method is easy to parallelize and does not require cumbersome model cleanup, opening new possibilities for accelerating engineering design cycles.
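The article doesn't detail the numerical method, but Monte Carlo geometry-processing solvers in this spirit, such as the well-known walk-on-spheres estimator for Laplace problems, share exactly the renderer-like traits described: each sample is independent, and no meshing or cleanup of the geometry is needed. A minimal 2D sketch, with the disk domain and boundary data chosen purely for illustration:

```python
import numpy as np

def walk_on_spheres(x0, boundary_dist, boundary_value, eps=1e-3, n_walks=2000):
    """Estimate a Laplace (steady-state heat) solution at x0.

    boundary_dist(p):  distance from point p to the domain boundary
    boundary_value(p): Dirichlet (fixed temperature) value near the boundary
    Each walk repeatedly jumps to a random point on the largest circle that
    fits inside the domain, stopping once it lands within eps of the boundary.
    """
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_walks):
        p = np.array(x0, dtype=float)
        while (r := boundary_dist(p)) >= eps:
            theta = rng.uniform(0.0, 2.0 * np.pi)
            p = p + r * np.array([np.cos(theta), np.sin(theta)])
        total += boundary_value(p)
    return total / n_walks

# Unit disk, hot (1.0) on the right rim and cold (0.0) on the left:
dist = lambda p: 1.0 - np.linalg.norm(p)
value = lambda p: 1.0 if p[0] > 0 else 0.0
print(walk_on_spheres((0.3, 0.0), dist, value))  # > 0.5: nearer the hot side
```

Every walk is independent, so the estimator parallelizes trivially across GPU threads, which is the property that makes this family of methods attractive for engineering analysis.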

To tackle the challenge of simulating complex human motion from text prompts, researchers demonstrated the SuperPADL framework, which combines reinforcement learning with supervised learning to reproduce more than 5,000 skills, and showed that it runs in real time on consumer-grade NVIDIA GPUs.
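SuperPADL's training recipe isn't spelled out in the article, but the combination it describes, RL-trained skill experts distilled into one text-conditioned controller via supervised learning, can be caricatured in a few lines. Everything below (dimensions, the network, the `distill_step` helper) is a hypothetical stand-in, not SuperPADL's architecture:

```python
import torch
import torch.nn as nn

# Hypothetical shapes: a text embedding, a proprioceptive state vector,
# and a continuous joint-actuation action. Names are illustrative only.
TEXT_DIM, STATE_DIM, ACT_DIM = 512, 64, 28

student = nn.Sequential(
    nn.Linear(TEXT_DIM + STATE_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, ACT_DIM),
)
opt = torch.optim.Adam(student.parameters(), lr=3e-4)

def distill_step(text_emb, state, expert_action):
    """One supervised step: imitate an RL-trained per-skill expert."""
    pred = student(torch.cat([text_emb, state], dim=-1))
    loss = nn.functional.mse_loss(pred, expert_action)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Dummy batch standing in for rollouts collected from a skill-specific expert.
loss = distill_step(torch.randn(32, TEXT_DIM), torch.randn(32, STATE_DIM),
                    torch.randn(32, ACT_DIM))
```

The appeal of distillation here is that the expensive RL happens once per skill offline, while the single student policy is cheap enough to evaluate in real time.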

Another paper introduces a neural physics approach that applies AI to learn how objects (whether presented as 3D meshes, NeRFs, or entities generated by text-to-3D modeling techniques) behave while moving in their environment.

3. Enhancing Rendering Realism: Accelerating Diffraction Effects by 1000 Times

Another set of NVIDIA papers introduces techniques that accelerate visible-light modeling by up to 25 times and accelerate the simulation of diffraction effects, such as the radar simulations used to train autonomous vehicles, by as much as 1,000 times.

Path tracing samples multiple paths (bundles of light rays traveling through a scene) to create photorealistic images. ReSTIR, a path tracing algorithm first introduced by NVIDIA and researchers from Dartmouth College at SIGGRAPH 2020, is key to applying path tracing technology in games and other real-time rendering products.
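For context, ReSTIR's core is weighted reservoir sampling combined with resampled importance sampling (RIS): many cheap candidate light samples are streamed through a one-slot reservoir, and the survivor is reweighted so the final estimate stays unbiased. A stripped-down sketch of that core, with the `p_hat` target function standing in for the real integrand (e.g. unshadowed light contribution):

```python
import random

class Reservoir:
    """One-slot weighted reservoir, the building block of ReSTIR-style RIS."""
    def __init__(self):
        self.sample = None   # surviving candidate
        self.w_sum = 0.0     # running sum of resampling weights
        self.m = 0           # candidates seen so far

    def update(self, candidate, weight, rng):
        self.w_sum += weight
        self.m += 1
        # Keep the new candidate with probability weight / w_sum.
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = candidate

def resample(candidates, source_pdf, p_hat, seed=0):
    """Keep one candidate, distributed roughly in proportion to p_hat."""
    rng = random.Random(seed)
    r = Reservoir()
    for c in candidates:
        r.update(c, p_hat(c) / source_pdf(c), rng)  # RIS weight
    if r.sample is None or p_hat(r.sample) == 0.0:
        return None, 0.0
    # Contribution weight that keeps the final estimator unbiased.
    return r.sample, r.w_sum / (r.m * p_hat(r.sample))
```

Reusing these reservoirs across neighboring pixels and previous frames is what makes ReSTIR practical in real time; the two new papers refine how those reused paths are generated and reweighted.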

This year, NVIDIA presented two SIGGRAPH papers on improving ReSTIR's sampling quality. One, co-authored with the University of Utah, introduces a new way to reuse computed paths that increases the effective sample count by up to 25 times, significantly enhancing image quality. The other improves sampling quality by randomly mutating subsets of light paths, which helps denoising algorithms perform better and reduces visual artifacts in the final render.

A paper co-authored by researchers from NVIDIA and the University of Waterloo addresses the problem of free-space diffraction. Free-space diffraction is an optical phenomenon where light spreads or bends at the edges of objects. Their method can be integrated into path tracing workflows to enhance the efficiency of simulating diffraction in complex scenes, providing up to 1000 times acceleration. In addition to rendering visible light, this model can also be used to simulate longer wavelengths such as radar, sound waves, or radio waves.
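To get a feel for the phenomenon itself (this is textbook Fresnel knife-edge diffraction, shown only to illustrate edge diffraction, not the paper's method), the intensity behind a straight edge can be computed from Fresnel integrals. The classic result is that the intensity at the geometric shadow edge is exactly one quarter of the unobstructed value:

```python
import numpy as np
from scipy.special import fresnel

def knife_edge_intensity(v):
    """Relative intensity behind a straight edge (Fresnel knife-edge diffraction).

    v is the dimensionless Fresnel parameter; v > 0 is the illuminated side,
    v < 0 the geometric shadow. At v = 0 the intensity is exactly 1/4.
    """
    S, C = fresnel(v)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

v = np.linspace(-3, 3, 7)
print(np.round(knife_edge_intensity(v), 3))  # decays in shadow, ripples in light
```

Simulating these fringes by brute force requires tracing enormous numbers of paths near every edge, which is why an edge-aware method that folds diffraction into the path tracer can yield such large speedups.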

4. Teaching AI to Think in 3D: Providing Infrastructure for Large-Scale 3D Reconstruction in Cities

NVIDIA researchers will showcase a range of multipurpose AI tools for 3D representation and design at SIGGRAPH.

For instance, a paper co-authored by NVIDIA and researchers from Dartmouth College received a Best Technical Paper award. It introduces a unified theory for representing how 3D objects interact with light, bringing a wide range of appearances under a single model.

Another paper, written in collaboration with the University of Tokyo, the University of Toronto, and Adobe Research, presents an algorithm that can generate smooth space-filling curves in real-time on 3D meshes. Previous methods required several hours to run, while this framework can achieve results in just seconds, allowing users to exert a high degree of control over the output for interactive design.
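The mesh algorithm itself is beyond a short snippet, but the planar ancestor of such curves, the Hilbert curve, shows what "space-filling" means: a single path that visits every cell of a grid exactly once, with consecutive steps always adjacent. A standard index-to-coordinate routine (unrelated to the paper's actual method):

```python
def hilbert_d2xy(order, d):
    """Map index d along a 2-D Hilbert curve of the given order to (x, y).

    The curve visits every cell of a 2**order x 2**order grid exactly once,
    and consecutive indices always land in neighboring cells.
    """
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Trace an order-2 curve through a 4x4 grid: 16 cells, each step unit length.
print([hilbert_d2xy(2, d) for d in range(16)])
```

This locality-preserving ordering is what makes such curves useful for design applications; the SIGGRAPH paper's contribution is generating comparable curves directly on arbitrary 3D meshes at interactive rates.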

Generative AI + Simulation Technology: Bridging the Gap Between the Real and Virtual Worlds

As a leader in graphics and accelerated computing, NVIDIA has introduced numerous cutting-edge papers covering visual computing and graphics rendering at the SIGGRAPH conference over the years. These research advancements not only continuously enhance the realism and efficiency of simulation modeling but also promote the integration of computer graphics, computer vision, human-computer interaction, and AI technologies, making it increasingly possible to simulate interactions in the real world.

With stronger reconstruction capabilities and higher simulation quality, generative AI is becoming a powerful new engine for simulation and modeling, while training large models on synthetic data can in turn speed the deployment of generative AI applications. Because the two technologies are complementary, breakthroughs keep coming in keeping simulation models consistent with the physical world. Ultimately, this will empower industries such as manufacturing, autonomous driving, embodied intelligence, and robotics, helping to tackle complex engineering challenges in the real world.

During SIGGRAPH 2024, NVIDIA researchers will also host NVIDIA OpenUSD Day, showcasing how developers and industry leaders can build AI-powered 3D workflows using and advancing OpenUSD.
