Collaborating with over a dozen universities, Nvidia is breaking boundaries in graphics research, enabling developers to bring their ideas to life like never before.
NVIDIA'S SIGGRAPH 2023 PAPERS
Generative AI has taken the world by storm in the past year, from ChatGPT's conversational text to AI-created art. Nvidia's research, conducted in collaboration with over a dozen universities in the U.S., Europe, and Israel, now points to the future of generative AI and neural graphics. The results will be presented at SIGGRAPH 2023, taking place in Los Angeles from August 6-10.
The innovative research papers showcase generative AI models that turn text into personalized images, inverse rendering tools that transform still images into 3D objects, neural physics models that simulate complex 3D elements with stunning realism, and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.
Nvidia's research has already helped bring film-style rendering to games, as in the recently released Ray Tracing: Overdrive Mode for Cyberpunk 2077. According to Aaron Lefohn, vice president of graphics research at Nvidia, these research advancements will enable developers and artists to bring their ideas to life in new and exciting ways.
Nvidia regularly shares its researchers' innovations with developers on GitHub, and the work is incorporated into products such as the Nvidia Omniverse platform for building and operating metaverse applications, and Nvidia Picasso, a recently announced foundry for custom generative AI models for visual design.
The research presented at SIGGRAPH this year will help developers and enterprises rapidly generate synthetic data to populate virtual worlds for robotics and autonomous vehicle training. It will also enable creators in art, architecture, graphic design, game development, and film to produce high-quality visuals more quickly for storyboarding, previsualization, and production.
David Luebke, vice president of research at Nvidia, said that creating content has always been the most expensive and hardest part of the puzzle. With generative AI and neural graphics, he said, these techniques can slot into existing rendering pipelines, such as ray tracing, and generate the content those pipelines need.
Highlights among these innovations include customized text-to-image AI models, advances in inverse rendering and character creation, neural physics for realistic simulations, and neural rendering that brings film-quality detail to real-time graphics.
With these advancements, Nvidia is pushing the boundaries of graphics technology, bringing AI and graphics closer together and opening up new possibilities for creators across fields. As generative AI continues to evolve, we can expect even more exciting developments in the coming years.