As realistic as most modern-day video games have become, game developers and studios still rely on a player's willing suspension of disbelief for the game to work. Gamers, after all, know that they are playing a video game: even if they get to customise their character however they want, they will still be playing a character that was created for them.
But what if a gamer could jump into the game as the lead character? What if, instead of vicariously controlling a character in a game, we could actually be that character? With the kind of things that NVIDIA is cooking up in its AI labs, we might get to see such an experience pretty soon after all.
Recently, NVIDIA invited a few gamers and tech journalists to a showcase of their RTX AI PCs, demonstrating everything they have been working on to push gaming and game development to the next frontier.
Although NVIDIA is more popularly known among the common folk for its AI hardware, gamers know it for its gaming GPUs.
However, NVIDIA would prefer that everyone think of them as a platform provider: apart from making some of the most capable GPUs on the planet, they are also a software company. After all, they employ more software engineers than hardware technicians. Plus, some of the software systems they have deployed have revolutionised the way people think about video games and gaming performance.
With their upcoming AI tools for game developers and video game engines, NVIDIA hopes to disrupt an industry that they have already disrupted a gazillion times before. Here are some of the AI tools they are developing that will push gaming into a whole new dimension.
NVIDIA ACE & Digital Human Technologies
Imagine this: instead of picking between a few preset options to communicate with in-game characters, what if you could just talk to them like you’re actually inside the game? Well, NVIDIA’s ACE and Digital Human Technologies are making this a reality. Players can now chat directly with game characters, making interactions feel more natural and immersive. But that’s not all – using a camera, you can show these characters objects and people from the real world and have conversations about them, blending real and virtual worlds in a way that sounds almost futuristic.
This innovation is thanks to NVIDIA's cutting-edge digital human technology and its on-device small language model. It's all about making game characters more conversational and lifelike. The first game to flex this tech muscle is Amazing Seasun Games' Mecha BREAK. Players are in for a treat as this game breathes life into its characters, creating a more engaging and immersive experience, especially on GeForce RTX AI PCs. The interaction feels more alive, drawing players deeper into the game's universe.
Over at Perfect World Games, they’re also stepping up the game with their own take on NVIDIA’s ACE and digital human technology in a demo called Legends. This time, they’ve added some fancy AI-powered vision abilities.
The demo features a character named Yun Ni, who can now 'see' players and recognise real-world people and objects through the computer's camera. This adds an augmented reality (AR) twist to the experience, where the virtual and real worlds collide, powered by the capabilities of GPT-4o. It's a glimpse into a future where games and reality blend more seamlessly than ever before.
Of course, with all things AI, you need to have guardrails in place, otherwise things can quickly turn into a nightmare. At the demo, John Gillooly, Technical Product Marketing Manager, Asia Pacific South at NVIDIA, told us that the AI model powering the characters in Legends has been designed to bring the gamer back to the topic at hand, and back to the game's universe, if a player wanders too deep down a random tangent.
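To get a sense of how such a guardrail might work in practice, here is a minimal sketch of an LLM-driven NPC whose system prompt steers the conversation back to the game world. This is not NVIDIA's ACE pipeline; the local server URL, model name and prompt are placeholder assumptions, and the openai client is simply used here as a generic way to talk to an OpenAI-compatible local model.

```python
# A minimal sketch (not NVIDIA's ACE pipeline) of how a system prompt can act as a
# guardrail for an LLM-driven NPC, steering conversation back to the game world.
# Assumes an OpenAI-compatible local inference server; the URL and model name are
# placeholders, not real ACE endpoints.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

SYSTEM_PROMPT = (
    "You are Yun Ni, a guide inside the game world of Legends. "
    "Stay in character at all times. If the player drifts onto topics "
    "unrelated to the game, answer briefly and steer them back to their "
    "current quest."
)

def npc_reply(player_message: str, history: list[dict]) -> str:
    """Send the player's line plus prior turns to the local model and return the NPC's reply."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + history
    messages.append({"role": "user", "content": player_message})
    response = client.chat.completions.create(
        model="local-small-language-model",  # placeholder model name
        messages=messages,
        max_tokens=200,
    )
    return response.choices[0].message.content
```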
NVIDIA Audio2Face
Bringing believable emotions to game characters is no small feat for developers, mostly because of the sheer amount of work involved in making it happen. It’s a massive task to ensure that characters show the right expressions and lip movements. Thankfully, NVIDIA has come up with a clever solution to take the pressure off developers’ shoulders.
Enter NVIDIA Omniverse Audio2Face – a set of AI-based tools that can create facial animations and perfectly synced lip movements, all from a simple audio file. It’s like magic! Instead of spending hours on manual animation, developers can let this AI do the heavy lifting. Even better, with its intuitive character retargeting feature, users can animate their own characters, making the process of breathing life into digital avatars a whole lot easier and more efficient.
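As a rough illustration of the idea (and emphatically not the Omniverse Audio2Face API itself), the sketch below shows the shape of the audio-to-animation problem: speech goes in, per-frame facial 'blendshape' weights come out. The feature extraction is real, but the model is a hypothetical stand-in for a trained network.

```python
# A conceptual sketch of audio-driven facial animation, the idea behind tools like
# Audio2Face. This is NOT the Omniverse Audio2Face API; the "model" is a hypothetical
# stand-in, and the code only illustrates the data flow: audio in, per-frame
# blendshape weights out.
import librosa
import numpy as np

def audio_to_blendshapes(audio_path: str, fps: int = 30) -> np.ndarray:
    # Load speech and extract per-frame audio features (MFCCs).
    waveform, sample_rate = librosa.load(audio_path, sr=16000)
    hop_length = sample_rate // fps  # one feature frame per animation frame
    mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=20, hop_length=hop_length)

    # In a real system, a trained network maps these features to blendshape weights
    # (jaw open, lip pucker, brow raise, ...). Here we return a placeholder array of
    # the right shape: (num_frames, num_blendshapes).
    num_frames = mfcc.shape[1]
    num_blendshapes = 52  # ARKit-style blendshape count, used purely for illustration
    return np.zeros((num_frames, num_blendshapes), dtype=np.float32)
```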
ComfyUI with 10X faster image generation
Here’s where things start getting really exciting. Picture yourself not just controlling a game character but actually being that character. Imagine storming through the streets of San Andreas, not as Carl ‘CJ’ Johnson or Michael De Santa, but as you — the player. Imagine taking down the villainous Vladimir Makarov, not as Captain Price, but as your own real-life self, with Captain Price guiding you along the way. The idea of stepping into the game, fully in your own skin, takes the immersive experience to a whole new level.
Now, ComfyUI, one of the most popular Stable Diffusion apps, is already a hit among advanced users for its incredible flexibility in various workflows. This flexibility was showcased in a recent demo, where participants could snap a selfie and, within seconds, turn themselves into a superhero version, all thanks to RTX acceleration for Stable Diffusion. With the help of TensorRT, newly introduced at Computex, and the power of open-source AI tools, this process happens in a flash.
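For the curious, here is a minimal sketch of the underlying selfie-to-superhero technique using Stable Diffusion image-to-image. It uses the Hugging Face diffusers library rather than ComfyUI or NVIDIA's TensorRT-accelerated path, and the model ID, prompt and strength values are illustrative assumptions rather than what the demo actually used.

```python
# A minimal sketch of the selfie-to-superhero idea using Stable Diffusion
# image-to-image via the diffusers library (not ComfyUI, not TensorRT-accelerated).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="portrait of the same person as a superhero, armoured suit, cinematic lighting",
    image=selfie,
    strength=0.55,        # how far the output may drift from the original photo
    guidance_scale=7.5,   # how strongly the output follows the prompt
).images[0]

result.save("superhero.png")
```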
While the program currently focuses on still images, it's not a stretch to imagine where this is headed. With the rapid advancements in Stable Diffusion and OpenAI's models, it seems like only a matter of time before developers come up with an AI tool that will place your face onto a game's main character.
This would mean gamers wouldn’t just be playing as an avatar — they’d be playing as a fully replicated version of themselves, from their facial expressions to their movements and mannerisms. It’s a thrilling thought that, soon enough, gaming could be an even more personal and immersive experience than ever before.
DLSS 3.5, Ray Tracing & Reflex
Not many people are actually aware of this, including ardent gamers, but NVIDIA's DLSS upscaling tech uses AI. DLSS, or Deep Learning Super Sampling, is a technology developed by NVIDIA that boosts gaming performance by rendering frames at a lower resolution and then using AI to upscale them to the target resolution. This approach improves frame rates while maintaining image quality, particularly in demanding, high-resolution games.
NVIDIA leverages AI through a neural network trained on thousands of high-quality images. Using what it has learned, the AI reconstructs high-resolution frames from lower-resolution ones in real time, reducing the workload on the GPU. The technology is powered by NVIDIA's Tensor Cores, specialised hardware units on their RTX graphics cards designed for AI-based tasks.
By combining AI-driven upscaling with advanced anti-aliasing, DLSS delivers smoother visuals and better gaming performance without demanding more powerful hardware. It lets gamers enjoy high-resolution experiences with improved efficiency, making demanding titles playable on lower-end systems without sacrificing visual quality.
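Conceptually, the pipeline looks something like the sketch below: a frame is rendered at a lower internal resolution, then a neural network reconstructs a higher-resolution image. This is not DLSS itself, which is proprietary and also relies on motion vectors and temporal history; the tiny, untrained network here merely stands in for NVIDIA's trained model running on Tensor Cores.

```python
# A conceptual sketch of AI upscaling: render at a low internal resolution, then
# let a neural network produce a higher-resolution frame. NOT DLSS; the untrained
# ESPCN-style network below only stands in for a trained super-resolution model.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Sub-pixel convolution upscaler (ESPCN-style), scale factor 2x."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a 2x larger image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# Pretend this is a frame rendered at 1920x1080 instead of native 3840x2160.
low_res_frame = torch.rand(1, 3, 1080, 1920)

upscaler = TinyUpscaler(scale=2)
with torch.no_grad():
    high_res_frame = upscaler(low_res_frame)  # -> shape (1, 3, 2160, 3840)

print(high_res_frame.shape)
```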
The recently launched Star Wars Outlaws takes the Star Wars universe to new heights, using the latest DLSS and ray-tracing tech from NVIDIA. Built on Massive Entertainment's powerful Snowdrop Engine, the game offers an incredibly detailed and immersive open world. From perfectly recreated starships to iconic locations from the franchise, it is packed with stunning worlds waiting to be explored. The level of detail in every environment and ship is a visual treat for fans, making it feel like you're truly stepping into the Star Wars galaxy.
To ensure the visuals are as breathtaking as possible, Star Wars Outlaws also features NVIDIA DLSS 3.5 with Ray Reconstruction, which enhances ray-traced effects and boosts performance, especially at higher resolutions. This means you’ll get to enjoy the beautiful ray-traced lighting and reflections without worrying about any drop in performance, letting you experience the game in all its glory.
For those playing on RTX GPUs, Star Wars Outlaws has even more to offer with NVIDIA Reflex. This technology significantly reduces system latency, ensuring that your actions feel immediate and responsive. Whether you're flying through space or engaging in intense combat, NVIDIA Reflex delivers a smoother and more enjoyable experience.
Other AI Features
Beyond these, NVIDIA bundles several other AI features with their RTX series GPUs that uplift the overall PC experience.
For example, we have ChatRTX, an app that takes personalisation to a new level. It essentially allows users to connect a GPT-based large language model (LLM) directly to their own content.
Whether it’s documents, notes, images, or other types of data, ChatRTX makes it possible to query a custom AI chatbot for quick, contextually relevant answers. The app leverages advanced tech like retrieval-augmented generation (RAG), TensorRT-LLM, and RTX acceleration, ensuring speedy and secure results, all while running locally on your Windows RTX PC or workstation. No need to worry about slow cloud-based processing — it’s all fast and efficient right on your device.
Then, there is NVIDIA Broadcast, an app that takes your home setup and transforms it into a professional studio, ideal for live streaming, voice chats, and video conferencing. With AI-enhanced voice and video features, you can bring a polished, professional feel to your content with minimal effort. Whether you’re recording a podcast or jumping on a video call, this app ensures top-notch quality, making your output look and sound impressive every time.
Another nifty app is NVIDIA Canvas, which allows users to turn simple brushstrokes into stunning, realistic landscapes. By using AI to generate highly detailed images, it's perfect for anyone looking to quickly create backgrounds or speed up concept exploration. Instead of spending hours sketching out ideas, Canvas helps you visualise your concepts in no time, letting you focus on bringing your creative visions to life with ease.