GDC 2026 highlights a discreet but concrete pivot: neural networks inserted into the heart of the graphics pipeline, with massive gains in VRAM and rendering time as a result. The message goes beyond DLSS 5 and targets pure optimization on the asset and shading side. And unlike DLSS 5, which crystallized fears around neural rendering, what was presented at GDC is hard to dispute.
Pragmatic neural rendering: beyond DLSS 5
NVIDIA puts neural rendering back into the pipeline itself, far from just post-processing. The approach relies on small, specialized networks to decode textures, evaluate materials, and reduce memory traffic, rather than a global final filter.
The GDC session made a clear distinction between three complementary approaches. ML post-processing, of which DLSS 5 is the most visible example, improves the image output from the pipeline. The ML integrated into the pipeline acts directly on shaders, textures and materials. The generative approach produces the image partially or completely via a network. None is superior to the others: they are combinable, and this is precisely what NVIDIA demonstrates here with the first two.
DLSS 5 is only one facet, visible on the final-image side. The GDC session detailed blocks dedicated to targeted tasks, with immediate benefits for memory footprint, bandwidth, and material-evaluation speed, relevant to both gaming and real-time production.
Neural Texture Compression: from 6.5 GB to 970 MB
The most telling example is Neural Texture Compression (NTC). On the Tuscan Wheels scene, NVIDIA announces a reduction in VRAM footprint from around 6.5 GB with classic BCn textures to 970 MB with NTC, at a perceived quality close to the original. At the same 970 MB budget, NTC retains more detail than traditional block compression.
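As a back-of-the-envelope check, the figures quoted for the Tuscan Wheels scene work out to roughly a 6.9x compression ratio:

```python
# Quick arithmetic on the VRAM figures NVIDIA quotes for the Tuscan
# Wheels scene: 6.5 GB of classic BCn textures vs 970 MB with NTC.
bcn_bytes = 6.5 * 1024**3   # block-compressed textures
ntc_bytes = 970 * 1024**2   # same assets stored as NTC latents

ratio = bcn_bytes / ntc_bytes                 # compression factor
saved_gb = (bcn_bytes - ntc_bytes) / 1024**3  # VRAM freed

print(f"compression ratio: {ratio:.1f}x")  # ~6.9x
print(f"VRAM freed: {saved_gb:.2f} GB")    # ~5.55 GB
```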
The principle is simple: instead of storing texels directly, NTC encodes a compact latent representation of the texture. At rendering time, a small neural network reconstructs pixels on the fly on the GPU. The process is entirely deterministic, no random generation, no hallucination. It’s pure compression with learned reconstruction.
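The decode step can be sketched as follows. This is a minimal illustration of the idea, not NVIDIA's implementation: all sizes (latent dimension, grid resolution, hidden width) are assumptions, and the weights here are random rather than trained per asset.

```python
import numpy as np

# NTC-style decoding sketch (hypothetical shapes): instead of storing
# texels, we store a low-resolution grid of latent feature vectors plus
# the weights of a tiny MLP. At render time the MLP reconstructs each
# texel on the fly. Same latent + same weights -> same pixel, always:
# deterministic reconstruction, no generation.

rng = np.random.default_rng(0)

LATENT_DIM = 8  # features per latent texel (assumed)
HIDDEN = 16     # tiny hidden layer (assumed)

# "Stored" data: a 64x64 latent grid standing in for a larger texture,
# plus decoder weights (trained per asset in the real system).
latents = rng.standard_normal((64, 64, LATENT_DIM)).astype(np.float32)
w1 = rng.standard_normal((LATENT_DIM, HIDDEN)).astype(np.float32)
w2 = rng.standard_normal((HIDDEN, 3)).astype(np.float32)

def decode_texel(u: float, v: float) -> np.ndarray:
    """Reconstruct one RGB texel from the latent grid (nearest sampling)."""
    x = latents[int(v * 63), int(u * 63)]
    h = np.maximum(x @ w1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ w2)))   # sigmoid -> RGB in [0, 1]

rgb = decode_texel(0.5, 0.5)
print(rgb.shape)  # (3,)
```

In the real SDK the decode runs in the shader, often using the GPU's matrix hardware; the point of the sketch is only the data flow: latents plus a small network replace the texel array.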
Beyond VRAM, the interest is logistical: lighter installations, smaller patches, less network bandwidth, and more room for detailed assets on the same GPU. For developers, this opens up space to densify content without blowing memory budgets. Alexi, technical speaker of the session, considers NTC as already usable in production today.
Neural Materials: fewer channels, faster 1080p rendering
Another building block is Neural Materials, which compresses material behavior into a latent representation and decodes it via a small network. In the demo, a 19-channel setup drops to eight channels, with 1080p renders claimed to be 1.4x to 7.7x faster on the test scene, alongside reduced BRDF complexity and storage.
What it actually changes
Let’s take a complex material: ceramic base, metal deposit, varnish, dust. Typically, each layer uses separate textures and BRDF evaluations. With Neural Materials, everything is encoded into a single latent vector, decoded in a single inference at render time. The result: less memory, less bandwidth, faster evaluation.
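The layered-material example above can be sketched in a few lines. Everything here is illustrative (latent size, network width, input layout are assumptions): the point is that a single inference replaces four per-layer BRDF evaluations.

```python
import numpy as np

# Neural Materials sketch (hypothetical sizes): the ceramic/metal/varnish/
# dust stack is no longer evaluated layer by layer. An encoder trained
# offline has already folded the 19 channels into one 8-float latent;
# at render time one tiny network maps (latent, view dir, light dir)
# to the material's response.

rng = np.random.default_rng(1)

LATENT = 8  # per-material latent size (down from 19 channels in the demo)
w1 = rng.standard_normal((LATENT + 6, 24)).astype(np.float32)  # +6: view & light dirs
w2 = rng.standard_normal((24, 3)).astype(np.float32)

def shade(latent, view_dir, light_dir):
    """One inference replaces the four per-layer BRDF evaluations."""
    x = np.concatenate([latent, view_dir, light_dir])
    h = np.maximum(x @ w1, 0.0)      # ReLU hidden layer
    return np.maximum(h @ w2, 0.0)   # non-negative RGB response

latent = rng.standard_normal(LATENT).astype(np.float32)
v = np.array([0.0, 0.0, 1.0], dtype=np.float32)
l = np.array([0.577, 0.577, 0.577], dtype=np.float32)
print(shade(latent, v, l).shape)  # (3,)
```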
The objective is not to change the art direction, but to keep material information in a compact format and limit evaluation cost. Neural rendering here acts as a shading accelerator and an asset compressor, not as a final-image generator. NVIDIA acknowledges that Neural Materials remains an area of active research, with production deployment still limited, unlike NTC.
Slang and SlangPy: the infrastructure that makes it all possible
A little-covered but fundamental point: these technologies rest on a common infrastructure. NVIDIA introduced Slang, a differentiable shader language that automatically generates the backward pass needed for training and compiles to CUDA, HLSL and GLSL. Alongside it, SlangPy bridges Python and the GPU, unifying the training and deployment phases in production.
This is what solves the historical problem of real-time neural rendering: the divergence between the training code in PyTorch and the runtime code in C++ or shaders. With Slang, the same code is used for both phases. For developers who want to integrate these neural blocks into their pipeline, this is the missing piece.
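To see why an auto-generated backward pass matters, here is a toy illustration in plain Python (conceptual only; Slang does this at the shader-compiler level, and this "shader" is invented for the example): training a neural block requires the derivative of the forward code, and writing it by hand for every shader is exactly the drudgery Slang removes.

```python
import math

# Forward pass: a hypothetical one-parameter scalar "shader" that blends
# a dark value and a bright value through a sigmoid controlled by t.
def forward(t: float) -> float:
    s = 1.0 / (1.0 + math.exp(-t))    # sigmoid blend factor
    return 0.2 * (1.0 - s) + 0.9 * s  # mix(dark, bright, s)

# Backward pass: d forward / d t, derived by hand here. This is the code
# a Slang compiler would generate automatically from `forward`.
def backward(t: float) -> float:
    s = 1.0 / (1.0 + math.exp(-t))
    return (0.9 - 0.2) * s * (1.0 - s)

# Sanity check: the hand/generated gradient must match finite differences
# of the forward pass, otherwise training silently diverges.
eps = 1e-6
fd = (forward(1.3 + eps) - forward(1.3 - eps)) / (2 * eps)
print(abs(fd - backward(1.3)) < 1e-6)  # True
```

The historical pain was maintaining `forward` in shader code and `backward` in PyTorch by hand, and keeping them in sync; with Slang, one source generates both.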
Neural Fixer and Asset Harvester: neural rendering at the service of simulation
The session also covered a less expected use case: simulation for autonomous vehicles via Omniverse NuRec. The approach builds on Gaussian Splatting, a 3D reconstruction from real images in which the scene is represented by 3D Gaussian particles, each with a position, rotation, opacity, and view-dependent color.
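The representation just described can be summarized as a small data model plus the standard front-to-back alpha compositing used to render splats along a ray. Field names and the example values are illustrative, not taken from NuRec:

```python
from dataclasses import dataclass
import numpy as np

# Minimal data model for a 3D Gaussian "splat" as described above:
# position, rotation, opacity, and a view-dependent color (commonly
# encoded as spherical-harmonics coefficients).
@dataclass
class Splat:
    position: np.ndarray   # (3,) world-space center
    rotation: np.ndarray   # (4,) quaternion
    scale: np.ndarray      # (3,) anisotropic extent
    opacity: float         # alpha in [0, 1]
    sh_coeffs: np.ndarray  # SH coeffs -> color depending on view direction

def composite(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted splats on one ray."""
    out = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * c   # this splat's contribution
        transmittance *= (1.0 - a)     # light remaining behind it
    return out

# Two splats on a ray: a red one at 60% opacity in front of a blue one.
px = composite([np.array([1.0, 0, 0]), np.array([0, 0, 1.0])], [0.6, 0.8])
print(px)  # ~ [0.6, 0, 0.32]: the front splat dominates
```

The artifacts the Neural Fixer targets appear precisely because the splats were fit to the captured viewpoints: move the virtual camera off that trajectory and the composited result no longer matches reality.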
Neural Fixer
The problem with classic Gaussian Splatting: artifacts appear as soon as the camera deviates from the captured trajectory. The Neural Fixer is a network that corrects these artifacts in real time by injecting knowledge of the real world, making the simulation usable for scenarios never physically captured.
Asset Harvester
Another tool presented, the Asset Harvester, extracts partially visible objects from the captured data, for example a bus seen from a single angle, and generates a complete, reusable 3D version. These assets can then be placed in varied simulated scenarios, multiplying the value of the initial capture data.
A neural rendering that no longer scares
The debate around DLSS 5 crystallized fears of visual standardization and of an AI hallucinating over approximate rendering. The roadmap shown at GDC suggests a more consensual path on the production side: lighten VRAM use, reduce bandwidth, and accelerate critical passes without distorting the image.
For studios, these targeted neural blocks can be integrated into the pipeline incrementally and improve the quality/cost ratio without a complete overhaul. NTC is available today via the RTX Neural Texture Compression SDK. Neural Materials and the simulation tools will follow as the research matures. This is neural rendering without the spectacle, and that is probably why it will work.