NVIDIA revealed how DLSS 5 “finishes” the image, relying solely on a 2‑D frame and motion vectors.
How DLSS 5 Works: An Nvidia Employee’s Explanation
In a recent interview, Jacob Freeman of Nvidia explained how the company's new AI-based image upscaling technology, DLSS 5, works.
What Goes Into the System
* 2‑D frame – a standard rendered image.
* Motion vectors – information about how objects move between frames.
No three‑dimensional data is used: the model does not read scene geometry, depth, materials, or normal maps. This means DLSS 5 relies entirely on 2‑D information and motion.
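The two inputs above can be sketched as a simple data structure. This is a hypothetical illustration only; the names (`UpscalerInput`, `make_input`) are invented for clarity and are not the real DLSS SDK API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of the per-frame inputs described in the article.
# Names are illustrative, not the actual DLSS SDK interface.

@dataclass
class UpscalerInput:
    # 2-D frame: height x width grid of RGB triples
    color: List[List[Tuple[float, float, float]]]
    # motion vectors: per-pixel (dx, dy) displacement since the last frame
    motion: List[List[Tuple[float, float]]]
    # Deliberately absent, per the article: no depth buffer, no normals,
    # no material data — the model sees only 2-D color and motion.

def make_input(width: int, height: int) -> UpscalerInput:
    color = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
    motion = [[(0.0, 0.0)] * width for _ in range(height)]
    return UpscalerInput(color=color, motion=motion)

frame = make_input(4, 2)
print(len(frame.color), len(frame.color[0]))  # 2 4
```

The point of the sketch is what is missing: geometry, depth, and material buffers never reach the model.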
How It “Understands” the Scene
* Semantics – the AI recognizes object types such as hair, fabric, skin, and lighting conditions.
* Only one frame is needed for this; the model does not consider metalness, roughness, or other material properties.
Because of this, results can sometimes seem “unpredictable”:
* a character may develop hair where none exists;
* facial features may change.
No changes occur to the underlying geometry—it’s simply a visual interpretation by AI.
Limitations and Developer Options
* Developers can adjust effect intensity, color correction, contrast, saturation, and gamma.
* Masks can be used to exclude specific objects from processing.
* However, they cannot directly correct facial features or remove the “makeup” effect; the only options are to reduce intensity, apply a mask, or disable the algorithm entirely.
In short, the face is generated by AI, but its appearance can only be indirectly adjusted.
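The developer controls listed above can be sketched as a settings object. Again, this is a hypothetical illustration; the field names are invented for this article, not taken from the real DLSS SDK.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the developer-facing controls described above.
# Field names are illustrative, not the actual DLSS SDK API.

@dataclass
class UpscalerSettings:
    intensity: float = 1.0          # overall effect strength
    color_correction: bool = True
    contrast: float = 1.0
    saturation: float = 1.0
    gamma: float = 2.2
    exclusion_masks: List[str] = field(default_factory=list)  # objects to skip
    enabled: bool = True

# Per the article, there is no direct "fix this face" knob, so the only
# mitigations are lowering intensity, masking an object, or disabling
# the algorithm outright.
settings = UpscalerSettings()
settings.intensity = 0.5                       # reduce the AI reshaping
settings.exclusion_masks.append("hero_face")   # exclude a specific object
# settings.enabled = False                     # or turn it off entirely
```

Note the indirection: every control affects the whole pipeline or masks regions; none edits what the AI decides a face should look like.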