Apple revealed a method for quickly and efficiently creating three-dimensional scenes using artificial intelligence
Apple and the University of Hong Kong unveiled LGTM, a technique for more efficient rendering of 3‑D scenes
Scientists from Apple, in collaboration with researchers at the University of Hong Kong, have developed a system called LGTM (Less Gaussians, Texture More). It substantially improves the quality of rendering high-resolution three-dimensional scenes without a corresponding increase in computational resources.
How LGTM Works
1. Separating Complexity
Traditional 3‑D scene rendering methods based on Gaussian primitives and similar approaches incur a quadratic growth in cost as resolution increases. LGTM "separates" the geometric structure of a scene from its visual details.
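The quadratic cost growth comes directly from pixel counts, as a quick check shows. The resolutions below are standard display sizes used purely for illustration; they are not taken from the research itself:

```python
# Pixel counts grow quadratically with linear resolution, which is why
# per-pixel rendering cost explodes at 2K and 4K.

def megapixels(width: int, height: int) -> float:
    """Total pixel count of a frame, in millions."""
    return width * height / 1e6

for name, (w, h) in {
    "1080p": (1920, 1080),
    "2K":    (2560, 1440),
    "4K":    (3840, 2160),
}.items():
    print(f"{name}: {megapixels(w, h):.1f} MP")

# Doubling the linear resolution (1080p -> 4K) quadruples the pixel count:
assert 3840 * 2160 == 4 * (1920 * 1080)
```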
2. Two-Stage Training
* The first step – the model builds a simple geometry (skeleton) from low-resolution images.
* The second step – it compares the skeleton with original high-quality images, then learns to create detailed textures that are layered over the simple skeleton.
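The two stages above can be sketched roughly as follows. This is a minimal NumPy illustration of the coarse-skeleton-plus-texture idea, not Apple's implementation; the 4x downsampling factor, the function names, and the use of a plain image as a stand-in for a 3‑D scene are all assumptions:

```python
import numpy as np

def downsample(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average-pool the image by `factor` to mimic low-resolution inputs."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Nearest-neighbour upsampling back to full resolution."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
high_res = rng.random((64, 64))            # stands in for the original high-quality images

# Stage 1: build a coarse "skeleton" from low-resolution data only.
skeleton = upsample(downsample(high_res))

# Stage 2: compare the skeleton with the originals and learn the
# detail (texture) that the skeleton is missing.
texture = high_res - skeleton

# Final render: cheap skeleton plus learned texture layered on top.
render = skeleton + texture
assert np.allclose(render, high_res)
```

The point of the split is that the expensive geometric structure only ever has to be fit at low resolution; high-resolution work is confined to the cheaper texture layer.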
3. Result
Rendering at 2K and 4K resolutions yields accurate results without artifacts or "empty" areas, while resource requirements no longer grow quadratically.
Practical Significance
* VR/AR – the technology is especially useful for virtual and augmented reality systems.
* Apple Vision Pro has a combined display resolution of 23 megapixels, which normally requires enormous computational power. LGTM allows detailed scenes to be reproduced in 4K without such resource growth, enabling smoother and higher-quality performance.
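To put the 23-megapixel figure in context, a quick back-of-the-envelope comparison with a standard 4K frame (the 4K dimensions are standard display values, not from the article):

```python
vision_pro_mp = 23.0              # combined resolution of both displays (from the article)
four_k_mp = 3840 * 2160 / 1e6     # one standard 4K frame, ~8.3 MP

# The headset drives well over twice the pixels of a single 4K frame,
# which is why naive per-pixel cost scaling would be prohibitive.
ratio = vision_pro_mp / four_k_mp
print(f"Vision Pro drives ~{ratio:.1f}x the pixels of one 4K frame")
```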
Thus, LGTM represents a significant step forward in rendering high‑quality 3‑D scenes while maintaining computational efficiency.