NVIDIA has opened its internal Nyx supercomputer facility to showcase how the company uses its own cutting-edge hardware for AI research and gaming technology development. The Santa Clara headquarters houses 1,192 B200 GPUs across multiple data center rooms, with each GPU consuming up to 1,200 watts of power. The facility serves as both a research and development hub and validation center for new chip designs, with some nodes currently running undisclosed AI workloads at full capacity.
Key Points
- NVIDIA's Nyx supercomputer contains 1,192 B200 GPUs rated for up to 1,200 watts each
- The facility is actively used for DLSS development, AI research, and hardware validation
- Each DGX B200 rack consumes up to 14,000 watts of power and requires sophisticated cooling systems
- NVIDIA's internal teams use up to 1,000 GPUs for two weeks to train advanced AI models
- The company believes AI-enhanced rendering will eventually surpass traditional native rendering quality
Infrastructure and Engineering Excellence
The Nyx supercomputer facility represents NVIDIA's commitment to validating its own technology at scale. The data center features two interconnected rooms with advanced cooling infrastructure, including raised floors with plumbing access for future water-cooled deployments, though current configurations use air cooling for the DGX B200 systems.
Environmental monitoring plays a critical role in maintaining optimal performance. The facility employs extensive sensor networks tracking temperature, humidity, and air pressure throughout the deployment. Humidity management is particularly important: air that is too dry invites static electricity, while excess humidity causes condensation that can damage sensitive equipment.
Each DGX unit's 14,000-watt power consumption demands sophisticated thermal management. Fresh air flows actively through floor vents, creating positive pressure that pushes air through the systems into hot aisles where temperatures rise significantly. The strategic positioning of networking racks minimizes fiber optic cable lengths, maintaining signal integrity while reducing structural strain on ceiling infrastructure.
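To put those figures in perspective, here is a quick back-of-the-envelope estimate using only the numbers quoted in this article. The eight-GPUs-per-DGX count is an assumption based on the standard DGX B200 configuration, and the totals exclude networking, storage, and cooling overhead.

```python
# Back-of-the-envelope power estimate built from the figures quoted in this
# article. The 8-GPUs-per-DGX count is an assumption, and the totals ignore
# networking, storage, and cooling overhead.

GPU_COUNT = 1192        # B200 GPUs in the Nyx deployment (quoted above)
GPU_WATTS = 1200        # peak draw per GPU (quoted above)
DGX_WATTS = 14000       # peak draw per DGX unit (quoted above)
GPUS_PER_DGX = 8        # assumption: standard DGX B200 configuration

gpu_only_megawatts = GPU_COUNT * GPU_WATTS / 1_000_000
dgx_units = GPU_COUNT // GPUS_PER_DGX
system_level_megawatts = dgx_units * DGX_WATTS / 1_000_000

print(f"GPU-only peak draw:     {gpu_only_megawatts:.2f} MW")   # ~1.43 MW
print(f"Estimated DGX units:    {dgx_units}")                   # ~149
print(f"System-level peak draw: {system_level_megawatts:.2f} MW")  # ~2.09 MW
```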
DLSS Development and AI Training
NVIDIA's GeForce team leverages the supercomputer's massive computational power primarily for developing Deep Learning Super Sampling (DLSS) technology. The process involves rendering games at lower resolutions, then using AI models to upscale output frames to native monitor resolutions, delivering higher frame rates while attempting to maintain visual fidelity.
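In the abstract, that loop looks something like the sketch below, where simple pixel replication stands in for the trained reconstruction network. Real DLSS also consumes motion vectors and history from previous frames, none of which are modeled here; the resolutions and the synthetic test frame are illustrative only.

```python
# Minimal sketch of the render-low / reconstruct-to-native idea behind DLSS.
# Pixel replication stands in for the learned reconstruction network; real
# DLSS also uses motion vectors and previous frames, which are omitted here.
import numpy as np

RENDER_H, RENDER_W = 1080, 1920   # internal render resolution
SCALE = 2                         # 2x per axis -> 3840 x 2160 output

# Stand-in for a rendered frame: a synthetic grayscale gradient.
frame = np.tile(np.linspace(0, 255, RENDER_W), (RENDER_H, 1)).astype(np.uint8)

# "Reconstruction" step: here just nearest-neighbor upscaling; DLSS replaces
# this step with a trained neural network.
upscaled = frame.repeat(SCALE, axis=0).repeat(SCALE, axis=1)

print("rendered pixels  :", frame.size)
print("displayed pixels :", upscaled.size)
print("share of native pixel work:", frame.size / upscaled.size)  # 0.25
```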
According to Edward Leu from NVIDIA's DLSS team, the headline-grabbing training runs represent only a fraction of the actual computational cost. The majority of resources go toward iterative test runs that occur before final model training. His team continuously evaluates new AI innovations and addresses specific game issues, sometimes resolving problems like visual artifacts in popular titles within weeks or months.
"Sometimes it's more surgical, like, 'Oh, hey, we noticed Cyberpunk has an issue with cars having like three or four bumpers as you're driving around. How can we address this with the current model?'" Leu explained during the facility tour.
More substantial improvements, such as transitioning from convolutional neural networks to transformer models, require complete pipeline overhauls that can take over a year to implement. These transformer-based models deliver improved accuracy but run optimally only on NVIDIA's newest GPU architectures.
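For a rough feel of that architectural difference, the generic PyTorch sketch below contrasts a convolutional block, which mixes information only within a small local window, with a transformer encoder layer, which attends across all patch tokens at once. It is not NVIDIA's DLSS model; the shapes and layer sizes are arbitrary.

```python
# Generic contrast between the two model families: local convolution versus
# global self-attention. Shapes and sizes are arbitrary, for illustration only.
import torch
import torch.nn as nn

# Convolutional path: operates on an N x C x H x W image crop with a 3x3 window.
image = torch.randn(1, 3, 64, 64)
conv_block = nn.Sequential(nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU())
print("conv output:", conv_block(image).shape)            # (1, 32, 64, 64)

# Transformer path: operates on a sequence of patch tokens, each attending
# to every other token in the frame.
tokens = torch.randn(1, 256, 64)                          # N, tokens, embed dim
attn_block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
print("attention output:", attn_block(tokens).shape)      # (1, 256, 64)
```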
Beyond Traditional Rendering
NVIDIA's internal research suggests AI-enhanced rendering may eventually surpass traditional native rendering quality. The company's analysis shows that standard "native" rendering is itself an imperfect approximation of ground truth images: high-resolution renders that are downsampled to the target resolution.
Creating those ground truth images involves running games on standard hardware while processing tens to thousands of samples per pixel, stretching what would normally run at 60-120 frames per second out to 60-120 seconds per frame. This computationally intensive process produces reference images that expose the limitations of native rendering.
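That comparison can be sketched with a toy renderer: supersample at a higher resolution with many samples, box-downsample to the target resolution to get the reference, then score a single-sample "native" render against it. The gradient-plus-noise scene below is invented purely for illustration; real ground-truth frames come from the game's actual renderer.

```python
# Toy version of the ground-truth comparison: a heavily supersampled render is
# box-downsampled to the target resolution and used as the reference image.
# The gradient-plus-noise "scene" is invented for illustration only.
import numpy as np

def render(h, w, samples_per_pixel, rng):
    """Toy renderer: a smooth gradient plus per-sample noise standing in for
    shading/aliasing error; averaging more samples suppresses the noise."""
    base = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
    acc = np.zeros((h, w))
    for _ in range(samples_per_pixel):
        acc += base + rng.normal(0.0, 0.1, size=(h, w))
    return acc / samples_per_pixel

def box_downsample(img, factor):
    """Average each factor x factor block down to one output pixel."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
TARGET_H, TARGET_W, SUPERSAMPLE = 120, 160, 4

# Ground truth: render at 4x resolution with 64 samples per pixel, then downsample.
ground_truth = box_downsample(
    render(TARGET_H * SUPERSAMPLE, TARGET_W * SUPERSAMPLE, 64, rng), SUPERSAMPLE
)

# "Native": a single-sample render directly at the target resolution.
native = render(TARGET_H, TARGET_W, 1, rng)

print(f"native vs ground truth MSE: {np.mean((native - ground_truth) ** 2):.5f}")
```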
Historical data from DLSS 2.0 development shows instances where AI reconstruction produced outputs closer to ground truth than native rendering achieved. However, Leu acknowledged that users primarily notice DLSS when it fails rather than when it performs correctly, and the technology still encounters regular issues despite continuous improvements.
The broader gaming industry has largely accepted AI-accelerated image enhancement as the technological future, regardless of individual gamers' preferences. As NVIDIA continues investing heavily in AI infrastructure and development, the company positions itself to maintain leadership in what it considers an essential 21st-century technology, with the Nyx facility serving as both a development tool and a statement of technological capability.