Kioxia has unveiled a “Super High IOPS” server SSD designed to dramatically speed up data delivery to Nvidia AI chips
New SSDs from Nvidia and Kioxia: how they will ensure uninterrupted AI operation
Nvidia and Kioxia are jointly developing solid‑state drives that will allow Nvidia GPUs to run without interruptions even under the most demanding machine learning workloads.
What Kioxia presented
- Model: E3.S CM9 – “Super High IOPS” SSD
- Capacity: 25.6 TB
- Endurance: rated for three full drive writes per day (3 DWPD) for the entire warranty period
- Sample availability: expected by the end of 2026
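The endurance figure above corresponds to a 3 DWPD (drive writes per day) rating. A rough back-of-the-envelope sketch of what that means in total bytes written, assuming a five-year warranty period (a common figure for enterprise SSDs; the announcement does not state it):

```python
# Rough endurance estimate for a 3-DWPD, 25.6 TB drive.
# The 5-year warranty period is an assumption (typical for
# enterprise SSDs), not a figure from Kioxia's announcement.
capacity_tb = 25.6
dwpd = 3              # full drive writes per day
warranty_years = 5    # assumed

total_writes_pb = capacity_tb * dwpd * 365 * warranty_years / 1000
print(f"Total write endurance: ~{total_writes_pb:.0f} PB")
```

Under those assumptions the drive could absorb roughly 140 petabytes of writes over its lifetime.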
XL‑Flash Technology
Kioxia uses its own XL‑Flash technology based on SLC NAND memory – the fastest available flash memory. This provides:
| Metric | XL‑Flash | Traditional Data‑Center SSD |
|---|---|---|
| IOPS (max) | >10 million | 3–4 million |
| Read latency | 3–5 µs | 40–100 µs |
Thus, XL‑Flash delivers roughly three times the IOPS at around one‑tenth the read latency.
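Plugging the table's figures into a quick comparison (taking the ranges from the table above as best/worst cases):

```python
# Speedup ratios implied by the table above.
xl_iops = 10e6            # XL-Flash: >10 million IOPS
dc_iops = (3e6, 4e6)      # traditional data-center SSD: 3-4 million IOPS
xl_latency_us = (3, 5)    # XL-Flash read latency, microseconds
dc_latency_us = (40, 100) # traditional SSD read latency, microseconds

iops_gain = (xl_iops / dc_iops[1], xl_iops / dc_iops[0])
latency_gain = (dc_latency_us[0] / xl_latency_us[1],
                dc_latency_us[1] / xl_latency_us[0])
print(f"IOPS: {iops_gain[0]:.1f}-{iops_gain[1]:.1f}x higher")
print(f"Read latency: {latency_gain[0]:.0f}-{latency_gain[1]:.0f}x lower")
```

Depending on which ends of the ranges are compared, that is 2.5–3.3× the IOPS and an 8–33× latency reduction.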
How it fits into Nvidia’s architecture
- Nvidia Storage‑Next: servers will connect these SSDs directly to the GPU, bypassing the CPU.
- This removes extra hops and latency in the data path and lets large volumes of data be staged directly in GPU-local memory.
- As a result, GPU cores remain at 100 % utilization without downtime – critical for scalable AI models that can contain trillions of parameters and process millions of tokens simultaneously.
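Why storage latency matters so much for utilization can be seen with a toy model (illustrative only, not from the announcement: the 100 µs per-block compute time and the serialized fetch-then-compute pattern are assumptions):

```python
# Toy model (not from the announcement): fraction of time a GPU
# spends computing when each compute step must first fetch a block
# from storage, with no prefetching (worst case, fully serialized).
def gpu_utilization(compute_us: float, fetch_latency_us: float) -> float:
    """Compute time divided by total time per fetch-then-compute cycle."""
    return compute_us / (compute_us + fetch_latency_us)

compute_us = 100  # assumed GPU compute time per fetched block, microseconds
print(f"Traditional SSD (70 us fetch): {gpu_utilization(compute_us, 70):.0%}")
print(f"XL-Flash (4 us fetch):         {gpu_utilization(compute_us, 4):.0%}")
```

In this simplified picture, cutting the fetch latency from tens of microseconds to a few microseconds lifts utilization from around 60 % to above 95 %; real pipelines overlap I/O with compute, but the same pressure on latency applies.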
Demonstration at GTC 2026
At the GTC 2026 conference, Nvidia showcased the first prototype of such a system on the BlueField‑4 STX platform:
- DPU (Data Processing Unit) – a specialized processor that offloads storage and networking tasks from the host
- ConnectX‑9 SuperNIC – a network adapter that, according to Nvidia, provides:
  - up to 5× higher token throughput
  - 4× better energy efficiency
  - 2× the page load speed
This combination already demonstrates the potential to accelerate AI by giving GPUs direct access to ultra‑fast SSDs.