The new NVIDIA DGX Station differs substantially from earlier models, the DGX Station A100 and the older DGX Station with Tesla V100 GPUs, in both memory bandwidth and overall architecture.
DGX Station (New Model)
The new DGX Station is built around the GB300 Grace Blackwell Ultra Desktop Superchip, which pairs a 72-core Grace CPU with a Blackwell Ultra GPU. The system offers up to 288 GB of HBM3e GPU memory and up to 496 GB of LPDDR5X CPU memory. GPU memory bandwidth is quoted at up to 8 TB/s, significantly higher than previous models, while CPU memory bandwidth reaches up to 396 GB/s[5][8].

DGX Station A100
The DGX Station A100 uses four NVIDIA A100 SXM4 GPUs, each with either 40 GB or 80 GB of HBM2 memory. The 40 GB variant delivers around 1,555 GB/s of memory bandwidth per GPU, giving the four-GPU system an aggregate of approximately 6,220 GB/s[10].

DGX Station with Tesla V100 GPUs
The older DGX Station model pairs four Tesla V100 GPUs, each with 16 GB of HBM2 memory, for 64 GB of GPU memory in total. Each V100 provides 900 GB/s of memory bandwidth, for an aggregate of 3.6 TB/s across the four GPUs. The system also includes 256 GB of DDR4 system memory, whose bandwidth is far lower than that of the GPUs' HBM2[2][9].

Key Differences
- Memory Type and Bandwidth: The new DGX Station uses HBM3e for GPU memory, offering significantly higher bandwidth than the HBM2 used in older models; CPU memory bandwidth also improves with LPDDR5X.
- Architecture: The new DGX Station integrates a Grace CPU with a Blackwell Ultra GPU, providing a more cohesive and efficient architecture for AI workloads than previous models.
- Scalability: The new DGX Station is designed for larger AI workloads, with higher memory capacity and faster interconnects such as NVLink-C2C, which NVIDIA rates at seven times the bandwidth of PCIe Gen 5[5][8].
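The aggregate bandwidth figures above follow from simple per-GPU arithmetic. A minimal sketch, using the per-GPU numbers quoted in this comparison (the GB300 system's 8 TB/s is treated as a single system-level figure, since NVIDIA does not break it down per GPU here):

```python
# Illustrative arithmetic only; figures are the ones quoted in the text above.
systems = {
    "DGX Station (GB300)": {"gpus": 1, "gbps_per_gpu": 8000},  # quoted as a system-level total
    "DGX Station A100":    {"gpus": 4, "gbps_per_gpu": 1555},  # 40 GB HBM2 variant
    "DGX Station (V100)":  {"gpus": 4, "gbps_per_gpu": 900},
}

for name, spec in systems.items():
    total = spec["gpus"] * spec["gbps_per_gpu"]
    print(f"{name}: {total:,} GB/s aggregate GPU memory bandwidth")
```

This reproduces the totals cited above: 4 × 1,555 GB/s ≈ 6,220 GB/s for the A100 system and 4 × 900 GB/s = 3,600 GB/s (3.6 TB/s) for the V100 system.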
Citations:
[1] https://www.reddit.com/r/LocalLLaMA/comments/1jedy17/nvidia_digits_specs_released_and_renamed_to_dgx/
[2] https://images.nvidia.com/content/newsletters/email/pdf/DGX-Station-WP.pdf
[3] https://opentools.ai/news/nvidia-unleashes-the-future-with-personal-ai-supercomputers
[4] https://www.youtube.com/watch?v=krBh0Von-2A
[5] https://www.notebookcheck.net/Nvidia-unveils-DGX-Station-desktop-AI-supercomputer-with-72-core-CPU-and-Blackwell-Ultra-GPU.981669.0.html
[6] https://docs.nvidia.com/dgx/dgxa100-user-guide/introduction-to-dgxa100.html
[7] https://www.techpowerup.com/forums/threads/nvidia-announces-dgx-spark-and-dgx-station-personal-ai-computers.334300/
[8] https://www.nvidia.com/en-us/products/workstations/dgx-station/
[9] https://xenon.com.au/product/nvidia-dgx-station/
[10] https://mcomputers.cz/en/products-and-services/nvidia/dgx-systems/nvidia-dgx-station-a100/