288 GiByte HBM3e graphics memory: Nvidia delivers new superlatives for AI with the GH200 superchip (“Grace Hopper”)

Nvidia not only used SIGGRAPH 2023, held from August 6 to 10 in Los Angeles, to present in detail its three latest professional graphics accelerators, the RTX 5000, RTX 4500 and RTX 4000, as well as the Nvidia RTX workstations distributed by partners such as Dell, Lenovo and HP, but also had an even hotter iron in the fire: the upgrade of the GH200 (“Grace Hopper”) superchip is not stingy with new superlatives.

288 GiByte HBM3e and other superlatives

The revised GH200 (“Grace Hopper”) superchip is the first to feature 144 GiByte of HBM3e graphics memory, implemented as a total of six 24 GiByte HBM3e memory stacks. Previously, “only” 96 GiByte of HBM3 graphics memory was on offer. But the superlatives do not end there: up to 288 GiByte can now be realized.
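
The capacity arithmetic is straightforward; as a quick sanity check, here is a minimal sketch in Python using only the figures Nvidia quotes (six stacks of 24 GiByte per GPU, doubled on the dual-superchip platform described below):

    # Per-GPU HBM3e capacity: six stacks of 24 GiByte each
    stacks_per_gpu = 6
    gib_per_stack = 24
    hbm3e_per_gpu = stacks_per_gpu * gib_per_stack  # 144 GiByte

    # The dual-superchip platform doubles the pool
    hbm3e_platform = 2 * hbm3e_per_gpu              # 288 GiByte
    print(hbm3e_per_gpu, hbm3e_platform)            # prints: 144 288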

144 ARM processor cores and 960 GiByte LPDDR5X

Via an NVLink interconnect with 6 TB/s, two GH200 (“Grace Hopper”) superchips, with two ARM Neoverse V2 CPUs totaling 144 ARM processor cores, 960 GiByte of LPDDR5X and the aforementioned 288 GiByte of HBM3e graphics memory, can be connected for an impressive 8 PetaFLOPS of computing power.
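
The announcement does not state which precision the 8 PetaFLOPS figure refers to. Assuming, and this is an assumption rather than something Nvidia specifies here, that it means FP8 tensor throughput with sparsity, for which the comparable per-GPU H100 value is roughly 3.96 PetaFLOPS, the arithmetic lines up:

    # Assumption: 8 PetaFLOPS refers to FP8 tensor throughput with
    # sparsity; ~3.96 PetaFLOPS is the comparable per-GPU H100 value
    # (not stated in this announcement)
    per_gpu_pflops = 3.96
    platform_pflops = 2 * per_gpu_pflops
    print(round(platform_pflops, 2))  # prints: 7.92, marketed as 8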

Superchip module demonstrates the maximum

The “GH200 (Grace Hopper) platform”, as Nvidia calls the module consisting of two superchips, offers the following technical specifications:

GH200 (“Grace Hopper”) platform

  • Hopper architecture in N4 at TSMC
  • 2 × GH200 (“Grace Hopper”) Superchips
  • 2 × ARM Neoverse V2 CPUs with a total of 144 processor cores (2 × 72)
  • 960 GiByte LPDDR5X with ECC support and 512 GB/s
  • 288 GiByte HBM3e graphics memory (2 × 144 GiByte)*
  • 264 compute units (2 × 132 compute units)
  • 1,056 tensor cores (2 × 528 tensor cores)
  • 1.2 TiByte memory (CPU + GPU, see the quick check below the list)
  • 8 PetaFLOPS AI performance
  • 450 – 1,000 watts TDP
  • Q2/2024

*) 282 GiByte (2 × 141 GiByte) effectively usable
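
The “1.2 TiByte memory (CPU + GPU)” entry is simply the rounded sum of the two memory pools, as a quick check with the values from the table shows:

    # Aggregate memory of the dual-superchip platform
    lpddr5x_gib = 960                 # CPU memory (both Grace CPUs)
    hbm3e_gib = 2 * 144               # GPU memory (both Hopper GPUs)
    total_tib = (lpddr5x_gib + hbm3e_gib) / 1024
    print(f"{total_tib:.2f} TiByte")  # prints: 1.22 TiByte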

Nvidia used the SIGGRAPH (“Special Interest Group on Graphics and Interactive Techniques”) of the Association for Computing Machinery (ACM) to announce that the GH200 (“Grace Hopper”) superchip and the platform based on it will officially become available in the second quarter of 2024. But even that was not the end of the road at SIGGRAPH 2023.

DGX GH200 with 256 superchips as an AI supercomputer

Anyone who still had doubts about the company’s focus on AI computing and its associated claim to leadership in this business area was set straight by Nvidia at the end of the keynote: the DGX GH200 combines a total of 256 superchips into a supercomputer that sets new standards for the most demanding AI workflows.

In addition to the largest expansion stage, the Nvidia DGX GH200 SuperPod, the manufacturer also offers “smaller” solutions with the DGX BasePod and the DGX A100 and H100, which are designed to scale with the requirements of AI workloads. There is not the slightest doubt within the industry that Nvidia is currently the world market leader in the field of artificial intelligence, and each new generation of AI accelerators extends that lead even further.

The manufacturer’s official press release (PDF) provides further information.

Source: Nvidia
