GPU

NVIDIA H100 PCIe

Integrated Memory (VRAM)
Capacity

80 GB

(HBM2e 5120-bit)

Bandwidth

2039 GB/s

291 Token/s
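The Token/s figure above appears to be a memory-bandwidth-bound decode estimate: each generated token streams all model weights from VRAM once, so tokens/s ≈ bandwidth ÷ weight bytes. A minimal sketch, assuming a hypothetical 7 GB weight footprint (e.g. a 7B-parameter model at 8-bit; the model size is an assumption, not stated on this page):

```python
# Bandwidth-bound decode estimate: tokens/s ≈ memory bandwidth / weight bytes.
bandwidth_gb_s = 2039  # H100 PCIe HBM2e bandwidth (GB/s)
model_size_gb = 7      # assumed weight footprint (hypothetical)

tokens_per_s = bandwidth_gb_s / model_size_gb
print(f"{tokens_per_s:.0f} token/s")  # 291 token/s
```

This matches the 291 Token/s shown, but the underlying model size is our guess, not the page's.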

Vector Compute
FP64: 25.61 TFLOPS
FP32: 51.22 TFLOPS
FP16: 102.40 TFLOPS
BF16: 102.40 TFLOPS
INT32: 25.61 TOPS
INT8: not supported

NVIDIA H100 PCIe General-Purpose Floating-Point Performance (Vector Performance / Scalar Performance)

FP64: 25.61 TFLOPS

FP32: 51.22 TFLOPS

FP16: 102.40 TFLOPS

BF16: 102.40 TFLOPS

INT32: 25.61 TOPS

Matrix Compute (dense / with sparsity)
FP64: 51.22 TFLOPS / 102.44 TFLOPS
FP32: not supported
FP16: 756 TFLOPS / 1512 TFLOPS
FP8: 1513 TFLOPS / 3026 TFLOPS
TF32: 378 TFLOPS / 756 TFLOPS
BF16: 756 TFLOPS / 1512 TFLOPS
INT16: not supported
INT8: 1513 TOPS / 3026 TOPS
INT4: not supported

NVIDIA H100 PCIe AI performance (Tensor Performance / Matrix Performance)

FP64: 51.22 TFLOPS, with sparsity: 102.44 TFLOPS

FP16: 756 TFLOPS, with sparsity: 1512 TFLOPS

FP8: 1513 TFLOPS, with sparsity: 3026 TFLOPS

TF32: 378 TFLOPS, with sparsity: 756 TFLOPS

BF16: 756 TFLOPS, with sparsity: 1512 TFLOPS

INT8: 1513 TOPS, with sparsity: 3026 TOPS
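Every "with sparsity" figure above is exactly twice the dense one: on Hopper, 2:4 structured sparsity doubles Tensor Core throughput. A quick consistency check over the dense numbers from this page:

```python
# Dense Tensor Core throughput from the spec table above (TFLOPS, INT8 in TOPS).
dense = {"FP64": 51.22, "FP16": 756, "FP8": 1513, "TF32": 378, "BF16": 756, "INT8": 1513}

# 2:4 structured sparsity doubles each rate.
sparse = {fmt: 2 * rate for fmt, rate in dense.items()}
print(sparse["FP8"])   # 3026
print(sparse["FP64"])  # 102.44
```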

Hardware Specs
NVIDIA H100 PCIe is a 5 nm chip with 80 billion transistors, launched by NVIDIA in 2023. It has 80 GB of built-in (on-board) memory with bandwidth up to 2039 GB/s, 14592 general-purpose ALUs (CUDA cores/shader cores), and 456 matrix cores (Tensor cores).
Process Node
5 nm
Launch Year
2023

Vector(CUDA) Cores
14592
Matrix(Tensor) Cores
456
Core Frequency
1095 ~ 1755 MHz
Cache
50 MB
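The vector throughput figures follow directly from the core count and boost clock listed here; a minimal sketch, counting one fused multiply-add (FMA) per core per cycle as 2 FLOPs:

```python
# Peak vector throughput = CUDA cores × boost clock × 2 FLOPs (FMA).
cuda_cores = 14592
boost_clock_hz = 1755e6  # 1755 MHz boost clock

fp32_tflops = cuda_cores * boost_clock_hz * 2 / 1e12
print(f"FP32: {fp32_tflops:.2f} TFLOPS")      # FP32: 51.22 TFLOPS
print(f"FP64: {fp32_tflops / 2:.2f} TFLOPS")  # half rate: FP64: 25.61 TFLOPS
# FP16/BF16 run at double rate (~102.4 TFLOPS) on this part.
```

This reproduces the 51.22 TFLOPS FP32 and 25.61 TFLOPS FP64 figures in the tables above.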
