GPU

NVIDIA B100

Integrated Memory (VRAM)
Capacity

192 GB

(HBM3e)

Bandwidth

8000 GB/s

1142 tokens/s
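The 1142 tokens/s figure above looks like a memory-bandwidth-bound decoding estimate. A minimal sketch of that arithmetic, assuming (these assumptions are not stated in the source) a hypothetical 7B-parameter model stored at 1 byte per parameter (FP8/INT8), so roughly 7 GB of weights streamed per generated token:

```python
# Back-of-envelope: single-batch LLM decoding is typically limited by
# memory bandwidth, so tokens/s ~= bandwidth / bytes read per token.
bandwidth_gb_s = 8000   # B100 HBM3e bandwidth from the spec above
model_gb = 7            # assumed: 7B params x 1 byte/param (FP8/INT8)

tokens_per_s = bandwidth_gb_s / model_gb
print(int(tokens_per_s))  # → 1142
```

Under these assumptions the spec's bandwidth figure reproduces the listed token rate; a larger model or higher-precision weights would scale the estimate down proportionally.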

Vector Compute
FP64: 30 TFLOPS
FP32: 60 TFLOPS
FP16: X
BF16: X
INT32: X
INT8: X

NVIDIA B100 general-purpose floating-point performance (Vector Performance / Scalar Performance)

FP64: 30 TFLOPS

FP32: 60 TFLOPS

Matrix Compute (dense / with sparsity)
FP64: 30 TFLOPS / 60 TFLOPS
FP32: X
FP16: 1750 TFLOPS / 3500 TFLOPS
FP8: 3500 TFLOPS / 7000 TFLOPS
TF32: 1800 TFLOPS / 3600 TFLOPS
BF16: 1750 TFLOPS / 3500 TFLOPS
INT16: X
INT8: X
INT4: X

NVIDIA B100 AI performance (Tensor Performance / Matrix Performance)

FP64: 30 TFLOPS, with sparsity: 60 TFLOPS

FP16: 1750 TFLOPS, with sparsity: 3500 TFLOPS

FP8: 3500 TFLOPS, with sparsity: 7000 TFLOPS

TF32: 1800 TFLOPS, with sparsity: 3600 TFLOPS

BF16: 1750 TFLOPS, with sparsity: 3500 TFLOPS
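Every "with sparsity" figure above is exactly double its dense counterpart, which matches NVIDIA's 2:4 structured-sparsity scheme: the Tensor Cores skip half of the multiply-accumulates, doubling peak throughput. A quick check of the listed numbers:

```python
# Dense vs. sparse peak tensor throughput (TFLOPS), from the table above.
dense  = {"FP64": 30, "FP16": 1750, "FP8": 3500, "TF32": 1800, "BF16": 1750}
sparse = {"FP64": 60, "FP16": 3500, "FP8": 7000, "TF32": 3600, "BF16": 3500}

# 2:4 structured sparsity zeroes 2 of every 4 weights, so peak rate doubles.
for fmt in dense:
    assert sparse[fmt] == 2 * dense[fmt]
print("all sparse figures are 2x dense")  # → all sparse figures are 2x dense
```

Note that the 2x figures are theoretical peaks; realized speedups depend on the model actually being pruned to the 2:4 pattern.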

Hardware Specs
The NVIDIA B100 is a 5 nm chip with 104 billion transistors, launched by NVIDIA in 2024. It has 192 GB of integrated (on-package HBM3e) memory with bandwidth of up to 8000 GB/s.
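The ratio of peak compute to memory bandwidth sets the roofline break-even point: a kernel with fewer FLOPs per byte of traffic than this ratio is memory-bound. A sketch using the dense FP8 figure from the table above:

```python
# Roofline break-even arithmetic intensity = peak compute / peak bandwidth.
# A kernel below this FLOP/byte ratio is limited by memory, not compute.
peak_fp8_tflops = 3500   # dense FP8 tensor throughput from the table above
peak_bw_tb_s = 8.0       # 8000 GB/s = 8 TB/s

break_even_flop_per_byte = peak_fp8_tflops / peak_bw_tb_s
print(break_even_flop_per_byte)  # → 437.5
```

In other words, a kernel needs roughly 437 FP8 operations per byte moved to saturate the tensor cores; most memory-streaming workloads fall far below that and run bandwidth-bound.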
Process Node
5 nm
Launch Year
2024

Vector(CUDA) Cores
Matrix(Tensor) Cores
Core Frequency
~ MHz
Cache
50 MB
