GPU

NVIDIA Tesla V100 SXM2 32 GB

Integrated Memory (VRAM)
Capacity: 32 GB (HBM2, 4096-bit)
Bandwidth: 898 GB/s
Token/s: 128
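The bandwidth figure follows directly from the HBM2 configuration (a 4096-bit bus at an effective rate of roughly 1.75 Gbps per pin), and the 128 Token/s entry is best read as a bandwidth-bound decode estimate. The Python sketch below reproduces both numbers; the ~7 GB model-weight footprint used for the token estimate is an assumption for illustration, not a value stated on this page.

# Peak memory bandwidth and a bandwidth-bound token/s estimate for the
# Tesla V100 SXM2 32 GB. The 7 GB weight footprint is an assumed example.
BUS_WIDTH_BITS = 4096            # four HBM2 stacks x 1024-bit
EFFECTIVE_GBPS_PER_PIN = 1.754   # ~877 MHz memory clock, double data rate

bandwidth_gb_s = BUS_WIDTH_BITS / 8 * EFFECTIVE_GBPS_PER_PIN
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")        # ~898 GB/s

# Bandwidth-bound decoding streams every weight once per generated token,
# so tokens/s ~ bandwidth / model size in bytes (assumed ~7 GB here).
model_size_gb = 7.0
print(f"Token/s estimate: {bandwidth_gb_s / model_size_gb:.0f}")   # ~128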

Vector Compute

NVIDIA Tesla V100 SXM2 32 GB General-Purpose Floating-Point Performance (Vector Performance / Scalar Performance):

FP64: 7.83 TFLOPS
FP32: 15.67 TFLOPS
FP16: 31.33 TFLOPS
BF16: not supported
INT32: 15.67 TOPS
INT8: not supported
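These vector peaks are the standard core-count times clock arithmetic: each CUDA core issues one fused multiply-add (2 FLOPs) per cycle at the 1530 MHz boost clock, with FP64 running at half the FP32 rate and packed FP16 at twice it on Volta. A minimal Python sketch of the calculation:

# Peak vector (CUDA-core) throughput for the Tesla V100 SXM2 32 GB,
# derived from 5120 FP32 cores at the 1530 MHz boost clock.
CUDA_CORES = 5120
BOOST_CLOCK_GHZ = 1.530

fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1e3   # FMA = 2 FLOPs per cycle
fp64_tflops = fp32_tflops / 2                          # half-rate FP64 units
fp16_tflops = fp32_tflops * 2                          # double-rate packed FP16

print(f"FP64: {fp64_tflops:.2f} TFLOPS")   # ~7.83
print(f"FP32: {fp32_tflops:.2f} TFLOPS")   # ~15.67
print(f"FP16: {fp16_tflops:.2f} TFLOPS")   # ~31.33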

Matrix Compute

NVIDIA Tesla V100 SXM2 32 GB AI Performance (Tensor Performance / Matrix Performance):

FP64: not supported
FP32: not supported
FP16: 125.34 TFLOPS
FP8: not supported
TF32: not supported
BF16: not supported
INT16: not supported
INT8: not supported
INT4: not supported
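The tensor figure is consistent with 640 Tensor Cores each completing one 4x4x4 FP16 matrix multiply-accumulate (64 FMAs, i.e. 128 FLOPs) per clock at the 1530 MHz boost clock. A quick Python check of the arithmetic:

# Peak Tensor Core (FP16 matrix) throughput for the Tesla V100 SXM2 32 GB.
TENSOR_CORES = 640
BOOST_CLOCK_GHZ = 1.530
FMAS_PER_CORE_PER_CLOCK = 64   # one 4x4x4 FP16 matrix multiply-accumulate

flops_per_clock = TENSOR_CORES * FMAS_PER_CORE_PER_CLOCK * 2   # FMA = 2 FLOPs
print(f"Tensor FP16: {flops_per_clock * BOOST_CLOCK_GHZ / 1e3:.2f} TFLOPS")   # ~125.34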

Hardware Specs
The NVIDIA Tesla V100 SXM2 32 GB is a 12 nm chip with 21.1 billion transistors, launched by NVIDIA in 2018. It carries 32 GB of built-in (on-board) memory with bandwidth of up to 898 GB/s, and provides 5120 general-purpose ALUs (CUDA cores / shader cores) and 640 matrix cores (Tensor Cores).
Process Node
12 nm
Launch Year
2018

Vector(CUDA) Cores
5120
Matrix(Tensor) Cores
640
Core Frequency
1290 ~ 1530 MHz
Cache
6 MB
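These core counts divide evenly across the GV100 die's streaming multiprocessors; the 80-SM figure in the sketch below is an assumption taken from NVIDIA's published Volta configuration and is not listed on this page.

# Per-SM breakdown of the V100's published core counts.
# The 80-SM count is assumed from NVIDIA's GV100 documentation.
SM_COUNT = 80
CUDA_CORES = 5120
TENSOR_CORES = 640

print(f"{CUDA_CORES // SM_COUNT} CUDA cores per SM")      # 64
print(f"{TENSOR_CORES // SM_COUNT} Tensor Cores per SM")  # 8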
