At its annual GTC 2022 conference, hardware maker Nvidia announced several chips and technologies designed to accelerate artificial-intelligence workloads.
The company unveiled its next-generation GPU architecture, Hopper, and the H100 chip based on it, aimed at machine-learning tasks.
The chip is built on a 4 nm process and contains 80 billion transistors. Notably, it is the company's first GPU to support the PCIe Gen 5 connectivity interface, and it uses HBM3 memory to deliver 3 TB/s of memory bandwidth.
The company reports that the H100 is three times faster than its A100 predecessor in FP16, FP32, and FP64 calculations, and six times faster in 8-bit floating-point (FP8) calculations.