During a press conference at the 2020 Consumer Electronics Show, Intel gave a small update on its ongoing AI and machine learning hardware acceleration efforts. Details were a bit hard to come by at press time, but platforms group executive vice president Navin Shenoy previewed the performance improvement that’ll arrive with the chipmaker’s third-generation Xeon Scalable processor family, code-named Cooper Lake.
Cooper Lake, which will be available in the first half of 2020, will deliver up to a 60% increase in both AI inference and training performance. That's on top of the 30-fold improvement in deep learning inference performance Intel achieved between 2017 and 2019; 2017 was the year the company released its first processor with AVX-512, a set of 512-bit extensions to the 256-bit Advanced Vector Extensions (AVX) SIMD instructions.
Delivering this in part is DL Boost, which encompasses a range of x86 technologies designed to accelerate AI vision, speech, language, generative, and recommendation workloads. Starting with Cooper Lake products, it'll support bfloat16 (Brain Floating Point), a number format originally developed by Google and implemented in the third generation of its custom-designed Tensor Processing Unit AI accelerator chip.
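To illustrate what bfloat16 trades away, here is a minimal Python sketch (not Intel's or Google's implementation): bfloat16 keeps float32's sign bit and full 8-bit exponent but truncates the 23-bit mantissa to 7 bits, so a simple conversion just drops the low 16 bits of the float32 pattern. The range stays the same as float32 while precision falls to roughly two to three decimal digits, which is why it suits deep learning workloads.

```python
import struct

def float32_to_bfloat16_bits(value: float) -> int:
    """Truncate an IEEE-754 float32 to a 16-bit bfloat16 bit pattern.

    bfloat16 = float32's sign + 8-bit exponent + top 7 mantissa bits,
    so (ignoring rounding modes) conversion is dropping the low 16 bits.
    """
    bits32 = struct.unpack("<I", struct.pack("<f", value))[0]
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (always exact)."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

# Powers of two survive exactly; most other values lose mantissa bits.
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.0)))  # → 1.0
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(0.1)))  # → 0.099609375
```

Because the exponent field is unchanged, converting float32 to bfloat16 never overflows or underflows, unlike a conversion to IEEE half precision (fp16) with its 5-bit exponent.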
By way of refresher, Cooper Lake features up to 56 processor cores per socket, or twice the core count of Intel's second-gen Xeon Scalable chips. The new chips will offer higher memory bandwidth and higher AI inference and training performance at a lower power envelope, as well as platform compatibility with the upcoming 10-nanometer Ice Lake processors.
More data center AI runs on Intel products than on any other platform, the company claims.
The future of Intel is AI. Its books imply as much — the Santa Clara company's AI chip segments notched $3.5 billion in revenue this year, and it expects the market opportunity to grow 30% annually, from $2.5 billion in 2017 to $10 billion by 2022. Putting this into perspective, Intel's own AI chip revenues were up from $1 billion a year in 2017.