Tech industry races to keep data centres up to speed with explosion in cloud-connected IoT devices.
Chip giant Intel has launched a new open chip-to-chip interconnect, Compute Express Link (CXL), with industry partners that include Facebook, Google and Microsoft.
The first processors supporting the technology, which are due to come on stream in 2021, will usher in a new era of chip-to-cloud architecture.
‘CXL creates a high-speed, low-latency interconnect between the CPU and workload accelerators, such as GPUs, FPGAs and networking’
– NAVIN SHENOY
The aim is to increase interconnection between data centre central processing units (CPUs) and accelerator chips, and avoid bottlenecks.
The purpose of the consortium – which includes Intel, Microsoft, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise and Huawei – is to deliver breakthrough data centre performance at a time when the number of connected devices is exploding through the internet of things (IoT) revolution.
“Intel developed the technology behind CXL and donated it to the consortium to become the initial release of the new specification,” explained Navin Shenoy, executive vice-president and general manager of the Data Center Group at Intel. “I am proud of the work Intel has done in developing this interconnect technology and the milestone it represents to the technology industry – much like our roles with Universal Serial Bus (USB) and PCI Express – and we look forward to working with the CXL consortium on future versions of the specification.”
Shenoy explained that the explosion of data and rapid innovation in specialised workloads – such as compression, encryption and artificial intelligence (AI) – have given rise to heterogeneous computing, where purpose-built accelerators work side by side with general-purpose CPUs.
“These accelerators need a high-performance connection to the processor and, ideally, they share a common memory space to reduce overhead and latency. CXL is a key technology that enables memory coherence between the accelerator and CPU, with very high bandwidth, and does so using well-understood infrastructure based on PCI Express Gen 5.
“More specifically, CXL creates a high-speed, low-latency interconnect between the CPU and workload accelerators, such as GPUs, FPGAs and networking. CXL maintains memory coherency between the devices, allowing resource-sharing for higher performance, reduced software stack complexity and lower overall system cost.”
RISC-V reward in the cloud architecture stakes
Interestingly, the move by Intel to establish the CXL consortium coincides with the creation of a rival alliance established by proponents of RISC-V, an open source instruction set architecture based on reduced instruction set computer (RISC) principles.
The proponents of RISC-V have created the CHIPS (Common Hardware for Interfaces, Processors and Systems) Alliance, which is a project of The Linux Foundation to develop a broad set of open source tools for emerging cloud and electronics architecture.
Initial members of the CHIPS Alliance include Esperanto, Google, SiFive and Western Digital.
The alliance aims to create open source blocks for embedded cores and systems on chip (SOCs) that will feature in future cloud-connected devices and in the data centre.
The emergence of the two consortia shows the scale of the challenge ahead: the physics of connecting vast numbers of machines and devices so they work in harmony and in real time, from anywhere, right through to the data centre.
And we haven’t even started to scratch the surface on quantum computing yet.