After reading Manoj Sukumaran's captivating report on the open standard Compute Express Link (CXL), Aaron Lewis, Analyst in Omdia’s Cloud and Data Center Research Practice, shares his summary of why CXL could have a bright future in computing.
A new server architecture
Since John von Neumann introduced the reference model for computer architecture in 1945, the basic design has remained unchanged. This design has a significant flaw, the "von Neumann bottleneck," in which compute speed is limited by the rate at which the CPU can retrieve instructions and data from memory. Intel and others developed Compute Express Link (CXL), an open standard, to solve this problem by revolutionizing how memory is accessed and shared between multiple computing nodes. This protocol could help alleviate bottlenecks caused by limited system-level memory capacity. Furthermore, it can reduce hardware costs. By 2024, co-processors will be available with CXL support.
Overview of CXL protocols
There are three categories of CXL devices: Type 1, Type 2, and Type 3. Type 1 devices do not contain local memory but can coherently access the host processor's memory using the CXL protocols. Type 2 devices have local memory and can also access the host processor's memory, creating a coherent memory pool between the device and the host processor. Type 3 devices are "memory expander devices": hardware solutions for boosting the available RAM. Because Type 3 devices do not have an onboard processor or accelerator caching local memory, the device's memory is mapped to and managed by the host processor.
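To make the distinction between the three categories concrete, the short Python sketch below models each device type by the two properties that separate them in the description above: whether the device has an onboard accelerator with a cache, and whether it carries its own local memory. The class, field, and example names are illustrative only and are not taken from the CXL specification.

```python
from dataclasses import dataclass


@dataclass
class CXLDevice:
    """Illustrative model of a CXL device; names are not from the CXL spec."""
    name: str
    has_accelerator: bool   # onboard processor/accelerator that caches host memory
    has_local_memory: bool  # device carries its own memory (e.g., DDR, HBM)


def device_type(dev: CXLDevice) -> str:
    """Classify a device into the three CXL device types described above."""
    if dev.has_accelerator and not dev.has_local_memory:
        return "Type 1"  # caches host memory coherently, no local memory
    if dev.has_accelerator and dev.has_local_memory:
        return "Type 2"  # local memory plus coherent access to host memory
    if dev.has_local_memory:
        return "Type 3"  # memory expander: host maps and manages the memory
    return "not a CXL device type"


# Hypothetical examples of each category
print(device_type(CXLDevice("smart NIC", has_accelerator=True, has_local_memory=False)))        # Type 1
print(device_type(CXLDevice("accelerator card", has_accelerator=True, has_local_memory=True)))  # Type 2
print(device_type(CXLDevice("DDR4 memory expander", has_accelerator=False, has_local_memory=True)))  # Type 3
```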
Memory pooling use case
Data center servers are starting to transition from DDR4 to DDR5 memory, with the latest server CPUs supporting only DDR5. Despite this infrastructure shift, there is a considerable price gap between the two, making a wholesale purchase of DDR5 unviable for many cloud service providers. Cloud service providers can overcome this by using a CXL memory expander with a DDR4 controller, which allows DDR4 and DDR5 to coexist in the same server. Many vendors have already introduced CXL memory expanders, and cloud service providers are interested in reusing their existing memory modules and reducing new server costs.
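On Linux, capacity attached through a CXL memory expander typically surfaces as a CPU-less, memory-only NUMA node alongside the directly attached DDR5. The sketch below assumes a Linux host that exposes the standard /sys/devices/system/node interface and simply flags memory-only nodes that are likely to be CXL expanders; it is an illustration of how such a configuration appears to the operating system, not a vendor tool.

```python
import glob
import os

# List NUMA nodes and flag CPU-less (memory-only) nodes, which is how
# CXL-attached memory expanders commonly appear on a Linux host.
for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_path)
    with open(os.path.join(node_path, "cpulist")) as f:
        cpulist = f.read().strip()
    with open(os.path.join(node_path, "meminfo")) as f:
        # First line looks like: "Node 0 MemTotal:  131072000 kB"
        mem_total_kb = int(f.readline().split()[3])
    kind = "memory-only (possible CXL expander)" if not cpulist else "CPU + memory"
    print(f"{node}: cpus=[{cpulist or 'none'}] mem={mem_total_kb // 1024} MiB -> {kind}")
```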
An advantage of CXL is that it can connect nearly any memory type, e.g., DDR, LPDDR, persistent memory, and NAND flash. It also keeps that memory byte-addressable (accessed at the individual byte level) within the same address space as host memory and allows transparent memory allocation using standard APIs. As a result, cloud service providers can reduce the memory cost in servers while meeting capacity and bandwidth requirements.
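Because the expander's capacity is presented to the operating system like any other system memory, an application can place data on it with unmodified, standard allocation interfaces. The sketch below uses the standard Linux libnuma library via ctypes and assumes, purely for illustration, that the CXL memory shows up as NUMA node 1 (the node number will vary from system to system); it is a minimal sketch under those assumptions, not a production allocation pattern.

```python
import ctypes
import ctypes.util

# Locate and load libnuma, the standard NUMA policy library on Linux.
libname = ctypes.util.find_library("numa")
if libname is None:
    raise SystemExit("libnuma not found; install the numactl/libnuma package")
libnuma = ctypes.CDLL(libname, use_errno=True)
libnuma.numa_alloc_onnode.restype = ctypes.c_void_p
libnuma.numa_alloc_onnode.argtypes = [ctypes.c_size_t, ctypes.c_int]
libnuma.numa_free.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

CXL_NODE = 1             # assumption: the CXL expander appears as NUMA node 1
SIZE = 64 * 1024 * 1024  # 64 MiB

if libnuma.numa_available() < 0:
    raise SystemExit("NUMA support is not available on this system")

# Allocate a buffer backed by the CXL-attached node; from the application's
# point of view it is ordinary, byte-addressable memory.
buf = libnuma.numa_alloc_onnode(SIZE, CXL_NODE)
if not buf:
    raise SystemExit("allocation on the CXL node failed")

ctypes.memset(buf, 0, SIZE)  # touch the memory like any other buffer
libnuma.numa_free(buf, SIZE)
```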
A new server real estate equation
A significant share of motherboard area is used for memory. With CXL memory disaggregation, memory resources can take on the physical form factor of storage drives or PCIe cards. This could make server designs more compute-dense, limited primarily by thermal factors rather than by a lack of motherboard real estate.
Bottom line
CXL has created a dynamic ecosystem of adopters and vendors offering solutions that address specific use cases. This technology could significantly influence future server architectures, and it is going to be an exciting time in the server market.
Click here to access the Compute Express Link (CXL): The Road Ahead report, available to subscribers of Omdia’s Data Center Compute Intelligence Service.