Since 2003, the PCIe connection has delivered enhanced computing performance and throughput with a smaller footprint on motherboards. A foundation for innovation, the PCIe serial bus has helped usher in advances across computing applications, from higher-capacity drives for in-memory processing to larger scratch disks and GPUs for 3D video and graphics processing.
PCIe (Peripheral Component Interconnect Express) is an interface standard for connecting high-speed components. Every desktop PC motherboard has a number of PCIe slots you can use to add GPUs (aka video cards or graphics cards), RAID cards, Wi-Fi cards or SSD (solid-state drive) add-in cards.
PCIe-driven hardware continues to push the future of computing. Every new generation of PCIe connectivity increases both transfer speeds and the number of available lanes for simultaneous data delivery, allowing larger volumes of data to be transferred and put to use in short order.
That's why both flash storage and GPU manufacturers are excited to see the next generation of PCI Express (PCIe 4.0) go live in data centers around the world.
This eWEEK Data Points article, using industry information from Kingston Technology Sr. Technology Manager Cameron Crandall, offers some key insights about the value of the PCIe 4.0 interface.
Data Point No. 1: Throughput Is the Name of the Game
Given the choice between a 2020 Ferrari engine and that of a 2010 Corvette, most people would choose the one that provides the greater speed and efficiency. The same could be said for choosing computer hardware with the best and most efficient transfer speeds.
For GPU makers such as NVIDIA, faster transfer speeds mean larger volumes of data can be processed for application use. For example, the new NVIDIA A100 GPU supports PCIe 4.0, which runs at 16 GT/s per lane and delivers roughly 32 GB/s in each direction across an x16 connection. This doubles the available bandwidth of PCIe 3.0, and a server with eight A100 GPUs, like the NVIDIA DGX A100, can deliver 5 petaflops of AI performance.
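As a quick sanity check on those figures, a few lines of Python can derive the theoretical link bandwidth from the per-lane transfer rate and the 128b/130b encoding both generations use. The helper below is purely illustrative:

```python
# Back-of-the-envelope PCIe link bandwidth: lanes x transfer rate x encoding
# efficiency. PCIe 3.0 and 4.0 both use 128b/130b encoding; PCIe 4.0 doubles
# the per-lane rate from 8 GT/s to 16 GT/s.

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    rate_gt = {3: 8.0, 4: 16.0}[gen]       # transfers per second per lane
    encoding = 128 / 130                   # 128b/130b line-code efficiency
    return rate_gt * encoding / 8 * lanes  # 8 bits per byte

for gen in (3, 4):
    print(f"PCIe {gen}.0 x16: {pcie_bandwidth_gbps(gen, 16):.1f} GB/s per direction")
# PCIe 3.0 x16: 15.8 GB/s per direction
# PCIe 4.0 x16: 31.5 GB/s per direction
```

Double those per-direction numbers for the full-duplex total, which is where NVIDIA's 64 GB/s figure for a Gen4 x16 link comes from.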
Data Point No. 2: Opening the Door to a New Wave of Supercomputing
These transfer and processing speeds have given birth to a new wave of supercomputing and AI applications, in which multiple GPUs can be clustered into a shared pool of computing resources. This is exciting because many trends in supercomputing drive adoption of new technologies in both consumer and data center hardware.
NVMe SSD manufacturers such as Kingston also benefit from PCIe 4.0 transfer speeds. From multiple read-and-write scenarios to redundancy practices, edge computing and latency minimization, data centers that use NVMe SSDs rather than SATA- or SAS-attached drives can realize the performance benefits immediately.
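To put those interconnects side by side, here is a rough comparison in Python. The figures are theoretical line rates after encoding overhead, not measured drive performance:

```python
# Peak theoretical one-direction throughput of common storage interconnects.
# SATA III and SAS-3 use 8b/10b encoding; PCIe 3.0/4.0 use 128b/130b.

interfaces = {
    "SATA III (6 Gb/s)": 6.0 * (8 / 10) / 8,
    "SAS-3 (12 Gb/s)":   12.0 * (8 / 10) / 8,
    "NVMe, PCIe 3.0 x4": 4 * 8.0 * (128 / 130) / 8,
    "NVMe, PCIe 4.0 x4": 4 * 16.0 * (128 / 130) / 8,
}

for name, gbytes in interfaces.items():
    print(f"{name:20s} ~{gbytes:.2f} GB/s")
# SATA III (6 Gb/s)    ~0.60 GB/s
# SAS-3 (12 Gb/s)      ~1.20 GB/s
# NVMe, PCIe 3.0 x4    ~3.94 GB/s
# NVMe, PCIe 4.0 x4    ~7.88 GB/s
```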
PCIe 4.0 also bolsters many of the tunable elements of the NVMe architecture, including multiple namespaces, multi-streaming and dual-port operation. While hyperscalers can tune for high-availability architectures and redundancy practices using single-port PCIe, flash storage array manufacturers and peering technologies can now also leverage dual-port capabilities.
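As an illustration of multi-namespace provisioning, the sketch below splits a hypothetical 3.84TB drive into equal namespaces and prints the corresponding nvme-cli commands. The device path, controller ID and flag values are assumptions for illustration; real drives report their own geometry and controller IDs:

```python
# Illustrative sketch: carve one NVMe controller's capacity into equal
# namespaces. The printed nvme-cli commands (create-ns / attach-ns) are
# assumptions for illustration; real namespace sizes, IDs and the
# controller ID come from the drive itself.

def namespace_plan(total_bytes: int, count: int, block_size: int = 4096):
    """Yield (create, attach) command strings for `count` equal namespaces."""
    blocks_per_ns = total_bytes // count // block_size
    for ns_id in range(1, count + 1):
        yield (
            f"nvme create-ns /dev/nvme0 --nsze={blocks_per_ns} "
            f"--ncap={blocks_per_ns} --flbas=0",
            f"nvme attach-ns /dev/nvme0 --namespace-id={ns_id} --controllers=0",
        )

# Split a hypothetical 3.84 TB drive into four namespaces:
for create_cmd, attach_cmd in namespace_plan(3_840_000_000_000, 4):
    print(create_cmd)
    print(attach_cmd)
```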
Data Point No. 3: How PCIe 4.0 Impacts Data Centers
Data centers that have already migrated to PCIe 3.0 NVMe drives can continue to use them as they upgrade motherboards to the backward-compatible PCIe 4.0 standard. This means a data center that invested in NVMe SSDs during recent hardware refreshes can extend the life of its newest storage additions, although those drives will continue to run at PCIe 3.0 speeds and won't gain additional performance on the new platform.
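Because a PCIe link trains to the highest generation both ends support, a Gen3 drive in a Gen4 slot simply runs at Gen3 speed. A minimal model of that negotiation, reusing the bandwidth math from above (illustrative only):

```python
# A PCIe link negotiates down to the lower generation of the two endpoints,
# so a Gen3 NVMe drive keeps working in a Gen4 slot, but at Gen3 speed.

def negotiated_gbps(slot_gen: int, device_gen: int, lanes: int) -> float:
    gen = min(slot_gen, device_gen)  # link trains to the lower generation
    rate_gt = {3: 8.0, 4: 16.0}[gen]
    return rate_gt * (128 / 130) / 8 * lanes

print(negotiated_gbps(slot_gen=4, device_gen=3, lanes=4))  # ~3.94 GB/s: Gen3 drive, Gen4 slot
print(negotiated_gbps(slot_gen=4, device_gen=4, lanes=4))  # ~7.88 GB/s: Gen4 end to end
```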
Data Point No. 4: Why PCIe 4.0 Might Change the Shape of Storage
While the performance benefits of PCIe 4.0 improve the processing and availability of data, the standard also has the potential to help server manufacturers define the physical shape of the next generation of servers. A CPU exposes a fixed number of PCIe lanes for bandwidth, and storage typically consumes four of those lanes per NVMe SSD. The platforms arriving with PCIe 4.0 increase the number of available lanes, allowing manufacturers to create higher-density servers.
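A rough lane budget shows why the extra lanes matter for density. The numbers below are assumptions for illustration (AMD's EPYC Rome, for instance, exposes 128 PCIe 4.0 lanes per socket):

```python
# Rough lane budget for a hypothetical NVMe-dense server. The reservation
# for other devices is an assumption chosen for illustration.

cpu_lanes = 128          # assumed total lanes exposed by the CPU
reserved = 16 + 8        # e.g. one x16 NIC/GPU slot plus x8 for boot/misc
lanes_per_ssd = 4        # typical U.2/E1.S NVMe drive

max_drives = (cpu_lanes - reserved) // lanes_per_ssd
print(f"Max x4 NVMe drives on remaining lanes: {max_drives}")  # 26
```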
With increased density, power management and cooling become deciding factors, which in turn may limit the standard U.2 form factor. Before packing a 1U server with 32 serviceable NVMe drives, provisions will need to be made for thermal limits and heat sinks. For that reason, E1.S drives in both long and short ruler sizes are gaining traction, segmented by server size.
Data Point No. 5: Are We There Yet?
The simple answer is not quite, but we're close. While PCIe 4.0 was first announced almost 10 years ago (the final specification arrived in 2017), implementation and support haven't been ubiquitous. The biggest factor determining form factors, hardware support and other innovations will be CPU support.
Currently, AMD's EPYC Rome CPU is the only processor supporting enterprise-grade builds with PCIe 4.0; Intel has yet to release a CPU that supports it. However, other Intel business units have already made the transition, signaling that the anticipated Ice Lake and Tiger Lake architectures will support PCIe 4.0.
If you have a suggestion for an eWEEK Data Points article, email cpreimesberger@eweek.com.