
Focal Points of PCIe 5.0 Interface for Data Center Performance


A considerable number of advancements are molding the development of data centers. AI and ML applications are at the heart of our day-to-day exposure to technology, even if only implicitly. According to Tractica, the Artificial Intelligence (AI) and Machine Learning (ML) market will grow to $118.6 billion by 2025, marking a bright future for industries working in this field.


Image source: westerndigital.com

  • Consumer applications such as Google’s news feed, Alexa, and Siri are all possible because of AI’s inclusion in technology.
  • ML is heavily used in statistics, making it the backbone of various prediction and analysis tasks such as stock prediction, predicting a tournament winner, customized ads, etc.
  • Online streaming content, which has become part of our daily lives, depends entirely on cloud storage.
  • Above all, the work-from-home (WFH) scenario, which became the need of the hour during the pandemic, would not have been possible without cloud computing.

Workload Evolution

The rapid adoption of Artificial Intelligence/Machine Learning (AI/ML) applications and the shift to cloud-based workloads have fundamentally increased network traffic in recent years.

  • The growth of high-bandwidth, media-centric devices is placing unprecedented demands on data center throughput. Links need to get faster, and the basic building blocks need to handle more data. Simply adding more lower-bandwidth links does not work, because the physical deployment becomes unreliable and costly.
  • In addition to overall system speed, these building blocks need features and capabilities that address special data center needs. One such requirement is an increase in storage.
  • High-quality audio and video occupy a great deal of space. Cloud computing is quickly evolving to make information accessible to users irrespective of their location, which means that content needs to be stored in the cloud.

Here is a quick walkthrough of the applications.

AI & ML applications

With the deployment of next-generation 5G cellular networks worldwide, machine learning and artificial intelligence have begun to gain ubiquity. Because AI workloads, including machine learning and deep learning, generate, move, and process massive amounts of data at real-time speeds, they require a new generation of computing architectures. Undoubtedly, AI applications across different verticals demand massive memory bandwidth to support the processing of enormous data sets.


Image source: hitachivantara.com

Additionally, AI applications require immediate, fast access to memory, unlike conventional multi-level caching architectures. Beyond this, parallel computing, low-precision computing, and empirical analysis are additional characteristics and prerequisites of AI-specific applications. AI/ML workloads are heavily compute-intensive, and they are shifting system design from conventional CPU-based computing toward more heterogeneous processing.

Cloud computing & networking

Going above and beyond AI/ML applications, the conventional data center paradigm is evolving because of the ongoing move to cloud computing. Being remotely hosted, ubiquitous, and commodified, enterprise workloads are moving to the cloud: roughly 45% were cloud-based in 2017, over 60% in 2019, and the percentage keeps increasing.

Accordingly, data centers are adopting hyperscale computing and networking to meet the needs of cloud-based workloads. Because economies of scale are driven by increasing the bandwidth per physical unit of space, this new cloud-based model (alongside AI/ML applications) is accelerating the adoption of higher-speed networking protocols, which roughly double in speed with each generation: 100GbE → 200GbE → 400GbE → 800GbE.


Moreover, virtual and augmented reality (VR/AR), autonomous cars, and high-definition (HD) streaming continue to push the need for ever-faster data in abundance. Consumers no longer tolerate even the smallest delay, especially in VR/AR and autonomous driving. The result is that new technologies are moving forward quickly as far as data centers are concerned.

Obsolescence of older technology

New workloads, with AI/ML (artificial intelligence/machine learning) and cloud-based applications first and foremost, are shifting the focus of virtualization from one server running many processes to the harnessing of many processors to handle single, massive workloads. Across all of these developments, a consistent pattern is rapidly rising data traffic and ever-greater bandwidth requirements.

Historically, the widespread use of virtualization ensured that server compute capacity adequately met the need for heavy workloads; this was accomplished by dividing or partitioning a single physical server into multiple virtual servers to extend and optimize utilization intelligently. However, this approach can no longer keep up with AI/ML applications and cloud-based workloads, which are quickly outpacing server compute capacity.

Serving enhanced bandwidth demands

For such advanced AI/ML workloads, parallel processing of enormous datasets requires heterogeneous computing, which in turn places a critical demand for bandwidth on the link between CPUs and AI accelerators. A link? You got that right – it is the PCIe link.

For almost everything, there is PCIe. Easily scalable through multi-lane links, PCIe has consistently been backward compatible and is widely supported by all modern operating systems, software, and drivers. PCIe 5.0, with its native support for carrying additional protocols over its low-latency non-return-to-zero (NRZ) physical layer, can help CPUs keep up with the ever-increasing flow of data from edge devices.
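To put that lane-based scalability in perspective, here is a minimal sketch (Python, illustrative only). The per-lane rates are the raw signaling rates from the published PCIe specifications; the encoding-efficiency figures are a rough approximation of usable throughput, not an official calculation:

```python
# Rough sketch: approximate usable PCIe link bandwidth per direction.
# Per-lane raw rates (GT/s) come from the PCIe specifications; the
# encoding overhead (8b/10b for Gen1/2, 128b/130b for Gen3+) is applied
# to estimate effective throughput per direction.

PCIE_GEN_RATES_GTS = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def link_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Approximate one-direction bandwidth of a PCIe link in Gbps."""
    rate = PCIE_GEN_RATES_GTS[gen]
    # Gen1/2 use 8b/10b encoding (80% efficient); Gen3+ use 128b/130b (~98.5%).
    efficiency = 0.8 if gen <= 2 else 128 / 130
    return rate * lanes * efficiency

if __name__ == "__main__":
    for gen in PCIE_GEN_RATES_GTS:
        print(f"Gen{gen} x16: ~{link_bandwidth_gbps(gen):.0f} Gbps per direction")
```

Doubling the per-lane rate each generation, while keeping the same lane counts and connectors, is what lets PCIe scale bandwidth without breaking backward compatibility.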


Parallel Processing 

PCI Express 5.0 (PCIe 5.0), the latest generation of the PCIe standard, offers an aggregate link bandwidth of 1024 Gbps in an x16 configuration. It addresses these demands without ‘boiling the ocean‘ because it is built on the proven PCIe framework, and it enables the continued advancement of the high-speed computing and networking performance needed in the next generation of the data center. With PCIe 5.0 interface solutions, designers can rely on a robust, high-performance platform to implement their new PCIe Gen 5 ASICs.

Critically, its bandwidth provides the speed necessary for connecting the network interfaces of servers and switches. On the critical interface between CPUs and AI accelerators, PCIe 5.0 represents a doubling over PCIe 4.0: 32 GT/s vs. 16 GT/s per lane, with an aggregate x16 link bandwidth of 1024 Gbps.

PCIe 5.0 doubles the per-lane data rate to 32 GT/s, raising the full-duplex bandwidth of an x16 interface to 1024 Gbps – sufficient for 400 GbE links. A 400 GbE link operating at full duplex requires 800 Gbps of bandwidth, which an x16 PCIe 5.0 link can support within its performance envelope.
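As a quick sanity check on those numbers, the following back-of-the-envelope sketch uses the raw signaling figures quoted above (real links also lose a small amount to encoding and protocol overhead) to confirm that an x16 PCIe 5.0 link has the aggregate headroom for a full-duplex 400 GbE connection:

```python
# Back-of-the-envelope check: can an x16 PCIe 5.0 link carry full-duplex 400 GbE?
lane_rate_gtps = 32          # PCIe 5.0 raw signaling rate per lane (GT/s)
lanes = 16                   # x16 configuration

per_direction_gbps = lane_rate_gtps * lanes   # 512 Gbps each way (raw)
aggregate_gbps = 2 * per_direction_gbps       # 1024 Gbps full-duplex

ethernet_400g_full_duplex_gbps = 2 * 400      # 800 Gbps (transmit + receive)

print(f"x16 PCIe 5.0 aggregate: {aggregate_gbps} Gbps")
print(f"400 GbE full-duplex:    {ethernet_400g_full_duplex_gbps} Gbps")
print("Fits within the PCIe 5.0 envelope:",
      ethernet_400g_full_duplex_gbps <= aggregate_gbps)
```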

Getting Future Ready

Yet the demand for bandwidth is unquenchable, and the recently announced 800 GbE will require another speed upgrade. The PCI-SIG is committed to a two-year cadence of new generations to advance the standard’s performance in support of that need. PCI-SIG has also developed a cabled technology called OCuLink to interconnect PCIe devices, enabling new out-of-the-box compute and storage use cases. Innovations such as these assure PCIe’s relevance in computing and storage infrastructure for many years to come.
