
Updates on Intel's Next-Generation Data Center Platform, Sapphire Rapids

Rich platform enhancements and built-in acceleration engines will address the most demanding workloads from the cloud to the edge.

Lisa Spelman
Corporate Vice President, General Manager of the Xeon and Memory Group


It’s been a few months since we launched our 3rd Gen Intel® Xeon® Scalable processors, and I thought it was a good time to share an update on our next-generation Xeon Scalable processor, code-named “Sapphire Rapids,” which we are hard at work on. We’ve talked at a high level about what’s coming in this platform, and today I want to highlight two of the breakthrough technologies featured in Sapphire Rapids and provide an update on timing.

Sapphire Rapids will feature a new microarchitecture designed to address the dynamic and increasingly demanding workloads in future data centers across compute, networking and storage. The processor is supported by a rich set of platform enhancements that will drive the adoption of industry-changing technologies like DDR5 memory and PCIe 5.0.

Sapphire Rapids is built to be Intel’s highest-performance data center processor, delivering low latency and high memory bandwidth across workloads. As we announced this week at the International Supercomputing Conference, versions of Sapphire Rapids will be offered with integrated High Bandwidth Memory (HBM), providing a dramatic performance improvement for memory bandwidth-sensitive applications.

I’m excited to share more information about two new advanced acceleration engines that we’re building into Sapphire Rapids.

I’ve talked extensively about Xeon’s leadership position in artificial intelligence (AI), and as AI workloads continue to grow in importance across all our customers, we are doubling down on that position with Sapphire Rapids through the integration of Intel Advanced Matrix Extensions (AMX). AMX is our next-generation, built-in DL Boost advancement for deep learning performance and features a set of matrix multiplication instructions that will significantly advance DL inference and training. I’ve seen this technology firsthand in our labs and I can’t wait to get it into our customers’ hands. I don’t want to give away everything now, but I can tell you that on early silicon we are easily achieving over two times the deep learning inference and training performance compared with our current Xeon Scalable generation.
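To see why matrix multiplication instructions matter so much for deep learning, it helps to remember that the forward pass of a dense neural-network layer is, at its core, a matrix multiply. The sketch below is purely illustrative pure Python (not Intel code or the AMX instruction set); the `matmul` and `dense_forward` helpers are hypothetical names for this example:

```python
# Illustrative only: a dense layer's forward pass reduces to a matrix
# multiplication -- the operation that tile-based matrix instructions
# like AMX are designed to accelerate in hardware.

def matmul(a, b):
    """Naive matrix multiply: a is m x k, b is k x n."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def dense_forward(x, weights, bias):
    """Dense layer forward pass: y = x @ W + b."""
    y = matmul(x, weights)
    return [[y[i][j] + bias[j] for j in range(len(bias))]
            for i in range(len(y))]

# A batch of 2 inputs with 3 features, projected to 2 outputs.
x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
w = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [0.5, -0.5]
print(dense_forward(x, w, b))  # -> [[4.5, 4.5], [10.5, 10.5]]
```

In a real workload this inner multiply dominates inference and training time, which is why executing it with dedicated matrix instructions rather than scalar loops pays off.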

The other major acceleration engine we’re building into Sapphire Rapids is the Intel® Data Streaming Accelerator (Intel® DSA). It was developed hands-on with our partners and customers who are constantly looking for ways to free up processor cores to achieve higher overall performance. DSA is a high-performance engine targeted for optimizing streaming data movement and transformation operations common in high-performance storage, networking and data processing-intensive applications. We’re actively working to build an ecosystem around this new feature so customers can easily take advantage of its value.
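The core idea of offloading data movement can be sketched in a few lines. The following is a hypothetical analogy in pure Python, not the Intel DSA programming interface: a background thread stands in for the copy engine, leaving the main thread (the "core") free to compute while the bulk copy proceeds.

```python
# Illustrative only: mimics the pattern of handing bulk data movement to
# a separate engine so the compute core stays free. The thread here is a
# stand-in for a hardware copy engine; this is NOT the Intel DSA API.
import threading

def offload_copy(src, dst):
    """Start a background 'copy engine' and return a handle to wait on."""
    def copy():
        dst[:] = src  # bulk data movement happens off the main flow
    t = threading.Thread(target=copy)
    t.start()
    return t

src = list(range(1_000_000))
dst = [0] * len(src)
handle = offload_copy(src, dst)

# The "core" (main thread) does useful compute while the copy runs.
busy_work = sum(i * i for i in range(10_000))

handle.join()  # wait for the copy engine to finish before using dst
assert dst == src
```

The pattern is the point: submit the transfer, keep computing, then synchronize only when the moved data is actually needed.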

Demand for Sapphire Rapids continues to grow as customers learn more about the benefits of the platform. Given the breadth of enhancements in Sapphire Rapids, we are incorporating additional validation time prior to the production release, which will streamline the deployment process for our customers and partners. Based on this, we now expect Sapphire Rapids to be in production in the first quarter of 2022, with ramp beginning in the second quarter of 2022.

Sapphire Rapids is an exciting release for us and for the industry. The advancements in performance, workload acceleration, memory bandwidth and infrastructure management will lead the industry’s transition to cloud-based architectures and help shape the data centers of the future. We look forward to sharing more technical information on our next-gen data center platform in the coming months, including during Hot Chips 2021 in August and Intel Innovation in October.

Lisa Spelman is corporate vice president and general manager of the Xeon and Memory Group at Intel Corporation.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, visit e

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.