As the world experiences a generational shift to artificial intelligence (AI), everyone is participating in a new era of global expansion enabled by silicon. It is the “Siliconomy,” where systems powered by AI are imbued with autonomy and agency, assisting us across both knowledge-based and physical tasks as part of our everyday environments.
At Intel Innovation, the company unveiled technologies to bring AI everywhere and to make it more accessible across all workloads—from client and edge to network and cloud. These include easy access to AI solutions in the cloud, better price performance for Intel data centre AI accelerators than the competition offers, tens of millions of new AI-enabled Intel PCs shipping in 2024, and tools for securely powering AI deployments at the edge.
AI requires a broad range of solutions developed with openness and security in mind to speed innovation. Intel’s portfolio of AI-enabling hardware and software—from CPUs, GPUs and accelerators to the oneAPI programming model, OpenVINO developer toolkit and libraries that empower the AI ecosystem—provides competitive, high-performance, open-standards solutions for customers to quickly deploy AI at scale.
More: Intel Innovation 2023
Intel Developer Cloud Reaches General Availability
Intel announced the general availability of the Intel® Developer Cloud, which gives developers an easy path to test and deploy AI and high-performance computing applications and solutions across the latest Intel CPUs, GPUs and AI accelerators. Developers can also take advantage of cutting-edge tools to enable advanced AI and performance. The details:
- The Intel Developer Cloud is built on a foundation of advanced central processing units (CPUs), graphics processing units (GPUs) and Intel® Gaudi®2 processors purpose-built for deep learning, along with open software and tools. The cloud development environment also provides access to the latest Intel hardware platforms, such as 5th Gen Intel® Xeon® Scalable processors (code-named Emerald Rapids), which will become available in the Intel Developer Cloud in the next few weeks ahead of their Dec. 14 launch, and the Intel® Data Center GPU Max Series 1100 and 1550.
- Developers can use the Intel Developer Cloud to build, test and optimise AI and high-performance computing applications. They can also run small- to large-scale AI training, model optimisation and inference workloads with performance and efficiency. Built on an open software foundation with oneAPI, the open multiarchitecture, multivendor programming model, the Intel Developer Cloud gives developers hardware choice and freedom from proprietary programming models, supporting accelerated computing, code reuse and portability.
More: Intel Developer Cloud
Customer and Performance Momentum in the Data Centre
Intel announced AI performance updates and industry momentum for its data centre and AI product portfolio, including Intel Gaudi2 and 3, 4th Gen Intel® Xeon®, 5th Gen Intel Xeon, and future-generation Xeon processors code-named Sierra Forest and Granite Rapids. The details:
- Intel announced a large AI supercomputer will be built entirely on Intel Xeon processors and 4,000 Intel Gaudi2 AI hardware accelerators, with Stability AI as the anchor customer.
- Dell Technologies and Intel are collaborating to offer AI solutions to meet customers wherever they are on their AI journey. PowerEdge systems with Xeon and Gaudi will support AI workloads ranging from large-scale training to base-level inferencing.
- Alibaba Cloud has reported 4th Gen Xeon as a viable solution for real-time large language model (LLM) inference in its model-serving platform DashScope, with 4th Gen Xeon achieving a 3x acceleration in response time thanks to its built-in Intel® Advanced Matrix Extensions (Intel® AMX) accelerators and other software optimisations.
- Granite Rapids will include industry-leading Performance-cores (P-cores), offering better AI performance than any other CPU, and a 2x to 3x boost over 4th Gen Xeon for AI workloads.
More: Intel Unveils Future-Generation Xeon with Robust Performance and Efficiency Architectures
New AI Experiences Powered by Intel Core Ultra Processors
Intel will usher in the age of the AI PC with the upcoming Intel Core Ultra processors, code-named Meteor Lake, featuring Intel’s first integrated neural processing unit, or NPU, for power-efficient AI acceleration and local inference on the PC. Intel confirmed Core Ultra will launch Dec. 14. The details:
- Core Ultra delivers low-latency AI compute that is connectivity-independent with stronger data privacy.
- Core Ultra integrates an NPU into client silicon for the first time. The NPU is built for low-power, high-quality AI processing and enables entirely new PC experiences. It is ideal for workloads migrating from the CPU that need higher quality or efficiency, or for workloads that would typically run in the cloud for lack of efficient client compute.
- Core Ultra represents an inflection point in Intel’s client processor roadmap: It is the first client chiplet design enabled by Foveros packaging technology. In addition to the NPU and major advances in power-efficient performance thanks to Intel 4 process technology, the new processor brings discrete-level graphics performance with onboard Intel® Arc™ graphics.
- Core Ultra’s disaggregated architecture delivers a balance of performance and power across AI-driven tasks:
  - The GPU delivers parallelism and throughput, ideal for AI infused in media, 3D applications and the render pipeline.
  - The NPU is a dedicated low-power AI engine for sustained AI and AI offload.
  - The CPU offers fast response, ideal for lightweight, single-inference, low-latency AI tasks.
- Intel highlighted a collaboration with Acer to bring AI to its upcoming Core Ultra systems, showcasing how the new “Acer Parallax” software feature uses the NPU to add a 3D look and feel to user images.
Powering AI at the Edge
The opportunity for the edge computing community is immense, fuelled by the demand for automating systems and analysing data through AI. OpenVINO is Intel’s AI inferencing and deployment runtime of choice for developers on client and edge platforms. With the OpenVINO developer toolkit, Intel is making AI at the edge even more accessible. Developer downloads of the OpenVINO toolkit have grown 90% year over year. The details:
- OpenVINO 2023.1, powered by oneAPI, makes generative AI more accessible for real-world scenarios, enabling developers to write once and deploy across a broad range of devices and AI applications.
- The newest release—available for download on ai—brings Intel closer to the vision of any model on any hardware anywhere.
- OpenVINO 2023.1 enables developers to optimise standard PyTorch, TensorFlow or ONNX models and offers full support for the forthcoming Core Ultra processors. It also provides more model compression techniques, improved GPU support, reduced memory consumption for dynamic shapes, and greater portability and performance across the entire compute continuum: cloud, client and edge.
- During the Innovation Day 1 keynote, Intel demonstrated Fit:match, an AI solution that improves today’s retail fitting-room experience. Fit:match’s 3D Concierge experience uses Intel® RealSense™ Depth Cameras with lidar sensors, Intel Core processors and OpenVINO. With a focus on security and privacy, the solution can scan and match thousands of products to ensure an optimal fit for the customer, which increases purchasing conversions and reduces return rates.