Hewlett Packard Enterprise has announced its entry into the artificial intelligence (AI) cloud market through the expansion of its HPE GreenLake portfolio. With this, HPE will offer large language models (LLMs) for any enterprise, from startups to Fortune 500 companies, to access on-demand in a multi-tenant supercomputing cloud service.
With the introduction of HPE GreenLake for Large Language Models (LLMs), enterprises can privately train, tune and deploy large-scale AI using a sustainable supercomputing platform that combines HPE’s AI software and market-leading supercomputers. HPE GreenLake for LLMs will be delivered in partnership with Aleph Alpha, a German AI startup and HPE’s first partner for the service, to provide users with a field-proven, ready-to-use LLM for use cases requiring text and image processing and analysis.
The Start of Something Big
HPE GreenLake for LLMs is the first in a series of industry- and domain-specific AI applications that HPE plans to launch. These applications will include support for climate modelling, healthcare and life sciences, financial services, manufacturing and transportation. HPE also announced a new series of compute solutions optimised for AI inference workloads at the edge and in the data centre.
“We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,” said Antonio Neri, President and CEO at HPE. “HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers. Now, organisations can embrace AI to drive innovation, disrupt markets and achieve breakthroughs with an on-demand cloud service that trains, tunes and deploys models at scale and responsibly.”
HPE is the global leader and expert in supercomputing, which powers unprecedented levels of performance and scale for AI, including breaking the exascale speed barrier with the world’s fastest supercomputer, Frontier.
Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload at full computing capacity. The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. This capability makes training AI and creating more accurate models significantly more effective, reliable and efficient, allowing enterprises to speed the journey from proof of concept (POC) to production and solve problems faster.
Introducing HPE GreenLake for LLMs, the First in a Series of AI Applications
HPE GreenLake for LLMs will include access to Luminous, a pre-trained large language model from Aleph Alpha, which is offered in multiple languages, including English, French, German, Italian and Spanish. The LLM allows customers to leverage their own data to train and fine-tune a customised model and gain real-time insights based on their proprietary knowledge.
This service empowers enterprises to build and bring to market a variety of AI applications, integrate them into their workflows, and unlock business- and research-driven value.
“By using HPE’s supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals and law firms to use as a digital assistant to speed up decision-making and save time and resources,” said Jonas Andrulis, Founder and CEO of Aleph Alpha. “We are proud to be a launch partner on HPE GreenLake for Large Language Models, and we look forward to expanding our collaboration with HPE to extend Luminous to the cloud and offer it as a service to our end customers to fuel new applications for business and research initiatives.”
Providing Supercomputing Scale for AI Training, Tuning and Deployment
HPE GreenLake for LLMs will be available on-demand, running on the world’s most powerful, sustainable supercomputers, HPE Cray XD supercomputers. This removes the need for customers to purchase and manage a supercomputer of their own, which is typically costly, complex and requires specific expertise. The offering leverages the HPE Cray Programming Environment, a fully integrated software suite to optimise HPC and AI applications, with a complete set of tools for developing, porting, debugging and tuning code.
In addition, the supercomputing platform provides support for HPE’s AI/ML software, which includes the HPE Machine Learning Development Environment to rapidly train large-scale models, and HPE Machine Learning Data Management Software to integrate, track and audit data with reproducible AI capabilities to generate trustworthy and accurate models.
HPE GreenLake for LLMs Runs on Sustainable Computing
HPE is committed to delivering sustainable computing for its customers. HPE GreenLake for LLMs will run in colocation facilities, beginning with QScale in North America, whose purpose-built facility is designed to support the scale and capacity of supercomputing with nearly 100% renewable energy.
HPE Expands AI Offerings with New Portfolio of AI Inferencing Compute Solutions
HPE supports three critical components of the AI journey: training, tuning and inferencing. In addition to introducing HPE GreenLake for LLMs to train and tune large-scale LLMs, HPE announced an expansion of its AI inferencing compute solutions to accelerate time-to-value across a range of industries, including retail, hospitality, manufacturing, and media and entertainment.
These systems have been tuned to target workloads at the edge and in the data centre, such as Computer Vision at the Edge, Generative Visual AI and Natural Language Processing AI. These AI solutions are based on the new HPE ProLiant Gen11 servers, which have been purpose-built to integrate advanced GPU acceleration, critical for AI performance. HPE ProLiant DL380a and DL320 Gen11 servers boost AI inference performance by more than 5X over previous models.
Availability
HPE is accepting orders now for HPE GreenLake for LLMs and expects additional availability by the end of calendar year 2023, starting in North America, with availability in Europe expected early next year.
HPE also announced an expansion of its AI inferencing compute solutions. The new HPE ProLiant Gen11 servers are optimised for AI workloads, using advanced GPUs. The HPE ProLiant DL380a and DL320 Gen11 servers boost AI inference performance by more than 5X over previous models. For more information, please visit HPE ProLiant Servers for AI.
HPE Services provides a comprehensive portfolio of services spanning strategy and design, operations and management for AI initiatives.
For more information on HPE GreenLake for LLMs, please visit: https://www.hpe.com/greenlake/llm.