Written By: Chris Wright, Chief Technology Officer and Senior Vice President, Global Engineering, Red Hat
It is one thing to answer the IT challenges of today. But today is fleeting, and tomorrow arrives before we know it. So how do you, right now, solve the technology puzzles that have not materialised yet? Fortunately, we have a time-tested way to help us plan for and create the future: open source communities and projects.
Today, many people look at Artificial Intelligence/Machine Learning (AI/ML) as a future technology. It is still nascent in many organisations, for sure, with strategies, planning, and big ideas taking centre stage rather than deployments and production. But not in the open source world. We are already looking ahead to how we will answer the next wave of AI-driven questions.
You could fill an entire conference keynote with what the future holds for AI, but I want to focus on three distinct areas being tackled by open source projects:
- Democratisation
- Sustainability
- Trust
If you solve, or at least start to solve, these issues, the rest of an AI strategy begins to feel less complex and more attainable.
Open Source and Democratisation
We need to be very blunt when it comes to AI terminology: it is hard not to raise an eyebrow at models described as “open” only in quotation marks, or with an asterisk attached. Do not get me wrong: these models are critical to the field of AI, but they are not open in the sense of open source. They are open for usage, many with various restrictions or rules, but they may not be open for contributions, nor do they ship with open training data sets or weights.
This is a challenge we addressed today, and will continue to address, in collaboration with IBM Research. Alongside InstructLab, IBM Research is now applying an open source Apache licence to its Granite language and code models. This is huge, not because a model governed by an open source licence is unique, but because now anyone, through InstructLab, can contribute to these models to make them better.
More than that, you can actually make an AI model YOUR AI model. Do you want to build a chatbot focused around fishing? Go for it, contribute it back, let us make ChatGoFish. Want to focus a troubleshooting bot around a really specific niche technology area? Do it with InstructLab. The possibilities become boundless when you really, truly apply open source principles to AI models, and we’re here for it.
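To make that tangible, here is a minimal sketch of pulling one of the Apache-licensed Granite models from Hugging Face and generating text with it. The model ID below is an illustrative assumption; check the ibm-granite organisation on Hugging Face for the current model names.

```python
# Minimal sketch: pulling an Apache-licensed Granite model and running
# local inference with Hugging Face Transformers. The model ID is an
# assumption for illustration; verify it against the ibm-granite
# organisation on Hugging Face before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-7b-base"  # assumed ID, verify before use

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A prompt in the spirit of our hypothetical fishing chatbot.
prompt = "The best lure for freshwater bass fishing is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From there, InstructLab's taxonomy-driven workflow is the path by which a contribution like that hypothetical fishing skill would make its way back into the model itself.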
Open Source and Sustainability
Model training and AI inference require a lot of power. The International Energy Agency expects power demand from the AI industry to grow tenfold by 2026. So what does this mean, other than that cryptocurrency miners now have a rival in the energy market?
It means we need to bring software—open source software—to bear to help solve this challenge. Getting started with AI will almost always be power-hungry, but we can be smart about it. We’ve already taken steps in this regard with modern enterprise IT through the Kepler project, which helps provide insights into the carbon footprint and energy efficiency of cloud-native applications and infrastructure. It’s currently available as a technology preview in Red Hat OpenShift 4.15.
But what if, through the power of open innovation, we could turn Kepler into a tool that also watches the power consumption of GPUs, not just CPUs?
We’re doing just that, using Kepler to measure the power consumption of ML models during both training and inference. This provides a full view of the power drawn by both your traditional IT and your AI footprints, once again brought to you by open source.
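As a sketch of what that looks like in practice: Kepler exposes its measurements as Prometheus metrics, so comparing CPU and GPU energy for an ML workload can be a couple of queries. The metric names, label, and endpoint below are assumptions for illustration; check your Kepler deployment for the exact names it exports.

```python
# Minimal sketch: reading Kepler's energy counters from a Prometheus
# endpoint. Metric names, the namespace label, and the endpoint are
# assumptions for illustration; verify against your Kepler deployment.
import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"  # assumed endpoint

def total_joules(metric: str, namespace: str) -> float:
    """Sum a Kepler joules counter across all containers in a namespace."""
    query = f'sum({metric}{{container_namespace="{namespace}"}})'
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
    )
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return float(results[0]["value"][1]) if results else 0.0

# Compare CPU package energy with GPU energy for an ML training namespace.
cpu_j = total_joules("kepler_container_package_joules_total", "ml-training")
gpu_j = total_joules("kepler_container_gpu_joules_total", "ml-training")
print(f"CPU package: {cpu_j:.0f} J, GPU: {gpu_j:.0f} J")
```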
Open Source and Trust
Like any exciting new technology, AI workloads, models, and platforms need to be effectively protected, with security that can actually be enforced. Innovation without security is simply “risk,” which is something that both enterprises and open source communities want to minimise.
For software, supply chain and provenance are key to delivering a more secure experience. This means having a clear understanding of where given bits come from, who wrote them, and who touched them before they made it into production. The Red Hat-led sigstore project helps verify the integrity and provenance of the open source code you are using across all stages of application development.
Now we need to apply that same forethought, discipline, and rigour to AI models. That is what Red Hat and the open source community are doing by working to create an AI Bill of Materials, which delivers greater assurance around model builds using our secure supply chain tooling.
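To give a feel for the idea, here is a hypothetical sketch of what a single AI Bill of Materials record might capture: model lineage, licensing, training data digests, and a build signature. Every field name here is illustrative; this is not a published specification.

```python
# Hypothetical sketch of an AI Bill of Materials record. Field names are
# illustrative only and do not reflect any published specification.
from dataclasses import dataclass, field, asdict
import hashlib
import json

@dataclass
class AIBillOfMaterials:
    model_name: str
    model_version: str
    licence: str
    base_model: str            # lineage: what this model was derived from
    training_data_digests: list[str] = field(default_factory=list)
    build_signature: str = ""  # e.g. a sigstore-backed attestation

    def digest(self) -> str:
        """Content-address the record so any tampering is detectable."""
        canonical = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

# Hypothetical record for the fishing assistant imagined earlier.
bom = AIBillOfMaterials(
    model_name="granite-7b-fishing-assistant",
    model_version="1.0.0",
    licence="Apache-2.0",
    base_model="ibm-granite/granite-7b-base",
    training_data_digests=["sha256:..."],  # placeholder digest
)
print(bom.digest())
```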
Going hand in hand with security is the concept of trust: how do you and your organisation know that you can trust the AI models and workloads that you’re banking the future on? This is where TrustyAI comes in. It helps a technology team understand the reasoning behind an AI model’s outputs, and flags potentially problematic behaviour.
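To illustrate the kind of signal involved, here is a plain-Python sketch of statistical parity difference, one of the standard fairness metrics that tooling like TrustyAI surfaces. This is not the TrustyAI API itself, just the underlying arithmetic.

```python
# Plain-Python illustration of statistical parity difference (SPD), a
# standard fairness metric of the kind TrustyAI can compute. Not the
# TrustyAI API: it only shows the underlying idea, comparing
# favourable-outcome rates between a privileged and an unprivileged group.
def statistical_parity_difference(outcomes, groups, privileged) -> float:
    """SPD = P(favourable | unprivileged) - P(favourable | privileged).
    A value near 0 suggests parity; large magnitudes flag potential bias."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(unpriv) - rate(priv)

# Toy example: 1 = loan approved, 0 = denied.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, privileged="A"))
# -> -0.5, i.e. group B's approval rate is 50 points below group A's.
```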
Through these examples, I hope you can see how open source is working to bring greater accessibility, more sustainability, and enhanced security and trust to AI for the future. And at Red Hat, we’re proud to be at the forefront of driving all of these technologies, none of which would be possible without open source community collaboration that spurs new ways of thinking.