
Open Source and AI’s Future: The Importance of Democratisation, Sustainability, and Trust

Written By: Chris Wright, Chief Technology Officer and Senior Vice President, Global Engineering, Red Hat

 


It is one thing to answer the IT challenges of today. But today is fleeting, and tomorrow is here before we know it. So, how do you, right now, solve the technology puzzles that have not materialised yet? Luckily, we have got a time-tested way to help us plan for and create the future: Open source communities and projects.

Today, many people are looking at Artificial Intelligence/Machine Learning (AI/ML) as a future technology. It is still nascent in many organisations, for sure, with strategies, planning, and big ideas taking centre stage rather than deployments and production. But not in the open source world. We are already looking ahead to how to answer the next wave of AI-driven questions.

You could fill an entire conference keynote with what the future holds for AI, but I want to focus on three distinct areas being tackled by open source projects:

  • Democratisation
  • Sustainability
  • Trust

If you solve or at least start to solve these issues, then the rest of an AI strategy may start to feel a little less complex and more attainable.

Open Source and Democratisation

We need to be very blunt when it comes to AI terminology: It is hard not to raise an eyebrow at models billed as “open” with quotation marks around the word, or maybe an asterisk. Do not get me wrong: These models are critical to the field of AI, but they are not open in the sense of open source. They are open for usage—many with various restrictions or rules—but they may not be open for contributions, nor do they have open training data sets or weights.

This is a challenge that we are addressing today, and will continue to address, in collaboration with IBM Research. Alongside InstructLab, IBM Research is now applying an open source Apache licence to its Granite language and code models. This is huge, not because it is unique to have a model governed by an open source licence, but because now anyone—through InstructLab—can contribute to these models to make them better.
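
If you want to kick the tyres yourself, here is a minimal Python sketch of pulling one of the openly licensed Granite models with the Hugging Face transformers library. The model ID shown is an assumption for illustration; check the ibm-granite organisation on Hugging Face for the current names.

    # Minimal sketch: load an openly licensed Granite model and generate text.
    # The model ID below is illustrative; confirm the current names on the
    # ibm-granite Hugging Face organisation page.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "ibm-granite/granite-3b-code-base"  # assumed example ID
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))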

More than that, you can actually make an AI model YOUR AI model. Do you want to build a chatbot focused around fishing? Go for it, contribute it back, let us make ChatGoFish. Want to focus a troubleshooting bot around a really specific niche technology area? Do it with InstructLab. The possibilities become boundless when you really, truly apply open source principles to AI models, and we’re here for it.
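
What does a contribution actually look like? InstructLab takes question-and-answer examples organised in a taxonomy of qna.yaml files. The sketch below drafts one from Python; the schema shown is a loose illustration rather than the exact format, so consult the InstructLab taxonomy documentation before submitting anything.

    import yaml  # pip install pyyaml

    # Loose illustration of an InstructLab-style skill contribution; the
    # real qna.yaml schema lives in the InstructLab taxonomy docs.
    skill = {
        "created_by": "your-github-handle",
        "seed_examples": [
            {
                "question": "What lure works well for bass at dawn?",
                "answer": (
                    "Topwater lures are a popular dawn choice because "
                    "bass feed near the surface in low light."
                ),
            },
        ],
    }

    with open("qna.yaml", "w") as f:
        yaml.safe_dump(skill, f, sort_keys=False)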

Open Source and Sustainability

Model training and AI inference require a lot of power. The International Energy Agency expects electricity demand from the AI industry to grow at least tenfold between 2023 and 2026. So, what does this mean, other than that cryptocurrency miners now have a rival for energy?

It means we need to bring software—open source software—to bear to help solve this challenge. Getting started with AI will almost always be power-hungry, but we can be smart about it. We’ve already taken steps in this regard with modern enterprise IT through the Kepler project, which helps provide insights into the carbon footprint and energy efficiency of cloud-native applications and infrastructure. It’s currently available as a technology preview in Red Hat OpenShift 4.15.
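
As a hedged sketch of what those insights look like in practice, the Python snippet below queries Prometheus for a Kepler energy counter and converts it to watts per namespace. The endpoint URL is hypothetical, and the metric name kepler_container_joules_total is an assumption; confirm the names your Kepler version actually exports.

    import requests

    PROM_URL = "http://prometheus.example.com:9090"  # hypothetical endpoint
    # Assumed Kepler metric name; rate() of a joules counter yields watts.
    query = 'sum(rate(kepler_container_joules_total[5m])) by (container_namespace)'

    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query})
    resp.raise_for_status()

    for series in resp.json()["data"]["result"]:
        ns = series["metric"].get("container_namespace", "unknown")
        watts = float(series["value"][1])  # joules per second equals watts
        print(f"{ns}: {watts:.1f} W")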

But what if we could, through the power of open innovation, turn Kepler into a tool that can also watch the power consumption of GPUs, not just CPUs?

We’re doing just that, using Kepler to measure the power consumption of ML models for both training and inference. This provides a full view of the power consumption of both traditional IT and your AI footprints – once again, brought to you by open source.
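
One rough way to turn that into a per-model number, sketched below under the same assumptions as the previous snippet: sample the cumulative energy counter before and after a batch of requests and divide by the batch size. The ml-serving namespace and the batch runner are hypothetical placeholders.

    import time
    import requests

    PROM_URL = "http://prometheus.example.com:9090"  # hypothetical endpoint
    # Assumed metric and namespace; adjust to your Kepler deployment.
    QUERY = 'sum(kepler_container_joules_total{container_namespace="ml-serving"})'
    BATCH_SIZE = 1000

    def total_joules():
        resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY})
        resp.raise_for_status()
        return float(resp.json()["data"]["result"][0]["value"][1])

    def run_inference_batch():
        # Stand-in: fire BATCH_SIZE requests at your model server here.
        pass

    start = total_joules()
    run_inference_batch()
    time.sleep(30)  # allow Prometheus to scrape the updated counter
    joules = total_joules() - start
    print(f"~{joules / BATCH_SIZE:.2f} J per inference")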

Open Source and Trust

As with any exciting new technology, we need to be able to effectively secure AI workloads, models, and platforms. Innovation without security is simply “risk,” which is something that both enterprises and open source communities want to minimise.

For software, the supply chain and provenance are key to delivering a more secure experience. This means having a clear understanding of where given bits are coming from, who coded them, and who accessed them before they make it into production. The Red Hat-led sigstore project helps prove the veracity of the open source code that you are using across all stages of application development.

Now, we need to apply this same level of forethought, discipline, and rigour to AI models. That is exactly what Red Hat and the open source community are doing by working to create an AI Bill of Materials, which delivers greater assurances around model builds using our secure supply chain tooling.
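
To make the idea concrete, here is an illustrative Python sketch, not Red Hat's actual tooling, that checks downloaded model artifacts against digests recorded in a hypothetical AI bill-of-materials manifest (ai-bom.json is an invented file name and layout).

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Stream the file in 1 MiB chunks so large weight files fit in memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical manifest listing weights, tokenizer, training config, etc.
    manifest = json.loads(Path("ai-bom.json").read_text())
    for artifact in manifest["artifacts"]:
        actual = sha256_of(Path(artifact["path"]))
        status = "OK" if actual == artifact["sha256"] else "TAMPERED?"
        print(f"{artifact['path']}: {status}")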

Going hand in hand with security is the concept of trust—how do you and your organisation know that you can trust the AI models and workloads that you’re banking the future on? This is where TrustyAI comes in. It helps a technology team understand the justifications behind an AI model’s outputs as well as flag potentially problematic behaviour.
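
TrustyAI ships proper explainers (such as LIME and SHAP implementations). As a simple stand-in for the underlying idea of justifying a model's behaviour, the sketch below uses scikit-learn's permutation importance, which ranks features by how much shuffling them degrades the model. To be clear, this is not TrustyAI's API.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy setup: a classifier on a bundled dataset stands in for your model.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature and measure the accuracy drop it causes.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=10, random_state=0
    )
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")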

Through these examples, I hope you can see how open source is working to bring greater accessibility, more sustainability, and enhanced security and trust to AI for the future. And at Red Hat, we’re proud to be at the forefront of driving all of these technologies, none of which would be possible without open source community collaboration that spurs new ways of thinking.
