Responsible AI Is the Way Forward: IBM Highlights Need for Trust, Governance in Stirring Masterclass on AI
April 16, 2024

Written By: Martin Dale Bolima, Tech Journalist, AOPG

As we stand on the threshold of the AI era, it has become increasingly evident that we require more than just any Artificial Intelligence (AI); what’s truly imperative is responsible AI.

Yes, it’s not a typo; it’s a fundamental distinction.

Responsible AI is about more than just the technology itself—IBM defines responsible AI as “a set of principles that help guide the design, development, deployment, and use of AI,” in turn “building trust in AI solutions that have the potential to empower organisations and their stakeholders.”

IBM further states that “responsible AI involves the consideration of a broader societal impact of AI systems and the measures required to align these technologies with stakeholder values, legal standards, and ethical principles,” with the ultimate aim “to embed such ethical principles into AI applications and workflows to mitigate risks and negative outcomes associated with the use of AI, while maximising positive outcomes.”

In other words, responsible AI is all about building AI you and your customers can trust.

It’s not just about the AI we want; it’s the AI we need for a better future.

“Without responsible AI and AI governance framework, companies will not be able to adopt AI at scale. When we advance AI, we make more accessibility, and we gain competitive advantage towards adopting AI,” said Catherine Lian, General Manager and Executive Technology Leader at IBM ASEAN, at the recently held IBM ASEAN AI Masterclass: Future of Responsible AI & Governance in ASEAN. “But organisations must also weigh towards introducing rewards against investment and risks that they see—privacy, accuracy, explainability, and bias are very important as organisations are growing to adopt AI.”

AI Is a Game Changer in Business

Of course, all that is set against the backdrop of AI’s unprecedented rise as a business differentiator in this digital age, where digital technologies are now obvious necessities. And it appears AI is the mother of all innovations at the moment, with PwC predicting that it will unlock approximately USD 16 trillion (yes, trillion!) in value by 2030.

AI is already unlocking a lot of value for some of the world’s biggest companies. In fact, according to IDC, 25% of G2000 companies credit AI capabilities with contributing at least 5% to their earnings, highlighting its growing prominence in the business world. And it is only growing, with IDC also predicting that 80% of CIOs will leverage organisational changes to harness AI automation and drive agile, insight-driven digital business.

But, again, it is no longer enough just to deploy AI and, in particular, its ever-popular offshoot, generative AI (GenAI). That’s because inhibitors to AI adoption abound, and they include:

  • Data privacy concerns.
  • Trust and transparency concerns.
  • Minimising bias.
  • Maintaining brand integrity and customer trust.
  • Meeting regulatory obligations.

These are all serious matters that, when left unaddressed, can undermine or completely derail an organisation’s AI initiatives and turn this innovation from a game-changing asset into a potential liability.

This is where responsible AI comes into the picture, providing organisations with a template to unlock real value from AI and reap its many benefits.

“In order to take advantage of the very real benefits that AI can pose and bring to organisations, you have to adopt AI in a responsible way,” emphasised Christina Montgomery, Chief Privacy & Trust Officer at IBM, as she discussed IBM’s responsible AI initiative at the same IBM ASEAN AI Masterclass: Future of Responsible AI & Governance in ASEAN virtual event.

The question now is: Where does one even start?

A Trustworthy Policy Is Key for Responsible AI

IBM might have the answer within its core policy framework, encapsulated by three pivotal pillars:

  1. Regulate AI risks, not the AI algorithms. IBM proposes regulating the context in which AI is deployed, with particular scrutiny of high-risk use cases (deepfakes, for example).
  2. Make AI creators and deployers accountable, not immune to liability. People creating and deploying AI need to be held accountable for the way they develop and use it.
  3. Support open AI innovation, not an AI licensing regime. Promoting an AI licensing regime would not only inhibit open innovation but could also result in a form of regulatory capture.

This three-pronged policy is a solid foundation companies can use to advance responsible AI—and the time to act is now, according to Christina, who pointed to deepfakes as one of the most pressing challenges posed by generative AI, so much so that they can actually compromise the integrity of elections, harm reputations, and spread fake news.

“The time is now to focus on AI safety and to focus on governance because generative AI has introduced some new and amplified existing risks associated with AI,” Christina pointed out. “At the same time, it has become very clear that AI is going to offer significant benefits. Balancing those benefits with the potential risks is really important now, more than ever, because of the rise of generative AI and the new risks we talked about, like deepfakes and content that can be created to harm people and generate more misinformation and disinformation.”

Again, IBM is taking an active role in this critical push for responsible AI, with Christina making clear that “every IBMer in the company is responsible for trustworthy AI” and that its mission is “to instil a culture of trustworthy AI throughout the company”—and cascade it to its clients.

“We are essentially leveraging our privacy program in something we’re calling integrated governance program,” Christina said. “There’s a complexity of emerging regulatory obligations, but across all of those, there’s an emerging set of high-level requirements: Things like having a risk management system, making sure you’re vetting your data, having transparency in the data that’s being used to train models, lifecycle governance and model management, and disclosing what data was used to train the models. We are essentially distilling all that to an AI baseline and applying a continuous compliance approach through our program to help IBM and our clients to comply with regulations and also to adopt trustworthy AI.”
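To make that concrete, here is a minimal sketch of what such an “AI baseline” could look like in code. It is purely illustrative and written in Python under our own assumptions: the record fields, check names, and model name are hypothetical, not part of IBM’s actual program. The idea is simply that once baseline requirements are written down as named checks, they can be re-run whenever a model changes, approximating the continuous compliance approach Christina describes.

from dataclasses import dataclass, field

@dataclass
class ModelGovernanceRecord:
    # Hypothetical governance metadata for one AI model.
    model_name: str
    training_data_sources: list[str] = field(default_factory=list)
    data_vetted: bool = False
    risk_assessment_done: bool = False

# A toy "AI baseline": named requirements evaluated against every record.
BASELINE_CHECKS = {
    "training data disclosed": lambda r: len(r.training_data_sources) > 0,
    "data vetted": lambda r: r.data_vetted,
    "risk assessment completed": lambda r: r.risk_assessment_done,
}

def compliance_report(record):
    # Re-running this on every lifecycle change is what makes compliance "continuous".
    return {name: check(record) for name, check in BASELINE_CHECKS.items()}

record = ModelGovernanceRecord(
    model_name="customer-support-assistant",  # hypothetical model
    training_data_sources=["internal-tickets-2023"],
    data_vetted=True,
)
for name, ok in compliance_report(record).items():
    print(f"{name}: {'PASS' if ok else 'FAIL'}")

Run as-is, the sketch reports PASS for the first two checks and FAIL for the missing risk assessment. A real governance program would of course cover far more (lifecycle stages, model documentation, audit trails), but the shape of a re-runnable checklist over model metadata is the point.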

Many Happy Returns: The Benefits of Trustworthy AI Are Worth All the Trouble

So, you are complying with regulations, which are already established in various countries worldwide. More are expected to follow, according to Stephen Braim, Vice President, Government and Regulatory Affairs, Asia Pacific, at IBM, who noted in the same Masterclass that “there’s a lot of work going on [around] how to regulate AI.”


You are also making AI that is trustworthy. What’s in it for you then?

According to Christina, the benefits of responsible, trustworthy AI are compelling, so much so that 75% of executives view it as a competitive differentiator. Beyond that, trustworthy AI can:

  • Build a sound data and AI foundation that is compliant with regulations.
  • Improve the rate of AI adoption and the ability to operationalise it.
  • Enable higher returns on investment.
  • Build and retain investor confidence in AI.
  • Reduce AI-related risks and the frequency of failures.
  • Attract and retain both talent and customers, and gain public trust and goodwill.
  • Win over customers who place a premium on values and ESG commitment.

All these benefits, though, circle back to that big, bold tenet of trust: Do people trust AI enough to adopt and support it fully?

“The biggest fear of governments and AI regulators is that they’re going to put [in] all this work and no one’s going to adopt it, and that all comes back to what is the role of government in promoting trust and in promoting these [AI] applications for societal benefit. So, if you can’t get the trust equation right, you’re not going to get the adoption equation right,” said Stephen.

In other words, gain people’s trust, and you gain their support. You gain even greater adoption. This is why IBM, according to Stephen, believes that in the era of generative AI, user trust is more essential than ever.

It is particularly true in ASEAN, which Stephen describes as having a demographic uniquely suited to AI adoption due to its young population, high IT adoption, and strong innovation focus. As a corollary, AI matters in ASEAN, says Stephen, noting how it can be a major platform for digital transformation, drive productivity and economic growth, and transform industries like healthcare, education, and tourism.

Of course, AI’s far-reaching impact is undeniable and well-documented. What is left now, it appears, is to ensure that organisations—in the public and private sectors—are making AI responsible and trustworthy.

It is easier said than done, but IBM has laid the groundwork for it. Perhaps it is time for others to follow suit.

 
