When AI Goes Rogue . . . Or When AI Is Used Incorrectly?
February 2, 2023

 

Written by: Martin Dale Bolima, Tech Journalist, AOPG.

The idea of Artificial Intelligence (AI) going rogue is fascinating, and scary. Imagine something like the Terminator attacking everyone in sight, V.I.K.I. from I, Robot plotting murders or the Machines in The Matrix taking over all of humanity. Fortunately, that idea is pure fantasy at this point. Real life, however, tells us that AI can mess up too, as was the case when an AI-powered chess robot broke a child’s finger late last year.

So, maybe, just maybe, we ought to worry about AI doing us wrong. Or, perhaps we need to worry more about us doing AI wrong?

AI Is Not a Villain but a Super Helper

Before we proceed, we want to emphasise something we have said before: The impact of AI on the world is profound. It has ushered in the era of Machine Learning (ML), neural networks and Deep Learning, all disruptive innovations that have shaped—and are continuing to shape—every vertical of every industry. That impact is also far-reaching, with AI and its minions touching everything in everyday life, from economics to manufacturing, retail and healthcare, and even the arts and entertainment.

Unsurprisingly, AI adoption is gradually increasing, though it remains lower than would be expected for such disruptive tech. Part of the reason more enterprises are betting big on AI is their need to accelerate digital transformation or else be rendered obsolete in this era of remote collaboration, operational agility and autonomous production. Nor is this at all surprising, as AI augments informed decision-making, generates insightful near real-time analytics and enables efficient and seamless supply chain management.

But that is just the tip of the iceberg. AI is being used in so many different, creative ways—and in each of these ways, AI can bring in a ton of benefits, whether cost savings, improved business processes, enhanced communication and collaboration or better customer experiences. Think AI revolutionising IT service management, powering high-performance digital twins in manufacturing and automotive or even transforming simple startups into exciting unicorns.

Even greater things are in store for AI, more so with tech firms advancing the technology to new frontiers. Lenovo, for instance, is bringing AI to the edge, while Red Hat is lowering the barriers to AI projects with OpenShift. TeamViewer and Siemens are doing much the same, with the former introducing AiStudio and the latter enhancing its AI platform SynthAI.

Great but Not Perfect

For all the great things AI can do and all the benefits it can bring to an enterprise, it is far from perfect. Of course, nothing ever is—not even the most advanced technologies known to man. And as a relatively new technology, AI is certainly prone to imperfections. To put it simply, the artificial mind can go wrong. It can make mistakes. In fact, it has happened more than a few times already.

The aforementioned finger-breaking, chess-playing robot is a recent example of AI doing something unexpectedly wrong. There is also the infamous case of Tay.ai, a chatbot Microsoft developed to “interact” with young, mostly teenaged users of Twitter. Tay.ai interacted with the Twitter community, all right. Unfortunately, it was soon sending out tweets that ran the gamut from sexist to fascist to racist—evidently picking up the obnoxious, inflammatory language some Twitter users had been using in “conversations” with the chatbot. Tay.ai’s case is a classic example of garbage in, garbage out, which we have previously discussed in detail. The idea is simple: if you “feed” AI garbage, it will give you garbage in return.

Even Amazon got burned by “bad” AI, as its experimental AI-driven hiring tool outright favoured male applicants and all but disregarded female ones. The AI used for this project self-learned by reviewing 10 years’ worth of résumés, but there was one problem: Most of the résumés were submitted by men. As a result, the algorithms largely favoured men, reportedly downgrading résumés containing words related to women, such as, well, “women”, and pushing female applications markedly lower in the pecking order, so to speak.
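To see how that kind of skew can creep in, here is a deliberately simplified, synthetic sketch (nothing in it comes from Amazon’s actual tool) of a model trained on historical decisions that favoured one group. Because the past labels encode the bias, the model learns group membership itself as a predictive signal:

```python
# Synthetic illustration only: biased historical labels teach the model to
# penalise the under-represented group, independent of skill.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
group = rng.choice([0, 1], size=n, p=[0.85, 0.15])  # group 1 is under-represented, like the resume pool
skill = rng.normal(size=n)                          # the signal that *should* drive hiring

# Historical decisions reflect both skill and a bias in favour of group 0.
hired = (skill + 1.0 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression(max_iter=1000).fit(X, hired)

print("weight on skill:", round(model.coef_[0][0], 2))  # large and positive
print("weight on group:", round(model.coef_[0][1], 2))  # negative: belonging to group 1 is penalised
```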

Not Surprising—and Humans Are Part of the Problem

Such incidents of AI not getting things right are to be expected, but there is always an underlying reason for most mess-ups, according to Fan Yang, Practice Lead for Analytics and Consulting, Director, ASEAN, at DXC Technology. And that reason, Yang points out, oftentimes has something to do with the humans actually running the AI.

“New technologies command an array of benefits, along with inevitable risks that are often human-inflicted. For AI, the most common issues would be the lack of technical knowledge, programme bias that impairs decision making and privacy issues in data collection, resulting in poorly designed AI,” explained Yang.

This human element, incidentally, is part of the fabric of AI, and it plays a pivotal role in setting the course of the artificial mind’s learning. In case you are still unaware, this process of learning begins with humans providing the algorithm with data so it can start making predictions, each of which is then evaluated. The process continues with validation, evaluation and testing using new, never-before-seen data to check how the AI responds. The human element is present in each step, which means there are several points at which human decision-making can impact an AI’s own decision-making process.
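For readers who prefer to see that loop in code, here is a minimal sketch of the cycle described above: humans choose the data, the model fits itself to it, and its behaviour is then checked against data it has never seen. The library, dataset and model are illustrative choices on our part, not anything prescribed by the experts quoted here.

```python
# A minimal sketch of the train/validate/test loop using scikit-learn and a
# bundled sample dataset (both are illustrative assumptions).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Step 1: humans decide what data the algorithm gets to learn from.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 2: the model fits itself to the provided examples.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Step 3: testing on never-before-seen data shows how the AI responds.
predictions = model.predict(X_test)
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")
```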

In other words, there are plenty of ways humans can mess up an AI—deliberately or unintentionally. And, in a roundabout way, the most common problem is a failure to provide AI with the kind of accurate data it needs, according to Chin Ying Loong, Regional Managing Director, ASEAN and SAGE (South Asian Growing Economies) at Oracle. This is why Loong attributes slower-than-expected AI adoption to an apparent lack of accurate data for AI.

“While there are several reasons for the low adoption of AI, ranging from infrastructure to talent, a recent survey by Appen indicates that for 51% of organisations, data accuracy is critical to their AI use-case. Without the use of accurate data sets, businesses are at risk of AI bias impacting their decision-making processes,” Loong pointed out.

The value of these data sets, again, is highly dependent on humans, like the data scientists and engineers in charge of “training” AI.

A Lack of Guidance

An equally big, but rarely talked about, problem in terms of “teaching” AI is the lack of guidance its human masters give, particularly in terms of discernment—or being able to distinguish right from wrong. Or, in keeping with our earlier pop culture references, think of Harold Finch in Person of Interest teaching his AI creation, The Machine, what is good and what is bad based on human values.

This lack of guidance is likely another reason why Tay.ai turned fascist and sexist, as it was unable to distinguish between right and wrong ideals. Roman Yampolskiy, Head of the CyberSecurity Lab at the University of Louisville, certainly thinks so.

“This [Tay.ai’s failure] was to be expected. The system is designed to learn from its users, so it will become a reflection of their behaviour,” said Yampolskiy, who previously also wrote a paper about dangerous AI. “One needs to explicitly teach a system about what is not appropriate, like we do with children. Any AI system learning from bad examples could end up socially inappropriate—like a human raised by wolves.”

Louis Rosenberg, the founder of Unanimous AI, concurs, noting how “like all chatbots, Tay has no idea what it’s saying and has no idea if it’s saying something offensive, or nonsensical, or profound.” And this problem ultimately falls on the people in charge of the AI and their inability—or unwillingness, perhaps—to provide the needed guidance for the artificial mind to learn as real humans do.

When AI Is Done Wrong, the Cost Can Be Considerable

Then, when AI does mess up, the costs can be considerable—in part because AI can already be a pricey proposition to begin with. Exhibit A in this case is the doomed Watson for Oncology project, an initiative by IBM and the University of Texas MD Anderson Cancer Center to create an Oncology Expert Advisor that was supposed to “uncover valuable insights from the cancer centre’s rich patient and research databases” and provide expert cancer treatment recommendations.

The project turned out to be a colossal failure, with the AI-guided Watson supercomputer reportedly providing “multiple examples of unsafe and incorrect treatment recommendations”—like prescribing anticoagulants to patients with severe bleeding issues. Apparently, the problem was human-inflicted, as the engineers involved in the project reportedly trained the AI not on real patient data but on a small sample of—wait for it—hypothetical cancer patients. There were other issues, yes, like MD Anderson switching to a new electronic health record system that IBM could not access and the sheer enormity of distilling cancer treatment into algorithms.

The project was ultimately shelved, but not before costing IBM and MD Anderson a staggering USD 62 million in just a little over four years. That is not to say all companies stand to lose that much from a failed AI investment. But it does highlight how big a loss rogue AI can cause—and that is from the investment side of things only. There are also reputational repercussions to AI letting a company down, like losing credibility over a chatbot’s racist comments or drawing backlash from a failed AI initiative such as Amazon’s.

What Can Be Done?

To be perfectly clear: AI itself is not the problem per se. It is neither inherently “good” nor innately “bad”, for lack of better terms. It can actually be a difference maker, a game-changer when done right.

But what exactly can an organisation do to get AI right?

Not to belabour the point, but things ultimately circle back to humans—the programmers, the data scientists and everyone else in charge of AI and the algorithms that drive it. It comes down to them, and to the culture of the company using AI.

“In order to navigate this [using AI correctly], businesses must always ensure that programmers and data scientists working with AI algorithms embed the company’s moral values and cultures in their AI model,” said Loong, who also emphasises the ethical use of technology as a crucial part of building any and all applications.

“It is important for enterprises to ensure that there is right governance and controls in place to ensure that AI is used ethically in a way that is fair, transparent, and easily understood—for example, providing more clarity on customer privacy and use of personal data for monitoring models,” explained Loong. “Customers must also be provided with full transparency on how the systems operate and be given a platform to ask questions and have discussions with businesses if they receive puzzling results from AI models. A transparent ecosystem between businesses and customers will not only provide more education around AI bias but also help the technology to be more efficient.”

Yang, for her part, highlights the need for data fluency and putting in place the right processes. When these considerations are met, according to Yang, “the opportunities tagged to AI are limitless” and will enable companies to “save time and money by automating and optimising routine processes, or even understanding customer demands better to design improved personalised experiences.”

Or, as explained by Loong, AI, with the right policies and optimisations in place, “gives every business equal opportunity to build, empower and succeed.”

In other words, AI, when done right, can be a boon to a business. But we humans will have to do right by AI if we want it to really help us the way we want it to. It has been done before. It can be done again . . . and again.
