Written by: Martin Dale Bolima, Tech Journalist, AOPG.
The idea of Artificial Intelligence (AI) going rogue is fascinating—and scary. Imagine something like the Terminator attacking everyone in sight, V.I.K.I. from I, Robot plotting murders or The Machines in The Matrix taking over all of humanity. Fortunately, that idea is pure fantasy at this point, though real life tells us that AI can mess up, too, as was the case when an AI-powered chess robot broke a child’s finger late last year.
So, maybe, just maybe, we ought to worry about AI doing us wrong. Or, perhaps we need to worry more about us doing AI wrong?
AI Is Not a Villain but a Super Helper
Before we proceed, we want to emphasise something we have said before: The impact of AI on the world is profound. It has ushered in the era of Machine Learning (ML), neural networks and Deep Learning, all disruptive innovations that have shaped—and are continuing to shape—every vertical of every industry. That impact is also far-reaching, with AI and its offshoots touching everything in everyday life, from economics and manufacturing to retail, healthcare and even the arts and entertainment.
Unsurprisingly, AI adoption is gradually increasing, though it remains lower than expected for such disruptive tech. Part of the reason more enterprises are betting big on AI is the need to accelerate their digital transformation or else be rendered obsolete in this era of remote collaboration, operational agility and autonomous production. Neither is this surprising, as AI augments informed decision-making, generates insightful near-real-time analytics and enables efficient, seamless supply chain management.
But that is just the tip of the iceberg. AI is being used in so many different, creative ways—and in each of these ways, AI can bring in a ton of benefits, whether cost savings, improved business processes, enhanced communication and collaboration or better customer experiences. Think AI revolutionising IT service management, powering high-performance digital twins in manufacturing and automotive or even transforming simple startups into exciting unicorns.
Even greater things are in store for AI, more so with tech firms advancing the technology to new frontiers. Lenovo, for instance, is bringing AI to the edge, while Red Hat is lowering the barriers to AI projects with OpenShift. TeamViewer and Siemens are doing much of the same, with the former introducing AiStudio and the latter enhancing its AI platform SynthAI.
Great but Not Perfect
For all the great things AI can do and all the benefits it can bring to an enterprise, it is far from perfect. Of course, nothing ever is, not even the most advanced technologies known to man. And as a relatively new technology, AI is certainly prone to imperfections. To put it simply, the artificial mind can go wrong. It can make mistakes. It has happened more than a few times already, in fact.
The aforementioned finger-breaking, chess-playing robot is a recent example of AI doing something unexpectedly wrong. There is also the infamous case of Tay.ai, a chatbot Microsoft developed to “interact” with young, mostly teenage users of Twitter. Tay.ai interacted with the Twitter community, all right. Unfortunately, Tay.ai was soon sending out tweets that ran the gamut from sexist to fascist to racist—evidently picking up the obnoxious, inflammatory language some Twitter users had been using in “conversations” with the chatbot. Tay.ai’s case is a classic example of garbage in, garbage out, which we have previously discussed in detail. The idea is simple: If you “feed” AI garbage, it will give you garbage in return.
Even Amazon got burned by “bad” AI, as its experimental AI-driven hiring tool outright favoured male applicants and all but disregarded female ones. The AI used for this project self-learned by reviewing 10 years’ worth of résumés, but there was one problem: Most of the résumés were submitted by men. As a result, the algorithms largely favoured men, reportedly downgrading words related to women, such as, well, “women’s,” putting female applicants markedly lower in the pecking order, so to speak.
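To make the mechanism concrete, here is a minimal, hypothetical sketch in Python—not Amazon’s actual system, whose internals were never published. A toy classifier is trained on historical hiring labels that skew male, and it dutifully learns to penalise female-coded wording. The résumé snippets, labels and model choice are all illustrative assumptions.

```python
# A minimal sketch (not Amazon's actual system) of how historical bias
# leaks into a model: the hypothetical training set below encodes past
# hiring decisions that skew male, so the model learns the skew.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical resume snippets and past outcomes (1 = hired, 0 = rejected).
# Note the imbalance: female-coded resumes are scarce and mostly rejected.
resumes = [
    "captain of chess club, software engineer",
    "software engineer, hackathon winner",
    "led robotics team, software engineer",
    "software engineer, open source contributor",
    "captain of women's chess club, software engineer",
    "women's coding society lead, software engineer",
]
hired = [1, 1, 1, 1, 0, 0]  # biased historical labels, not ground truth

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model now penalises the token "women" purely because of the labels.
test = vectoriser.transform(["women's debate champion, software engineer"])
print(model.predict_proba(test))  # skews toward rejection: garbage in, garbage out
```

Nothing in the code is malicious; the model simply mirrors the data it was given, which is exactly the point.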
Not Surprising—and Humans Are Part of the Problem
Such incidents of AI not getting things right are to be expected, but there is almost always an underlying reason for these mess-ups, according to Fan Yang, Practice Lead for Analytics and Consulting, Director, ASEAN, at DXC Technology. And that reason, Yang points out, often has something to do with the humans actually running the AI.
“New technologies command an array of benefits, along with inevitable risks that are often human-inflicted. For AI, the most common issues would be the lack of technical knowledge, programme bias that impairs decision making and privacy issues in data collection, resulting in poorly designed AI,” explained Yang.
This human element, incidentally, is woven into the fabric of AI, and it plays a pivotal role in setting the course of the artificial mind’s learning. In case you are still unaware, this process of learning begins with humans feeding data to a computer algorithm so it can start making predictions and evaluating each one. The process then continues with validation, evaluation and testing using new, never-before-seen data to check how the AI responds. The human element is present in each step, which means there are several points at which human decision-making can impact an AI’s own decision-making process.
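As a rough illustration of that workflow, here is a minimal Python sketch of the train-validate-test cycle described above. The stock dataset and off-the-shelf model are stand-ins; no specific vendor’s pipeline is implied.

```python
# A minimal sketch of the learning workflow described above: humans supply
# labelled data, the model fits to a training split, and held-out
# validation/test splits check how it responds to never-before-seen data.
# The dataset and model here are illustrative stand-ins.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# First split off a final test set, then carve validation out of the rest.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=42)

model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Validation guides human tuning choices; the test set is touched only once.
print("validation accuracy:", model.score(X_val, y_val))
print("test accuracy:", model.score(X_test, y_test))
```

Note how a human chooses the data, the splits, the model and the tuning criteria—those are the several points at which human decisions shape the AI’s.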
In other words, there are plenty of ways humans can mess up an AI—deliberately or unintentionally. And, in a roundabout way, the most common problem is providing AI with the accurate data it needs, according to Chin Ying Loong, Regional Managing Director, ASEAN and SAGE (South Asian Growing Economies) at Oracle. This is why Loong attributes slower-than-expected AI adoption to an apparent lack of accurate data for AI.
“While there are several reasons for the low adoption of AI, ranging from infrastructure to talent, a recent survey by Appen indicates that for 51% of organisations, data accuracy is critical to their AI use-case. Without the use of accurate data sets, businesses are at risk of AI bias impacting their decision-making processes,” Loong pointed out.
The value of these data sets, again, is highly dependent on humans, like the data scientists and engineers in charge of “training” AI.
A Lack of Guidance
An equally big, but rarely talked about, problem in terms of “teaching” AI is the lack of guidance its human masters give, particularly in terms of discernment—or being able to distinguish right from wrong. Or, in keeping with our initial pop culture reference, think Harold Finch in Person of Interest teaching his AI creation, The Machine, what is good and what is bad based on human values.
This lack of guidance is likely another reason why Tay.ai turned fascist and sexist, as it was unable to discern between right and wrong ideals. Roman Yampolskiy, Head of the CyberSecurity Lab at the University of Louisville, certainly thinks so.
“This [Tay.ai’s failure] was to be expected. The system is designed to learn from its users, so it will become a reflection of their behaviour,” said Yampolskiy, who previously also wrote a paper about dangerous AI. “One needs to explicitly teach a system about what is not appropriate, like we do with children. Any AI system learning from bad examples could end up socially inappropriate—like a human raised by wolves.”
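Yampolskiy’s point about explicitly teaching a system what is not appropriate can be sketched in a few lines of Python. The keyword blocklist below is a deliberately naive, hypothetical stand-in—a production system would use a trained toxicity classifier—but the principle of screening what the model is allowed to learn from is the same.

```python
# A minimal sketch of Yampolskiy's point: a system that learns from its
# users needs an explicit notion of "not appropriate" before it learns.
# The blocklist and messages are hypothetical; real moderation would use
# a trained toxicity classifier rather than keywords.
BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholder for a real lexicon/classifier

def is_appropriate(message: str) -> bool:
    """Reject messages containing blocked terms before they reach training."""
    words = set(message.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

incoming = ["hello bot", "you are slur_a", "nice weather today"]
training_pool = [m for m in incoming if is_appropriate(m)]
print(training_pool)  # only the benign messages are learned from
```

Tay.ai, by most accounts, had no such gate between what users said to it and what it learned to say back.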
Louis Rosenberg, the founder of Unanimous AI, concurs, noting how “like all chatbots, Tay has no idea what it’s saying and has no idea if it’s saying something offensive, or nonsensical, or profound.” And this problem ultimately falls on the people in charge of the AI and their inability—or unwillingness, perhaps—to provide the needed guidance for the artificial mind to learn as real humans do.
When AI Is Done Wrong, the Cost Can Be Considerable
Then, when AI does mess up, the costs can be considerable—in part because AI can already be a pricey proposition. Exhibit A in this case is the doomed Watson for Oncology project, an initiative by IBM and the University of Texas MD Anderson Cancer Center to create an Oncology Expert Advisor that would supposedly “uncover valuable insights from the cancer centre’s rich patient and research databases” and provide expert cancer treatment recommendations.
The project turned out to be a colossal failure, with the AI-guided Watson supercomputer reportedly providing “multiple examples of unsafe and incorrect treatment recommendations”—like prescribing anticoagulants to patients with severe bleeding issues. Apparently, the problem was human-inflicted, as the engineers involved in the project trained the AI not on real patient data but on a small sample of—wait for it—hypothetical cancer patients. There were other issues, yes, like MD Anderson switching to a new electronic health record system that IBM could not access and the sheer enormity of distilling cancer treatment into algorithms.
The project was ultimately shelved, but not before costing IBM and MD Anderson a staggering USD 62 million in just a little over four years. That is not to say all companies stand to lose that much from a failed AI investment, but it does highlight how big a loss rogue AI can cause—and that is from the investment side of things only. There are also reputational repercussions to AI letting a company down, like losing credibility over a chatbot’s racist comments or drawing backlash from a failed AI initiative such as Amazon’s.
What Can Be Done?
To be perfectly clear: AI itself is not the problem, per se. It is neither inherently “good” nor innately “bad,” for lack of better terms. It can actually be a difference-maker, a game-changer, when done right.
But what exactly can an organisation do to get AI right?
Not to belabour a point, but things ultimately circle back to humans—the programmers, the data scientists and everyone else in charge of AI and the algorithms that drive it, along with the culture of the company using AI.
“In order to navigate this [using AI correctly], businesses must always ensure that programmers and data scientists working with AI algorithms embed the company’s moral values and cultures in their AI model,” said Loong, who also emphasises the ethical use of technology as a crucial part of building any and all applications.
“It is important for enterprises to ensure that there is right governance and controls in place to ensure that AI is used ethically in a way that is fair, transparent, and easily understood—for example, providing more clarity on customer privacy and use of personal data for monitoring models,” explained Loong. “Customers must also be provided with full transparency on how the systems operate and be given a platform to ask questions and have discussions with businesses if they receive puzzling results from AI models. A transparent ecosystem between businesses and customers will not only provide more education around AI bias but also help the technology to be more efficient.”
Yang, for her part, highlights the need for data fluency and putting in place the right processes. When these considerations are met, according to Yang, “the opportunities tagged to AI are limitless” and will enable companies to “save time and money by automating and optimising routine processes, or even understanding customer demands better to design improved personalised experiences.”
Or, as Loong explains, AI with the right policies and optimisations in place “gives every business equal opportunity to build, empower and succeed.”
In other words, AI, when done right, can be a boon to a business. But we humans will have to do right by AI if we want it to really help us the way we want it to. It has been done before. It can be done again... and again.