Written by: Martin Dale Bolima, Tech Journalist, AOPG.
Garbage in, garbage out.
For all that Artificial Intelligence (AI) is supposed to be, its usefulness and reliability still depend largely on the information it is “fed.” This has been discussed here before, with Steven Johnson, writer of the newsletter “Adjacent Possible,” highlighting AI’s propensity to regurgitate biased or disturbing information, like “openly racist language” and “conspiratorial misinformation.”
To be fair, AI has proven to be a lot of things. It has, for instance, become a game-changer in reinventing the customer experience and a needle-mover in improving healthcare. Not for nothing is it considered transformative, driving business transformation and fuelling startups the world over.
That is the good side of AI, and it can get better. But it can just as easily go bad, even take a turn for the worse, if it ever gets hold of the wrong kinds of information: the kind that is biased, distorted and everything in between.
What Goes In Is What Goes Out
Again, it is garbage in, garbage out when it comes to AI. Or, as Ramprakash Ramamoorthy, Director of Research at ManageEngine, told Disruptive Tech Asia, “AI is only as good as the data that is fed into it.” Feed it biased information, in other words, and it will make biased decisions in turn, decisions that reflect the prejudice baked into the data it was given.
It also means the success of AI depends on—wait for it!—humans.
“It is important for the human gatekeepers to present the AI system with an evenly distributed dataset that is free from any biases. A biased AI prediction can have severe repercussions on the business and can bring in unrepairable damage,” explained Ramamoorthy. “AI is not sentient enough to self-identify biases and balance them out. There are a few statistical techniques that can help understand and visualise the distribution of data but there is no auto-balancing.”
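Ramamoorthy’s point about statistical techniques is easy to illustrate. Below is a minimal sketch in Python, assuming a tabular dataset loaded with pandas; the file name, the label column “approved” and the grouping column “region” are hypothetical stand-ins. It only surfaces imbalance for a human gatekeeper to act on, since, as he notes, there is no auto-balancing.

```python
import pandas as pd

# Hypothetical training data; the file and column names are
# illustrative assumptions, not a real production dataset.
df = pd.read_csv("training_data.csv")

# How is the target label distributed? A heavy skew is exactly the
# kind of imbalance a human gatekeeper must correct, since the
# model will not balance it out on its own.
print(df["approved"].value_counts(normalize=True))

# Repeat per slice to spot uneven representation across groups.
# "region" stands in for whatever grouping columns the dataset has.
for group, rows in df.groupby("region"):
    print(f"{group}: {rows['approved'].mean():.1%} positive labels, {len(rows)} rows")
```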
Lending Machines a Helping Hand
In other words, it is the job of humans—whose jobs are supposedly being taken over by AI—to keep machines in check, so to speak. It is up to people to keep AI from regurgitating the kind of partisan, hurtful information that can damage the reputation of the business using it.
“It is important for AI developers to treat data like code—create versioned and access controlled datasets, periodically peer review the data sets and so on. One more way to keep the dataset clean is to eliminate Personally Identifiable Information (PII) from the training data and this would also ensure the AI model stays bias-free,” said Ramamoorthy.
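One way to read “treat data like code” in practice is to fingerprint each cleaned dataset so training runs are traceable to an exact, reviewable version, and to strip PII before anything is trained. The sketch below is an assumption-laden illustration of that idea, not ManageEngine’s pipeline; the PII column names are hypothetical.

```python
import hashlib
import pandas as pd

# Columns assumed to carry Personally Identifiable Information (PII).
# The names are hypothetical; a real pipeline would keep this list
# under periodic peer review, just like code.
PII_COLUMNS = ["name", "email", "phone", "home_address"]

def strip_pii(df: pd.DataFrame) -> pd.DataFrame:
    """Drop PII columns so they can never factor into a prediction."""
    return df.drop(columns=[c for c in PII_COLUMNS if c in df.columns])

def dataset_version(df: pd.DataFrame) -> str:
    """Fingerprint the cleaned dataset so every trained model can be
    traced back to the exact version of the data it saw."""
    return hashlib.sha256(df.to_csv(index=False).encode()).hexdigest()[:12]

raw = pd.read_csv("training_data.csv")
clean = strip_pii(raw)
print(f"Training on dataset version {dataset_version(clean)}")
```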
It is a thankless job for humans, with Ramamoorthy elaborating: “Scrutinising data collection processes with validation and other similar techniques can be the first step to avoiding unintended bias. It’s more important to make sure the data stays relevant in the long term. The technical term for this is called ‘concept drift’—when you train an AI model with one set of data but the underlying data distribution has changed over time, the model won’t perform as expected and that would mean the concept has drifted. It’s important to identify and mitigate concept drift to ensure bias-free AI models and the only way to do this would be to keep monitoring the data quality.”
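The monitoring Ramamoorthy describes can be as simple as a two-sample test between the training data and what the model sees in production. The sketch below is a rough illustration rather than his method: it uses a Kolmogorov-Smirnov test on a single feature and flags drift when the live distribution has moved away from the training baseline.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test: a small p-value suggests
    the live data no longer follows the training distribution, i.e.
    the concept may have drifted and retraining is due."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, stat, p_value

# Simulated example: the live feature has shifted upward over time.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=1_000)

drifted, stat, p = drift_alert(train, live)
print(f"drift={drifted} (KS statistic {stat:.3f}, p-value {p:.2e})")
```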
If that sounds a lot like humans caretaking machines, it’s because that is, in a roundabout way, the case exactly.
A Task Well Worth It
All that scrutiny and careful curation of data to make AI as unbiased as possible translate to the right kind of AI—fair and humane. This is also the kind of AI that is good for business—and in a lot of ways, according to Ramamoorthy.
“Generally, people talk about bias in critical applications like credit disbursement or recruitment. However, building bias-free AI doesn’t stop there,” Ramamoorthy pointed out. “Proper care must be taken to ensure bias does not creep into any of the AI use cases. With great power comes great responsibility and AI developers should be educated to consciously avoid bias in any of the use-cases they are automating using AI.”
Avoiding bias, incidentally, is something ManageEngine takes very seriously. A provider of enterprise IT management software, ManageEngine builds AI into its IT management offerings and takes precautions to ensure no bias creeps into the system. It does not stop there. The AI it builds, according to Ramamoorthy, also has “sanity checks to ensure there is no way the model discriminates against somebody based on their personal information,” along with “a strong privacy policy which ensures no personal information is factored in to arrive at a decision, thereby removing all PII indicators from the AI training data.”
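Ramamoorthy does not spell out what those sanity checks look like, but one plausible form is a pair of assertions: confirm no PII indicator survived data preparation, and confirm the prediction does not move when only a personal attribute changes. The sketch below is hypothetical, not ManageEngine’s implementation; the column names are invented and `model` is any scikit-learn-style estimator with a `predict()` method.

```python
import pandas as pd

# Columns assumed to be PII indicators; hypothetical names.
PII_COLUMNS = ["name", "email", "phone", "home_address"]

def assert_no_pii(df: pd.DataFrame) -> None:
    """Fail fast if any PII indicator survived data preparation."""
    leaked = [c for c in PII_COLUMNS if c in df.columns]
    assert not leaked, f"PII columns leaked into training data: {leaked}"

def assert_invariant(model, row: pd.DataFrame, column: str, values) -> None:
    """The prediction must not change when only a personal attribute
    (one that legitimately remains in the data, such as a demographic
    field) is swapped out. Any violation is a discrimination signal."""
    baseline = model.predict(row)[0]
    for value in values:
        variant = row.copy()
        variant[column] = value
        assert model.predict(variant)[0] == baseline, (
            f"Model output changes with {column}={value!r}"
        )
```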
To Adopt AI or Not to Adopt AI?
All that leads to one final question: Should organisations adopt AI or not?
The answer, according to Ramamoorthy, is to go for it but not “pay heed to the hype around AI.” What Ramamoorthy recommends instead is for organisations to “take a slow and a measured approach since AI can change the way organisations work and a drastic change might be difficult to adapt to.”
To that end, Ramamoorthy further recommends that organisations strive to understand, as best they can, where AI can be a strength and where it can be a weakness. This requires thorough research and careful decision-making by management, but the hard work can pay off handsomely later on.
That success, however, hinges on the humans behind the scenes, the AI developers, doing their jobs well and feeding AI the right data.