
Want Good, Reliable AI? Look to Humans for Help!
October 19, 2022

Written by: Martin Dale Bolima, Tech Journalist, AOPG.

Garbage in, garbage out.

For all that Artificial Intelligence (AI) is supposed to be, its usefulness and reliability still depend largely on the information it is “fed.” This is something we have discussed here before, with Steven Johnson, writer of the newsletter “Adjacent Possible,” highlighting AI’s propensity to regurgitate biased or disturbing information, such as “openly racist language” and “conspiratorial misinformation.”

But, to be fair, AI has proven to be a lot of things. It has, for instance, become a game-changer in reinventing the customer experience and a needle-mover in improving healthcare. Not for nothing is it considered transformative, driving business change and fuelling startups the world over.

That is the good side of AI, and it can get better. But AI can just as easily go bad, even take a turn for the worse, if it ever gets hold of the wrong kind of information: data that is biased, distorted or anything in between.

What Goes In Is What Goes Out

Again, it is garbage in, garbage out when it comes to AI. Or, as Ramprakash Ramamoorthy, Director of Research at ManageEngine, told Disruptive Tech Asia, “AI is only as good as the data that is fed into it.” Feed it biased information, in other words, and it will make biased decisions, decisions that reflect the prejudice in the data it was given in the first place.

It also means the success of AI depends on—wait for it!—humans.

“It is important for the human gatekeepers to present the AI system with an evenly distributed dataset that is free from any biases. A biased AI prediction can have severe repercussions on the business and can bring in unrepairable damage,” explained Ramamoorthy. “AI is not sentient enough to self-identify biases and balance them out. There are a few statistical techniques that can help understand and visualise the distribution of data but there is no auto-balancing.”
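To make Ramamoorthy’s point concrete, here is a minimal sketch of the kind of statistical check he alludes to: inspecting how outcomes are distributed across a sensitive attribute in a training set. The toy dataset, the “gender” and “label” columns and the 10-point threshold are all hypothetical, used purely for illustration.

```python
import pandas as pd

# Toy dataset standing in for real training data; the "gender" and "label"
# columns are hypothetical, with "label" = 1 marking a positive outcome.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "label":  [0,   0,   1,   1,   1,   0,   1,   1],
})

# Cross-tabulate outcomes against the sensitive attribute to spot skew.
print(pd.crosstab(df["gender"], df["label"], normalize="index"))

# Flag any group whose positive-outcome rate strays far from the overall rate.
overall = df["label"].mean()
for group, rate in df.groupby("gender")["label"].mean().items():
    if abs(rate - overall) > 0.10:  # the 10-point threshold is arbitrary
        print(f"Possible imbalance for {group!r}: {rate:.2f} vs overall {overall:.2f}")
```

As Ramamoorthy notes, checks like this only visualise the skew; rebalancing the dataset is still a human decision, since there is no auto-balancing.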

Lending Machines a Helping Hand

In other words, it falls to humans, whose jobs AI is supposedly taking over, to keep the machines in check. It is up to people to keep AI from regurgitating the kind of partisan, hurtful information that can damage the reputation of the business using it.

“It is important for AI developers to treat data like code—create versioned and access controlled datasets, periodically peer review the data sets and so on. One more way to keep the dataset clean is to eliminate Personally Identifiable Information (PII) from the training data and this would also ensure the AI model stays bias-free,” said Ramamoorthy.
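What might scrubbing PII from training data look like in practice? Below is a minimal sketch, assuming free-text training records. The regular expressions only catch obvious email addresses and phone numbers; a real pipeline would lean on a dedicated PII-detection tool.

```python
import re

# Illustrative patterns for two common PII types. Real systems use far more
# robust detection; these regexes merely show the idea.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub_pii("Contact Jane at jane.doe@example.com or +60 12-345 6789."))
# -> Contact Jane at [EMAIL] or [PHONE].
```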

It is a thankless job for humans, with Ramamoorthy elaborating: “Scrutinising data collection processes with validation and other similar techniques can be the first step to avoiding unintended bias. It’s more important to make sure the data stays relevant in the long term. The technical term for this is called ‘concept drift’—when you train an AI model with one set of data but the underlying data distribution has changed over time, the model won’t perform as expected and that would mean the concept has drifted. It’s important to identify and mitigate concept drift to ensure bias-free AI models and the only way to do this would be to keep monitoring the data quality.”
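For readers wondering what monitoring for concept drift might look like, here is one simple, hypothetical approach: comparing a feature’s recent distribution against its training-time baseline with a two-sample Kolmogorov-Smirnov test. The simulated data and the significance threshold are illustrative, not a prescription.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Baseline: the feature's distribution at training time.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
# Recent production data, simulated here with a shifted mean.
recent_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)

# The KS test asks whether the two samples plausibly share a distribution.
statistic, p_value = ks_2samp(training_feature, recent_feature)
if p_value < 0.01:
    print(f"Possible concept drift (KS={statistic:.3f}, p={p_value:.4f})")
    # In practice, this would trigger a data-quality review or retraining.
```

Run continuously on incoming data, a check like this is one way to "keep monitoring the data quality," as Ramamoorthy puts it.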

If that sounds a lot like humans caretaking machines, that is because, in a roundabout way, it is exactly that.

A Task Well Worth It

All that scrutiny and careful curation of data translates to the right kind of AI: fair and humane. It is also the kind of AI that is good for business, and in more ways than one, according to Ramamoorthy.

“Generally, people talk about bias in critical applications like credit disbursement or recruitment. However, building bias-free AI doesn’t stop there,” Ramamoorthy pointed out. “Proper care must be taken to ensure bias does not creep into any of the AI use cases. With great power comes great responsibility and AI developers should be educated to consciously avoid bias in any of the use-cases they are automating using AI.”

Avoiding bias, incidentally, is something ManageEngine takes very seriously. A provider of enterprise IT management software, ManageEngine also builds AI for IT management, taking precautions to ensure no bias creeps into its systems. And it does not stop there. The AI it builds, according to Ramamoorthy, also has “sanity checks to ensure there is no way the model discriminates against somebody based on their personal information,” along with “a strong privacy policy which ensures no personal information is factored in to arrive at a decision, thereby removing all PII indicators from the AI training data.”
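ManageEngine has not published the details of those sanity checks, so the sketch below is a generic illustration of the idea rather than the company’s actual implementation: it simply verifies that a model’s prediction stays the same when only a personal attribute is changed. The StubModel, the “tickets_resolved” feature and the attribute values are all hypothetical.

```python
def check_attribute_invariance(model, record: dict, attribute: str, values) -> bool:
    """Return True if the prediction is identical across all attribute values."""
    predictions = {model.predict({**record, attribute: v}) for v in values}
    return len(predictions) == 1

# Hypothetical stand-in for any model exposing a .predict(record) method.
class StubModel:
    def predict(self, record: dict) -> str:
        # Decides purely on a non-personal feature, so it should pass the check.
        return "approve" if record["tickets_resolved"] > 100 else "review"

applicant = {"tickets_resolved": 150, "gender": "F"}
assert check_attribute_invariance(StubModel(), applicant, "gender", ["F", "M", "X"])
print("Sanity check passed: prediction is invariant to the personal attribute.")
```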

To Adopt AI or Not to Adopt AI?

All of the foregoing leads to one final question: Should organisations adopt AI or not?

The answer, according to Ramamoorthy, is to go for it but not “pay heed to the hype around AI.” What Ramamoorthy recommends instead is for organisations to “take a slow and a measured approach since AI can change the way organisations work and a drastic change might be difficult to adapt to.”

To that end, Ramamoorthy further recommends that organisations strive to understand, as best they can, where AI can be a strength and where it can be a weakness. This requires thorough research and careful decision-making by management, but all that hard work can pay off handsomely later on.

That success, however, hinges on the humans behind the scenes, the AI developers, doing their jobs well and feeding AI the right data.

 
