The Digital Mirage: Decoding AI Hallucination – Part 1

Written by: Khairul Haqeem, Journalist, AOPG.

It’s late at night, and my eyes are seeing floating spreadsheets and swirling algorithms, just another sign of the times in this world ruled by binary systems and silicon overlords. We might not be amidst Ridley Scott’s dystopian dreamscape yet, but technology sure has been dancing around like it got an invite. In the grand cavalcade of tech evolution, the latest jester on the court, stealing the limelight with a pirouette, is none other than AI Hallucination.

Now, let’s clarify something before we go any further down the rabbit hole. By ‘AI Hallucination’, I’m not referring to some late-night, Red Bull-fuelled coding sessions gone awry, or the feverish dreams of Elon Musk. Nor is it about Silicon Valley’s persistent hallucinations (although those are entertaining, to say the least). Rather, AI Hallucination refers to the confident but incorrect or fabricated output produced by advanced Artificial Intelligence (AI) systems, typically when they are pushed beyond the realm of their training or comprehension.

For example, imagine a user encountering a specific error message on their computer related to a software application. They describe the error to the chatbot, but instead of providing a valid troubleshooting solution, the chatbot hallucinates and suggests a non-existent fix that doesn’t address the problem. This could be due to the chatbot misinterpreting the error description or making an incorrect association between the symptom and a potential solution.

Such AI hallucinations in an IT professional chatbot could arise from the complexity of technical issues, the diversity of possible causes, or gaps in the training data that prevent the chatbot from accurately understanding and addressing the problem.

Much like you might hallucinate a unicorn after an accidental mix of jet lag and sleeping pills, AI systems also hallucinate in their own ways, generating unexpected, nonsensical, and at times, artistically profound creations. Yes, computers have visions, like some digital shaman on a silicon spirit quest. We are talking about the Wired Age’s psychedelic trip. Buckle up.

The Hallucinating Elephant in the Room

Jonathon Wright, the Chief Technology Evangelist at Keysight Technologies, believes that AI Hallucination, or the generation of incorrect or unintended outputs by AI systems, is a challenge that researchers and developers need to address as AI becomes more prevalent. His concern is clear: the risk of “AI systems generating outputs based on their training data that may not accurately represent real-world information or situations.” To put it mildly, this can lead to misleading or potentially harmful results.

Jonathon adds that the consequences of AI hallucinations can range from minor inconveniences to potentially severe implications in areas such as healthcare, finance, or legal systems.

To those outside of the loop, this might read as a rather bleak forecast. But it is an important note of caution we need to take seriously as we trek through the AI wilderness.

Addressing AI hallucination, according to Jonathon, isn’t a one-size-fits-all kind of deal but a multifaceted challenge that requires diverse strategies.

Let’s start with the fuel that powers our AI engines: Data. Wright stresses that ensuring the quality, diversity, and representativeness of the training data is paramount. Think of this as feeding our AI a balanced diet instead of junk food – comprehensive datasets can help AI systems generalise better and produce more accurate outputs.
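To make that concrete, here is a minimal sketch of what a basic data sanity check might look like, assuming a labelled training set stored as simple (text, label) pairs; the function and the toy dataset are purely illustrative, not anything Wright or Keysight prescribes.

```python
from collections import Counter

def audit_label_balance(examples, max_skew=5.0):
    """Flag labels that are heavily over- or under-represented.

    `examples` is a list of (text, label) pairs; `max_skew` is the largest
    acceptable ratio between the most and least common label.
    """
    counts = Counter(label for _, label in examples)
    skew = max(counts.values()) / min(counts.values())
    return {
        "label_counts": dict(counts),
        "skew": skew,
        "balanced": skew <= max_skew,
    }

# Toy training set: troubleshooting questions, heavily skewed towards one topic.
training_set = [
    ("App crashes on start", "software"),
    ("Blue screen after update", "software"),
    ("Printer not detected", "hardware"),
] + [("Error 0x80070057 when saving", "software")] * 20

report = audit_label_balance(training_set)
print(report["label_counts"], "balanced:", report["balanced"])
```

Even a crude audit like this can surface the kind of lopsided diet that leaves a model prone to confident nonsense on the topics it has barely seen.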

But what about those surprise curveballs that our dear AI might not be prepared to handle? Here’s where adversarial training techniques come into play. Adversarial training is a method for making machine-learning models more resistant to attacks from malicious actors. Adversarial attacks are attempts to trick a model into making inaccurate predictions by feeding it data that has been slightly modified from the original data.

To achieve this, adversarial training incorporates potentially harmful instances into the training set. Various methods are used to craft subtle but significant perturbations of the original inputs, producing adversarial examples. The model is then trained on this enriched data, which teaches it to recognise and fend off attacks from adversaries.

Adversarial training therefore strengthens the resistance of machine-learning models to a range of adversarial attacks, making AI more robust to various input perturbations. Basically, this is the equivalent of preparing our AI for a boxing match – training it to better recognise and reject inputs that could cause it to start seeing digital pink elephants.
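For the curious, here is a minimal sketch of adversarial training using the fast gradient sign method (FGSM) on toy data; the model, data, and hyperparameters are illustrative assumptions, not a recipe endorsed by Wright or Keysight.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary classification data: two noisy clusters in 2-D.
x = torch.cat([torch.randn(200, 2) + 2, torch.randn(200, 2) - 2])
y = torch.cat([torch.ones(200, dtype=torch.long), torch.zeros(200, dtype=torch.long)])

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

def fgsm_perturb(inputs, labels, epsilon=0.3):
    """Craft adversarial inputs by nudging each feature in the direction
    that most increases the loss (fast gradient sign method)."""
    inputs = inputs.clone().detach().requires_grad_(True)
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    return (inputs + epsilon * inputs.grad.sign()).detach()

for epoch in range(50):
    # Train on the clean examples plus their adversarially perturbed twins.
    adv_x = fgsm_perturb(x, y)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(adv_x), y)
    loss.backward()
    optimizer.step()

# Check how the hardened model copes with freshly perturbed inputs.
adv_x = fgsm_perturb(x, y)
with torch.no_grad():
    accuracy = (model(adv_x).argmax(dim=1) == y).float().mean().item()
print(f"accuracy on adversarial inputs: {accuracy:.2f}")
```

The key move is in the training loop: the model sees both the clean examples and their deliberately perturbed counterparts, so small malicious nudges to the input are less likely to send it off the rails.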

But wait, there’s more! Wright’s solution toolbox includes a cornucopia of strategies: developing AI systems that can provide uncertainty estimates, fostering human-AI collaboration, enhancing the explainability and interpretability of AI systems, and ensuring continuous monitoring and evaluation.
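Uncertainty estimates, in particular, lend themselves to a short illustration. Below is a minimal sketch using Monte Carlo dropout in PyTorch, one common way to make a model report how unsure it is; the architecture and the numbers are assumptions for demonstration only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small classifier with dropout, deliberately kept active at inference time.
model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(32, 3)
)

def predict_with_uncertainty(inputs, n_samples=50):
    """Run several stochastic forward passes and report the spread of the
    predicted class probabilities as an uncertainty signal."""
    model.train()  # keep dropout active so each pass differs slightly
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(inputs), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    uncertainty = probs.std(dim=0).mean(dim=-1)  # high spread = low confidence
    return mean_probs, uncertainty

x = torch.randn(1, 4)
mean_probs, uncertainty = predict_with_uncertainty(x)
print("prediction:", mean_probs.argmax(dim=-1).item(), "uncertainty:", float(uncertainty))
```

A system wired this way can flag its low-confidence answers for human review instead of presenting every hallucination with the same straight face.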

It’s like creating a safety net around our AI systems, preparing them to better handle any potential hallucinations, or at least letting us know when they might be tripping. Lastly, and critically, Jonathon urges the establishment of ethical guidelines and regulatory frameworks. This ensures that our AIs are not just wild west gunslingers but responsible citizens in our digital world.

In essence, Jonathon offers a multipronged strategy to handle AI hallucinations. By combining these strategies, we stand a chance of not just surviving but thriving amidst our ambitious AI endeavours.

Ethics and Trust: The AI’s Guiding Light

Adding his voice to the symphony of industry leaders, Gavin Barfield, Vice President & Chief Technology Officer of Solutions at Salesforce, dives into the ethical use of AI in the face of potential hallucinations.

AI development is gaining traction, to such an extent that some tech behemoths are dropping whatever they’re doing just to jump on the bandwagon. Even so, Gavin reminds us to tread carefully, especially given AI’s persuasiveness. He’s not throwing shade on the shiny, promising world of generative AI, but rather, he calls for mindfulness about the new risks we are confronting.

“At Salesforce, we have developed a set of guidelines for trusted generative AI,” says Gavin. The guidelines prioritise accuracy, safety, honesty, empowerment, and sustainability.

To tackle AI hallucination, Gavin suggests not viewing it as a sign of digital malevolence, but rather as a reflection of the model’s upbringing – its training and development. It’s akin to understanding that a misbehaving child isn’t inherently bad but may have learned some naughty tricks. This approach offers hope to tackle the root of the issue, starting with model production and data collection.

He highlights the importance of making a conscientious effort to deliver verifiable results that strike a delicate equilibrium between accuracy and precision. Empowering customers to train models using their own data also emerges as a vital objective. In this regard, I wholeheartedly agree with his sentiment: AI is far from a one-dimensional entity; rather, it is a versatile tool that should be adaptable and customisable to cater to our diverse needs.

Gavin is also a strong advocate for transparency in data collection processes and output. It’s not about pulling a magic trick with AI but about showing your hand and letting users validate the veracity of AI responses by citing sources and providing explanations.
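As a toy illustration of that “show your hand” principle (and emphatically not Salesforce’s implementation), here is a sketch in which an assistant only answers from a small set of retrieved documents and returns the source IDs alongside the answer so the user can verify it; every name in it is hypothetical.

```python
import re

# A hypothetical in-memory knowledge base standing in for real documentation.
KNOWLEDGE_BASE = {
    "kb-101": "Error 0x80070057 usually indicates an invalid parameter; "
              "clearing the application cache often resolves it.",
    "kb-102": "The printer driver must be reinstalled after a major OS update.",
}

def tokens(text):
    """Lower-case word tokens, so matching is not thrown off by punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer_with_citations(question):
    """Answer only from retrieved documents and cite them; refuse otherwise."""
    # Naive keyword overlap stands in for a real search or embedding lookup.
    query = tokens(question) - {"how", "do", "i", "why", "is", "my", "the", "a", "on"}
    hits = [doc_id for doc_id, text in KNOWLEDGE_BASE.items() if query & tokens(text)]
    if not hits:
        return {"answer": "I don't have a verified fix for that.", "sources": []}
    return {
        "answer": " ".join(KNOWLEDGE_BASE[doc_id] for doc_id in hits),
        "sources": hits,
    }

print(answer_with_citations("How do I fix error 0x80070057?"))
print(answer_with_citations("Why is my toaster on fire?"))
```

When nothing in the knowledge base supports an answer, the honest response is an explicit “I don’t know” rather than a confidently invented fix.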

“Guardrails” is the word Gavin uses for preventing tasks from being fully automated. It’s a compelling metaphor for keeping a trained human in the loop at all times, reminding us that AI is a tool, not a master. This ensures that AI continues to operate within the boundaries of trust and minimises potential issues like AI hallucinations.
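A minimal sketch of that guardrail, assuming the AI merely proposes an action and a trained human must explicitly approve it before anything runs; the function names are hypothetical, not part of any real product.

```python
def apply_fix(description):
    # Stand-in for the real automated action (restart a service, edit a config, etc.).
    print(f"Applying fix: {description}")

def guarded_execute(proposed_action, rationale):
    """Never let the AI act on its own: show the proposal and its reasoning,
    and only proceed if a human operator explicitly approves."""
    print(f"AI proposes: {proposed_action}")
    print(f"Because: {rationale}")
    decision = input("Approve this action? [y/N] ")
    if decision.strip().lower() == "y":
        apply_fix(proposed_action)
    else:
        print("Action rejected; nothing was changed.")

guarded_execute(
    "Clear the application cache for ExampleApp",
    "The reported error code is most often caused by a corrupted cache.",
)
```

The AI drafts, the human decides – which is precisely the point of the metaphor.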

Finally, Gavin advises the establishment of watchdog groups, ethical bug bounties, and constant monitoring of what the AI system is doing for all sub-populations. It’s like creating a neighbourhood watch for AI, ensuring it continues to serve society without springing any nasty surprises. As Gavin puts it, “We need to prioritise ethical and responsible use of AI,” and in doing so, we can minimise the risk of an AI hallucination spoiling the operation for everyone.

Journeying Into the Digital Twilight

Thus, we conclude our initial exploration into the intriguing realm of AI hallucination. Throughout this journey, we have delved into the intricate tendencies of our silicon companions, guided by the illuminating insights of esteemed pioneers in the field of AI.

From comprehending the enigma of AI hallucination—a perplexing digital phenomenon—to embarking on a multifaceted quest for solutions, we have traversed a captivating landscape. Along the way, we have realised the significance of ethical guidelines and the indispensability of human intervention within this realm governed by ones and zeros.

Remain poised, engaged, and sharp-witted, as we persist in riding the tempestuous wave of AI. As we stand upon the precipice of this brave new world, let us carry forth our faculties of critical thinking, tempered with a hint of scepticism, and a boundless thirst for knowledge. For, as we have come to learn, the realm of Artificial Intelligence can often blur the boundaries of reality and perception.
