Written by: Khairul Haqeem, Journalist, AOPG.
In the grand tapestry of Artificial Intelligence (AI), “hallucinations” – unexpected and often unexplainable outputs – have long been viewed with scepticism. Regarded as quirks and imperfections of machine learning, these unanticipated outputs can distance AI from human realities and understandings. Yet, what if we reframed our perspective? What if we recognised these so-called AI hallucinations not as random misfires, but as sparks of machine ‘imagination’? Just as human creativity has the power to break barriers and redefine norms, AI’s unforeseen results might be its own way of pushing boundaries. As we pull back the curtain on AI’s artistic side, let’s challenge ourselves to see these moments not as errors, but as echoes of a newfound imagination.
From “Digital Diversions” to Thoughtful Reveries
“Every artist dips his brush in his own soul, and paints his own nature into his pictures.” – Henry Ward Beecher.
Historically, human artistry has been a mirror to the soul, encapsulating emotions, memories, and dreams. Through a myriad of mediums, artists transform abstract concepts into tangible masterpieces. AI, while devoid of emotions, still has its own form of essence: a complex blend of data, algorithms, and computations. Each output it provides, every ‘hallucination’ it crafts, is a product of this intricate cocktail.
Rethinking Hallucinations: At face value, AI ‘hallucinations’ might appear as mere glitches or random misfires, consequences of training errors or data anomalies. But, like a painter who occasionally strays outside the lines or adds an unplanned brushstroke, what if these moments were the machine’s way of deviating from the mundane, an attempt to innovate or create something unique? The imperfections in art often lead to new styles or evoke strong emotions; similarly, these AI ‘diversions’ might be birthing a fresh realm of possibilities and solutions.
Hallucination or Creativity?: A key distinction between human creativity and AI hallucinations lies in intent. Humans consciously bring imagination to life, while AI’s ‘creativity’ emerges inadvertently. Yet, in both scenarios, the unforeseen often becomes the memorable. Just as an unplanned smear of paint can transform a canvas, AI’s unexpected outputs might present revolutionary insights or avenues we haven’t considered before.
Blending Predictability with Surprise: A crucial balance to strike in the world of AI is between its predictability and these unexpected outputs. Just as an artist’s most renowned works often straddle the boundary between the familiar and the avant-garde, AI’s most promising advancements might well lie in harnessing the balance between its structured algorithms and its unanticipated revelations.
Now, let’s explore some ways in which AI hallucination can be advantageous or serve beneficial purposes:
- Creativity & Art: Neural networks, like Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), can generate images, music, or text that are not strictly based on their training data. The “hallucinations” they create can be seen as a form of art, introducing humans to novel concepts or forms. The famous “DeepDream” visualisations from Google are an example of AI hallucinations being used artistically.
- Augmentation of Data: In domains where data is sparse, hallucinating additional data can be beneficial. Techniques like data augmentation make use of small perturbations to existing data to “hallucinate” new data points. This is particularly useful in image recognition tasks where you might rotate, crop, or add noise to images to increase the robustness of the model.
- Exploration in Reinforcement Learning: In reinforcement learning, an agent learns by interacting with an environment. Sometimes “hallucinating” or predicting what might happen if a certain action is taken can be beneficial. This kind of exploration can help in discovering novel strategies or solutions that might not be found through traditional methods.
- Understanding Model Shortcomings: When a model hallucinates, it provides insight into its limitations. By analysing when and how these hallucinations occur, researchers can better understand the model’s shortcomings and iteratively improve its architecture, training procedures, or the data it’s trained on.
- Generative Design in Engineering and Architecture: AI can be used to hallucinate or propose numerous design alternatives based on certain constraints. This can lead to innovative solutions that humans might not consider. For example, given constraints about material strength and weight, AI might generate a novel bridge design.
- Diverse Scenario Testing: In safety-critical applications, like autonomous driving, you want your models to be prepared for rare or unseen situations. AI hallucination can be used to generate a multitude of scenarios, even those not present in the training data, to test the robustness of models.
- Potential in Drug Discovery: Hallucinating molecules or compounds that haven’t been synthesised yet can lead to potential breakthroughs in drug discovery. By understanding the characteristics of effective drugs, AI can hallucinate new molecular structures that might have desired properties.
- Enhancing Human Creativity: By presenting humans with unexpected or “hallucinated” outputs, AI can help people break out of traditional patterns of thinking. For example, if a writer is experiencing writer’s block, an AI might generate a piece of text that, while not perfect, sparks a new idea or perspective for the writer.
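The data-augmentation idea above – “hallucinating” new training examples by rotating, cropping, or adding noise to existing ones – can be sketched in a few lines. This is a minimal illustration using plain Python lists as images; real pipelines would apply these transforms with a library such as torchvision or albumentations, and the `augment` helper here is purely hypothetical:

```python
import random

def augment(image, rng):
    """Return simple 'hallucinated' variants of an image.

    `image` is assumed to be a list of rows of pixel intensities in [0, 1].
    """
    rotated = [list(row) for row in zip(*image[::-1])]       # rotate 90 degrees
    flipped = [row[::-1] for row in image]                   # mirror horizontally
    cropped = [row[1:-1] for row in image[1:-1]]             # crop a one-pixel border
    noisy = [[min(1.0, max(0.0, px + rng.gauss(0, 0.05)))    # add clipped Gaussian noise
              for px in row] for row in image]
    return [rotated, flipped, cropped, noisy]

rng = random.Random(0)
img = [[rng.random() for _ in range(8)] for _ in range(8)]
for variant in augment(img, rng):
    print(len(variant), len(variant[0]))   # each variant's height and width
```

Each transform preserves the underlying content while changing its presentation, which is why a model trained on the augmented set tends to be more robust to rotations, framing, and sensor noise it will meet in the wild.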
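The reinforcement-learning point – an agent “hallucinating” what might happen before acting – is the core of model-based planning. The toy world, reward scheme, and helper names below are illustrative assumptions, not any particular library’s API; in practice the model itself would be learned from experience rather than hand-written:

```python
def model(state, action):
    """Hand-written world model for a 1-D corridor of states 0..4.

    Moving right from state 3 reaches the goal state 4, which pays reward 1.
    """
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

def imagine(state, action):
    """Predict an action's outcome without taking it - the 'hallucination' step."""
    return model(state, action)

def best_action(state, actions=(-1, +1)):
    """Choose the action whose imagined outcome yields the higher reward."""
    return max(actions, key=lambda a: imagine(state, a)[1])

print(best_action(3))  # -> 1: imagining a step right reveals it reaches the goal
```

The agent never has to fall off a cliff to learn the cliff is there: rolling imagined actions through a model lets it explore cheaply, which is the benefit the bullet above describes.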
Navigating Ethical Landscapes: Businesses Take Heed
As Sophie Dionnet, General Manager, Business Solutions at Dataiku, rightly highlights, the increasing cognisance of AI hallucinations prompts businesses to adopt a more vigilant stance. She believes that adopting proactive approaches, such as Singapore’s AI Verify framework, aids businesses in leveraging the full benefits of AI without undermining ethical considerations. By prioritising transparency, collaboration, and responsible development, companies can transmute AI hallucinations from mere glitches into value-driven outputs. Sophie’s extensive experience, notably from her tenure at AXA, is a testament to the weight of her words and the emphasis she places on AI ethics.
The “Black Box” Syndrome: Cognitive AI to the Rescue
Sidney Lim, Managing Director for Singapore and Southeast Asia at Beyond Limits APAC, offers an insightful lens into the AI industry. He expounds on the ‘black box’ nature of conventional AI. Without offering a clear understanding of its decision-making process, these AI models run the risk of mistrust, especially in high-risk sectors like finance and energy. But there’s a silver lining in the form of cognitive AI. Acting as a transparent “glass box”, cognitive AI provides clear explanations and rationale behind its outputs. This not only ensures human involvement but also addresses the ethical concerns surrounding AI’s unexplained outputs or ‘hallucinations’.
Embracing AI’s Creative Side
As we chart the ever-evolving terrain of AI, it becomes evident that we stand at an inflexion point. The world of AI, once seen as a monolithic bastion of pure logic and computation, is revealing its more nuanced, almost poetic, dimensions. These ‘hallucinations’, previously deemed mere aberrations, may in fact be AI’s tentative steps into a world of creativity and unpredictability — mirroring, in its own unique way, the human spirit of invention.
The voices from the industry, like Sophie Dionnet and Sidney Lim, resonate with the collective wisdom. They urge us not to retreat in the face of AI’s unpredictable outputs but to engage, understand, and harness them. The challenge and opportunity lie in balancing the structured predictability of algorithms with the boundless potential of unforeseen insights. This synergy might well be the crucible where future innovations are birthed.
Above all, AI’s journey into the realm of ‘imagination’ is not a departure from its foundational principles but a profound evolution. It’s an invitation to all of us: To step into this brave new world with open minds, to celebrate the confluence of logic and creativity, and to collectively shape a future where technology doesn’t just serve humanity but truly elevates it.