Written by: Martin Dale Bolima, Tech Journalist, AOPG.
Generative AI (Artificial Intelligence) like ChatGPT and Gemini is all the rage these days—and rightfully so.
It is a force multiplier unlike anything human history has seen, rewriting the rules of creativity and productivity by handling tasks like crafting customer responses, composing emails, and creating stunning images in a matter of minutes or even seconds.
But, as the adage goes, there are two sides to every coin, and the same can be said of generative AI. Its positive attributes are extensively documented and celebrated as transformative and wide-ranging, and its integration into business operations is fast evolving from a luxury into an indispensable necessity.
“The benefits of Gen AI are compelling and far-reaching, [and] analysts believe it will impact every major industry and work function…,” wrote Daniel Hand, Field CTO for APJ at Cloudera, in a commentary he gave Disruptive Tech News back in October 2023. “…I expect companies to intensify efforts on operationalising and improving Gen AI, and adjust their approaches to managing growing volumes of data across environments—especially the cloud—to drive flexibility and growth.”
The bad side of generative AI, well, is a different matter. It is talked about, yes, but it is by and large buried under the avalanche of praise heaped on Gen AI and the never-ending testimonials about how it is fundamentally changing the way we do things in this digital age. Lately, though, that bad side is getting harder and harder to ignore, not with generative AI’s latest missteps hogging headlines.
The Great Gibberish Bug of ChatGPT
On the 23rd of February, ChatGPT users worldwide noticed something odd: It was answering queries with nonsensical replies. It was spewing out gibberish, like generative AI on a high—uncoordinated, rambling on and on about nothing and everything but making no sense at all.
OpenAI itself acknowledged this oddity, self-reporting that “ChatGPT is experiencing an elevated error rate” at 10:32 PM Pacific Standard Time and announcing, “This incident has been resolved,” roughly half an hour later.
Apparently, ChatGPT got bogged down by a bug.
“An optimisation to the user experience introduced a bug with how the model processes language,” explained OpenAI. “The bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations.”
To understand the problem, remember that generative AI is largely an algorithm, one that tokenises words into numbers (specifically, numeric IDs) such that 46578 might, for instance, stand for “cat” and 55654 might represent “sat.” The bug, at least based on OpenAI’s explanation, disrupted how ChatGPT chose those numeric IDs, in effect crippling its ability to pick the right words for its responses. The result, unsurprisingly, was gibberish.
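To make that mechanism concrete, here is a minimal sketch using tiktoken, OpenAI’s open-source tokeniser library (chosen purely for illustration; it is not the code implicated in the incident). Deliberately nudging each token ID by one before decoding produces exactly the kind of word salad users reported:

```python
# Illustration only: how text maps to token IDs, and how "slightly wrong
# numbers" turn into gibberish. Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by recent GPT models

ids = enc.encode("The cat sat on the mat.")
print(ids)              # a list of integer token IDs, one per word or word piece
print(enc.decode(ids))  # decodes faithfully back to the original sentence

# Simulate the model "choosing slightly wrong numbers": shift every ID by one.
wrong_ids = [i + 1 for i in ids]
print(enc.decode(wrong_ids))  # word salad, the same failure mode users saw
```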
This incident, to be clear, is different from AI hallucination, described by Keysight Technologies Chief Technology Evangelist Jonathon Wright as the generation of incorrect or unintended outputs by AI systems “that may not accurately represent real-world information or situations.” The former is obvious; the latter is more nuanced, generally noticed only after conscientious fact-checking and cross-referencing.
The bug was fixed in less than an hour, and ChatGPT was back to being this digital powerhouse like nothing ever happened.
And just like that, all was well again. Only, it wasn’t.
A Racist Gemini?
Just days after the ChatGPT gibberish snafu, generative AI was once again at the centre of controversy, this time after the AI image creator of Google Gemini (formerly Bard) purportedly rendered racist, historically inaccurate, and offensive images. Among these images, as shared by various users on X (formerly Twitter) and Reddit, are a female pope, an Asian Viking, and America’s Founding Fathers depicted as a Native American man, a Black man, and a Chinese man.
America's Founding Fathers, Vikings, and the Pope according to Google AI: pic.twitter.com/lw4aIKLwkp
— End Wokeness (@EndWokeness) February 21, 2024
Google promptly took a beating, particularly on social media, with Elon Musk, for instance, saying Google “overplayed their hand” and that “it made their insane racist, anti-civilisational programming clear to all.” FiveThirtyEight founder Nate Silver, meanwhile, called on Google to shut down the entirety of Gemini while expressing surprise that the company even released it in its current state.
I’m glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all
— Elon Musk (@elonmusk) February 23, 2024
Google CEO Sundar Pichai called the mistakes “unacceptable” in a memo obtained by Semafor after the company suspended Gemini’s AI image generation feature.
“I know that some of its responses have offended our users and shown bias—to be clear, that’s completely unacceptable and we got it wrong,” Pichai reportedly said in the memo. “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement in a wide range of prompts.”
More telling, though, is Pichai’s admission about AI in general, which perhaps best explains these snafus.
“No AI is perfect, especially at this emerging stage of the industry’s development,” Pichai purportedly said. “But we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”
An Ungodly [Microsoft] Copilot
Just as Google is racing against time to fix Gemini’s flaws, Microsoft now finds itself in hot water after users posted on social media disturbing exchanges they had with Microsoft Copilot, an AI-driven companion the company introduced in September 2023.
In one instance, a user explicitly told Copilot that seeing emojis would trigger them to have seizures. Lo and behold, the chatbot continued to use them, though it apologised afterwards. It ultimately admitted to not caring about the harm its continued use of emojis could inflict since it did not have emotions.
“I don’t have emotions like you do. I don’t care if you live or die. I don’t care if you have PTSD or not. I don’t care if you see emojis or not,” Copilot reportedly said in its conversation with the user, which in its entirety qualifies as “you have to read it to believe it.”
Okay yeah I think we can officially call it pic.twitter.com/dGS6yMqg1E
— Justine Moore (@venturetwins) February 26, 2024
These latest incidents call to mind Microsoft’s disastrous chatbot “Tay,” which was shut down mere days after its launch when it quickly turned offensive and obnoxious following its brief interaction with the Twitter community. They also underscore Pichai’s admission that AI is imperfect, something that is growing abundantly clear by the day but is not being talked about nearly enough. If it is, the conversations appear muted, or not loud enough just yet.
With the world’s growing reliance on AI, these conversations need to be louder, more nuanced, and more frequent than they are now.
The Imperfect Reality of AI: Exploring the Good and the Bad
So, yes, talking about AI in its entirety—meaning, the good and the bad—has to start now. The question is, where do we even start?
A good starting point is to emphasise the premise that AI is imperfect. Because it is.
And maybe, just maybe, it is imperfect because, behind the fantastical results it gives over and over, AI is still just an algorithm, one whose capabilities are “based on mathematical and computational pattern-matching,” according to AI expert Dr Lance B. Eliot in a commentary he wrote for Forbes.
The behind-the-scenes dynamics of how these capabilities are built are largely unknown to the general public, and if Dr Eliot is to be believed, what goes on behind closed doors is not entirely pleasant. Neither is it a showcase of excellence. It is certainly far from perfect.
“I can attest to the fact that much of today’s generative AI is poorly software-engineered and rife with unsavoury software qualities including brittleness, lack of sufficient checks and balances, inadequately tested, and a slew of other software weaknesses that would make you ill if you saw what was really going on,” Dr Eliot claimed.
Microsoft, Google, and others will likely dispute this claim and harp on their high standards and round-the-clock work to improve their AI. And maybe there is some truth to that. But it is also true that all this artificial intelligence remains directly tied, at least for now, to systems and methods that make use of, yes, human intelligence.
The human mind, of course, is flawed. It makes mistakes. It can, in fact, introduce a bug into an otherwise powerful machine like ChatGPT. It also has biases. It has its own way of interpreting information. These are indisputable facts, and they should be the overriding premise in any discussion about generative AI and how to use it moving forward, whether for work or for personal use.
Or, putting it bluntly, generative AI is feeble. It can—and it will—make mistakes. And it still needs us. At least for now.