Written By: Andy Ng, Vice President and Managing Director for Asia South and Pacific Region, at Veritas Technologies
The use of Artificial Intelligence (AI)-powered chatbots has been gaining momentum in recent years. In fact, more organisations are implementing conversational AI software to serve various functions, including customer support, internal helpdesk, and troubleshooting services. These AI solutions have helped to reduce the burden on customer service teams, filter IT support requests, and lower call centre costs. However, while effective, many of these solutions are limited in their capabilities and can only address a narrow scope of use cases.
Today, forward-thinking organisations are exploring the use of AI in a more advanced way by embracing the capabilities of general-purpose large language models (LLMs). ChatGPT, an LLM-based chatbot from OpenAI launched in November 2022, has taken the world by storm, sparking a newfound realisation among organisations and individuals of the wide range of applications and use cases these models enable.
Unlike traditional chatbots, ChatGPT can support a wide variety of purposes, including software coding, composing essays, or creating marketing materials such as website copy and product brochures. These services can also be accessed through APIs, which allow organisations to integrate the capabilities of publicly available LLMs into their own apps, products and services based on their particular needs.
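As a rough sketch of what such an API integration might look like, the snippet below builds and sends a request to a chat-completion endpoint. The URL, model name and response shape follow OpenAI's publicly documented API, but treat the specifics as assumptions to verify against the provider's current documentation before relying on them.

```python
import json
import os
import urllib.request

# OpenAI's public chat-completion endpoint (verify against current docs).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt, model="gpt-3.5-turbo"):
    """Assemble the JSON body expected by a chat-completion API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful product-support assistant."},
            {"role": "user", "content": prompt},
        ],
    }

def ask_llm(prompt, api_key):
    """Send the prompt to the LLM service and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("OPENAI_API_KEY")
    if key:  # only make the network call when a key is configured
        print(ask_llm("Summarise our return policy in one sentence.", key))
```

In practice, organisations would wrap a call like this behind their own product or helpdesk interface rather than exposing the raw API to users.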
Adopting AI tools can help organisations enhance their efficiencies, gain a competitive edge, and reduce manual requirements. For instance, video game publisher Ubisoft Singapore has jumped on the bandwagon by using in-house AI tools to design and develop games, reducing the time taken to rebuild their game worlds from roughly three hours to a mere 13 seconds. Used effectively, AI technology can also help bolster employee capabilities by providing access to resources that were previously unavailable, thus enhancing an individual’s knowledge base and skill set.
Responsible Use of AI: Balancing Innovation and Data Security Risks
The growing adoption of AI is putting pressure on business decision-makers in terms of how these advancements fit into their existing data management strategy. As the implementation of AI in business processes becomes increasingly common, there is a gnawing fear that it creates potential risks and blind spots.
In the quest to stay ahead of competitors, it is common to see organisations rushing to implement AI technology like ChatGPT. However, it doesn't take long before organisations run up against limitations, such as the availability of data. Very often, data is siloed and inconsistent, which presents challenges for business leaders looking to unlock the value of AI. For example, during the COVID-19 pandemic, many organisations moved their data to the cloud to maintain productivity, only to later run into issues related to cost, backup and compliance that they had to address retroactively.
When integrating AI into business processes, organisations will typically gather data not only from online sources but also from their own data, potentially including sensitive company information and IP, to train the AI. However, this presents significant security implications for organisations that become dependent on these AI-enabled processes when a proper framework is not in place to keep that information safe.
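One basic guardrail, before any internal data reaches an AI service, is to strip obviously sensitive fields from the text. The sketch below is a minimal, illustrative redactor using regular expressions; the patterns and the `redact` helper are assumptions made for this example, not a substitute for a proper data-loss-prevention framework.

```python
import re

# Illustrative patterns only -- real deployments need far broader coverage
# (names, account numbers, internal project codenames, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text):
    """Replace sensitive substrings with placeholder tokens before the
    text is used in an AI prompt or a training dataset."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com about card 4111 1111 1111 1111."))
# → Contact [EMAIL] about card [CREDIT_CARD].
```

A redaction step like this sits naturally at the boundary where internal data leaves the organisation's control, which is exactly the point the governance framework needs to cover.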
Securing Data Used for AI
Any organisation interacting with these services must ensure that any data used for AI purposes is subject to the same principles and guidelines around security, privacy, and governance as data used for other business purposes. Many organisations are already alert to the potential dangers. To rein in the looming problems, Singapore's Infocomm Media Development Authority identified the top six risks associated with generative AI in a recent report, established a foundation to ensure the safe and responsible adoption of AI, and is collaborating with the open-source community to develop test toolkits to mitigate those risks.
Organisations must also consider how to ensure the integrity of any data processes that leverage AI and how to secure the data in the event of any unexpected disruptions, such as a ransomware attack or data centre outage. It is also critical to consider the quality of the data they feed into the AI engine, as not all information produced by AI is accurate. Moreover, they must ask themselves how they will protect the data produced by AI, so that it complies with local legislation and regulations and does not fall into the hands of bad actors.
Smarter Tech Means Greater Security Risks
From a security perspective, recent developments in AI imply greater potential risks as the technology gets smarter over time. We are at the stage where we can no longer easily distinguish between real and fake photos, videos or text. AI tools will be adopted not only for productive use cases but also by cyber criminals, who will seek to apply the technology to increase the scale and sophistication of the cyberattacks they conduct. It is imperative for organisations to recognise the potential harm that AI can bring to their operations and take the necessary steps to protect themselves from cyberattacks and data breaches.
Safeguarding Your Data and Infrastructure Against Cyber Threats
It is safe to assume that businesses are creating an almost unfathomable amount of data. This raises the question: how do you manage it? This is where AI-based autonomous data management comes in. By harnessing the power of AI, machine learning, and hyper-automation, autonomous data management enables IT departments to simplify tasks, increase efficiency and improve security in multi-cloud environments, with little or no human intervention. With AI-driven malware scanning and anomaly detection, autonomous data management empowers organisations to manage their data and automate protection from cyber threats such as ransomware.
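Anomaly detection of this kind can be as simple as flagging backup jobs whose daily change-rate deviates sharply from the historical norm, since mass encryption by ransomware tends to cause a sudden spike in data churn. The sketch below uses a z-score threshold; the metric and the threshold value are illustrative assumptions, not a description of how any particular product works.

```python
import statistics

def flag_anomalies(daily_change_gb, threshold=3.0):
    """Return indices of days whose backup change-rate deviates from the
    mean by more than `threshold` standard deviations -- a crude proxy
    for the abnormal churn a ransomware encryption run can cause."""
    mean = statistics.mean(daily_change_gb)
    stdev = statistics.stdev(daily_change_gb)
    return [
        i for i, gb in enumerate(daily_change_gb)
        if stdev and abs(gb - mean) / stdev > threshold
    ]

# 29 ordinary days of ~5 GB churn, then one day of 80 GB churn.
history = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.0, 4.7, 5.1, 5.0,
           4.9, 5.2, 5.1, 4.8, 5.0, 5.3, 4.9, 5.1, 5.0, 4.8,
           5.2, 5.0, 4.9, 5.1, 5.0, 5.2, 4.8, 5.1, 4.9, 80.0]
print(flag_anomalies(history))  # the 80 GB spike is flagged
```

Production systems layer far richer signals on top of this idea (file-entropy changes, deduplication ratios, access patterns), but the principle of learning a baseline and alerting on deviation is the same.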
Contrary to the extremely overdramatised science fiction version of AI portrayed in pop culture, the reality is quite different. AI-driven automation has long played a role in data management, progressing from basic software backup to more sophisticated functions such as automated discovery and protection of new workloads. Today, AI technology can offer protection against rapidly evolving cyber threats such as ransomware and predict hardware failures in backup storage devices, ensuring efficient data recovery.
With AI advancing at a rate faster than most organisations can keep up with, it is critical for them to realign their business priorities and structures to empower their IT teams to take on a more strategic role. The ultimate goal of AI- and ML-powered autonomous data management is to enable what IT teams cannot accomplish on their own.
While the true potential of AI is yet to be discovered, we know that its applications will be highly data-intensive, creating the need for enterprises to manage it efficiently and responsibly. By adopting AI-powered autonomous data management, businesses can address the demands of current and future data challenges, unlock new opportunities, and achieve transformative business outcomes.