Covering Disruptive Technology Powering Business in The Digital Age

Unleashing the Power of AI: Decoding the Promises and the Perils

Written By: Andy Ng, Vice President and Managing Director for Asia South and Pacific Region, at Veritas Technologies


The use of Artificial Intelligence (AI)-powered chatbots has been gaining momentum in recent years. In fact, more organisations are implementing conversational AI software to serve various functions, including customer support, internal helpdesk, and troubleshooting services. These AI solutions have helped to reduce the burden on customer service teams, filter IT support requests, and lower call centre costs. However, while effective, many of these solutions are limited in their capabilities and can only address a narrow scope of use cases.

Today, forward-thinking organisations are exploring the use of AI in a more advanced way by embracing the capabilities of general-purpose large language models (LLMs). The emergence of ChatGPT, an LLM-based chatbot from OpenAI launched in November 2022, has taken the world by storm, prompting a newfound realisation among organisations and individuals of the wide range of applications and use cases it opens up.

Unlike traditional chatbots, ChatGPT can support a wide variety of purposes, including software coding, composing essays, or creating marketing materials such as website copy and product brochures. These services can also be accessed through APIs, which allow organisations to integrate the capabilities of publicly available LLMs into their own apps, products and services based on their particular needs.
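As an illustration of the kind of API integration described above, the sketch below calls OpenAI's public chat completions REST endpoint using only Python's standard library. This is a minimal example, not production code: the model name and environment-variable name are assumptions, and the helper names (`build_request`, `ask_llm`) are invented for illustration.

```python
import json
import os
import urllib.request

# OpenAI's public chat completions endpoint (per its published REST API).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-3.5-turbo"):
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_llm(prompt):
    """POST the prompt to the API; expects OPENAI_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text sits in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```

In practice, an organisation would wrap a call like `ask_llm` behind its own helpdesk, support, or marketing tooling, which is how publicly available LLM capabilities end up embedded in internal apps, products and services.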

Adopting AI tools can help organisations enhance their efficiency, gain a competitive edge, and reduce manual work. For instance, video game publisher Ubisoft Singapore has adopted in-house AI tools to design and develop games, reducing the time taken to rebuild its game worlds from roughly three hours to a mere 13 seconds. Used effectively, AI technology can also bolster employee capabilities by providing access to resources that were previously unavailable, thus enhancing an individual’s knowledge base and skill set.

Responsible Use of AI: Balancing Innovation and Data Security Risks

The growing adoption of AI is putting pressure on business decision-makers to work out how these advancements fit into their existing data management strategy. As the implementation of AI in business processes becomes increasingly common, there is a growing concern that it creates risks and blind spots.

In the quest to stay ahead of competitors, it is common to see organisations rushing to implement AI technology like ChatGPT. However, it does not take long before organisations run up against its limitations, such as the availability of data. Very often, data is siloed and inconsistent, which presents challenges for business leaders looking to unlock the value of AI. For example, during the COVID-19 pandemic, organisations moved their data to the cloud to maintain productivity, only to run into issues related to cost, backups and compliance that they then had to address retroactively.

When integrating AI into business processes, organisations will typically gather data not only from online sources but also from their own records, potentially including sensitive company information and IP, to train the AI. However, this presents significant security implications for organisations that become dependent on these AI-enabled processes when a proper framework is not in place to keep that information safe.

Securing Data Used for AI

Any organisation interacting with these services must ensure that data used for AI purposes is subject to the same principles and guidelines around security, privacy and governance as data used for any other business purpose. Many organisations are already alert to the potential dangers. To rein in the looming problems, Singapore’s Infocomm Media Development Authority identified the top six risks associated with generative AI in a recent report, established a foundation to ensure the safe and responsible adoption of AI, and is collaborating with the open-source community to develop test toolkits that mitigate those risks.

Organisations must also consider how to ensure the integrity of any data processes that leverage AI, and how to secure the data in the event of unexpected disruptions such as a ransomware attack or data centre outage. It is also critical to consider the quality of the data they feed into the AI engine, as not all information produced by AI is accurate. Moreover, they must ask themselves how they will protect the data produced by AI, to ensure it complies with local legislation and regulations and does not fall into the hands of bad actors.

Smarter Tech Means Greater Security Risks

From a security perspective, recent developments in AI imply greater potential risks as the technology gets smarter over time. We are at the stage where we can no longer easily distinguish between real and fake photos, videos or text. AI tools will be adopted not only for productive use cases but also by cyber criminals, who will seek to apply the technology to increase the scale and sophistication of their attacks. It is imperative for organisations to recognise the potential harm that AI can bring to their operations and take the necessary steps to protect themselves from cyberattacks and data breaches.

Safeguarding Your Data and Infrastructure Against Cyber Threats

It is safe to assume that businesses are creating an almost unfathomable amount of data. This raises the question: how do you manage it? This is where AI-based autonomous data management comes in. By harnessing the power of AI, machine learning, and hyper-automation, autonomous data management enables IT departments to simplify tasks, increase efficiency and improve security in multi-cloud environments, with little or no human intervention. With AI-driven malware scanning and anomaly detection, autonomous data management empowers organisations to manage their data and automate protection from cyber threats such as ransomware.
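To make the anomaly detection mentioned above concrete, here is one simple, illustrative form of it (not a description of any vendor's actual implementation): flag any day whose volume of changed backup data deviates sharply from its recent baseline, since mass file encryption by ransomware typically produces an abnormal spike in data churn. The function name and the z-score threshold are assumptions chosen for this sketch.

```python
import statistics

def flag_anomalies(daily_changed_gb, threshold=3.0):
    """Flag day indices whose changed-data volume deviates from the
    running baseline by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(2, len(daily_changed_gb)):
        baseline = daily_changed_gb[:i]          # all earlier days
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev == 0:
            continue                             # flat baseline, skip
        z = (daily_changed_gb[i] - mean) / z_denominator(stdev)
        if z > threshold:
            anomalies.append(i)
    return anomalies

def z_denominator(stdev):
    """Kept separate only for clarity; the z-score divides by stdev."""
    return stdev

# A sudden spike in churn (e.g. mass file encryption) stands out
# against a week of steady ~40 GB daily change:
history = [40, 42, 38, 41, 39, 43, 400]
print(flag_anomalies(history))  # [6]
```

Real products apply far richer signals (file-entropy changes, known malware signatures, access patterns), but the principle is the same: learn a baseline of normal behaviour and surface deviations automatically, without waiting for a human to notice.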

Contrary to the overdramatised science-fiction version of AI portrayed in pop culture, the reality is quite different. AI-driven automation has long played a role in data management, progressing from basic software backup to more sophisticated functions such as the automated discovery and protection of new workloads. Today, AI technology can offer protection against rapidly evolving cyber threats such as ransomware and predict hardware failures in backup storage devices, ensuring efficient data recovery.

With AI advancing at a rate faster than most organisations can keep up with, it is critical for them to realign their business priorities and structures so that their IT teams can take on a more strategic role. The ultimate goal of AI- and ML-powered autonomous data management is to enable what we cannot do on our own.

While the true potential of AI is yet to be discovered, we know that its applications will be highly data-intensive, creating the need for enterprises to manage that data efficiently and responsibly. By adopting AI-powered autonomous data management, businesses can address current and future data challenges, unlock new opportunities, and achieve transformative business outcomes.