Covering Disruptive Technology Powering Business in The Digital Age

Talking Generative AI With the Experts From SAS—A Discussion With Bryan Harris and Udo Sglavo
August 10, 2023 News

Written by: Andrew Martin, Group Publisher, AOPG


Last week, I was able to attend the Singapore leg of SAS Innovate 2023. One theme that (unsurprisingly) kept making its way into the presentations and moderated discussions was generative and conversational Artificial Intelligence (AI).

As is commonly the case now at tech events, we were treated to a simulation of how generative AI will soon be used as a way to interact with SAS software to request data and reports.

However, what came across clearly is that while SAS agrees generative AI should be embraced, the company also advises measured consideration of why and how it should be used.

I was fortunate enough to spend some one-on-one time with both Bryan Harris, Executive Vice President and Chief Technology Officer at SAS, and Udo Sglavo, Vice President of Advanced Analytics, to delve a bit deeper into what a future working alongside generative AI will look like. Given that SAS has been working with neural networks for decades and is at the forefront of AI in analytics, these two individuals seemed significantly more qualified than most to give an opinion on the topic.

I thought Bryan framed things particularly well when he used the analogy of data lakes. Looking back, he recounted that anyone who started with the proposition of “I want to build a data lake” before understanding why and how they would leverage it was likely to waste time and money in the pursuit.

For Bryan, the same holds for generative AI. Everyone wants to start using it, but have they thought through where the technology holds up and what its best use cases are? For Bryan, it’s about starting with outcomes and then building the technology from there.

Speaking with Udo, I explained that from my own layman’s perspective, I was totally shocked the first time I ended up in a conversation with ChatGPT. I was curious to know if someone like him, working at the forefront of AI technologies, was as shocked as I was.

Udo explained that he was a little shocked at the “jump” in the “ability” of these LLMs (Large Language Models), but that he knew it was coming. SAS has been using and enhancing textual sentiment technology for a long time, so it is aware of the advances coming down the track. However, Udo did concede this particular jump arrived sooner than he expected.

Ultimately, Udo shared Bryan’s perspective on generative AI: focus on desired outcomes first, then look at the technology you need, and recognise that generative AI is not a panacea that solves every problem.

Udo went on to explain why LLMs are not, in fact, the answer for many of the outcomes people need, especially in analytics, where 100% accuracy and traceability of how insights are generated are essential. According to Udo, issues such as hallucinations and model drift mean the accuracy of results served by generative AI must always be questioned. Likewise, humans need to be able to understand and reverse engineer how insights, data and answers are calculated, and that transparency is not possible with generative AI.
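That accuracy concern can be made concrete with a minimal sketch: a model-generated figure is treated as untrusted until it is recomputed by a deterministic, fully auditable calculation. All names and numbers below are illustrative, not SAS APIs or real model output.

```python
# Hypothetical sketch: verify a generative model's numeric claim against a
# deterministic, traceable analytics step before trusting it.

def deterministic_mean(values):
    """Ground-truth calculation: reproducible and fully auditable."""
    return sum(values) / len(values)

def verify_llm_claim(claimed_value, values, tolerance=1e-9):
    """Accept a model-generated figure only if it matches the recomputed one."""
    truth = deterministic_mean(values)
    return abs(claimed_value - truth) <= tolerance, truth

sales = [120.0, 95.0, 143.0, 110.0]

# Suppose a conversational assistant reported an average of 118.0
# (a hallucinated figure; the true mean of this data is 117.0).
ok, actual = verify_llm_claim(118.0, sales)
print(ok, actual)  # False 117.0
```

The point is the division of labour: the generative layer may draft the answer, but a deterministic layer remains the source of record for the number itself.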

Udo and his team have been perfecting these areas for many years, and they use other proven technologies to deliver reliable, trustworthy analytics results.

Bryan also agreed that, ultimately, generative AI needs to find its place. He explained that the underlying technology is not something new for SAS, as they have been working with neural networks for more than 25 years.

In that time, Bryan has seen a few transformative technologies that changed how we go about things. He feels that to really judge the effect of any transformative technology, you have to assess its specific and lasting impact. Bryan suggested that the lasting impact of Hadoop, beyond anything else, was moving us from proprietary to open data. His view of the lasting impact of the cloud is the ability to match compute capacity to compute demand, and he believes that when the dust settles on LLMs, the lasting impact will be prompt-based interaction: the ability to interface with technology via conversational questions and answers.

Udo agrees: he sees a future where LLMs become a powerful interface sitting in front of the stack of technologies that remain the proven choices on which the SAS AI Analytics platform is built.

The power of LLMs will be to interface with the data you need, but that data will still be managed, controlled and secured by other technologies.
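The “LLM as interface” pattern described here can be sketched in a few lines: a conversational layer translates free text into a structured request, but the data itself is only reachable through a governed query function that enforces access control. Everything below is a hypothetical illustration; no real LLM or SAS component is involved.

```python
# Hypothetical sketch of an LLM front-end over governed data. The mock parser
# stands in for a real language model; the governed_query function is the only
# path to the data and validates every request.

GOVERNED_STORE = {"revenue_2023": 4.2, "revenue_2022": 3.7}
ALLOWED_METRICS = {"revenue_2023", "revenue_2022"}

def mock_llm_parse(question):
    """Stand-in for an LLM that maps free text to a structured request."""
    if "2023" in question:
        return {"metric": "revenue_2023"}
    if "2022" in question:
        return {"metric": "revenue_2022"}
    return {"metric": "unknown"}

def governed_query(request):
    """Validates the structured request before serving any data."""
    metric = request["metric"]
    if metric not in ALLOWED_METRICS:
        raise PermissionError(f"metric not permitted: {metric}")
    return GOVERNED_STORE[metric]

answer = governed_query(mock_llm_parse("What was revenue in 2023?"))
print(answer)  # 4.2
```

The design choice mirrors the article’s point: the conversational layer changes how questions are asked, while management, control and security of the data stay with the proven layer behind it.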

It’s a considered and knowledgeable view, and one that makes a lot of sense. The way we interface with and control technology is likely to change hugely as a result of LLMs, and tasks currently reserved for data scientists are likely to be increasingly democratised. As long as that happens on a secure, traceable and trustworthy platform, the future seems bright.