Trustworthy AI Gets Boost as SAS Launches New Model Cards, AI Governance Services

Data and Artificial Intelligence (AI) leader SAS is launching new trustworthy AI products and services to improve AI governance and support model trust and transparency. Model cards and new AI Governance Advisory services will help organisations navigate the turbulent AI landscape, mitigating risk so they can pursue their AI goals more confidently.

SAS has also published a Trustworthy AI Life Cycle Workflow, mapped to the National Institute of Standards and Technology (NIST) AI Risk Management Framework.

“Our customers are enthusiastic about the potential of AI but remain cautious about when and how to use it,” said Reggie Townsend, Vice President of the SAS Data Ethics Practice. “They’re asking good questions about responsible and ethical AI. Our goal is to give them the tools and guidance, based on decades of experience, to integrate AI in ways that boost profitability while reducing unintended harm.”

Model Cards: Trustworthy AI’s ‘Nutrition Labels’ 

It can be difficult to take something as complex and sophisticated as an AI model and convert it into something easily digestible for everyone involved in the AI lifecycle. And as new rules and regulations are passed around the world, the ability to understand and share with regulators how a model is performing will be crucial.

Model cards, an upcoming feature in SAS® Viya®, will serve stakeholders across the AI lifecycle. From developers to board directors, all stakeholders will find value in a curated tool that supports proprietary and open source models.

Set for release in mid-2024, model cards are best described as “nutrition labels” for AI models and a prescription for trustworthy AI. The SAS approach is to autogenerate model cards for registered models with content drawn directly from SAS products, removing from individual users the burden of creating them. Additionally, because SAS Viya already has an architecture for managing open source, model cards will also be available for open source models, starting with Python models.

Model cards will highlight indicators like accuracy, fairness, and model drift, which is the decay of model performance as conditions change. They include governance details like when the model was last modified, who contributed to it, and who is responsible for the model, allowing organisations to address abnormal model performance internally to continue deploying trustworthy AI.

The model usage section addresses intended use, out-of-scope use cases, and limitations, which will be key factors as transparency and model auditing likely become regulated business operations. Model cards were showcased earlier this year at SAS Insight, a conference for analysts.
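To make the idea concrete, here is a minimal sketch of how the contents of such a model card might be represented in code. The field names, values, and structure below are illustrative assumptions only, not SAS Viya’s actual schema or API.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical model card structure; fields mirror the indicators described above
# (performance, fairness, drift, governance, and usage) but are not SAS Viya's schema.
@dataclass
class ModelCard:
    model_name: str
    accuracy: float                      # headline performance indicator
    fairness_gap: float                  # e.g. largest metric gap across protected groups
    drift_score: float                   # degree of performance decay since deployment
    last_modified: date                  # governance: when the model last changed
    contributors: list[str] = field(default_factory=list)  # governance: who contributed
    owner: str = ""                      # governance: who is responsible for the model
    intended_use: str = ""               # usage: what the model is meant for
    out_of_scope: list[str] = field(default_factory=list)  # usage: explicitly unsupported uses
    limitations: str = ""                # usage: known caveats

# Example card with made-up values for an imaginary claims-triage model
card = ModelCard(
    model_name="claims_triage_v3",
    accuracy=0.91,
    fairness_gap=0.03,
    drift_score=0.12,
    last_modified=date(2024, 5, 1),
    contributors=["data_science_team"],
    owner="model_risk_office",
    intended_use="Prioritise incoming insurance claims for manual review",
    out_of_scope=["automated claim denial"],
    limitations="Trained on historical claims from a single market",
)
```

Autogenerating a record like this at model registration time is what removes the documentation burden from individual model developers.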

“SAS is taking a thoughtful approach to how it helps customers embrace AI, focusing on the practical realities and challenges of deploying AI in real industry settings,” said Eric Gao, Research Director at analyst firm IDC. “Model cards will be valuable for monitoring AI projects and promoting transparency.”

New AI Governance Group Led by Ethical AI Veteran 

With the proliferation of AI, SAS customers have become increasingly concerned with how to use their data in ways that are both productive and safe. To help them on their data and AI journeys, SAS is launching AI Governance Advisory, a value-added service for current customers.

Beginning with a short meeting, SAS AI Governance Advisory will help customers think through what AI governance and trustworthy AI mean in the context of their organisations. SAS has piloted this service, and customers have noted several benefits:

  • Increased productivity from trusted and distributed decision-making.
  • Improved trust from better accountability in data usage.
  • The ability to win and keep top talent who demand responsible innovation practices.
  • Increased competitive advantage and market agility from being “forward compliant.”
  • Greater brand value for confronting the potential impacts to society and the environment.

PZU Insurance of Poland is one of the largest financial institutions in central and eastern Europe. A longtime SAS customer, PZU deploys AI in areas such as claims, sales, fraud detection, and customer care.

“Our AI governance conversations with SAS helped us consider potential unseen factors that could cause problems for customers and our business,” said Marek Wilczewski, Managing Director of Information, Data and Analytics Management (Chief Data Officer/Chief Analytics Officer) at PZU. “We better understand the importance of having more perspectives as we embark on AI projects.”

Industry veteran and ethical AI expert Steven Tiell has been hired as SAS Global Head of AI Governance. Tiell previously led Accenture’s global data ethics and responsible innovation practice and served as Vice President of AI Strategy at DataStax.

Building Trustworthy AI on Emerging Government Standards  

Last year, NIST launched its AI Risk Management Framework. It has become a valuable tool for organisations designing and managing trustworthy AI in the absence of official regulations.

SAS has created a Trustworthy AI Life Cycle workflow that makes NIST’s recommendations easier for organisations to adopt by specifying individual roles and expectations, gathering required documentation, outlining factors for consideration, and leveraging automation to streamline the process. Organisations end up with a production model and documentation showing they performed due diligence to help ensure the model is fair and their processes do not cause harm.


The workflow allows organisations to document their considerations of AI systems’ impacts on human lives. It includes steps to ensure that the training data is representative of the impacted population and that the model predictions and performance are similar across protected classes.

These steps help ensure that the model is not causing disparate impact or harm to specific groups. Furthermore, users can ensure that a model remains accurate over time by creating human-in-the-loop tasks to act when additional attention is needed.
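As a rough illustration of the kind of check such a workflow step can automate, the sketch below compares a model’s favourable-outcome rate across groups and flags the result for human-in-the-loop review when the gap is large. The function, the sample data, and the four-fifths threshold are assumptions for illustration, not part of the SAS workflow itself.

```python
import pandas as pd

def disparate_impact_ratio(scored: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest to the highest favourable-outcome rate across groups."""
    rates = scored.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical scored data: 1 = favourable model decision
scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1],
})

ratio = disparate_impact_ratio(scored, "group", "approved")
if ratio < 0.8:  # four-fifths rule of thumb; route to a human-in-the-loop task
    print(f"Potential disparate impact detected: ratio={ratio:.2f}")
else:
    print(f"No flag raised: ratio={ratio:.2f}")
```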

The SAS Trustworthy AI Life Cycle workflow can be downloaded from the SAS Model Manager Resources GitHub repository and will soon be available through the NIST AI Resource Center.

This announcement was made at SAS Innovate, the data and AI experience for business leaders, technical users, and SAS Partners.
