Data and Artificial Intelligence (AI) leader SAS is launching new trustworthy AI products and services to improve AI governance and support model trust and transparency. Model cards and new AI Governance Advisory services will help organisations navigate the turbulent AI landscape, mitigating risk and helping them pursue AI goals more confidently.
SAS has also published a Trustworthy AI Life Cycle Workflow, mapped to the National Institute of Standards and Technology (NIST) AI Risk Management Framework.
“Our customers are enthusiastic about the potential of AI but remain cautious about when and how to use it,” said Reggie Townsend, Vice President of the SAS Data Ethics Practice. “They’re asking good questions about responsible and ethical AI. Our goal is to give them the tools and guidance, based on decades of experience, to integrate AI in ways that boost profitability while reducing unintended harm.”
Model Cards: Trustworthy AI’s ‘Nutrition Labels’
It can be difficult to take something as complex and sophisticated as an AI model and convert it into something easily digestible for everyone involved in the AI lifecycle. And as new rules and regulations are passed around the world, the ability to understand and share with regulators how a model is performing will be crucial.
Model cards, an upcoming feature in SAS® Viya®, will serve stakeholders across the AI lifecycle. From developers to board directors, all stakeholders will find value in a curated tool that supports proprietary and open source models.
Releasing mid-2024, model cards are best described as “nutrition labels” for AI models and a prescription for trustworthy AI. The SAS approach is to autogenerate model cards for registered models with content directly from SAS products, removing the burden from individual users to create them. Additionally, because SAS Viya already has an existing architecture for managing open source, model cards will also be available for open source models, starting with Python models.
Model cards will highlight indicators like accuracy, fairness, and model drift, which is the decay of model performance as conditions change. They include governance details like when the model was last modified, who contributed to it, and who is responsible for the model, allowing organisations to address abnormal model performance internally to continue deploying trustworthy AI.
The model usage section addresses intended use, out-of-scope use cases, and limitations, which will be key factors as transparency and model auditing likely become regulated business operations. Model cards were showcased earlier this year at SAS Insight, a conference for analysts.
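SAS has not published the model card schema, but conceptually a card gathers the indicators, governance details, and usage notes described above into one record. A minimal illustrative sketch in Python (all field names are hypothetical, not the SAS Viya schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    """Illustrative model card; field names are hypothetical."""
    model_name: str
    last_modified: str                # governance: when the model last changed
    contributors: List[str]           # governance: who worked on the model
    owner: str                        # governance: who is accountable for it
    accuracy: float                   # performance indicator
    fairness_gap: float               # e.g. difference in outcomes across groups
    drift_score: float                # decay of performance as conditions change
    intended_use: str                 # model usage section
    out_of_scope_uses: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)

card = ModelCard(
    model_name="claims-triage-v2",
    last_modified="2024-05-01",
    contributors=["data-science-team"],
    owner="model-risk-office",
    accuracy=0.91,
    fairness_gap=0.03,
    drift_score=0.12,
    intended_use="Prioritise incoming insurance claims for review",
    out_of_scope_uses=["Automated claim denial without human review"],
)
```

Autogenerating such a record at model registration time, as SAS describes, means the card stays in sync with the model rather than relying on developers to maintain a separate document.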
“SAS is taking a thoughtful approach to how it helps customers embrace AI, focusing on the practical realities and challenges of deploying AI in real industry settings,” said Eric Gao, Research Director at analyst firm IDC. “Model cards will be valuable for monitoring AI projects and promoting transparency.”
New AI Governance Group Led by Ethical AI Veteran
With the proliferation of AI, SAS customers have become increasingly concerned with how to use their data in ways that are both productive and safe. To help them on their data and AI journeys, SAS is launching AI Governance Advisory, a value-added service for current customers.
Beginning with a short meeting, SAS AI Governance Advisory will help customers think through what AI governance and trustworthy AI mean in the context of their organisations. SAS has piloted this service, and customers have noted several benefits:
- Increased productivity from trusted and distributed decision-making.
- Improved trust from better accountability in data usage.
- The ability to win and keep top talent who demand responsible innovation practices.
- Increased competitive advantage and market agility from being “forward compliant.”
- Greater brand value for confronting the potential impacts to society and the environment.
PZU Insurance of Poland is one of the largest financial institutions in central and eastern Europe. A longtime SAS customer, PZU deploys AI in areas such as claims, sales, fraud detection, and customer care.
“Our AI governance conversations with SAS helped us consider potential unseen factors that could cause problems for customers and our business,” said Marek Wilczewski, Managing Director of Information, Data and Analytics Management (Chief Data Officer/Chief Analytics Officer) at PZU. “We better understand the importance of having more perspectives as we embark on AI projects.”
Industry veteran and ethical AI expert Steven Tiell has been hired as SAS Global Head of AI Governance. Tiell, who led Accenture’s global data ethics and responsible innovation practice, is also the former Vice President of AI Strategy at DataStax.
Building Trustworthy AI on Emerging Government Standards
Last year, the US National Institute of Standards and Technology (NIST) launched an AI Risk Management Framework. It has become a valuable tool for organisations designing and managing trustworthy AI in the absence of official regulations.
SAS has created a Trustworthy AI Life Cycle workflow, which makes NIST’s recommendations easier for organisations to adopt by specifying individual roles and expectations, gathering required documentation, outlining factors for consideration, and leveraging automation. Organisations end up with a production model and documentation showing that they performed due diligence to help ensure the model is fair and their processes do not cause harm.
The workflow allows organisations to document their considerations of AI systems’ impacts on human lives. It includes steps to ensure that the training data is representative of the impacted population and that the model predictions and performance are similar across protected classes.
These steps help ensure that the model is not causing disparate impact or harm to specific groups. Furthermore, users can ensure that a model remains accurate over time by creating human-in-the-loop tasks to act when additional attention is needed.
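The parity check described above can be illustrated with a short, hypothetical Python sketch that compares positive-prediction rates across protected groups (a demographic-parity check; the function names, data, and threshold are illustrative, not part of the SAS workflow):

```python
from collections import defaultdict

def positive_rates(predictions, groups):
    """Rate of positive predictions per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in positive rates between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Toy data: model predictions and the protected group of each record.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rates(preds, groups)   # {'A': 0.75, 'B': 0.25}
gap = parity_gap(rates)                 # 0.5

# In a human-in-the-loop workflow, a gap above some agreed threshold
# would create a task for a reviewer rather than silently deploying.
if gap > 0.1:
    print(f"Parity gap {gap:.2f} exceeds threshold; flag for review")
```

A check like this, run on both training data and live predictions, is one way to operationalise the "similar performance across protected classes" step.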
The SAS Trustworthy AI Life Cycle workflow can be downloaded from the SAS Model Manager Resources GitHub and will soon be available through the NIST AI Resource Center.
This announcement was made at SAS Innovate, the data and AI experience for business leaders, technical users, and SAS Partners.