Written By: Jenn Mullen, Emerging Technology Solutions Manager, Keysight Technologies
Breakthroughs in Artificial Intelligence (AI) are coming at an accelerating pace and, with them, immense economic and societal benefits across a wide range of sectors. AI's predictive capabilities, along with the automation it enables, support operational optimisation, resource allocation, and personalisation applications. However, these benefits are accompanied by growing concerns about potential misuse that must be addressed to ensure public safety and trust in this revolutionary technology. Regulations are needed to ensure responsible AI—or AI that is developed and deployed in ways that align with societal values, safety principles, and ethical standards.
Considerations for AI Regulation to Promote Responsible AI
There is debate around the role of government in AI regulation. Supporters of tighter regulation see it as critical to ensuring consumer trust in AI tools. Detractors are concerned that regulations will disadvantage smaller firms that lack the resources of multinational tech giants. This debate comes amidst rapid innovation, with increasingly complex AI algorithms being embedded into a growing number of products across sectors, leaving regulators struggling to find a balanced framework that guides the development of responsible AI without impeding innovation.
Societal Considerations for Responsible AI
Artificial Intelligence has significant implications for employment, education, and social equality. Regulations should ensure that AI algorithms promote fairness in line with societal values and human decision-making. They can do so by helping to manage and mitigate negative impacts such as job displacement due to automation, while promoting reskilling programs, ensuring equitable access to AI technologies, and fostering inclusive development processes.
Regulations can enforce ethical standards in AI development, ensuring that AI systems are designed with fairness, accountability, and transparency in mind. This includes addressing biases in AI algorithms, protecting against misuse, and ensuring that AI respects human rights and values. Regulations are also necessary for overseeing or limiting the development and deployment of AI systems that may cause harm, such as invasive surveillance technologies or tools that could undermine democratic processes.
Data Privacy and Security Considerations Are Central to Responsible AI
With AI systems often relying on large datasets, including personal information, regulations can help protect individuals’ data and privacy. They can enforce data protection standards, consent requirements, and limitations on data usage, ensuring that AI does not infringe upon individuals’ privacy rights.
Regulations are crucial for ensuring the safety and security of AI systems. They can set guidelines for the robustness and reliability of AI, minimising risks associated with failures or unintended consequences. This is particularly important in critical areas such as healthcare, transportation (like autonomous vehicles), and finance, where AI malfunctions could have severe consequences.
Innovation and Collaboration Considerations
Responsible AI technology transcends national borders, and regulations play a crucial role in facilitating international collaboration and common standards. Global cooperative innovation is vital for addressing shared challenges such as AI security, ensuring AI system interoperability, and managing the global impact of AI on labour markets.
Regulations for responsible AI can create a level playing field that encourages healthy competition and innovation while building public trust in AI technologies. By setting clear standards, regulations can reduce uncertainties that might hinder investment and development in AI. Trust is essential for the widespread adoption of AI technologies, and regulations can help assure users that AI systems are safe and reliable.
A New Global Standard for AI Regulation
In February 2024, all 27 member states of the European Union (EU) endorsed the Artificial Intelligence Act (AI Act), the world's first comprehensive AI legislation. The AI Act establishes a broad definition of AI that applies to various entities, including providers, deployers, importers, and distributors of AI systems. The legislation uses a risk-based classification system that allows the law to evolve along with AI technology. Most systems will fall into the lower-risk categories, but all are subject to procedural and transparency obligations pertaining to their use.
The new legislation also prohibits AI systems deemed to pose "unacceptable risks", meaning those that threaten people, such as systems that use subliminal techniques to influence behaviour or that exploit the vulnerabilities of specific groups. The 'high-risk' category applies to an estimated 5–15% of AI systems and accounts for the majority of obligations in the AI Act. It covers safety components and products already regulated by EU safety legislation, as well as standalone AI systems used in public and private sector applications such as education and recruitment, biometric identification, determining access to essential services, and the management of critical infrastructure and border security.
Legal experts and technologists following the AI Act expect it to be formally adopted in the summer of 2024, with full enforcement to follow two years later. There are exceptions where enforcement comes into effect earlier: prohibitions on the highest-risk systems will be enforced six months after the act is adopted, and rules surrounding general purpose AI (GPAI) will be enforceable after twelve months. Drafts of the legislation are publicly available and give businesses the requirements they will need to meet to integrate responsible AI into their product development and operational processes.
Prepare Early
The EU hopes that the AI Act will serve as the global standard for regulating AI and promoting responsible AI. It is also a strong indicator of the types of regulations other countries are looking to implement soon. Tech companies should review their compliance frameworks and ensure that they align with the requirements laid out by the EU. These regulations will encourage public trust and ensure that society and industry continue to reap the transformative benefits of AI innovation.