AI Regulation: Fears and a Framework

Pam Sornson, JD

March 5, 2024

Even as artificial intelligence (AI) offers immense promise, it also threatens equally immense harm, at least in the minds of the industry professionals who recently weighed in on the topic. In a 2023 letter, more than 350 technology leaders shared with policymakers their concerns about the dangers AI poses in its present, unfettered form. Their warning is notable not only because the technology already closely mimics human activities, but also because of the role these AI pioneers have played in designing, developing, and propagating it around the world. Requesting a ‘pause’ in further development, the signatories argue that halting AI progress until implementable rules are created and adopted would allow standards to be put in place before the technology evolves into a damaging and uncontrollable force.


Industry Consensus: “AI Regulation Should be a Global Priority”

The 22-word letter is concise in its warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The stark statement embodies the fears arising among industry leaders as they watch unimpeded AI technology permeate ever more, and ever larger, sectors of the global community.

The letter was published by the Center for AI Safety (CAIS), a nonprofit organization dedicated to reducing large-scale risks posed by AI. Dan Hendrycks, its executive director, told Canada’s CBC News that industrial competition built on AI has led to a kind of ‘AI arms race,’ similar to the nuclear arms race that dominated world news through the 20th century. He further asserts that AI could go wrong in many ways as it is used today, and that ‘autonomously developed’ AI – technology generated by its own function – raises equally significant concerns.

Hendrycks’ concerns are shared by those who signed the letter, among them Geoffrey Hinton and Yoshua Bengio, two of the three researchers widely considered the ‘Godfathers’ of modern AI (together with Yann LeCun, their work to advance AI technology won the trio the Turing Award, often called the Nobel Prize of computing, in 2019). Other notable signers include executives from Google (Lila Ibrahim, COO of Google DeepMind), Microsoft (CTO Kevin Scott), OpenAI (CEO Sam Altman), and Gates Ventures (Bill Gates), who joined the group in raising the issue with global information security leaders. Analysts with eyes on the technology agree that its threat justifies worldwide collaboration. “Like climate change, AI will impact the lives of everyone on the planet,” says technology analyst and journalist Carmi Levy. The collective message to world leaders: “Companies need to step up … [but] … Government needs to move faster.”


Where to Begin: Regulating AI

Even before the letter was released, the U.S. National Institute of Standards and Technology (NIST) was already developing AI risk management guidance for the United States. Its January 2023 “AI Risk Management Framework 1.0” (AI RMF 1.0) set the initial parameters for America’s embrace of AI regulation, defining AI as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments (adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).”

Define the Challenge

Framing the Risk

The NIST experts categorize AI threats alongside other, similar national security concerns, such as domestic terrorism or international espionage. Different forms of AI technology, for example, can pose

short- or long-term concerns,

with higher or lower probabilities of occurring, and

with the capability of emerging locally, regionally, nationally, or in any combination of the three.

Managing those risks within those parameters requires accurate

assessment,

measurement, and

prioritization, as well as

an analysis of relevant risk tolerance by users (one way of recording a risk along these dimensions is sketched below).
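To make that framing concrete, the snippet below shows one way an organization might record a risk along those dimensions (time horizon, likelihood, geographic scope) and rank it against its own risk tolerance. It is a minimal, hypothetical sketch: the class, field names, and scoring rule are illustrative choices, not part of the NIST framework.

# Illustrative sketch only: the AI RMF does not prescribe code. The field
# names, enumerations, and the priority rule below are hypothetical, chosen
# to mirror the framing dimensions described above.
from dataclasses import dataclass
from enum import Enum


class Horizon(Enum):
    SHORT_TERM = 1
    LONG_TERM = 2


class Scope(Enum):
    LOCAL = 1
    REGIONAL = 2
    NATIONAL = 3


@dataclass
class AIRisk:
    description: str
    horizon: Horizon
    likelihood: float      # estimated probability of occurring, 0.0-1.0
    scopes: set[Scope]     # a risk may emerge at several levels at once
    impact: float          # estimated severity if it occurs, 0.0-1.0

    def priority(self, risk_tolerance: float) -> float:
        """Rank a risk for triage: expected severity scaled by how far it
        exceeds the organization's stated tolerance (hypothetical rule)."""
        exposure = self.likelihood * self.impact
        return max(0.0, exposure - risk_tolerance)


# Usage: assess, measure, and prioritize a catalogued risk.
risk = AIRisk(
    description="Model produces unreliable outputs in a safety-critical setting",
    horizon=Horizon.SHORT_TERM,
    likelihood=0.3,
    scopes={Scope.LOCAL, Scope.REGIONAL},
    impact=0.8,
)
print(round(risk.priority(risk_tolerance=0.1), 2))  # 0.14 -> worth escalating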

Assessing Programming Trustworthiness

At the same time, the agency looks for evidence that a given technology is already trustworthy enough to use safely. Factors that play into this assessment include the software’s:

validity and reliability for its purpose,

inherent security programming,

accountability and resiliency capacities,

functional transparency, and

overall fairness for users, to name just a few (a simple scoring sketch follows this list).
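As a rough illustration of how such an assessment might be recorded, the snippet below scores a system against those characteristics. The characteristic names follow the list above, while the 0-5 scale and the acceptance threshold are assumptions made for the example, not values specified by the AI RMF.

# Hypothetical trustworthiness checklist; the 0-5 scale and the 3.5 threshold
# are illustrative choices, not values specified by the AI RMF.
trust_scores = {
    "validity_and_reliability": 4,
    "security": 3,
    "accountability_and_resiliency": 4,
    "transparency": 2,
    "fairness": 3,
}

average = sum(trust_scores.values()) / len(trust_scores)
weakest = min(trust_scores, key=trust_scores.get)

print(f"average trustworthiness score: {average:.1f} / 5")   # 3.2 / 5
print(f"weakest characteristic: {weakest}")                  # transparency

# A simple gate: flag the system for review if any characteristic is weak
# or the overall average falls below the (assumed) acceptance threshold.
needs_review = average < 3.5 or min(trust_scores.values()) <= 2
print("needs review:", needs_review)                          # True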

Generating Framework Functions

At its heart, the AI RMF 1.0 enables the conversations, comprehension, and actions that allow AI users to develop sound risk management practices. Its four primary functions follow a process that creates an overarching culture of AI risk management, populated by the steps needed to reduce AI risks while optimizing AI benefits.

Develop the Protocols

Govern: The ‘Govern’ function outlines the strategies that seek out, clarify, and manage risks arising from AI activities. Users familiar with the work of the enterprise can identify and reconfigure the circumstances in which an AI program might deviate from existing safety or security requirements. Subcategories within the ‘Govern’ function address current legal and regulatory demands, policy sets, active practices, and the entity’s overarching risk management schematic, among many other factors.

Map: AI functions interact with many interdependent resources, each of which poses its own security concern. AI implementation requires an analysis of each of those resources and of how it would impact or impede AI adoption and safe usage. This function anticipates that safe and appropriate AI decisions can be inadvertently rendered unsafe by later inputs and determinations; identifying these challenges early reduces their opportunity to cause harm in the future.

Measure: This step naturally follows the Map function, directing workers to assess, benchmark, and monitor identified risks while staying alert to emerging ones. Collecting comprehensive data against relevant performance, functionality, and safety/security metrics gives entities control over how their AI implementation functions within their organization and industry, both at launch and throughout its productive lifecycle.

Manage: Activities involved in managing AI risks follow the protocols set out in the Govern function, giving organizations strategies to respond to, oversee, and recover from AI-involved incidents or events. The Manage function anticipates that oversight of AI risks and concerns will be incorporated into enterprise actions in the same way that other industry standards and mandates are followed.
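One way to see how the four functions fit together is as stages applied to every catalogued risk. The outline below is a speculative sketch rather than an implementation of the AI RMF: the activity lists paraphrase the descriptions above, and the review loop is a hypothetical procedure, not one mandated by NIST.

# Purely illustrative outline of the four AI RMF functions as a workflow;
# the activity lists paraphrase the descriptions above and are not exhaustive.
AI_RMF_FUNCTIONS = {
    "Govern": [
        "establish policies, accountability, and a risk management culture",
        "align AI activities with legal and regulatory requirements",
    ],
    "Map": [
        "inventory interdependent resources and the context of use",
        "identify where downstream inputs could make a safe decision unsafe",
    ],
    "Measure": [
        "assess, benchmark, and monitor identified risks",
        "track performance and safety/security metrics across the lifecycle",
    ],
    "Manage": [
        "prioritize and respond to AI-involved incidents",
        "recover from events and feed lessons back into Govern",
    ],
}


def review_cycle(risk_register: list[str]) -> None:
    """Walk each catalogued risk through the four functions in order
    (a hypothetical review loop, not a NIST-mandated procedure)."""
    for risk in risk_register:
        for function, activities in AI_RMF_FUNCTIONS.items():
            print(f"[{function}] {risk}: {activities[0]}")


review_cycle(["unreviewed model update deployed to production"])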


The United States is one of many entities working to establish viable controls over a potentially uncontrollable resource. Other nations, groups of nations, industries, and public and private companies are all engaged in creating regulations that allow for the fullest implementation of AI for the public good while reducing its existential threat. The optimal outcome of all these efforts is a safe and secure technology that advances humankind’s best interests while reducing the risks the technology itself creates.
