AI Regulation: The Current State of International Affairs

Pam Sornson, JD

March 19, 2024

It is a decided understatement to say that there are legitimate concerns about entities using Artificial Intelligence (AI) for nefarious purposes. The opportunities for misuse are significant, given the technology’s capacity to generate documents, images, and audio that appear entirely authentic.

Accordingly, leaders across most community sectors are discussing and developing rule sets designed to govern the use of the technology. Once those rules are established and embraced, the development and adoption of corresponding enforcement strategies and standards should keep the world safe from the misuse of this unprecedented computing capability. At least, that’s the plan …


AI Governance Drivers

Governments, industry leaders, and technology experts all agree that the threats posed by AI are immense.

In July 2023, United Nations Secretary-General António Guterres warned the world that unchecked AI could “cause horrific levels of death and destruction” and that, if used for criminal or illicit state purposes, it could inflict “deep psychological damage” on the global order.

Also in summer 2023, the International Association of Privacy Professionals (IAPP) analyzed AI adoption practices in numerous settings to evaluate how regulators might address issues that have already arisen. Recent court cases suggest that the protections previously enjoyed by software developers may be eroding as more civil liberty, intellectual property, and privacy cases with AI-based fact patterns reach the courts. As matters of first impression, the outcomes of these early lawsuits will form the foundation of what will undoubtedly become an immense new body of law.

All by itself, generative AI is causing considerable consternation in computer labs and C-suites around the world. AI vendors have used vast quantities of web-based copyrighted material as ‘training data’ for their models, and the owners of those copyrights aren’t happy that their work has been co-opted without permission.

As has been the case with every new technology, the promise of AI and all its permutations creates as many concerns as it does possibilities. The world is now grappling with how to bring those concerns under control.


AI Governance Inputs

Entities invested in putting controls around AI threats are also working on enforcement systems to ensure compliance by AI proponents. Three early entries into the fray help to outline the scope, depth, and breadth of the challenge that all AI developers now face as their products become increasingly widespread:

The Asilomar Principles

While AI programming itself has been around for a while, articulated considerations about its safe usage are relatively new. In 2017, the Future of Life Institute gathered a group of concerned individuals to explore the full range of its opportunities and issues. The resulting ‘Asilomar Principles’ set out 23 guidelines that parse AI development activities into three categories: Research, Ethics and Values, and Longer-Term Issues.

The OECD

The work of the Organisation for Economic Co-operation and Development (OECD) is also notable. The OECD works to establish uniform policy standards related to the economic growth of its 38 market-based, democratic members. These countries research, develop, and share policy guidance across more than 20 policy areas involving economic, social, and natural resources. As a group, the OECD considers AI to be a general-purpose technology that belongs to no one country or entity. Accordingly, its members agree that its use should be grounded in international, multi-stakeholder, and multidisciplinary cooperation to ensure that its implementation benefits all people as well as the planet.

The OECD parses its work into two main categories, each of which has five subparts.

Its Values-Based Principles call for the responsible stewardship of AI in pursuit of inclusive growth, sustainable development, and well-being, guided by human-centered values and fairness. AI developers should make their work transparent and understandable to most users, ensure its safety and security, and be – and be held – accountable for the programs they create.

For policymakers, the OECD recommends establishing international standards that define the safe and transparent development of AI technologies that function compatibly within the existing digital ecosystem. Policy environments should be flexible enough to encourage the growth and innovation of the software while protecting human rights and limiting its capacity to be used for less-than-honorable purposes.

The European Union

On Wednesday, March 13, 2024, the European Parliament voted to adopt the ‘Artificial Intelligence Act,’ the world’s first comprehensive set of AI regulations. It took three years of negotiations, data wrangling, and intense discussions to achieve … “historic legislation [that] sets a balanced approach [to] guarantee [our] citizens’ rights while encouraging innovation” around the development of AI. The law now awaits formal endorsement by the Council of the EU; no significant opposition is expected.

Fundamentally, the AI Act focuses on limiting the risks AI presents, with obligations scaled to the sensitivity of each use case. Some AI systems perform rote tasks that require little analysis or oversight, so they face correspondingly light regulation. Other AI platforms, however, incorporate more sensitive data in their computations, such as private biometric or financial information. Inappropriate use of such sensitive information could be disastrous for the people exposed, and the scale of AI technology can magnify that risk enormously. The AI Act requires developers to demonstrate their models’ trustworthiness and transparency, as well as show that they respect personal privacy laws and do not discriminate. Entities found to be non-compliant with the AI Act risk fines of up to 7% of their global annual turnover.


These are just three of the many governments and organizations working to gain control over the use of AI within their jurisdictions. Leaders in the United States are also focused on these concerns. The second article in this edition of the Pulse looks at what’s happening in America as it, too, reels from the immense and growing impact of AI on virtually all of its systems and communities.
