
The AI ecosystem is complex and dynamic: Its regulation should acknowledge that


Last Thursday, Meta announced the newest iteration of its large language model (LLM), Llama 3. The new model aims to dislodge OpenAI as the market leader through various improvements driven by what Meta claims are “high quality” training data sets and new coding capabilities. Meta’s chief product officer, Chris Cox, predicts that future versions of the model will include “multimodality”-driven advanced reasoning. While Meta’s aspirations grab headlines, the sheer complexity of the systems will likely lead the predictions of Cox and others to fall short. Llama 2, for example, failed to understand basic context. It is critical that regulators recognize that the artificial intelligence ecosystem is complex and allow developers to adjust and improve models throughout the process of deployment.

Researchers, engineers, companies, academic institutions and government agencies work together across disciplines and industries to integrate AI into a wide range of complex socio-technical-economic systems, as illustrated by the cases of ChatGPT and Gemini. The development of these foundation models requires collaboration among linguists, computer scientists and engineers, and firms with the computing power and data necessary to build and train the models. The process also requires funders to finance development of the models, scientists from other disciplines, such as sociology and ethics, and firms that will use the models in customer-facing applications like websites.

Consequently, the resulting ecosystem is incredibly complex, and it exhibits the properties of complex systems: incomplete knowledge, uncertainty, unpredictability, asynchronicity and non-decomposability. The behavior of the overall system cannot be easily predicted or controlled because of its many interconnected components. Hence, the diffusion of AI applications creates new challenges for understanding, explaining and controlling the emergent behaviors of the coupled systems. Consider, for example, the propensity of LLMs to “hallucinate” and report incorrect information, which independent research suggests occurs around 20 percent of the time, even in the most “truthful” systems currently available. Due to the complexity of the models, even their creators are unable to explain why or how each specific “untruth” is generated; hence it is difficult to create systematic ways of detecting or deterring such behavior.

Governance of complex systems, like the AI ecosystem, requires policymakers to take account of varying perspectives, unintended consequences, and unpredictable emergent behaviors, both of the systems themselves and of the humans responding to them. As applications can be developed and deployed in different, and multiple, jurisdictions, and the effects and impacts can play out across many different sectors, it may not be clear where responsibility for regulation lies. At the very least, effective governance will require coordination and collaboration across multiple stakeholders. This level of coordination is itself complex, as the ecosystem is constantly changing as new systems are developed, new applications are deployed and more experience is gained.

To date, regulation in both the EU and the U.S. has been predicated upon risk and risk management, in order to provide assurances to society that AIs are developed and deployed in a manner deemed “safe.”

The EU rules derive from the continent’s experience in embracing regulation to ensure product safety and protect individuals from known harms associated with specific AI uses and breaches of privacy. Systems are classified according to perceptions of the risk they pose. Banned applications include those that manipulate individuals’ behavior in specific undesirable ways or that use particular technologies (e.g., biometric data, facial recognition) in prescribed circumstances.

The high-risk category, which requires extensive documentation, auditing and pre-certification, draws extensively on existing EU product-safety conformity legislation (e.g., toys, protective equipment, agricultural and forestry vehicles, civil aviation and rail system interoperability), as well as applications in areas where physical safety is prioritized (e.g., critical infrastructure) or where risks of psychological or economic harm may ensue (e.g., education, employment, access to services). Low-risk applications, which need only meet transparency obligations, are those that perform narrow procedural tasks, that focus on improving the outcome of human decisions, or in which ultimate decision-making remains under the control of human decision-makers.

The U.S. Office of Management and Budget’s provisions for government use of AIs are less restrictive and prescriptive than the EU rules, but the focus is still on addressing a subset of AI risks and the governance and innovation issues directly pertaining to agencies’ use of AI. Specifically, the risks addressed result from “reliance on AI outputs to inform, influence, decide, or execute agency decisions or actions, which could undermine the efficacy, safety, equitableness, fairness, transparency, accountability, appropriateness, or lawfulness of such decisions or actions.”

In both cases, the risks addressed arise almost exclusively in relation to specific products, activities, decisions or uses of AIs, rather than the complex ecosystems in which they operate. The relevant circumstances are narrowed down to a specific set of situations, actions, actors and consequences that are already largely known and controllable. Even the banned EU uses are limited to specific outcomes that have already been largely identified and described. Both regimes require single named individuals to be ultimately accountable for the AIs and their regulatory reporting and management.

Neither set of regulations addresses the complexity, uncertainty, unpredictability, asynchronicity and non-decomposability of the ecosystems in which the AIs will operate. Indeed, references to “complexity” and “uncertainty” are conspicuous by their absence. Nor does either framework appear to accommodate the extensive multi-stakeholder collaboration and multiple perspectives necessary for governing complex dynamic systems.

Perhaps it is time for some regulatory humility and acknowledgement of what these regulations can and cannot achieve. They do not give assurances of safety as we proceed with the development and deployment of AIs. Neither do they acknowledge what we know, don’t know and can’t know, because of the bounded rationality of the humans overseeing them. They simply endeavor to manage for the subset of risks that have already been identified or anticipated. We should still expect some surprises from unexpected emergent behaviors, and from the discoveries of things we previously did not know or understand, as we develop our experience of the new ecosystems in operation.

The question is: How do we anticipate managing in those circumstances? Some leadership is needed in the discussion of how we want our societies to evolve in the face of these uncertainties. We cannot presume that existing, backward-looking regulatory efforts are either sufficient or a substitute for this bigger and more complex endeavor, which must necessarily be forward-looking, for a future that is inherently uncertain.

Bronwyn Howell is a nonresident senior fellow at the American Enterprise Institute, where she focuses on the regulation, development and deployment of new technologies.

