
CAS Regulations for AI

The PM-EAC suggests that AI should be regulated as a complex adaptive system. While there is much to be said for this approach, the paper's articulation of it fails to take into account many essential features of modern AI.

This is a link-enhanced version of an article that first appeared in Mint. You can read the original here.


Last month, the Prime Minister's Economic Advisory Council (PM-EAC) released a paper proposing a new approach to regulating Artificial Intelligence (AI). It argues that while our current approach of enacting reactive regulations might work in a static, linear system with predictable risks, it is unlikely to work in the context of AI, which comprises emergent, non-linear systems. Since AI is a dynamic network of diverse agents whose interactions generate emergent behaviours, the paper argues, we need to think of it as a complex adaptive system (CAS) and design regulations accordingly.

Regulating CAS

We already have experience dealing with complex adaptive systems like stock markets. The paper attempts to extract regulatory principles from those systems so that we can apply them to AI. For instance, it suggests that we put in place guardrails and partitions that define the operational spaces within which AI systems may function, so that we can be sure they will not accidentally stray into potentially hazardous territory. It also calls for building manual overrides and authorization choke-points directly into these AI systems, so that humans can take control of operations where needed. It makes the case for "transparency and explainability," so that there will always be public scrutiny of these systems. The PM-EAC paper also suggests that we ensure "distinct accountability," so that we can always identify the entity or individual responsible for any unintended consequence. Finally, the paper recommends an AI regulator with the expertise and the mandate to recalibrate regulations on the fly, keeping pace with the dynamic requirements of a CAS.
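To make the notion of overrides and choke-points concrete, here is a minimal sketch of what an authorization choke-point might look like in code. Everything here, from the names to the risk categories, is a hypothetical illustration of the principle, not anything specified in the paper.

```python
# A minimal sketch of an authorization choke-point: every high-impact
# action proposed by an AI system must pass through a human gate before
# it executes. All names and risk categories here are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str  # what the AI system wants to do
    risk_level: str   # e.g. "low" or "high"

def human_approves(action: ProposedAction) -> bool:
    """Manual override point: a human operator reviews the action."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def choke_point(action: ProposedAction, execute: Callable[[], None]) -> None:
    """Low-risk actions proceed automatically; high-risk ones
    require explicit human sign-off before they run."""
    if action.risk_level == "high" and not human_approves(action):
        print("Action blocked by human override.")
        return
    execute()

# Usage: the AI proposes an action; the choke-point decides whether it runs.
choke_point(
    ProposedAction("reroute grid power to sector 7", risk_level="high"),
    lambda: print("...executing action..."),
)
```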

If nothing else, categorizing AI as a CAS is a refreshingly novel approach to a particularly challenging problem. AI policies tend to be knee-jerk responses to manifestations of harm resulting from the use of this technology, but regulators using this whack-a-mole approach will constantly find themselves behind the curve. Taking a step back and thinking of the entire AI landscape as a CAS, one that displays emergent behaviour and is continuously and spontaneously evolving, is a more effective way to arrive at a workable long-term solution.

For that reason, I agree with the proposal to put in place a dedicated and agile expert regulatory body with the power to issue directions and amend regulations on the fly as and when required. If such a regulator is obliged to operate in accordance with a set of principles aligned with the democratic values of the country, I see this as no different from the principles-based approach to regulation that I have called for in earlier articles in this column.

Partitions and Accountability

That said, there is much in the paper that I disagree with, starting with its idea of guardrails and partitions. The concept itself may be sound, but given how AI has been built so far, it would be virtually impossible to implement. Much of AI development is modular: shared components lower down the stack, with customization mainly taking place at the higher levels. Since countless applications inherit behaviour from the same foundations, there is no clean seam along which to partition them.

As a consequence, this is, for all practical purposes, a ship that has already sailed. While it is still possible to 'air-gap' some core systems (weapons or power grids, for instance), doing so will force us to build them from scratch and, as a result, forgo many of the benefits that could have come from building on top of what has already been developed.
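To see why partitioning is so hard, consider how a typical AI application is layered: a thin veneer of customization over a shared foundation that the application's developer neither builds nor controls. The sketch below is stylized, with hypothetical class names standing in for real systems.

```python
# Stylized sketch of the modern AI stack: customization happens at the
# top, but behaviour ultimately flows from a shared base model below.
# 'SharedFoundationModel' is a hypothetical stand-in, not a real library.

class SharedFoundationModel:
    """The common lower layer that thousands of applications build on."""
    def generate(self, prompt: str) -> str:
        return f"<base-model output for: {prompt}>"

class DomainApp:
    """Customization lives up here: a prompt template over the base."""
    def __init__(self, base: SharedFoundationModel, instructions: str):
        self.base = base
        self.instructions = instructions

    def answer(self, query: str) -> str:
        return self.base.generate(f"{self.instructions}\n{query}")

# Two very different applications share the same foundation; walling
# one off means rebuilding that entire foundation from scratch.
base = SharedFoundationModel()
medical = DomainApp(base, "You are a radiology assistant.")
finance = DomainApp(base, "You are a trading assistant.")
print(medical.answer("Flag anomalies in this scan."))
print(finance.answer("Summarize today's market moves."))
```

Partitioning the medical application from the financial one achieves little here; both inherit whatever the shared foundation does.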

This is also why the goal of ensuring "distinct accountability" might be impossible to achieve. Today, AI is designed to be interoperable, with access provided through application programming interfaces (APIs) built for deep integration of AI into other digital products. Services like IFTTT and Zapier take this interoperability even further by allowing do-it-yourself combinations of services without any need for coding expertise. All of which is to say that even though we are at the dawn of the AI age, it may already be impossible to pin distinct responsibility for AI outcomes on a single individual or entity.
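The accountability problem becomes obvious the moment you trace a single outcome through one of these integrations. The sketch below chains three hypothetical services in the style of a no-code automation; every URL, endpoint, and payload is an assumed illustration, not a real API.

```python
# Sketch of an IFTTT/Zapier-style chain: one user-visible outcome is
# produced by several independently owned services. All endpoints and
# payloads here are hypothetical illustrations.

import json
from urllib import request

def call_service(url: str, payload: dict) -> dict:
    """POST a JSON payload to a service and return its JSON response."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# A single workflow stitched across three different providers:
transcript = call_service("https://speech.example.com/transcribe",
                          {"audio_url": "https://files.example.com/call.wav"})
summary = call_service("https://llm.example.com/summarize",
                       {"text": transcript["text"]})
call_service("https://crm.example.com/update",
             {"note": summary["summary"]})
# If the final CRM note is wrong, which of the three entities is
# 'distinctly accountable'? Each only performed its own small step.
```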

Oversight and Explainability

I am just as sceptical of the paper’s blind insistence on human oversight and the need always to have humans in the loop. One of the reasons why we moved to automation in the first place was to avoid biased human decision-makers. Now that we have committed ourselves to this path and integrated machines into our workflows, we have reached a point where humans can no longer keep up. Our poorer senses and slower reaction times mean that we are no match for our machine counterparts.

And then there is the demand for transparency and explainability. As I have argued before, whenever we insist on transparency, we often trade away performance. While in certain circumstances, such as where human life and liberty are at stake, this might be appropriate, in others it will not be. For instance, AI can analyse radiology images with far greater accuracy than humans can. If this gives me a better chance of detecting a potentially fatal disease, I don't see why I should give it up simply because we need algorithms to be explainable.

Using a CAS approach to formulate regulations for AI is indeed a novel and refreshing way to attack a wicked problem. But we cannot blindly apply these regulatory principles to AI without a proper understanding of how they will affect the way the technology is actually built and deployed. Instead, we should work on adapting CAS principles so that we can achieve the outcomes we desire.
