Regulating Artificial Intelligence in International Investment Law



Mark McLaughlin | The Journal of World Investment & Trade

Artificial Intelligence (AI) is emerging as a significant phenomenon in the global economy. As multinational corporations pour capital into acquiring AI start-ups, investment in cognitive AI systems is expected to reach USD 98 billion by 2023. The United States and China have established initiatives to pursue strategic dominance of AI, while several other developed countries are nurturing their own AI sectors. Such is the promise of creative destruction associated with AI that the concept of privacy, the nature of work, the accountability of governments, and the value of data must all be reviewed in response to AI-driven technological advancements. Indeed, concerns about the ensuing public interest implications have provoked a series of responses by national legislatures as States attempt to fill the regulatory vacuum.

While international economic law literature is becoming increasingly cognizant of AI, the interaction between AI and investment treaties remains uncharted territory. Scholars have addressed how AI will exacerbate the ‘digital divide’ within cross-border trade, contribute to data-driven research within international economic law, and even generate treaty text to predict the outcome of negotiations. Broader issues of international arbitration have similarly been scrutinized within the context of AI, for predicting outcomes, selecting arbitrators, and calculating damages. In contrast, AI’s interaction with investment treaties has garnered little attention, and it is this interaction that forms the focus of this article.

The central thesis is that the international investment regime provides an unpredictable legal environment in which to adjudicate the emerging norms and ethics of AI. Reforms to drafting and to practice are necessary to prepare investment treaties for an AI-driven future.

This article proceeds in five Sections. Section 2 considers the components and regulation of AI. It is argued that AI regulation – including limitations on market access, utilization of automated decision-making systems, restrictions on cross-border data flows, and mandated algorithmic transparency – may constitute barriers to investment. Section 3 discusses the conditions for AI to be within the scope of protected investment in international investment agreements (IIAs). Section 4 analyses whether measures targeting AI are in compliance with substantive investment obligations. Having identified areas of potential breach, Section 5 finds that existing exceptions clauses within IIAs are too narrowly drafted to encompass the relevant policy concerns. As a result, Section 6 proposes three reformative measures to optimize investment treaties for the AI-powered investor and AI-powered host State.

The Components and Regulation of Artificial Intelligence

There is no consensus on a definition for AI. It has been defined in four different ways, as computer programs capable of: acting humanly, thinking humanly, thinking rationally, and acting rationally. None offers a suitably firm basis on which to frame regulation.

The first two categories define AI in relation to human characteristics that are themselves indefinable. What is learning? What is self-awareness? What is reasoning? It is impossible to define these terms with enough precision to identify targets of regulation. The third approach, ‘thinking rationally’, is likely to be over-inclusive, as even rudimentary algorithms follow logical laws of thought. Finally, the ‘acting rationally’ approach defines AI by its ability to operate autonomously, adapt to changing circumstances, and pursue goals. The notion of AI as a ‘rational agent’ has proven to be the most influential approach in the field.

However, two aspects of this definition remain challenging. Firstly, assessing whether a computer program is ‘pursuing’ a ‘goal’ involves allusions to intent and consciousness, creating the same ambiguity that attends the imitation of indefinable human characteristics. Secondly, perceptions of autonomy are highly subjective: they rely upon perceptions of foreseeability that necessarily shift as the technology becomes more familiar. Indeed, as John McCarthy remarked, ‘as soon as it works, no one calls it AI any more’.

Therefore, this article will forgo any attempt to define AI from a technical perspective. Instead, it will adopt a normative approach based upon regulations under development in the EU and the United States. The proposed EU AI Act defines an AI system as ‘software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’. Similarly, the US Algorithmic Accountability Act defines an ‘automated decision system’ as ‘a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers’.

AI regulation is being constructed around certain techniques and technologies, and the risks they pose to those with whom they interact. Similarly, international investment law protects and regulates certain assets, activities, and public interests. Recognizing the points at which AI intersects with investment law therefore involves identifying the assets, activities, and public interest implications, or risks, of AI. As a first step, this requires identifying its components and applications.

Mark McLaughlin is a full-time faculty member and Visiting Assistant Professor of Law at the Singapore Management University’s School of Law.
