Foreword:
The rapid increase in interest in, and adoption of, artificial intelligence ('AI') technology, including generative AI (1), has prompted global demands for regulation and corresponding laws. For example, Microsoft promoted the development of 'new regulatory provisions for highly capable AI foundation models' (2) and advocated the creation of an agency in the United States to implement these new regulations, as well as the establishment of licensing requirements for entities wishing to deploy the most advanced AI models. At the same time, Sam Altman, CEO of OpenAI, the developer of ChatGPT, proposed 'the establishment of an agency to issue licences for the development of large-scale AI models, security standards and tests that AI models must pass before being made available to the public' (3), among other measures.
A look at the United States
The New York Times recently reported that the US is 'lagging behind other countries in privacy, speech, and child protection legislation' when it comes to demands for AI regulation. Cecilia Kang, a journalist for the same paper, pointed out that the US is also 'lagging behind on AI regulation', considering that 'EU lawmakers are set to introduce regulations for this technology by the end of the year' through the Artificial Intelligence Regulation. China, by contrast, currently has 'the most comprehensive set of AI regulations in the world, including the recent scheme of measures for the management of generative AI'.
So far, the US has issued only non-binding AI risk management guidelines, the second draft of which was published by the National Institute of Standards and Technology in August 2022 (4). The AI Risk Management Framework, intended for voluntary use, aims to enable companies to 'address risks in the design, development, use, and evaluation of AI products, services, and systems' in light of evolving AI research and development standards.
Shortly thereafter, in October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights (5), which is based on five principles aimed at minimising the potential harm from AI systems.
The principles are as follows:
i. safe and effective systems;
ii. protection from algorithmic discrimination;
iii. data privacy;
iv. notice and explanation;
v. human alternatives, consideration, and fallback.
In May of this year, the Biden-Harris Administration announced new efforts to advance the research, development, and deployment of responsible artificial intelligence that protects individuals' rights and safety and delivers results for the American people.
Artificial intelligence is one of the most powerful technologies of our time, with broad applications. President Biden has been clear: to seize the opportunities that AI presents, we must first manage its risks. To that end, the Administration has taken significant steps to promote responsible AI innovation that puts people, communities, and the public good at its centre and manages the risks to individuals and to our society, security, and economy. These steps include the landmark Blueprint for an AI Bill of Rights and related executive actions, the AI Risk Management Framework, a roadmap for creating a National AI Research Resource, active work to address the national security concerns raised by AI, and the investments and actions announced in early May.
Despite the delays in regulation, it is important to keep an eye on the growing number of new US AI legislative proposals at the federal level. Below we list, from most recent to oldest, the most important federal bills that currently represent the basis of, and the latest developments in, US legislative activity on AI:
No Section 230 Immunity for AI Act (bill to waive Section 230 immunity for generative AI)
Introduced: 14 June 2023
The bill would amend Section 230 of the Communications Decency Act (6), which experts have called "the 26 words that created the Internet" (7). The text of Section 230 reads as follows: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This sentence shields social networks from liability for the content published on their platforms.
The amendment would add a clause stripping that immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI.
The bill's sponsors introduced it as follows: "We cannot make the same mistakes with generative AI that we made with Big Tech on Section 230," said Senator Hawley, one of the bill's sponsors. "When these new technologies harm innocent people, companies must be held accountable. Victims deserve their day in court, and this bipartisan proposal will make that a reality."
"AI companies should be forced to take responsibility for business decisions as they develop products, without any Section 230 legal shield," said Senator Blumenthal, the bill's other sponsor. "This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. Accountability for AI platforms is a key principle of a regulatory framework that addresses risks and protects the public."
Global Technology Leadership Act
Introduced: 8 June 2023
The bill would establish an Office of Global Competitive Analysis to assess the position of the United States in key emerging technologies, such as artificial intelligence, relative to other countries, in order to inform US policy and strengthen US competitiveness.
The sponsors presented the proposal with these reasons: "We cannot afford to lose our competitive advantage in strategic technologies such as semiconductors, quantum computing, and artificial intelligence to competitors like China," said Senator Bennet, one of the sponsors. "To defend our economic and national security and protect US leadership in critical emerging technologies, we must be able to consider both classified and commercial information to fully assess our position. With this information, Congress can make smart decisions about where to invest and how to strengthen our competitiveness."
"This legislation will better synchronise our national security community to ensure that America wins the technology race against the Chinese Communist Party. There is no single federal agency that assesses American leadership in critical technologies like artificial intelligence and quantum computing, despite their importance to our national security and economic prosperity. Our bill will help fill this gap," said the bill's other sponsor, Senator Young.
"In recent years, the United States has made significant investments in key sectors such as semiconductor manufacturing. But as the United States works to outpace our global competitors, it is critical that we have a meaningful way to measure our progress against that of near-peer competitors like China. I am proud to join this bipartisan effort to create a centralised hub responsible for keeping tabs on these developments, which are critical to our economy and national security," said Senator Warner, who also sponsored the bill.
Transparent Automated Governance (TAG) Act
Introduced: 7 June 2023
The bill would require federal agencies to be transparent when using automated and augmented systems to interact with the public or make critical decisions, and for other purposes.
"Artificial intelligence is already transforming the way federal agencies serve the public, but the government needs to be more transparent with the public about when and how it uses these emerging technologies," said Senator Peters, one of the bill's sponsors. "This bipartisan bill will ensure that taxpayers know when they are interacting with certain federal artificial intelligence systems and establish a process for getting answers about why these systems make certain decisions."
AI Disclosure Act
Introduced: 5 June 2023
The bill would require any material generated by artificial intelligence to include the following statement: 'DISCLAIMER: this output has been generated by artificial intelligence.' It would apply to videos, photos, text, audio and any other AI-generated material. The Federal Trade Commission (FTC) would be responsible for enforcement, and violations could result in civil penalties.
The sponsor introduced the proposal as follows: "Artificial intelligence is the most revolutionary technology of our time. It has the potential to be a weapon of disinformation, dislocation and mass destruction," said Congressman Torres. "Creating a regulatory framework for managing the existential risks of artificial intelligence will be one of the great challenges Congress faces in the years and decades to come. There is a danger of both under-regulation and over-regulation. The simplest starting point is disclosure. All generative AI should be required to disclose that it is AI. Disclosure is by no means a magic solution, but it is a common-sense starting point for what will surely be a long road towards federal regulation."
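Purely by way of illustration, the sketch below shows one way a provider might attach the bill's mandated disclaimer to generated output. The generate_reply function is a hypothetical placeholder for any generative-model call; the bill prescribes the disclaimer text, not any particular implementation.

```python
# Minimal sketch only: the AI Disclosure Act would mandate the disclaimer
# text, not this (or any) implementation. `generate_reply` is a
# hypothetical stand-in for a real generative-AI call.

AI_DISCLAIMER = "DISCLAIMER: this output has been generated by artificial intelligence."

def generate_reply(prompt: str) -> str:
    # Placeholder for a call to an actual generative model or API.
    return f"Model answer to: {prompt}"

def generate_with_disclosure(prompt: str) -> str:
    # Prepend the mandated disclaimer so every output carries the notice.
    return f"{AI_DISCLAIMER}\n\n{generate_reply(prompt)}"

if __name__ == "__main__":
    print(generate_with_disclosure("Summarise Section 230 in one sentence."))
```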
And what is happening in Europe in the meantime?
The European Union's ambition to position itself as a world leader in defending and guaranteeing fundamental rights and public interests in the face of emerging artificial intelligence technologies took a decisive step as early as April 2021, with the presentation of the so-called Proposal for a Regulation on Artificial Intelligence (8).
As of today, Europe's first artificial intelligence law is on track to be approved by the EU authorities, probably by the end of the year. On 14 June, the European Parliament (9) gave the green light to the legislation with 499 votes in favour, 28 against and 93 abstentions. The text must now be negotiated with the Council, where the positions of the member states will be addressed. The document approved by Parliament is a stricter version of the initial proposal outlined by the European Commission in 2021: it brings generative AI systems such as ChatGPT under supervision, explicitly recognising that a foundation model may be embedded in a high-risk system.
The proposal establishes three distinct categories of AI systems, characterised according to their level of risk: unacceptable, high, and low or minimal. The last category carries no binding rules but creates a framework for voluntary codes of conduct.
The first category prohibits AI whose use is considered unacceptable because it runs counter to EU values, for instance because it violates fundamental rights. One scenario is the use of AI to manipulate people through subliminal techniques beyond their awareness.
The second category, high-risk systems, raises a problem intrinsic to AI: how do we regulate a technology whose malfunctioning may lead to the violation of our fundamental rights, given its autonomous nature? And, before that, in which situations can such a violation occur, given that we have almost no precedent to draw on? The answer lies in Article 6 of the proposal.
Pursuant to the second paragraph of that article, Annex III of the proposal lists a number of cases in which the use of an AI system is considered high risk: management and operation of critical infrastructure; education and vocational training; employment and management of workers; and so on.
In addition to these cases, the proposal draws on the existing framework of market surveillance by the authorities to address the dilemma. In particular, it relies on familiar concepts such as CE marking (conformity assessment, or ex-ante control) and market surveillance and product conformity (ex-post control).
Thus, any AI system that is a product, or forms part of a product as a safety component, and that must undergo conformity assessment for CE marking will be considered 'high risk'.
However, the emergence of ChatGPT showed that the legislative project was still in its infancy, forcing the Internal Market and Consumer Protection Committee (IMCO) to produce a draft of amendments to the proposal (published on 5 May), which the European Parliament approved on 14 June and which will be incorporated into its final position.
The reality is that the AI systems that have recently entered the market fit the proposal only with difficulty. We are talking about programmes such as ChatGPT, Midjourney and DALL-E, which generate text or images from user prompts, hence the name 'generative artificial intelligence'.
Clearly, these AIs do not fall into the category of unacceptable-risk systems, and they are unlikely to find a place in the 'high risk' classification, yet it does not seem reasonable to leave them unregulated.
For example, if ChatGPT were used to 'determine access or assignment of individuals to educational and vocational training institutions' (point 3(a) of Annex III of the proposal), it would consequently have to be considered a 'high risk' system. In that case, the operator would have to subject the system to specific ex-ante conformity checks and ensure that it complies with the applicable rules once in operation.
To prevent systems like ChatGPT from being subject only to such limited regulation, conditional on their use in certain products, the May amendments introduce a new category into the proposal: the foundation model, i.e. a model trained on a large set of unlabelled data that can be adapted to many different tasks with minimal fine-tuning.
The amendments explicitly recognise that a foundation model may be incorporated into a high-risk system. They also establish a number of obligations for foundation model providers, in particular transparency obligations: disclosing that content was generated by AI, designing the model so that it avoids generating illegal content, and publishing summaries of the copyrighted data used for training.
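Purely as an illustration of what such transparency obligations might look like in practice, the sketch below models a provider's disclosures as a simple machine-readable record. All field and model names are hypothetical and are not drawn from the text of the proposal.

```python
# Hypothetical sketch of a foundation-model transparency record.
# Field names are illustrative; the proposal mandates the disclosures
# themselves, not this (or any) particular format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TransparencyRecord:
    model_name: str                         # hypothetical model identifier
    ai_content_disclosed: bool              # outputs labelled as AI-generated
    illegal_content_safeguards: str         # design measures against illegal output
    copyrighted_training_data_summary: List[str] = field(default_factory=list)

record = TransparencyRecord(
    model_name="example-foundation-model",
    ai_content_disclosed=True,
    illegal_content_safeguards="prompt and output filtering",
    copyrighted_training_data_summary=["news corpus X", "book corpus Y"],
)

print(record)
```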
In short, the arrival of generative artificial intelligence, ChatGPT among others, is yet another unforeseen development to which the European legislator has had to adapt, with greater or lesser success, through amendments such as those cited above. This brings us back to the thought of Jonny Thomson, the Oxford philosophy professor, who points out that Isaac Asimov's laws are missing an essential fourth law: a robot must identify itself (10). We all have the right to know whether we are interacting with a human being or an artificial intelligence.
The essential laws governing robot intelligence, formulated by Asimov in 1942 and initially aimed at an audience of science-fiction readers, have ethical implications for our society. We recall them below:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
In 1985, in 'Robots and Empire' (Italian edition 'I robot e l'impero', Mondadori, 1986), Asimov added a so-called Zeroth Law, which takes precedence over the other three:
0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.
Milan, 24 July 2023
Niccolò Lasorsa Borgomaneri (all rights reserved)
______________________________________________________________________________________________________________
Notes:
(1) For a definition of generative artificial intelligence see https://focus.namirial.it/intelligenza-artificiale-generativa/
(2) See https://fortune.com/2023/05/25/microsoft-president-says-the-u-s-must-create-an-a-i-regulatory-agency-with-rules-for-companies-using-advanced-a-i-models-similar-to-anti-fraud-safeguards-at-banks/
(3) See https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html
(4) See https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf
(5) See https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
(6) Full text at https://www.law.cornell.edu/uscode/text/47/230
(7) See the commentary at https://www.propublica.org/article/nsu-section-230
(8) See https://eur-lex.europa.eu/legal-content/IT/TXT/?uri=CELEX:52021PC0206
(9) See the EU press note at https://www.europarl.europa.eu/news/it/agenda/briefing/2023-06-12/1/intelligenza-artificiale-al-voto-le-nuove-regole-ue
(10) See https://makerfairerome.eu/it/le-tre-leggi-della-robotica-nellera-dellai/