United States: Artificial Intelligence

Introduction

Over the past several years, US lawmakers and government agencies have sought to develop artificial intelligence and automated systems (AI) strategies and policy with the aim of balancing the tension between protecting the public from the potentially harmful effects of AI technologies and encouraging innovation and competitiveness. As AI technologies become increasingly commercially viable, one of the most interesting challenges lawmakers face in the governance of AI is determining which issues can safely be left to ethics (appearing as informal guidance or voluntary standards), and which approaches should be codified in law.[1] In 2020, companies and regulators faced unprecedented challenges as they navigated the covid-19 crisis and a rapidly evolving set of issues and policy proposals on the regulation of AI. After a slow start, the second half of 2020 saw a noticeable surge in AI-related regulatory and policy proposals as well as growing international coordination. We may be seeing an inflection point in AI governance, and 2021 is poised to bring consequential legislative and policy changes. Even before the covid-19 pandemic created colossal market disruption and strained virtual networks, consumers increasingly demanded more accountability in the AI ecosystem and drew attention to the widening gap between technological change and regulatory response in the digital economy. In the United States, this trend manifested in particular as calls for regulation of certain ‘controversial’ AI technologies or use cases, emboldening lawmakers to take larger steps to control the scope of AI and automated systems in the public and private sectors.

The final months of 2020 saw federal rulemaking gather real pace. At the very end of 2020, Congress passed landmark legislation, the National Defense Authorization Act (NDAA), boosting the nascent US national AI strategy, increasing funding for AI research and raising the profile of the US National Institute of Standards and Technology (NIST) as the need for greater coordination on technical standards emerged as a policy priority. The expansion of AI research funding and coordination by the new National AI Initiative Office places the federal government in a more prominent role in AI research. Amid waning public trust in tools for automated decision-making, 2020 also saw a number of federal bills promoting the ethical and equitable use of AI technologies and consumer protection measures.

US federal, state and local government agencies continue to show a willingness to take concrete positions on the regulatory spectrum, including in light of recent events and social movements, resulting in a variety of policy approaches to AI regulation – many of which eschew informal guidance and voluntary standards in favour of outright technology bans. We should expect that high-risk or contentious AI use cases or failures will continue to generate similar public support for, and ultimately trigger, accelerated federal and state action.[2] For the most part, the trend among US regulators towards more individual and nuanced assessments of how best to regulate AI systems according to their end uses has been welcome. Even so, there is an inherent risk that reactionary legislative responses will result in a disharmonious, fragmented national regulatory framework. Such developments will continue to yield important insights into what it means to govern and regulate AI over the coming year.

Further, as the use of AI expands into different sectors and the need for data multiplies, legislation that traditionally has not focused on AI is starting to have a growing impact on AI technology development. This impact can be seen in areas such as privacy, discrimination, antitrust and labour-related immigration laws. While some of these areas may help alleviate the ethical concerns that AI sometimes engenders (eg, by eliminating bias), others may unnecessarily inhibit development and make AI technologies more difficult to operate (eg, complying with consumer deletion requests under privacy laws or securing the workforce needed to develop AI technology).

The following section in this chapter will discuss the general regulatory framework of AI technology in the United States, contrasting the US approach, where appropriate, with that of other jurisdictions that have invested in AI research and development, and will highlight differences in how AI technology is regulated by use across key sectors.

The final section in this chapter will discuss certain areas of existing and proposed legislation and policies that may distinctly affect AI technologies and companies, even though they are not directly targeting them, and what effects may result.

AI-specific regulations and policies – existing and proposed

Legislation promoting and evaluating AI ethics, research and federal policy

Despite its position at the forefront of commercial AI innovation, for the past several years the United States has lacked an overall federal AI strategy and policy.[3] By contrast, observers noted other governments’ concerted efforts and considerable expenditures to strengthen their domestic AI research and development,[4] particularly China’s plan to become a world leader in AI by 2030. These developments abroad prompted many to call for a comprehensive government strategy and similar investments by the US government to ensure its position as a global leader in AI development and application.[5]

In 2019, the federal government began to prioritise both the development and regulation of AI technology. On 11 February 2019, President Donald Trump signed an executive order (EO) creating the ‘American AI Initiative’,[6] intended to spur the development and regulation of AI and fortify the United States’ global position by directing federal agencies to prioritise investments in research and development of AI.[7] The EO, which was titled ‘Maintaining American Leadership in Artificial Intelligence’, outlined five key areas: research and development,[8] ‘unleashing’ AI resources,[9] establishing AI governance standards,[10] building an AI workforce[11] and international collaboration and protection.[12] The AI Initiative is coordinated through the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence (the Select Committee).

In the American AI Initiative Year One Report, issued in February 2020, the Trump Administration announced that it would double funding for the AI Initiative’s R&D over the next two years.[13] The report followed the launch of ai.gov on 19 March 2019, a platform through which the White House shares AI initiatives from the Trump Administration and federal agencies. These initiatives track the key points of the AI EO, and ai.gov is intended to function as an ongoing press release.[14]

A couple of months after the EO, on 11 April 2019, the Growing Artificial Intelligence Through Research (GrAITR) Act was introduced to establish a coordinated federal initiative aimed at accelerating AI research and development for US economic and national security and closing the existing funding gap.[15] The Act would create a strategic plan to invest US$1.6 billion over 10 years in research, development and application of AI across the private sector, academia and government agencies – including NIST, the National Science Foundation and the Department of Energy – aimed at helping the United States catch up to other countries, including the United Kingdom, which are ‘already cultivating workforces to create and use AI-enabled devices’. The bill was referred to the House Committee on Science, Space, and Technology but has not progressed.

A companion bill to GrAITR, the Artificial Intelligence Government Act, would attempt to create a national, overarching strategy ‘tailored to the US political economy’ for developing AI, backed by a US$2.2 billion federal investment over five years.[16] The Act would task branches of the federal government with using AI where possible in the operation of their systems. Specifically, it includes the establishment of a national office to coordinate AI efforts across the federal system, requests that NIST establish ethical standards, and proposes that the National Science Foundation set educational goals for AI and STEM learning.[17] The draft legislation complements the formation of the bipartisan Senate AI Caucus in March 2019 to address transformative technology with implications spanning a number of fields, including transportation, healthcare, agriculture, manufacturing and national security.[18] While that bill also has not passed, further legislation introduced in the House of Representatives in March 2020 – the National Artificial Intelligence Initiative Act – would establish a National Artificial Intelligence Initiative to promote AI research and interagency cooperation and to develop AI best practices and standards to ensure US leadership in the responsible development of AI.[19] The Act, the most ambitious attempt by Congress to advance the development of AI in the United States, would also authorise over US$1.1 billion of funding over the next five fiscal years. Further, the National Cloud Computing Task Force Act proposes a task force to plan a national cloud computing system for AI research, providing students and researchers across scientific disciplines with access to cloud computing resources, government and non-government datasets and a research environment.[20]

May 2020 saw the introduction of the Generating Artificial Intelligence Networking Security (GAINS) Act, which directs the Department of Commerce and the Federal Trade Commission (FTC) to identify the benefits of and barriers to AI adoption in the United States; survey other nations’ AI strategies and rank how the United States compares; and assess supply chain risks and how to address them.[21] The bill requires the agencies to report the results to Congress, along with recommendations for developing a national AI strategy. Separately, after previously expressing reluctance due to fears that the initiative’s recommendations would harm innovation, on 28 May 2020 the US Department of State announced that the United States had joined the Global Partnership on AI – becoming the last of the Group of Seven (G7) countries to sign on – reportedly ‘as a check on China’s approach to AI’.

In September 2020, the AI in Government Act of 2020 (HR 2575) was passed by the House by voice vote. The bill aims to promote the federal government’s efforts to develop innovative uses of AI by establishing the ‘AI Center of Excellence’ within the General Services Administration (GSA) and requiring that the Office of Management and Budget (OMB) issue a memorandum to federal agencies regarding AI governance approaches. It also requires the Office of Science and Technology Policy to issue guidance to federal agencies on AI acquisition and best practices. Just a few days later, House members introduced a concurrent resolution calling for the creation of a cohesive national AI strategy based on four pillars: workforce; national security; research and development; and ethics.

Congress has also expressed the need for ethical guidelines and labour protection to address AI’s potential for bias and discrimination. In February 2019, the House introduced Resolution 153 with the intent of ‘[s]upporting the development of guidelines for ethical development of artificial intelligence’ and emphasising the ‘far-reaching societal impacts of AI’ as well as the need for AI’s ‘safe, responsible and democratic development’.[22] Similar to California’s 2018 adoption of the Asilomar Principles[23] and the OECD’s recent adoption of five ‘democratic’ AI principles,[24] the House Resolution provides that the guidelines must be consonant with certain specified goals, including ‘transparency and explainability’, ‘information privacy and the protection of one’s personal data’, ‘accountability and oversight for all automated decision-making’, and ‘access and fairness’. This Resolution put ethics at the forefront of policy, differing from other legislation that considers ethics only as an ancillary topic. Yet, while the resolution signals a call to action by the government to come up with ethical guidelines for the use of AI technology, the details and scope of such ethical regulation remain unclear.

In 2021, the US federal government’s national AI strategy continued to take shape, bridging the old and new administrations. Almost two years after the Trump Executive Order ‘Maintaining American Leadership in Artificial Intelligence’, we have seen a significant increase in AI-related legislative and policy measures in the US. In particular, the federal government has been active in coordinating cross-agency leadership and encouraging the continued research and development of AI technologies for government use. To that end, a number of key legislative and executive actions have focused on the growth and development of such technologies for federal agency, national security and military uses. Pursuant to the National AI Initiative Act of 2020, which was passed on 1 January 2021 as part of the 2021 NDAA, the White House Office of Science and Technology Policy formally established the National AI Initiative Office (the Office) on 12 January 2021. The Office – one of several new federal offices mandated by the NDAA – will be responsible for overseeing and implementing a national AI strategy and acting as a central hub for coordination and collaboration by federal agencies and outside stakeholders across government, industry and academia in AI research and policymaking.

Further, on 27 January 2021, President Biden signed a memorandum titled ‘Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking’, setting in motion a broad review of federal scientific integrity policies and directing agencies to bolster their efforts to support evidence-based decision-making.

And, on 17 June 2021, Senator Gillibrand reintroduced the Data Protection Act in an attempt to establish a new, independent agency responsible for policing Big Tech mergers and discriminatory data practices. The new Data Protection Act of 2021 gained support in the midst of a series of cyberattacks in 2020; additionally, ransomware attacks in 2021 have increased by over 100 per cent compared with the start of the previous year. The Biden Administration has recommended that private sector companies take precautionary measures to protect their virtual infrastructure from ransomware threats and cyberattacks, releasing an open letter to business executives urging them to step up protections.[25]

Regulation of AI technologies and algorithms

Intellectual property

The US Patent and Trademark Office (USPTO) accelerated efforts to support private-sector AI development in late 2019, requesting public comment on patent-related AI issues, including whether AI could be considered an inventor on a patent.[26] Issuing a long-anticipated decision in April 2020 – and coming to the same conclusion as the European Patent Office, in light of its public comment process – the USPTO ruled that only natural persons, not AI systems, can be registered as inventors on a patent.[27] This decision, in which the USPTO rejected a patent application that listed an AI system as the inventor,[28] has the potential to put into dispute any inventions developed in conjunction with AI, as it raises the question of whether a natural person was close enough to an invention to claim credit.

In August 2020, the developer of the AI system at issue in the USPTO’s April 2020 ruling sued the USPTO, arguing that the USPTO had violated the Administrative Procedure Act when it decided that AI systems cannot be named as inventors on patent applications.[29] During an April 2021 hearing in the case, the judge appeared sympathetic to the USPTO’s decision to restrict the definition of ‘inventor’ to humans; she also noted that the legislature, rather than the courts, would be best equipped to make any changes to this position in the future.[30]

Both the USPTO and the National Security Commission on Artificial Intelligence (NSCAI) have released reports cataloguing the uncertainty that remains in United States IP policy as it applies to AI. In October 2020, the USPTO published a report entitled ‘Public Views on Artificial Intelligence and Intellectual Property Policy’, which compiled feedback following the USPTO’s request for comment on issues including whether current laws and regulations on patentability and authorship should be revised to account for contributions by AI systems.[31] The comments, synthesised in the report, reflected an overarching concern about the lack of a universally acknowledged definition of AI, and a prevailing view that AI – at the present moment – can neither invent nor author without human intervention.[32] In March 2021, the NSCAI, which was created under the National Defense Authorization Act of 2019, submitted its Final Report to Congress and to President Biden.[33] The report cautioned that stringent patent eligibility requirements in US courts and the lack of clear legal protections for data have created uncertainty related to IP protection for AI creations, necessitating comprehensive reforms to these policies to incentivise AI innovation.[34]

Transparency and algorithmic bias

Over the past several years, US lawmakers have proposed pieces of legislation that seek to regulate AI, with a focus on pursuing transparency, accountability and ‘explainability’ in the face of emerging risks such as harmful bias and other unintended outcomes. Several of these bills have stalled, but they are useful in pinpointing lawmakers’ areas of focus in the context of transparency and bias.

The Bot Disclosure and Accountability Act, first introduced on 25 June 2018 and reintroduced on 16 July 2019, would mandate that the FTC propose regulations forcing digital platforms to publicly disclose their use of any ‘automated software program or process intended to replicate human activity online’.[35] It would also prohibit political candidates or parties from using these automated software programs to share or disseminate any information targeting political elections. The Act, which has not progressed, hands the task of defining ‘automated software program’ to the FTC, leaving wide latitude for interpretation beyond the narrow bot context the bill is intended to address.

On 10 April 2019, a number of Senate Democrats introduced the Algorithmic Accountability Act, which ‘requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans’.[36] The bill represented Congress’s first serious foray into the regulation of AI and the first legislative attempt in the United States to impose regulation on AI systems in general, as opposed to regulating a specific technology area, such as autonomous vehicles. While observers had noted congressional reluctance to regulate AI in past years, the bill represented a dramatic shift in Washington’s stance amid growing public awareness of AI’s potential to create bias or harm certain groups. The bill cast a wide net, such that many technology companies’ common practices would fall within the purview of the Act. The Act would regulate not only AI systems but also any ‘automated decision system’ (ADS), broadly defined as any ‘computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers’.[37] The bill has not progressed, but it reflected a step back from the previously favoured approach of industry self-regulation, since it would force companies to actively monitor their use of any potentially discriminatory algorithms.

Most recently, on 19 May 2021, Senators Rob Portman and Martin Heinrich introduced the bipartisan Artificial Intelligence Capabilities and Transparency (AICT) Act.[38] The AICT Act would provide increased transparency for the government’s AI systems, and is based primarily on recommendations promulgated by the NSCAI.[39] It would establish a Chief Digital Recruiting Officer within the Department of Defense (DOD), the Department of Energy and the Intelligence Community to identify digital talent needs and recruit personnel, and it expresses the sense of Congress that the National Science Foundation should establish focus areas in AI safety and AI ethics as a part of establishing new, federally funded National Artificial Intelligence Institutes.[40] The bill also includes certain defence-specific provisions that will be discussed in further detail in the ‘National security and military use’ section below.

Thus, at the time of writing, there is no comprehensive federal statutory scheme in the US that applies specifically to AI technology. While a number of bills have been introduced, many of them conflict with one another in their requirements, and the focus on these issues has waned, particularly in the face of the ongoing covid-19 pandemic. As we return to a more ‘normal’ agenda, perhaps signalled by the legislation introduced by Senators Heinrich and Portman, it will be interesting to see whether the momentum toward AI regulation returns in Congress.

Even in the absence of comprehensive AI-specific federal legislation, and despite several stalled bills, there has still been progress, particularly at the federal agency level. In December 2020, 10 US senators sent a letter to the Chair of the Equal Employment Opportunity Commission about the agency’s ability to ‘investigate and/or enforce against discrimination related to the use of’ AI hiring technologies.[41] Lawmakers were particularly concerned about employers using AI to screen job applicants, including through the use of machine learning assessment tools, general intelligence and personality tests, and ‘modern’ applicant tracking systems.[42] In April 2021, the FTC published a blog post announcing the Commission’s intent to bring enforcement actions related to ‘biased algorithms’ under section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act.[43] The FTC set an expectation that companies will test their algorithms for bias before and during deployment and will employ transparency frameworks, while also warning that biased outcomes may be considered deceptive and thus may lead to FTC enforcement actions.[44]

Both the NSCAI and the NIST have released reports related to transparency and bias in AI technology. The NSCAI, in its Final Report, endorsed tools for improving AI transparency and explainability, including AI risk and impact assessments; audits and testing of AI systems; and redress mechanisms for people adversely affected by government AI systems.[45] The NIST released a report entitled ‘A Proposal for Identifying and Managing Bias in Artificial Intelligence’ in June 2021.[46] The report proposes a three-stage approach for averting bias, focusing on the pre-design, design and development, and deployment phases and making recommendations for identifying and managing different forms of bias – including statistical bias, human cognitive bias and societal bias – at each stage.[47]
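To make the kind of bias testing contemplated by the FTC and the NIST report more concrete, the sketch below shows one simple, purely illustrative statistical check: comparing a model’s selection rates across demographic groups before deployment. It is written in Python; none of it is drawn from the FTC or NIST materials, and the function names, sample data and 0.8 threshold (an echo of the ‘four-fifths rule’ long used in US employment-discrimination analysis) are hypothetical.

```python
# Illustrative sketch only - a minimal disparate-impact check of the kind a
# company might run on a model's decisions before and during deployment.
# All names, data and thresholds here are hypothetical.
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive (favourable) decisions for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of lowest to highest group selection rate (1.0 indicates parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit of ten model decisions across two groups, 'a' and 'b'.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'a', 'a', 'b']
ratio = disparate_impact_ratio(decisions, groups)
if ratio < 0.8:  # hypothetical threshold echoing the 'four-fifths rule'
    print(f'Selection-rate ratio {ratio:.2f}: review model for disparate impact.')
```

A real audit programme would of course involve many more metrics and stages – the NIST proposal’s pre-design, design and development, and deployment phases each raise different questions – but even a check this simple illustrates why regulators expect testing both before and during deployment: selection rates can drift as the underlying data changes.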

For their part, state legislatures have continued passing laws directly regulating AI. In September 2018, California passed the ‘Bolstering Online Transparency Act’,[48] which was the first of its kind and (similar to the federal bot bill) is intended to combat malicious bots operating on digital platforms. The law does not attempt to ban bots outright, but it requires companies to disclose whether they are using a bot to communicate with the public on their internet platforms. The law went into effect on 1 July 2019. Additionally, the Automated Decision Systems Accountability Act of 2021, pending in California, would require any business in California that provides a program or device using an ADS to continually test for biases during the ADS’s development and usage, and to conduct an impact assessment to determine any disproportionate impacts on protected classes.[49]

In May 2019, Illinois passed the ‘Artificial Intelligence Video Interview Act’, which limits an employer’s ability to incorporate AI into the hiring process.[50] Employers must meet certain requirements to use AI technology in hiring, including obtaining informed consent by explaining how the AI works and the characteristics the technology examines, and deleting any video content within 30 days. However, the Act does not define the meaning of ‘AI’, and other requirements of the informed consent provisions are considered vague and open to wide interpretation. An amendment to the Act working its way through the Illinois legislature would mandate that employers relying solely on AI systems to determine whether a job applicant advances to an in-person interview gather and report demographic data to the Department of Commerce and Economic Opportunity.[51] The government would then analyse the data to determine whether the AI systems were racially biased. Finally, following the passage of Washington’s landmark facial recognition law, which went into effect in July 2021,[52] a bill pending in that state would prohibit state agencies from using ADSs that discriminate against different groups.[53] A similar bill pending in New Jersey would prohibit discrimination by ADSs in areas including consumer finance and healthcare.[54]

National security and military use

In the past few years, the US federal government has been very active in coordinating cross-agency leadership and planning to bolster continued research and development of artificial intelligence technologies for use by the government itself. Along these lines, a principal focus for a number of key legislative and executive actions has been the growth and development of such technologies for national security and military uses.

The John S. McCain National Defense Authorization Act for Fiscal Year 2019 (the 2019 NDAA)[55] established the NSCAI to study current advancements in artificial intelligence and machine learning, and their potential application to national security and military uses.[56] In its Final Report, submitted in March 2021, the NSCAI concluded that the US government was not prepared to defend against AI-enabled threats or to rapidly adopt AI applications for national security purposes.[57] The report made strategic recommendations related to AI technology development in the national security sphere, including combating digital disinformation with AI-enabled cyber-defences; creating ‘innovative warfighting concepts’; establishing protocols for the use of autonomous weapons; and integrating AI-enabled capabilities into the intelligence community.[58] Also in accordance with a mandate from the 2019 NDAA, the DOD created the Joint Artificial Intelligence Center (JAIC) as a vehicle for developing and executing an overall AI strategy, and named its director to oversee the coordination of this strategy for the military.[59] While these actions clearly indicate an interest in ensuring that advanced technologies like AI benefit the US military and intelligence communities, their efficacy will likely depend on the availability of funding from Congress. To that end, the 2021 NDAA grants the JAIC Director acquisition authority in support of defence missions of up to US$75 million for new contracts in each year through fiscal year 2025.[60]

The JAIC is becoming the key focal point for the DOD in executing its overall AI strategy. As set out in a 2018 summary of AI strategy provided by the DOD,[61] the JAIC will work with the Defense Advanced Research Projects Agency (DARPA),[62] various DOD laboratories and other entities within the DOD not only to identify and deliver AI-enabled capabilities for national defence, but also to establish ethical guidelines for the development and use of AI by the military.[63]

The JAIC’s efforts to lead in defining ethical uses of AI in military applications may prove particularly challenging because autonomous weaponry is one of the most controversial use cases for AI.[64] Even indirectly weaponised uses of AI, such as Project Maven, which utilised machine learning and image recognition technologies to improve real-time interpretation of full-motion video data, have been the subject of hostile public reaction and boycott efforts.[65] Thus, while time will tell, the JAIC may find it difficult to balance the confidentiality that may be needed for national security against the desire for transparency with regard to the use of AI.

Several proposed bills at the federal level have sought to increase funding and develop AI innovation in the United States defence sector. On 16 June 2020, Senators Rob Portman and Martin Heinrich introduced the bipartisan Artificial Intelligence for the Armed Forces Act, which aimed to further strengthen the DOD’s AI capacity by increasing the number of AI and cyber-professionals in the department. The bill would have required the defence secretary to develop a training and certification programme, and to issue guidance on how the Pentagon could make better use of existing hiring authorities to recruit AI talent.[66] Though this bill did not progress as a standalone measure, the 2021 NDAA adopts several provisions from the proposal, including the requirement that the Director of the JAIC report directly to the Deputy Secretary of Defense; modifications to the Armed Services Vocational Aptitude test that add a computational assessment to identify applicants with skills in AI; and direction on the use of direct hiring authorities to help recruit AI experts to the DOD.[67] The 2021 NDAA also directs the Secretary of Defense to assess whether the DOD has the capacity to ensure that any AI technology acquired by the Department is ethically and responsibly developed.[68]

Additionally, the Securing American Leadership in Science and Technology Act (SALTA),[69] first introduced in January 2020 and reintroduced in March 2021, would focus on ‘invest[ing] in basic scientific research and support[ing] technology innovation for the economic and national security of the United States’, which could encourage AI development, including for national security, in spite of the challenges discussed in the ‘Labour’ section below.

The May 2021 AICT bill, discussed above, was accompanied by the Artificial Intelligence for the Military (AIM) Act.[70] The AIM Act would establish a pilot AI development and prototyping fund within the DOD aimed at developing AI-enabled technologies for the military’s operational needs, and would develop a resourcing plan for the DOD to enable development, testing, fielding and updating of AI-powered applications.[71] Finally, in June 2021, the Senate passed the US Innovation and Competition Act on a bipartisan basis.[72] The Act, which seeks to counter China’s efforts to expand as a technological superpower, would authorise US$250 billion of investment in a range of emerging technologies, and lists artificial intelligence, machine learning and autonomy as ‘key technology focus areas’.[73] The Act also includes provisions labelled the ‘Advancing American AI Act’, which is designed in part to ‘encourage agency artificial intelligence-related programs and initiatives that enhance the competitiveness of the United States’.[74]

Moreover, in February 2021, the House Armed Services Committee created a new Subcommittee on Cyber, Innovative Technologies, and Information Systems, which has jurisdiction over DOD policy related to artificial intelligence.[75] The Subcommittee was created to achieve ‘a more targeted focus’ on technological capabilities, including AI.[76]

Healthcare

Unsurprisingly, the use of AI in healthcare presents some of the most exciting prospects and, given the potential risks, provokes some of the deepest trepidation. There are still few regulations directed at AI in healthcare specifically, despite the healthcare community’s continued investment in artificial intelligence technology.[77] Covid-19 has introduced additional complications relating to healthcare treatment and delivery options that may affect AI, and regulators acknowledge that existing frameworks for medical device approval are not well suited to AI-related technologies. Academic commentators have suggested that what will be top of mind for lawmakers when implementing artificial intelligence tools in the healthcare space will be ‘the basic need to balance benefits against burdens’ – to have tools that ‘produce valid, reliable predictions and burden individuals’ civil liberties no more than necessary’.[78]

Recent legislation – partly resulting from covid-related data collection – may limit the extent to which companies can use AI to deal with the business challenges posed by the coronavirus. Specifically, restrictions on the use of facial recognition technology and personal health data may restrict the ways that technology can be used to track the spread and impact of the virus. Employers have begun using thermal scans to admit employees into their workplaces, and this technology can use a facial scan of the employee as part of the process.[79] Owners of large residential properties are considering the use of facial recognition technology to control and monitor the entrances to their buildings and prevent unauthorised entrants who might carry the coronavirus.[80] These practices could expose companies to significant legal risk in jurisdictions like Illinois, which requires that private entities provide notice and obtain written consent from employees or members of the public before collecting their biometric data, even for fleeting or temporary purposes like a facial recognition scan.[81] The Illinois Supreme Court has ruled that a plaintiff need not show actual harm beyond a violation of his or her rights under the Act, so private companies could face costly class actions seeking damages of between US$1,000 and US$5,000 per class member in addition to attorneys’ fees.[82] Bills before the 116th Congress that would have imposed similar affirmative consent requirements for the use of facial recognition technology failed to move past the introduction phase, and have not yet been reintroduced in the 117th Congress.[83] However, regarding public entities, one bill currently before Congress would go beyond imposing affirmative consent requirements to instead simply prohibit the use of facial recognition and other biometric technologies by federal entities, condition federal grant funding on local and state entities also enacting moratoria on the technology, and provide a private right of action to individuals whose data has been collected or used in violation of the Act.[84]

Some other bills have carried over from the 116th Congress to the 117th Congress. The Protecting Personal Health Data Act would create a national task force on health data protection and require the Department of Health and Human Services to promulgate regulations for health information not currently covered by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) but which may be collected in light of the pandemic and used by businesses implementing AI technology.[85] The Smartwatch Data Act would prohibit companies from transferring or selling health information from personal consumer devices, including wearables and trackers, without the consumer’s informed consent.[86] Some academic commentators have called for emergent medical data – personal data that does not explicitly relate to health but that can be used to derive conclusions about an individual’s health using AI technology – to be regulated more strictly, to make the treatment of this information consistent with the regulations imposed by HIPAA.[87]

Data covered by HIPAA itself is subject to its Privacy Rule,[88] which also may unintentionally hinder AI development. For example, one of the basic tenets of the Privacy Rule is that use and disclosure of protected health information should be limited to the ‘minimum necessary’ to carry out the particular transaction or action.[89] While there are innumerable ways AI could be used (including pursuant to exceptions), such limitations on use can affect the ability to develop AI related to healthcare.[90] Uses of AI in the healthcare space will continue to be under close scrutiny; for example, healthcare workers recently filed suit regarding a voice-activated household device, claiming it had unlawfully recorded, stored and analysed patient information protected under HIPAA, including interactions between the healthcare workers and their patients.[91]

Some have opined that the United States’ lack of comprehensive federal privacy law undermines confidence in and advancement of artificial intelligence-based technologies in the healthcare space. For example, one commentator described contact tracing applications as a ‘huge failure’ in the US because ‘people don’t trust the tech companies or the government to collect, use, and store their personal data, especially when that data involves their health and precise whereabouts’.[92]

Regarding medical devices specifically, the US Food and Drug Administration (FDA) has also offered its views on regulating the use of AI in its ‘Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan’, released in January 2021.[93] The Action Plan, produced in response to stakeholder feedback on the FDA’s 2019 Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (the 2019 Review Framework), set out five actions the agency will take to encourage a pathway for innovative and life-changing AI technologies while maintaining the FDA’s patient safety standards.

Among the FDA’s key action items was an effort to develop ‘Good Machine Learning Practices’, which would serve as ‘best practices’ for the industry.[94] The FDA also promised to update the framework for AI/ML-based SaMD originally proposed in the 2019 Review Framework, in part by issuing Draft Guidance on the 2019 publication’s Predetermined Change Control Plan.[95] Transparency to the public as a means of promoting a patient-centred approach was also key for the FDA, as was a focus on supporting scientific efforts to develop mechanisms to evaluate and improve machine learning algorithms (including with respect to bias in the algorithms).[96] The Action Plan also addresses stakeholders’ interest in Real-World Performance monitoring of AI/ML software.[97] Many of these items are directly responsive to the 2019 Review Framework, which sought comment on various issues, including whether the categories of modifications described therein would require pre-market review, how to define ‘Good Machine Learning Practices’ and how a manufacturer can ‘demonstrate transparency’.[98]

Facial recognition, biometric surveillance and ‘deepfake’ technologies

Perhaps no single area of application for artificial intelligence technology has sparked as fervent an effort to regulate or ban its use in the United States as the adoption of facial recognition technology by law enforcement and other public officials.[99] Like other biometric data, data involving facial geometries and structures is often considered some of the most personal and private data about an individual, leading privacy advocates to urge extra care in protecting against unauthorised or malicious uses. As a result, many public interest groups and other vocal opponents of facial recognition technology have been quick to sound alarms about problems with the underlying technology as well as potential or actual misuse by governmental authorities. While most regulatory activity to date has been at the local level, momentum is also building for additional regulatory action at both the state and federal levels, particularly in light of covid-19 and a resurgence in social movements relating to racial justice, such as the Black Lives Matter movement. Despite these efforts to restrict use of facial recognition technology, its use actually increased in 2021, most notably due to law enforcement’s efforts to identify and arrest participants in the January 2021 Capitol riot.[100] However, even widespread condemnation of the rioters did not fundamentally change digital rights advocates’ attitude of alarm towards the growing use of facial recognition technology.[101]

Concerns about facial recognition and other biometric technology, highlighted by the racial justice movements, have caused federal and state legislatures to reconsider government and police use of the technology and to propose legislation that would ban its use by the police more generally. Indeed, one commentator noted that it ‘may be too late for a more balanced regulatory approach’, with efforts already underway to prohibit the technology entirely.[102] Some of those proposed bills, including the Ethical Use of Facial Recognition Act[103] and the National Biometric Information Privacy Act,[104] were proposed in the 116th Congress but did not gain traction. Others have been reintroduced in the 117th Congress. For example, the proposed George Floyd Justice in Policing Act of 2021 reform package contains two bills that would affect the use of AI technology and facial recognition. The Federal Police Camera and Accountability Act would require federal law enforcement officers to wear body cameras, but the bill explicitly prohibits federal police from equipping these cameras with facial recognition technology.[105] The Police CAMERA Act of 2021 would provide grants for state, local and tribal police to implement body camera technology, but it bars these grants from being used on facial recognition technology and mandates that a study be completed within two years examining, among other topics, ‘issues relating to the constitutional rights of individuals on whom facial recognition technology is used’.[106] The Facial Recognition and Biometric Technology Moratorium Act would limit use of biometric surveillance systems by federal and state government entities, and prohibit information obtained in violation of the bill from being used by the federal government in any proceeding or investigation (with the exception of any proceeding related to an alleged violation of the bill).[107] The proposed Advancing Facial Recognition Technology Act[108] calls on the Secretary of Commerce and the Federal Trade Commission to conduct a study on facial recognition technology, and the proposed Advancing American AI Act[109] would promote adoption of artificial intelligence consistent with ‘the protection of privacy, civil rights, and civil liberties’. And in March 2021, the first ‘comprehensive consumer privacy bill’ of the 117th Congress – the Information Transparency and Personal Data Control Act – was introduced, with the goal of establishing ‘a uniform set of rights for consumers’ and ‘one set of rules for businesses to operate in’.[110] However, despite renewed focus on the issue of facial recognition technology and biometric data collection and use over the past few years, Congress has yet to pass any of its proposed legislation.[111]

In June 2020, the California legislature, under pressure from the ACLU and other civil rights groups, rejected a bill that would have expanded the use of facial recognition technology by state and local governments.[112] That bill would have allowed government entities to use facial recognition technology to identify individuals believed to have committed a serious criminal offence.[113] As a safeguard, the bill provided for third-party controllers to test whether the facial recognition technology exhibits any bias across subpopulations and allowed members of the public to request that their image be deleted from the government database. Other states are similarly addressing facial recognition, including Maryland, which passed a bill banning the use of ‘a facial recognition service for the purpose of creating a facial template during an applicant’s interview for employment’, unless the interviewee signs a waiver;[114] Washington, which approved a bill curbing government use of facial recognition, requiring bias testing and training, and transparency regarding use;[115] and Virginia, which, like California, passed legislation banning law enforcement from using facial recognition technology.[116] Maryland and Alabama have also both introduced new bills to their respective state legislatures that would place additional limitations on government and law enforcement use of facial recognition technology.[117]

In addition, other states have enacted more general biometric data protection laws that are not limited to facial recognition, but which nevertheless regulate the collection, processing and use of an individual’s biometric data (which, at least in some cases, includes facial geometry data). At the time of writing, Illinois, Texas and Washington have all enacted legislation directed at providing specific data protections for their residents’ biometric information.[118] Only the Illinois Biometric Information Privacy Act provides for a private right of action as a means of enforcement.[119] However, the New York and Maryland legislatures are both currently considering biometric privacy bills that would also provide for a private right of action.[120] In addition, the California Consumer Privacy Act extends its protections to an individual’s biometric information, including that used in facial recognition technology.[121] Still other states have included biometric data privacy as part of their data breach laws or are currently considering the adoption of more general privacy bills that would include protection of biometric information.[122]

Also at the state level, 2021 has seen much litigation relating to the use of facial recognition technology and tools for the collection and storage of other biometric data, especially in Illinois under the state’s robust Biometric Information Privacy Act (BIPA). There were multiple cases and settlements involving alleged improper collection of fingerprint data.[123] In addition, one of the largest fast-food chains became subject to a US$5 million class action alleging that the company violated BIPA by ‘storing customers’ voiceprints without their permission’.[124] And the Supreme Court of Illinois ruled in West Bend Mutual Insurance Co. v Krishna Schaumburg Tan Inc. that ‘commercial general liability’, or CGL, policies cover claims brought against policyholders for alleged violations of BIPA, and that insuring policies ‘do not have to include magic words to cover BIPA claims’.[125]

At the local level, laws limiting the use of facial recognition and other biometric technology are gaining traction. Portland, Oregon adopted a new ordinance limiting the private sector’s use of facial recognition technology in public places – making it the first city to place restrictions on the private sector, and one of many to pass legislation banning certain uses of facial recognition technology.[126] Portland, Maine also passed an ordinance in November 2020 restricting its local government from using facial recognition technology on the public, with citizens eligible to receive compensation and payment of attorneys’ fees for any violation.[127] An ordinance in New York City that went into effect in July 2021 limits how businesses collect biometric information (including facial recognition and other biometric data) from the public, and prohibits them from selling that information.[128] Amnesty International, a non-governmental organisation, is leading the way in promoting legislation to ban governmental (including police) use of facial recognition technology at the city level.[129]

Further, responses from businesses may turn out to be a substantial factor in deterring use of AI in a way that might perpetuate bias or infringe on civil liberties. A number of prominent technology companies have voiced their strong support for the Black Lives Matter movement, including IBM, which announced that it would discontinue its facial recognition products over concerns about bias in the technology and its possible infringement on civil liberties.[130]

More recently, ‘deepfakes’ – the artificial outputs of generative adversarial networks, software systems designed to be trained on authentic inputs (eg, photographs) in order to generate similar but synthetic outputs – have also emerged as a high-risk use case. The Identifying Outputs of Generative Adversarial Networks (IOGAN) Act was signed into law in 2020, and directs the National Science Foundation (NSF) and NIST to support research on the authenticity of manipulated or synthesised media and spur the development of standards.[131] The NDAA also calls on the Department of Homeland Security to address deepfakes in an annual report for each of the next five years.[132]
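For readers unfamiliar with the mechanism, the sketch below illustrates the adversarial training pattern that gives these systems their name: a generator learns to produce synthetic outputs while a discriminator learns to distinguish them from authentic inputs. It is a purely illustrative toy example in Python using the PyTorch library; the dimensions, data and training settings are hypothetical and do not correspond to any real deepfake system.

```python
# Toy generative adversarial network (GAN) sketch - illustrative only.
# A generator maps random noise to synthetic outputs; a discriminator
# learns to score inputs as authentic (1) or artificial (0).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(batch, data_dim)  # stand-in for authentic inputs
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: learn to separate authentic from generated samples.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to produce outputs the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The adversarial structure is what makes mature deepfakes hard to detect – the generator is trained precisely to defeat a detector – which helps explain why the IOGAN Act directs the NSF and NIST toward research on the authenticity of manipulated or synthesised media rather than treating detection as straightforward.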

Autonomous vehicles and the automobile industry

There was a flurry of legislative activity in Congress in 2017 and early 2018 towards a national regulatory framework for autonomous vehicles (AVs). The US House of Representatives passed the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act[133] in September 2017, but its companion bill (the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act)[134] stalled in the Senate as a result of holds from Democratic senators who expressed concerns that the proposed legislation was underdeveloped, in that it would ‘indefinitely’ pre-empt state and local safety regulations even in the absence of federal standards.[135] Though there was a bipartisan effort to jumpstart AV legislation again in April 2021, it has not gained much traction.[136] A separate effort to lift regulations in a way that would allow thousands of autonomous vehicles to be deployed was rejected by the Commerce Committee in June 2021.[137] Even so, some commentators have predicted that AV legislation is on its way in the near future; though the Biden Administration has not specifically mentioned it as a priority, Transportation Secretary Buttigieg has acknowledged that policy related to AVs is behind the times.[138]

Since late 2019, lawmakers have pushed to coalesce around new draft legislation regulating AVs. On 11 February 2020, the House Committee on Energy and Commerce, Subcommittee on Consumer Protection and Commerce, held a hearing entitled ‘Autonomous Vehicles: Promises and Challenges of Evolving Automotive Technologies’.[139] At the hearing, witnesses expressed concerns that, because of the lack of federal regulation, the US is falling behind in both the competitive landscape and in establishing comprehensive safety standards.[140] The House panel released a bipartisan draft bill shortly after the hearing, including a previously unreleased section on cybersecurity requirements,[141] but the bill was not passed during the 116th Congress.[142] The Committee renewed its discussion of many of these same topics at a similar hearing just over one year later, entitled ‘Promises and Perils: The Potential of Automobile Technologies’.[143]

Due to a lack of federal legislation, AVs continue to operate largely under a complex patchwork of state and local rules, with tangible federal oversight limited to the US Department of Transportation’s (DOT) informal guidance. In January 2021, the DOT published updated guidance for the regulation of the autonomous vehicle industry: the ‘Automated Vehicles Comprehensive Plan’ or ‘Comprehensive Plan’.[144] The guidance encompasses and builds on ‘Ensuring American Leadership in Automated Vehicle Technologies’ or ‘AV 4.0’,[145] which included 10 principles to protect consumers, promote markets and ensure a standardised federal approach to AVs, and the AV 3.0 guidance released in October 2018, which introduced guiding principles for AV innovation for all surface transportation modes, and described the DOT’s strategy to address existing barriers to potential safety benefits and progress.[146] The Comprehensive Plan defines three goals for automated driving systems that aim to ‘prioritize safety while preparing for the future of transportation’:

  • promote collaboration and transparency;
  • modernise the regulatory environment;[147] and
  • prepare the transportation system.

In line with previous guidance, the report promises to address legitimate public concerns about safety, security and privacy without hampering innovation.

In March 2020, the National Highway Traffic Safety Administration (NHTSA) issued its first-ever Notice of Proposed Rulemaking ‘to improve safety and update rules that no longer make sense such as requiring manual driving controls on autonomous vehicles’.[148] The Notice aims to ‘help streamline manufacturers’ certification processes, reduce certification costs and minimise the need for future NHTSA interpretation or exemption requests’. For example, the proposed regulation would apply front passenger seat protection standards to the traditional driver’s seat of an AV, rather than safety requirements that are specific to the driver’s seat. Nothing in the Notice would change existing occupant protection requirements for traditional vehicles with manual controls.[149]

The NHTSA also issued an Advance Notice of Proposed Rulemaking in December 2020 with the objective of developing a framework to address the safety of automated driving systems themselves, building on other proposed rules that cover the safety of the design of vehicles equipped with automated driving systems.[150] Eventually, this framework and additional research and development in automated driving systems could evolve into automated driving system-specific Federal Motor Vehicle Safety Standards (FMVSS).[151] In the meantime, it is likely that regulatory changes to testing procedures (including pre-programmed execution, simulation, use of external controls, use of a surrogate vehicle with human controls and technical documentation) and modifications to current FMVSS (such as crashworthiness, crash avoidance and indicator standards) will be finalised in 2021. As a more concrete safety-related action, the NHTSA also issued a Standing General Order in June 2021 that requires manufacturers and operators of vehicles with specified levels of advanced driver assistance systems (ADAS) or automated driving systems to report to the NHTSA any crashes that occur on public roads.[152]

Meanwhile, legislative activity at the US state level is stepping up to advance the integration of autonomous vehicles.[153] State regulations vary significantly, ranging from allowing testing under certain specific and confined conditions to the more extreme approaches, which allow AVs to be tested and operated with no human driver behind the wheel. Some states, such as Florida, take a generally permissive approach to AV regulation in that they do not require a human driver to be present in the vehicle.[154] California is generally considered to have the most comprehensive body of AV regulations, permitting testing on public roads and establishing its own set of regulations just for driverless testing.[155] In December 2019, the California DMV published updated AV regulations (approved by California’s Office of Administrative Law) that allow the testing and deployment of autonomous motor trucks (delivery vehicles) weighing less than 10,001 pounds on California’s public roads.[156] And in November 2020, the California Public Utilities Commission established two new deployment programmes – one for drivered vehicles and one for driverless vehicles.[157] In the California legislature, multiple bills related to AVs are under consideration: SB 500[158] would restrict the operation of non-zero-emission AVs, SB 570[159] would exempt vehicles not operable by human drivers from equipment standards otherwise required of motor vehicles, and SB 66 would establish a Council on the Future of Transportation to ‘promote state legislation for autonomous vehicles, in addition to other transportation-related policy’[160] (all of these bills remain in committee at the time of writing).[161]

Non-AI specific regulation likely affecting AI technologies

Data privacy

Following the General Data Protection Regulation (GDPR) in Europe, and various high-profile privacy incidents over the past few years, lawmakers at both the state and federal levels are proposing privacy-related bills at a record rate. Among these are the California Consumer Privacy Act (CCPA), which took effect on 1 January 2020, the California Privacy Rights Act (CPRA), the Virginia Consumer Data Protection Act (VCDPA), which will take effect on 1 January 2023, the Colorado Privacy Act (CPA) (also taking effect in 2023), the New York Privacy Act (which made it out of committee in May 2021 but was not put on the Senate floor calendar) and the New York Stop Hacks and Improve Electronic Data Security (SHIELD) Act. On the federal level, there is the Data Protection Act of 2021. Although most are not specific to AI technologies, some include provisions related to automated decision-making, and most have the capacity to greatly affect – and unintentionally stifle the progression of – AI technologies.

Most of the recently proposed and pending state privacy bills would not regulate AI technologies directly, but may do so with respect to transparency and consumer rights as to the data used in such technologies.[162] For example, New York’s reintroduced Privacy Act limits entities’ ability to make ‘an automated decision involving solely automated processing that results in a denial of financial or lending services, housing, public accommodation, insurance, healthcare services, or access to basic necessities, such as food and water’.[163] When decisions with these effects are made solely on the basis of automated processing, entities must ‘disclose in a clear, conspicuous, and consumer-friendly manner’ that the decision was made through an automated process. Entities must further provide consumers the opportunity to appeal that automated decision by, at minimum, allowing the affected consumer to ‘express their point of view’, contest the decision, and obtain meaningful human review.[164] Further, the new bill requires entities engaged in automated decision-making that has the effects noted above, or those ‘engaged in assisting others in automated decision-making in those fields’, to conduct annual assessments of the automated systems’ development and impact.[165] Virginia’s privacy law, which will take effect in 2023, also requires an assessment of the risks associated with the use and deployment of AI: the VCDPA requires data processors to provide data controllers with the information necessary to perform a data protection assessment, and requires the data controller to perform and document such an assessment.[166] The CPA similarly addresses profiling (‘any form of automated processing of personal data [relating to] economic situation, health, personal preferences, interests, reliability, behaviour, location or movements’).[167] Rights with respect to this profiling include the right to opt out of the processing of personal data for profiling in furtherance of decisions that produce legal or similarly significant effects.[168] And under the CPRA, on which the CPA is based in large part, the California Attorney General, and subsequently the California Privacy Protection Agency, are instructed to adopt regulations ‘governing access and opt-out rights with respect to a business’s use of automated decision-making technology, including profiling’.[169] ‘Profiling’ under the CPRA is defined as ‘any form of automated processing of personal information . . . to evaluate certain personal aspects relating to a natural person, and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location or movements’.[170] It will thus remain unclear for a few years how the CPRA’s new measures will affect AI.

Companies using AI technologies may find solace in exceptions to the state laws. Under the New York Privacy Act, for example, a potentially significant carve-out exists through the use of the term ‘solely’: as with the GDPR, if there is human intervention at some point in the decision-making process, the regulation is not invoked.[171] Washington’s proposed Privacy Act (which was widely expected to pass, but has now been introduced three times and failed each time) included very similar provisions. While additional states may develop their own legislation consistent with this framework, many state proposals may also stay silent on automated processing issues specifically, similar to the CCPA.[172]

In either case – whether the law has provisions specific to AI or not – broadly applicable privacy laws are often fundamentally at odds with AI, and likely to generate headaches for companies developing and using AI technologies.[173] Fundamentally, AI technologies require large datasets, and those datasets are likely to contain some elements of personal information. This may trigger an avalanche of requirements under privacy laws.[174] For example, the CCPA – the first comprehensive privacy act in the United States – and the CPRA allow consumers to request that businesses delete personal information without explanation. For an AI data system, deletion may be impossible and, even to the extent it is possible, may result in skewed decision-making, putting the integrity of the AI technology at risk. While the CCPA includes several exceptions to this general right (including for reasons of security, transactions, public interest research or internal use aligned with the consumer’s expectations), it is still unclear how these exceptions will be applied, and whether an entity can use them as broad permission to retain data in its datasets. Further, consumers’ right to transparency as to what data is collected, how it is used and where it originates may simply be impossible to satisfy for potentially ‘black box’ AI algorithms. Understanding what data is collected may be feasible, but disclosing how it is used, and in what detail, poses complex issues for AI. And even where disclosure is feasible, companies may face a conflict between protecting trade secrets by not disclosing exactly how such information is used and complying with consumer notification requirements under privacy regulations, where the acceptable level of detail is as yet undefined. The uncertainty of the CCPA’s effect is compounded by the fact that the CPRA will take effect on 1 January 2023.
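To make the deletion problem concrete, the following is a minimal, hypothetical Python sketch of what honouring a CCPA-style deletion request against an AI training store might involve. The class, field names and the simplified exception list are illustrative only and do not track the statute’s actual language:

```python
"""Hypothetical sketch of a CCPA-style deletion request applied to an AI
training store. Names and the exception list are illustrative, not a
statement of what the CCPA actually requires."""

from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-ins for the statutory exceptions discussed above.
RETENTION_EXCEPTIONS = {"security", "transaction", "public_interest_research"}


@dataclass
class TrainingStore:
    records: list = field(default_factory=list)  # dicts keyed by 'consumer_id'
    needs_retraining: bool = False

    def delete_consumer(self, consumer_id: str,
                        retention_basis: Optional[str] = None) -> int:
        """Remove one consumer's records unless a claimed exception applies.

        Note what this cannot do: it scrubs the dataset, but any model
        already trained on the data still embeds it, hence the flag.
        """
        if retention_basis in RETENTION_EXCEPTIONS:
            return 0  # records retained under the claimed exception
        before = len(self.records)
        self.records = [r for r in self.records
                        if r["consumer_id"] != consumer_id]
        removed = before - len(self.records)
        if removed:
            # Deleting rows skews the data the model was fitted on;
            # scheduling retraining is one (costly) way to restore integrity.
            self.needs_retraining = True
        return removed


store = TrainingStore(records=[{"consumer_id": "c1", "income": 50_000},
                               {"consumer_id": "c2", "income": 72_000}])
print(store.delete_consumer("c1"))  # 1 record removed
print(store.needs_retraining)       # True
```

The sketch also illustrates why deletion rights sit uneasily with trained models: removing rows from the store does nothing to a model that has already learned from them.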

Other recent decisions and laws may similarly affect the ability to use data freely as may be required for AI development. The New York SHIELD Act took effect on 21 March 2020 and amends the state’s data breach notification law to impose additional data security and breach notification requirements on covered businesses to protect New York residents, including an obligation to ensure safeguards from their vendors. And companies previously permitted to transfer data – for example, for AI programmes – from the EU under the US–EU Privacy Shield now face additional hurdles in light of the Schrems II decision invalidating that mechanism. With more data comes more vulnerability, leaving AI companies particularly exposed to litigation over cybersecurity incidents and non-compliance with security and data transfer requirements.

On the other hand, these privacy laws may not apply in various circumstances. For example, companies that hold data on fewer than 50,000 consumers, have limited revenues or are not-for-profit may not be subject to the CCPA.[176] While this may allow start-ups freer rein, it may have the unintended consequences of decreasing protection (as a result of less sophisticated security systems) and increasing the potential for bias (smaller datasets and less mature anti-bias systems). Also, proposed privacy laws generally do not apply to aggregated or de-identified data. While those definitions can at times be stringent, AI uses of data may fall outside the scope of these regulations simply because they do not actually involve ‘personal information’.
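By way of illustration, the short Python sketch below shows one way row-level data might be aggregated into cohorts before it reaches an AI pipeline, so that what the model consumes is arguably no longer ‘personal information’. The field names and the cohort-size threshold are illustrative, and real de-identification standards are considerably stricter:

```python
"""Hypothetical sketch: aggregating consumer records into cohorts before
model training. Field names and the suppression threshold are
illustrative; actual de-identification standards are stricter."""

from collections import defaultdict

raw = [
    {"zip3": "945", "age_band": "30-39", "defaulted": 0},
    {"zip3": "945", "age_band": "30-39", "defaulted": 1},
    {"zip3": "945", "age_band": "30-39", "defaulted": 0},
    {"zip3": "945", "age_band": "30-39", "defaulted": 0},
    {"zip3": "945", "age_band": "30-39", "defaulted": 1},
    {"zip3": "100", "age_band": "40-49", "defaulted": 1},  # cohort of one
]

COHORT_FLOOR = 5  # suppress cohorts smaller than this (illustrative value)


def aggregate(rows):
    """Collapse row-level records into cohort statistics, dropping any
    cohort small enough that an individual might be re-identified."""
    groups = defaultdict(lambda: {"n": 0, "defaults": 0})
    for r in rows:
        g = groups[(r["zip3"], r["age_band"])]
        g["n"] += 1
        g["defaults"] += r["defaulted"]
    return {k: {"n": v["n"], "default_rate": v["defaults"] / v["n"]}
            for k, v in groups.items() if v["n"] >= COHORT_FLOOR}


print(aggregate(raw))
# {('945', '30-39'): {'n': 5, 'default_rate': 0.4}} - the singleton is dropped
```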

Efforts at the federal level may also affect AI uses of data. The Data Protection Act of 2021 would create an independent federal agency to protect Americans’ data and privacy.[177] The main focus of the agency would be to protect individuals’ privacy in connection with the collection, use and processing of personal data.[178] Still, the legislation defines an ‘automated decision system’ as ‘a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision, or facilitates human decision making’.[179] Using such a system is designated a high-risk data practice, requiring an impact evaluation after deployment and a risk assessment of the system’s development and design.[180]

This wave of proposed privacy-related federal and state regulation is likely to continue. As a result, companies developing and using AI are certain to be focused on these issues in the coming months, and will be tackling how to balance these requirements with further development of their technologies.[181]

Discrimination

While the federal discrimination laws that the United States Equal Employment Opportunity Commission (EEOC) enforces[182] – and the guidelines under them – have not changed, AI is recognised as a new medium for such discrimination.[183]

Indeed, senators and agencies – including the EEOC itself – are pushing to ensure that AI technology is held accountable, in order to prevent discrimination resulting from bias. For example, the EEOC is reportedly investigating claims that algorithms used in hiring, promotion and other job decisions discriminated against certain groups of individuals.[184] Further, the Federal Trade Commission (FTC) may begin addressing race and gender bias in AI under its enforcement powers.[185] And US senators have been putting increasing pressure on agencies over the past couple of years to ensure they are doing their part to sufficiently investigate companies’ use of AI in decision-making.[186] In addition to urging existing agencies to take a greater role in addressing bias in AI, proposed legislation may create new roles tasked with that duty. For example, Senator Gillibrand’s proposed privacy legislation, discussed in more detail above, would include an office of civil rights within the new federal agency tasked with ensuring that data usage is non-discriminatory.[187]

The concerns raised are not theoretical, but reflect an already realised dilemma: various human resources and financial lending tools have proven susceptible to inadvertent bias.[188]

Various racial justice movements, such as Black Lives Matter, have also caused federal and state legislatures to reconsider the use of facial recognition technology by government and police departments, as discussed further above.

As a result of the recent focus on the potential for discrimination and bias in AI, we may see anti-discrimination laws invoked with greater frequency against AI-focused technologies – and potentially additional proposed regulations.[189] We may also see additional legislative proposals that seek to regulate algorithmic bias directly.

Antitrust

Government agencies are showing an increasing willingness to scrutinise the business practices of large technology companies on antitrust issues. While this scrutiny does not affect AI directly, these large technology companies often do a significant amount of the work of building, utilising and testing AI, and any threatened ‘breakup’ of such companies could adversely affect their ability to continue building AI technologies. Centralised concentrations of data – potentially a problem in the antitrust world – may actually promote AI development, given the need for large datasets in the development of machine learning systems. Such breakups could also make the United States less competitive in its AI race with some of its geopolitical rivals: the broken-up companies would have access to smaller datasets, hindering AI innovation, while companies from other countries, such as China, are less likely to be subject to antitrust action.[190]

In spite of this, there has been bipartisan support – both in Congress and from the executive branch – for efforts to use antitrust laws (and other legal methods) to rein in large technology companies. This past October, the House Judiciary Committee concluded a 16-month bipartisan investigation into competition in digital markets, which included holding hearings and subpoenaing documents, and which culminated in a report recommending both crafting stronger antitrust laws and enforcing them more aggressively against large technology companies.[191] Indeed, as part of this investigation, the House Judiciary antitrust subcommittee held an almost six-hour virtual hearing in July 2020 to publicly question leaders of the most prominent technology companies.[192] Most recently, this past June, the House Judiciary Committee approved a package of bills that, among other provisions, would restrict the ability of large technology companies to favour their own products and would grant the FTC new enforcement powers, potentially making it easier for the agency to break those companies up.[193]

However, both the FTC and the Department of Justice (DOJ) had already begun using their antitrust enforcement powers more aggressively against large technology companies towards the end of the Trump Administration. In October 2020, the DOJ brought an antitrust claim against Google, accusing it of engaging in ‘anticompetitive and exclusionary practices in the search and search advertising markets’.[194] The DOJ had previously announced that its Antitrust Division would review ‘whether and how market-leading online platforms have achieved market power and are engaging in practices that have reduced competition, stifled innovation or otherwise harmed consumers’, so more investigations may follow.[195] The FTC has also investigated online platforms during the Trump Administration,[196] a trend that will likely continue during the Biden Administration.

Indeed, President Biden recently issued an executive order that, among other provisions, encourages the FTC Chair to promulgate new rules to address ‘unfair data collection and surveillance practices’ and ‘unfair competition in major Internet marketplaces’.[197] He has also made appointments, both at the FTC and at the DOJ’s Antitrust Division, that many commentators believe indicate the administration’s interest in pursuing much more aggressive antitrust action against large technology companies, as these appointees – including Lina Khan, the recently confirmed FTC Chair – have been critics of these companies in the past.[198] In the wake of these appointments, the FTC has already voted to expand its enforcement authority, a move widely viewed as presaging more investigations of technology companies.[199]

In justifying these investigations, proponents frequently cite criticisms of the advertising practices (which often involve AI technologies) of large companies such as Google and Amazon, and the resulting extraordinary influence these large companies have on communications and commerce.[200] On the other hand, because this would be a new area for the application of antitrust laws, critics expect that the federal government will face various challenges in this context.[201]

Labour

The past two years have been marked by the covid-19 pandemic in many regards, including unprecedented restrictions on travel and immigration. Given these limitations, and US tensions with China, companies developing AI may find it more difficult to recruit and retain the top talent necessary for successful programmes.

Fallout from covid-19 and geopolitical concerns has led to a significant decrease in the ability of skilled workers to enter the US. In response to the coronavirus pandemic and the subsequent record unemployment numbers, the Trump Administration announced that it would suspend the issuance of employment-based green cards (including those reserved for professionals with advanced degrees)[202] and of new H-1B and some other non-immigrant visas.[203] Near the end of his term, President Trump extended both suspensions through the end of March 2021,[204] which prevented new skilled workers from entering the country to work until the spring. Commentators predicted that these suspensions would reduce the number of new green cards issued in 2020 by one-third.[205] The number of H-1B visas issued over the course of the 2020 fiscal year also appears to have dropped by about one-third.[206] While President Biden did not reverse these suspensions, he also did not renew them when they expired in March.[207]

The Trump Administration had separately attempted to promulgate new rules governing the H-1B programme towards the end of its term. Specifically, these new rules would have raised the ‘prevailing wage’ an employer needed to pay a worker in order to sponsor them for the H-1B programme (the goal being to clamp down on a perceived practice of employers using the programme to hire less expensive foreign workers rather than native-born counterparts).[208] However, these new rules were subjected to legal challenges and were vacated by the courts.[209] Yet the Biden Administration has not completely walked away from the effort to limit the H-1B programme. While it recognised that the Trump Administration’s rules had been vacated, it had also previously signalled that it would complete a more comprehensive review of H-1B workers’ salaries before deciding whether to change the ‘prevailing wage’ necessary for the programme.[210] Thus, more reform of the H-1B programme may follow – which could have significant consequences for AI development, as many large technology companies rely on the programme for their workforce.[211]

Additionally, since 2018 the State Department has restricted student visas for Chinese graduate students seeking to pursue degrees in ‘sensitive subjects’, requiring these students to renew their visa every year, instead of the prior practice of granting five-year visas.[212] In June 2020, the Trump Administration announced that it would bar the entry of any Chinese graduate student or researcher who previously served in the People’s Liberation Army or attended a university affiliated with it, owing to concerns about theft of sensitive intellectual property.[213] While President Biden rolled back a few of the Trump Administration’s restrictions on Chinese student visas that had been imposed due to the covid-19 pandemic, he has thus far maintained these other national security and foreign policy-based restrictions on Chinese students.[214]

These actions are particularly relevant because the United States’ dominance in AI research has been sustained in part by the ability of American companies and universities to attract talent from abroad, particularly from China. A recent study of participants in a major AI conference found that 60 per cent of the presenters currently work in the US, but that two-thirds of those researchers obtained their undergraduate degrees outside the US.[215] AI research depends particularly on the human capital of the teams developing a company’s product, rather than on the IP the company owns: ‘[r]esearchers generally publish what they find, and anybody can use it. So what the industry is looking for is not intellectual property but the minds that conduct the research.’[216] Rising geopolitical tensions between the United States and China, as well as related immigration restrictions, could further constrict the flow of labour and prevent American universities and companies from recruiting top AI talent, which could in turn curb the US’s success in AI development.

Conclusion

Discussion around regulating AI technologies has accelerated over the past year, resulting in additional proposals across sectors from local, state and federal legislatures. However, while few AI-specific laws were actually passed by legislative bodies, those that have passed, as well as pending regulations and policies, still raise significant questions about whether AI technology should be regulated, when, and how – including whether the field must mature further before its potential effects can be understood and addressed adequately, without overzealously inhibiting the United States’ position as a world leader in AI. Similarly, non-AI-specific laws, including recently enacted privacy regulation, may have an unintentional disparate impact on AI technologies, given their need for data. While it is still too soon to know for certain, the next few years will prove exceedingly interesting with respect to the regulation of AI as companies continue to incorporate AI across business lines, and as laws continue to develop and affect AI, directly and indirectly. Given the fast-paced nature of these developments, it is expected that even between the drafting of this chapter and its publication, the landscape of this sector will have changed dramatically.

The authors would like to acknowledge and thank Kirsten Bleiweiss, Natalie Cernius, Anna Chirniciuc, Rosa Chong, Alec Mouser and Silvie Saltzman for their assistance and contribution to this chapter.


Footnotes

[1] See, eg, Paul Nemitz, Constitutional Democracy and Technology in the Age of Artificial Intelligence, Phil. Trans. R. Soc. A 376: 20180089 (15 November 2018), available at https://royalsocietypublishing.org/doi/full/10.1098/rsta.2018.0089.

[2] See, eg, the House Intelligence Committee’s hearing on Deepfakes and AI on 13 June 2019 (US House of Representatives, Permanent Select Committee on Intelligence, Press Release: House Intelligence Committee To Hold Open Hearing on Deepfakes and AI (7 June 2019)); see also Makena Kelly, ‘Congress grapples with how to regulate deepfakes’, The Verge (13 June 2019), available at https://www.theverge.com/2019/6/13/18677847/deep-fakes-regulation-facebook-adam-schiff-congress-artificial-intelligence. Indeed, after this hearing, separate legislation was introduced to require the Department of Homeland Security to report on deepfakes (the Senate passed S. 2065 on 24 October 2019) and to require NIST and NSF support for research and reporting on generative adversarial networks (HR 4355 passed the House on 9 December 2019).

[3] The only notable legislative proposal before 2019 was the Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017, also known as the FUTURE of Artificial Intelligence Act, which did not aim to regulate AI directly, but instead proposed a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence. The Act was reintroduced on 9 July 2020 by Representatives Pete Olson (R-TX) and Jerry McNerney (D-CA) as the FUTURE of Artificial Intelligence Act of 2020. The House bill (HR 7559) would require the Director of the National Science Foundation, in consultation with the Director of the Office of Science and Technology Policy, to establish an advisory committee to advise the President on matters relating to the development of AI. A similar bill in the Senate (S. 3771), also titled FUTURE of Artificial Intelligence Act of 2020, was introduced by bipartisan lawmakers on 20 May 2020 and was ordered to be reported with an amendment favourably on 22 July 2020 after passing the Senate Committee on Commerce, Science, and Transportation.

[4] For example, in June 2017, the UK established a government committee to further consider the economic, ethical and social implications of advances in artificial intelligence, and to make recommendations. ‘AI – United Kingdom’, available at https://futureoflife.org/ai-policy-united-kingdom. It also published an Industrial Strategy White Paper that set out a five-part structure by which it will coordinate policies to secure higher investment and productivity. HM Government, ‘Industrial Strategy: Building a Britain fit for the future’ (November 2017), https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/730048/industrial-strategy-white-paper-web-ready-a4-version.pdf. And, in a March 2018 sector deal for AI, the UK established an AI Council to bring together respected leaders in the field, and a new body within the government – the Office for Artificial Intelligence – to support it. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/702810/180425_BEIS_AI_Sector_Deal__4_.pdf.

[5] Joshua New, ‘Why the United States Needs a National Artificial Intelligence Strategy and What It Should Look Like’, The Center for Data Innovation (4 December 2018), available at http://www2.datainnovation.org/2018-national-ai-strategy.pdf.

[6] Donald J Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, The White House (11 February 2019).

[7] The White House, Accelerating America’s Leadership in Artificial Intelligence, Office of Science and Technology Policy (11 February 2019).

[8] Supra note 6, section 2(a) (directing federal agencies to prioritise AI investments in their ‘R&D missions’ to encourage ‘sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.’).

[9] id., section 5 (stating that ‘[h]eads of all agencies shall review their Federal data and models to identify opportunities to increase access and use by the greater non-Federal AI research community in a manner that benefits that community, while protecting safety, security, privacy, and confidentiality’).

[10] Aiming to foster public trust in AI by using federal agencies to develop and maintain approaches for safe and trustworthy creation and adoption of new AI technologies (for example, the EO calls on the National Institute of Standards and Technology (NIST) to lead the development of appropriate technical standards). Within 180 days of the EO, the Secretary of Commerce, through the Director of NIST, shall ‘issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies’ with participation from relevant agencies as the Secretary of Commerce shall determine. The plan is intended to include ‘Federal priority needs for standardization of AI systems development and deployment,’ the identification of ‘standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles,’ and ‘opportunities for and challenges to United States leadership in standardization related to AI technologies’. See id., section 6(d)(i)(A)-(C). Accordingly, we can expect to see proposals from the General Services Administration (GSA), OMB, NIST, and other agencies on topics such as data formatting and availability, standards, and other potential regulatory efforts. NIST’s indirect participation in the development of AI-related standards through the International Organization for Standardization (ISO) may prove to be an early bellwether for future developments.

[11] The EO asks federal agencies to prioritise fellowship and training programmes to prepare for changes relating to AI technologies and to promote science, technology, engineering and mathematics (STEM) education.

[12] In addition, the EO encourages federal agencies to work with other nations in AI development, but also to safeguard the country’s AI resources against adversaries.

[13] The White House Office of Science and Technology Policy, American Artificial Intelligence Initiative: Year One Annual Report (Feb 2020).

[14] Donald J Trump, Artificial Intelligence for the American People, the White House (2019). For example, three years after the release of the initial National Artificial Intelligence Research and Development Strategic Plan, in June 2019 the Trump Administration issued an update – previewed in the administration’s February 2019 executive order – highlighting the benefits of strategically leveraging resources, including facilities, datasets and expertise, to advance science and engineering innovations, bringing forward the original seven focus areas (long-term investments in AI research; effective methods for human-AI collaboration; ethical, legal and societal implications of AI; safety and security of AI systems; shared public datasets and environments for AI training and testing; measuring and evaluating AI technologies through standards and benchmarks; and national AI research-and-development workforce needs) and adding an eighth: public-private partnerships.

[15] HR 2202, 116th Cong (2019).

[16] S. 1558 – Artificial Intelligence Initiative Act, 116th Cong (2019–2020).

[17] The bill also establishes the National AI Research and Development Initiative to identify and minimise ‘inappropriate bias in datasets and algorithms’. The requirement for NIST to identify metrics used to establish standards for evaluating AI algorithms and their effectiveness, as well as the quality of training datasets, may be of particular interest to businesses. Moreover, the bill requires the Department of Energy to create an AI research programme, building state-of-the-art computing facilities that will be made available to private sector users on a cost-recovery basis. Similar legislation, the Advancing Artificial Intelligence Research Act of 2020, was introduced on 4 June 2020 by Senator Cory Gardner. The Act establishes the National Program to Advance Artificial Intelligence Research under NIST to support research and development of technical standards and guidelines that promote the US’s AI goals. See S. 3891, 116th Cong (2nd Sess 2020). Finally, the AI Scholarship-for-Service Act (S. 3901) provides AI practitioners, data engineers, data scientists and data analysts with higher education scholarships in exchange for a commitment to work for a federal, state, local or tribal government, or a state, local or tribal-affiliated non-profit deemed critical infrastructure.

[18] Press Release, Senator Martin Heinrich, ‘Heinrich, Portman, Schatz Propose National Strategy For Artificial Intelligence; Call For $2.2 Billion Investment In Education, Research & Development’ (21 May 2019), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-schatz-propose-national-strategy-for-artificial-intelligence-call-for-22-billion-investment-in-education-research-and-development.

[19] HR 6216, 116th Cong (2nd Sess 2020). If passed, the bill would authorise US$391 million for the National Institute of Standards and Technology (NIST) to develop voluntary standards for trustworthy AI systems, establish a risk assessment framework for AI systems and develop guidance on best practices for public-private data sharing.

[20] S. 3890, 116th Cong (2019–2020). A companion bill (HR 7096) has been introduced in the House.

[21] HR 6950, 116th Cong (2019–2020).

[22] HR Res 153, 116th Cong (1st Sess 2019).

[23] Assemb Con Res 215, Reg Sess 2018–2019 (Cal 2018) (enacted) (expressing the support of the legislature for the ‘Asilomar AI Principles’ – a set of 23 principles developed through a collaboration between AI researchers, economists, legal scholars, ethicists and philosophers that met in Asilomar, California, in January 2017 and categorised into ‘research issues’, ‘ethics and values’ and ‘longer-term issues’ designed to promote the safe and beneficial development of AI – as ‘guiding values for the development of artificial intelligence and of related public policy’).

[24] OECD Principles on AI (22 May 2019) (stating that AI systems should benefit people, be inclusive, transparent and safe, and their creators should be accountable), available at http://www.oecd.org/going-digital/ai/principles.

[25] And recommending solutions and best practices for companies, including multifactor authentication, data encryption, endpoint detection and response, implementation of dedicated security teams, and more. See the White House open letter ‘What We Urge You To Do to Protect Against the Threat of Ransomware’, available at https://image.connect.hhs.gov/lib/fe3915707564047b761078/m/1/8eeab615-15a3-4bc8-8054-81bc23a181a4.pdf.

[26] Laura Peter (Deputy Director of the USPTO), Remarks Delivered at Trust, but Verify: Informational Challenges Surrounding AI-Enabled Clinical Decision Software (23 January 2020), available at https://www.uspto.gov/about-us/news-updates/remarks-deputy-director-peter-trust-verify-informational-challenges.

[27] See USPTO Decision on Petition In Re Application No. 16/524,350, available at https://www.uspto.gov/sites/default/files/documents/16524350_22apr2020.pdf. See also Jon Porter, ‘US Patent Office Rules that Artificial Intelligence Cannot Be a Legal Inventor’, The Verge (29 April 2020), available at https://www.theverge.com/2020/4/29/21241251/artificial-intelligence-inventor-united-states-patent-trademark-office-intellectual-property.

[28] Jon Porter, ‘US Patent Office Rules that Artificial Intelligence Cannot Be a Legal Inventor’, The Verge (29 April 2020), available at https://www.theverge.com/2020/4/29/21241251/artificial-intelligence-inventor-united-states-patent-trademark-office-intellectual-property.

[29] Dani Kass, ‘Physicist Says AI Inventor Ban Alters Patentability Criteria’, Law360 (10 August 2020), available at https://www.law360.com/articles/1299867.

[30] Cara Salvatore, ‘Giving AI Inventorship Would Be A Bridge Too Far, Judge Says’, Law360 (6 April 2021), available at https://www.law360.com/articles/1354993.

[31] United States Patent and Trademark Office, ‘USPTO releases report on artificial intelligence and intellectual property policy’, (6 October 2020), available at https://www.uspto.gov/about-us/news-updates/uspto-releases-report-artificial-intelligence-and-intellectual-property.

[32] United States Patent and Trademark Office, ‘Public Views on Artificial Intelligence and Intellectual Property Policy’, (6 October 2020), available at https://www.uspto.gov/sites/default/files/documents/USPTO_AI-Report_2020-10-07.pdf.

[33] NSCAI, ‘The Final Report’, (1 March 2021), available at https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

[34] id.

[35] S. 3127 – Bot Disclosure and Accountability Act of 2018, 115th Cong (2018), available at https://www.congress.gov/bill/115th-congress/senate-bill/3127 and S. 2125 Bot Disclosure and Accountability Act of 2019, 116th Cong (2019), available at https://www.congress.gov/bill/116th-congress/senate-bill/2125.

[36] Cory Booker, Booker, Wyden, Clarke Introduce Bill Requiring Companies To Target Bias In Corporate Algorithms, United States Senate (10 April 2019), available at https://www.booker.senate.gov/?p=press_release&id=903; see also S.1108 – Algorithmic Accountability Act, 116th Cong (2019).

[37] The bill would allow regulators to take a closer look at any ‘high-risk automated decision system’ – those that involve ‘privacy or security of personal information of consumers’, ‘sensitive aspects of [consumers’] lives, such as their work performance, economic situation, health, personal preferences, interests, behavior, location, or movements’, ‘a significant number of consumers regarding race [and several other sensitive topics]’, or ‘systematically monitors a large, publicly accessible physical place’. For these ‘high-risk’ topics, regulators would be permitted to conduct an ‘impact assessment’ and examine a host of proprietary aspects relating to the system.

[38] Press Release, Senator Martin Heinrich, ‘Heinrich, Portman Announce Bipartisan Artificial Intelligence Bills To Boost AI-Ready National Security Personnel, Increase Governmental Transparency’ (12 May 2021), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-announce-bipartisan-artificial-intelligence-bills-to-boost-ai-ready-national-security-personnel-increase-governmental-transparency; S. 1705, 117th Cong (2021).

[39] id.

[40] S. 1705, 117th Cong (2021).

[42] id.

[43] Elisa Jillson, ‘Aiming for truth, fairness, and equity in your company’s use of AI’, FTC, Business Blog (19 April 2021), available at https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

[44] id.

[45] NSCAI, ‘The Final Report’, (1 March 2021), available at https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

[46] NIST, ‘A Proposal for Identifying and Managing Bias in Artificial Intelligence’, (22 June 2021), available at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270-draft.pdf.

[47] id.

[48] SB 1001, Bolstering Online Transparency Act (Cal 2017), available at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1001.

[49] Press Release, Assemblymember Ed Chau, ‘Chau Introduces Automated Decision Systems Accountability Act of 2021’ (8 December 2020), available at https://a49.asmdc.org/press-releases/20201208-chau-introduces-automated-decision-systems-accountability-act-2021.

[52] Eugenia Lostri, ‘Washington’s New Facial Recognition Law’, CSIS (3 April 2020), available at https://www.csis.org/blogs/technology-policy-blog/washingtons-new-facial-recognition-law.

[53] SB 5116, 67th Legislature (Wash 2021), available at https://app.leg.wa.gov/billsummary?BillNumber=5116&Year=2021&Initiative=False.

[54] S. 1943, 219th Legislature (NJ 2020), available at https://www.njleg.state.nj.us/2020/Bills/S2000/1943_I1.PDF.

[56] id.

[57] NSCAI, ‘The Final Report’, (1 March 2021), available at https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

[58] id.

[59] See Terri Moon Cronk, ‘DOD Unveils Its Artificial Intelligence Strategy’ (12 February 2019), available at https://www.defense.gov/Newsroom/News/Article/Article/1755942/dod-unveils-its-artificial-intelligence-strategy/. In particular, the JAIC director’s duties include, among other things, developing plans for the adoption of artificial intelligence technologies by the military and working with private companies, universities and non-profit research institutions toward that end.

[60] HR 6395, 116th Cong (2021), available at https://www.congress.gov/bill/116th-congress/house-bill/6395.

[61] Summary of the 2018 Department of Defense Artificial Intelligence Strategy, Harnessing AI to Advance Our Security and Prosperity (https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF).

[62] Another potentially significant effort is the work currently being performed under the direction of DARPA on developing explainable AI systems. See https://www.darpa.mil/program/explainable-artificial-intelligence. Because it can be difficult to understand exactly how a machine learning algorithm arrives at a particular conclusion or decision, some have referred to artificial intelligence as being a ‘black box’ that is opaque in its reasoning. However, a black box is not always an acceptable operating paradigm, particularly in the context of battlefield decisions, within which it will be important for human operators of AI-driven systems to understand why particular decisions are being made to ensure trust and appropriate oversight of critical decisions. As a result, DARPA has been encouraging the development of new technologies to explain and improve machine–human understanding and interaction. See also DARPA’s ‘AI Next Campaign’ (https://www.darpa.mil/work-with-us/ai-next-campaign).
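By way of illustration of the kind of post hoc explanation tooling this research agenda has encouraged, below is a minimal Python sketch using the open-source shap library – one common explainability technique, not the specific tooling developed under DARPA’s programme. The model and data are synthetic stand-ins:

```python
"""Illustrative sketch of post hoc explanation of a 'black box' model
using the open-source shap library. The data and model are synthetic
stand-ins, not DARPA's actual tooling."""

import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model on synthetic data.
X, y = make_regression(n_samples=500, n_features=6, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# turning the opaque ensemble into per-decision explanations.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # shape: (5, n_features)

# For each of the first five predictions, show which feature pushed the
# output hardest - the kind of answer a human overseer would need.
for i, contrib in enumerate(shap_values):
    top = max(range(len(contrib)), key=lambda j: abs(contrib[j]))
    print(f"sample {i}: feature {top} contributed {contrib[top]:+.1f}")
```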

[63] id. at 9. See also id. at 15 (the JAIC ‘will articulate its vision and guiding principles for using AI in a lawful and ethical manner to promote our values’.); in addition, under the 2019 NDAA, one duty of the JAIC director is to develop legal and ethical guidelines for the use of AI systems. https://www.govinfo.gov/content/pkg/BILLS-115hr5515enr/pdf/BILLS-115hr5515enr.pdf.

[64] Calls for bans or at least limits on ‘killer robots’ go back several years, and even garnered several thousand signatories, including many leading AI researchers, to the Future of Life Institute’s pledge. See https://futureoflife.org/lethal-autonomous-weapons-pledge.

[65] Indeed, Google was forced to withdraw from Project Maven because of employee activism. See https://www.nytimes.com/2018/06/01/technology/google-pentagon-project-maven.html.

[66] Artificial Intelligence for the Armed Forces Act, S. 3965, 116th Cong (2020).

[67] Press Release, Senator Rob Portman, ‘Portman, Heinrich Secure Groundbreaking Advancements for Artificial Intelligence in FY 2021 NDAA’ (15 December 2020), available at https://www.portman.senate.gov/newsroom/press-releases/portman-heinrich-secure-groundbreaking-advancements-artificial-intelligence; HR 6395, 116th Cong (2021), available at https://www.congress.gov/bill/116th-congress/house-bill/6395.

[68] ‘Summary of AI Provisions from the National Defense Authorization Act 2021’, Stanford Human-Centered Artificial Intelligence, available at https://hai.stanford.edu/policy/policy-resources/summary-ai-provisions-national-defense-authorization-act-2021.

[69] Securing American Leadership in Science and Technology Act, HR 5685, 116th Cong (2020); Securing American Leadership in Science and Technology Act, HR 2153, 117th Cong (2021).

[70] Press Release, Senator Martin Heinrich, ‘Heinrich, Portman Announce Bipartisan Artificial Intelligence Bills To Boost AI-Ready National Security Personnel, Increase Governmental Transparency’ (12 May 2021), available at https://www.heinrich.senate.gov/press-releases/heinrich-portman-announce-bipartisan-artificial-intelligence-bills-to-boost-ai-ready-national-security-personnel-increase-governmental-transparency; S. 1705, 117th Cong (2021); S. 1776, 117th Cong (2021).

[71] S. 1705, 117th Cong (2021).

[72] S. 1260, 117th Cong (2021); Catie Edmondson, ‘Senate Overwhelmingly Passes Bill to Bolster Competitiveness With China’, The New York Times (10 June 2021), available at https://www.nytimes.com/2021/06/08/us/politics/china-bill-passes.html.

[73] S. 1260, 117th Cong (2021).

[74] id.

[75] House Armed Services Committee, ‘Cyber, Innovative Technologies, and Information Systems’, available at https://armedservices.house.gov/cyber-innovative-technologies-and-information-systems.

[76] Press Release, House Armed Services Committee, ‘Smith, Langevin Announce New Subcommittee for the 117th Congress’ (3 February 2021), available at https://armedservices.house.gov/2021/2/smith-langevin-announce-new-subcommittee-for-the-117th-congress.

[77] See, eg, the Mayo Clinic’s partnership with Visage Imaging to further develop artificial intelligence in healthcare; the University of Pittsburgh Schools of the Health Sciences’ new company, Realyze Intelligence, which plans to ‘use both artificial intelligence and natural language processing to determine optimal treatments for patients with chronic diseases’; and Ohio State University’s new artificial intelligence tool for colonoscopies. Jill McKeon, ‘Mayo Clinic Partnership Will Accelerate Artificial Intelligence,’ Health IT Analytics (9 June 2021), available at https://healthitanalytics.com/news/mayo-clinic-partnership-will-accelerate-artificial-intelligence.

[78] Daniel E. Ho et al., ‘How US Law Will Evaluate Artificial Intelligence for covid-19,’ BMJ (March 2021), available at https://www.bmj.com/content/372/bmj.n234.

[79] Natasha Singer, ‘Employers Rush to Adopt Virus Screening. The Tools May Not Help Much,’ The New York Times (14 May 2020), available at https://www.nytimes.com/2020/05/11/technology/coronavirus-worker-testing-privacy.html.

[80] Chris Arsenault, ‘Forgot your keys? Scan your face, says Canadian firm amid privacy concerns,’ Reuters (16 April 2020), available at https://www.reuters.com/article/us-canada-tech-homes-feature-trfn/forgot-your-keys-scan-your-face-says-canadian-firm-amid-privacy-concerns-idUSKBN21O1ZT.

[81] 740 Ill Comp Stat Ann 14/15.

[82] Rosenbach v. Six Flags Entm’t Corp., 2019 IL 123186, 129 N.E.3d 1197 (Ill. 2019).

[83] See Commercial Facial Recognition Privacy Act of 2019, S. 847, 116th Cong (2019); Ethical Use of Facial Recognition Act, S. 3284, 116th Cong (2020); COVID-19 Consumer Data Protection Act of 2020, S. 3663, 116th Cong (2020).

[84] See Facial Recognition and Biometric Technology Moratorium Act, S. ___, 117th Cong (2021).

[85] Protecting Personal Health Data Act, S. 24, 117th Cong (2021).

[86] Stop Marketing And Revealing The Wearables And Trackers Consumer Health Data Act, S. 500, 117th Cong (2021).

[87] Mason Marks, Emergent Medical Data: Health Information Inferred by Artificial Intelligence, 11 UC Irvine L Rev 995 (2021), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3554118.

[88] See, eg, Health Insurance Portability and Accountability Act, 45 CFR section 264(a)–(b) (2006).

[89] If the use or disclosure is related to treating an individual, then the rule is generally not applicable.

[90] Various newer technologies may allow for use of this data in a way that could avoid certain privacy rules. For example, homomorphic encryption allows machine learning algorithms to operate on data that is still encrypted, which could permit a hospital to share encrypted data, allow a remote machine to run analyses, and then receive encrypted results that the hospital could unlock and interpret. See, eg, Kyle Wiggers, ‘Intel open-sources HE-Transformer, a tool that allows AI models to operate on encrypted data’ (3 December 2018), available at https://venturebeat.com/2018/12/03/intel-open-sources-he-transformer-a-tool-that-allows-ai-models-to-operate-on-encrypted-data/. Given its novelty, it is not clear how this would work within the confines of, for example, HIPAA, but it could offer a means to keep personal health information private while also encouraging AI development.
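As a concrete illustration of the pattern described in this note, the following is a minimal Python sketch using the open-source TenSEAL library (a CKKS homomorphic encryption wrapper), rather than HE-Transformer itself. The encryption parameters and the toy linear model are illustrative only:

```python
"""Minimal sketch of the encrypted-analysis pattern described above,
using the open-source TenSEAL library. Parameters and the toy linear
model are illustrative only; this is not a HIPAA-compliant workflow."""

import tenseal as ts

# Hospital side: set up a CKKS context (supports approximate arithmetic
# on encrypted real numbers) and keep the secret key locally.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()  # needed for vector ops such as dot products

# Encrypt a patient feature vector before it ever leaves the hospital.
features = [98.6, 120.0, 80.0]  # toy vitals
enc_features = ts.ckks_vector(context, features)

# Remote analyst side: score a linear model directly on the ciphertext,
# never seeing the underlying values.
weights = [0.02, 0.01, 0.005]
enc_score = enc_features.dot(weights)

# Back at the hospital: only the secret-key holder can decrypt the result.
score = enc_score.decrypt()[0]
print(round(score, 3))  # approximately 98.6*0.02 + 120*0.01 + 80*0.005
```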

[91] Emma Mayer, ‘Healthcare Workers Sue Amazon Over Potential HIPAA Violations With Alexa Device,’ Newsweek (2 July 2021), available at https://www.newsweek.com/healthcare-workers-sue-amazon-over-potential-hipaa-violations-alexa-device-1606589.

[92] Jessica Rich, ‘How our outdated privacy laws doomed contact-tracing apps,’ Brookings (28 January 2021), available at https://www.brookings.edu/blog/techtank/2021/01/28/how-our-outdated-privacy-laws-doomed-contact-tracing-apps/.

[93] See US Food & Drug Administration, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, available at https://www.fda.gov/media/145022/download.

[94] id. at 3.

[95] id.

[96] id. at 4–5.

[97] id. at 6.

[98] See US Food & Drug Administration, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine learning (AI/ML)-Based Software as a Medical Device (SaMD), available at https://www.fda.gov/media/122535/download. The paper notes that one of the primary benefits of using AI in an SaMD product is the product’s ability to continuously update in light of a constant feed of real-world data, which presumably will lead to ‘earlier disease detection, more accurate diagnosis, identification of new observations or patterns on human physiology, and development of personalized diagnostics and therapeutics’. But the current review system for medical devices requires a pre-market review, and pre-market review of any modifications, depending on the significance of the modification. If AI-based SaMDs are intended to constantly adjust, the FDA posits that many of these modifications will require pre-market review – a potentially unsustainable framework in its current form. The paper instead proposes an initial pre-market review for AI-related SaMDs that anticipates the expected changes, describes the methodology, and requires manufacturers to provide certain transparency and monitoring, as well as updates to the FDA about the changes that in fact resulted, in accordance with the information provided in the initial review. The United Kingdom, identified as the health AI hub of Europe in a report by MMC Ventures, is similarly recognising that current aspects of healthcare regulation may be inconsistent with the needs of AI development. MMC Ventures, The State of AI 2019: Divergence (2019). For example, the Health Secretary has described the current system as unfavourable to new companies because it requires long periods of testing. In April 2019 a new unit called NHSX was founded in the UK, which brings together tech leadership (the NHSX’s data strategy is available at https://www.nhsx.nhs.uk/key-tools-and-info/data-saves-lives/), along with a separate funding agency to support PhD students using AI technology to address healthcare issues, among other concerns. The NHS chief executive has also spoken about motivating scientists to offer AI technologies, including by changing the financial framework currently in use. ‘NHS aims to be a world leader in artificial intelligence and machine learning within 5 years’, NHS England (5 June 2019). In June 2021, the UK government announced a £36 million AI research funding boost for the NHS.

[99] While a number of public interest groups, such as the American Civil Liberties Union (ACLU), have come out strongly against the governmental use of facial recognition software, there also seems to be widespread resistance to law enforcement and governmental use of the technology across the political legislative spectrum. Drew Harwell, ‘Both Democrats and Republicans blast facial-recognition technology in a rare bipartisan moment’, The Washington Post (22 May 2019), available at https://www.washingtonpost.com/technology/2019/05/22/blasting-facial-recognition-technology-lawmakers-urge-regulation-before-it-gets-out-control/.

[100] See Bryan Walsh, ‘The Coming Conflict Over Facial Recognition’, Axios (13 February 2021), available at https://www.axios.com/facial-recognition-capitol-hill-riots-regulation-ce71660f-cd52-4c0f-8acf-309ecda3abc6.html (‘Even as efforts to restrict facial recognition at the local level are gathering momentum, the technology is being used across US society, a trend accelerated by efforts to identify those involved in the Capitol Hill insurrection.’).

[101] See Drew Harwell & Craig Timberg, ‘How America’s Surveillance Networks Helped the FBI Catch the Capitol Mob,’ The Washington Post (2 April 2021), available at https://www.washingtonpost.com/technology/2021/04/02/capitol-siege-arrests-technology-fbi-privacy/ (quoting Evan Greer, director of Fight for the Future, a digital rights advocacy group: ‘Once in a while, this technology gets used on really bad people doing really bad stuff. But the rest of the time it’s being used on all of us, in ways that are profoundly chilling for freedom of expression’).

[102] Mark MacCarthy, ‘Mandating fairness and accuracy assessments for law enforcement facial recognition systems,’ Brookings (26 May 2021), available at https://www.brookings.edu/blog/techtank/2021/05/26/mandating-fairness-and-accuracy-assessments-for-law-enforcement-facial-recognition-systems/.

[103] Ethical Use of Facial Recognition Act, S. 3284, 116th Cong (2020).

[104] National Biometric Information Privacy Act, S. 4400, 116th Cong (2020).

[105] Federal Police Camera and Accountability Act, HR 1280, 117th Cong (2021).

[106] Police Creating Accountability by Making Effective Recording Available Act of 2021, HR 1280, 117th Cong (2021).

[107] See Gibson Dunn, ‘Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems’ (29 January 2021), available at https://www.gibsondunn.com/fourth-quarter-and-2020-annual-review-of-artificial-intelligence-and-automated-systems/.

[108] Advancing Facial Recognition Technology Act, HR 4039, 117th Cong (2021).

[109] Advancing American AI Act, S. 1353, 117th Cong (2021).

[110] See Daniel Friedman, ‘Information Transparency and Personal Data Control Act Introduced in Congress,’ Nat L Rev (2 April 2021), available at https://www.natlawreview.com/article/information-transparency-and-personal-data-control-act-introduced-congress.

[111] Nicole Sakin, ‘Will there be federal facial recognition regulation in the US?’ IAPP (11 February 2021), available at https://iapp.org/news/a/u-s-facial-recognition-roundup/.

[112] Thomas Macaulay, ‘California blocks bill that could’ve led to a facial recognition police-state,’ The Next Web (4 June 2020), available at https://thenextweb.com/neural/2020/06/04/california-blocks-bill-that-couldve-led-to-a-facial-recognition-police-state/.

[113] AB-2261, 2019–20 Reg Sess (Cal 2020).

[114] HB 1202, Reg Sess (Md 2020).

[115] SB 6280, Reg Sess (Wash 2020).

[116] HB 2031, Reg Sess (2020–2021). See also Gibson Dunn, ‘Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems’ (29 January 2021), available at https://www.gibsondunn.com/fourth-quarter-and-2020-annual-review-of-artificial-intelligence-and-automated-systems/.

[117] See ‘Facial recognition bills proposed in Maryland, Alabama,’ IAPP (4 February 2021), https://iapp.org/news/a/facial-recognition-bills-proposed-in-maryland-alabama/.

[118] See Illinois ‘Biometric Information Privacy Act’, 740 ILCS 14/1 (PA 95-994, effective 3 October 2008) (Illinois BIPA); Texas Business and Commerce Code Sec. 503.001 ‘Capture or Use of Biometric Identifier’; and Title 19 of the Revised Code of Washington, Chapter 19.375, ‘Biometric Identifiers’.

[119] See Illinois BIPA, Section 20 (providing for statutory damages and a private right of action). The Illinois Supreme Court has further held that pleading an actual injury is not required in order to maintain a private right of action under the Illinois BIPA. See Rosenbach v Six Flags Entertainment Corporation, 2019 IL 123186 (25 January 2019); see also Patel v Facebook, Inc, No. 18-15982 (9th Cir. 8 August 2019) (finding Article III standing for an individual to bring a suit under the Illinois BIPA due to the BIPA’s protection of concrete privacy interests, such that violations of the procedures required by the BIPA amount to actual or threatened harm to such privacy interests).

[120] See Joseph Lazzarotti, ‘Maryland Joins New York with a BIPA-like Biometric Privacy Bill,’ Nat L Rev (21 February 2021), available at https://www.natlawreview.com/article/maryland-joins-new-york-bipa-biometric-privacy-bill.

[121] See California Civil Code Section 1798.100, et seq. (definition of ‘personal information’ under the Act specifically includes ‘biometric information,’ which itself includes ‘imagery of the . . . face’ and ‘a faceprint’; see CCC Sec. 1798.140 (o)(1)(e) and (b), respectively). Note that ‘publicly available’ information is generally excluded from the definition of ‘personal information,’ but that there is a carve-out to this exclusion for biometric information that is collected without the consumer’s knowledge. See CCC Sec. 1798.140 (o)(2).

[123] See, eg, Celeste Bott, ‘Topgolf Agrees To Pay Workers $2.6M In BIPA Suit,’ Law360 (7 June 2021), available at https://www.law360.com/articles/1391488/topgolf-agrees-to-pay-workers-2-6m-in-bipa-suit; Joyce Hanson, ‘$5M BIPA Suit Against McDonald’s Goes To Federal Court,’ Law360 (4 June 2021), available at https://www.law360.com/articles/1390989/-5m-bipa-suit-against-mcdonald-s-goes-to-federal-court.

[124] See Joyce Hanson, ‘$5M BIPA Suit Against McDonald’s Goes To Federal Court,’ Law360 (4 June 2021), available at https://www.law360.com/articles/1390989/-5m-bipa-suit-against-mcdonald-s-goes-to-federal-court.

[125] See Tae Andrews, ‘Ill. BIPA Ruling Marks Critical Win For Silent Cyber Coverage,’ Law360 (8 June 2021), available at https://www.law360.com/articles/1392035/ill-bipa-ruling-marks-critical-win-for-silent-cyber-coverage.

[126] See Gibson Dunn, ‘Fourth Quarter and 2020 Annual Review of Artificial Intelligence and Automated Systems’ (29 January 2021), available at https://www.gibsondunn.com/fourth-quarter-and-2020-annual-review-of-artificial-intelligence-and-automated-systems/.

[127] See Nicole Sakin, ‘Will there be federal facial recognition regulation in the US?’ IAPP (11 February 2021), available at https://iapp.org/news/a/u-s-facial-recognition-roundup/.

[128] See Zack Whittaker, ‘New York City’s new biometrics privacy law takes effect,’ TechCrunch (9 July 2021), available at https://techcrunch.com/2021/07/09/new-york-city-biometrics-law/.

[129] See Nicole Sakin, ‘Will there be federal facial recognition regulation in the US?’ IAPP (11 February 2021), available at https://iapp.org/news/a/u-s-facial-recognition-roundup/.

[130] Emily Birnbaum, ‘How the Democrats’ police reform bill would regulate facial recognition,’ Protocol (8 June 2020), available at https://www.protocol.com/police-facial-recognition-legislation.

[131] HR 4355, 116th Cong (2019).

[132] See Scott Briscoe, ‘US Laws Address Deepfakes,’ ASIS (12 January 2021), available at https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2021/january/U-S-Laws-Address-Deepfakes/.

[133] IOGAN Act, Public L No 116-258, 116th Cong (2020), available at https://www.congress.gov/bill/116th-congress/senate-bill/2904.

[134] US Senate Committee on Commerce, Science and Transportation, Press Release (24 October 2017), available at https://www.commerce.senate.gov/public/index.cfm/pressreleases?ID=BA5E2D29-2BF3-4FC7-A79D-58B9E186412C.

[135] Letter from Democratic Senators to US Senate Committee on Commerce, Science and Transportation (14 March 2018), available at https://morningconsult.com/wp-content/uploads/2018/11/2018.03.14-AV-START-Act-letter.pdf.

[136] Kris Van Cleave, ‘Bipartisan senators try to jumpstart autonomous driving legislation,’ CBS News (23 April 2021), available at https://www.cbsnews.com/news/self-driving-car-autonomous-driving-legislation/.

[137] David Shepardson, ‘US push for self-driving cars faces union, lawyers opposition,’ Reuters (16 June 2021), available at https://www.reuters.com/business/autos-transportation/us-push-self-driving-cars-faces-union-lawyers-opposition-2021-06-16/.

[138] Andrew J Hawkins, ‘Secretary Pete Buttigieg on the Future of Transportation,’ The Verge (10 May 2021), available at https://www.theverge.com/22412233/pete-buttigieg-transportation-secretary-interview.

[139] House Committee on Energy and Commerce, Re: Hearing on ‘Autonomous Vehicles: Promises and Challenges of Evolving Automotive Technologies’ (7 February 2020), available at https://docs.house.gov/meetings/IF/IF17/20200211/110513/HHRG-116-IF17-20200211-SD002.pdf.

[140] The Hill, ‘House lawmakers close to draft bill on self-driving cars’ (11 February 2020), available at https://thehill.com/policy/technology/482628-house-lawmakers-close-to-draft-bill-on-self-driving-cars; Automotive News, ‘Groups call on U.S. lawmakers to develop “meaningful legislation” for AVs’ (11 February 2020), available at https://www.autonews.com/mobility-report/groups-call-us-lawmakers-develop-meaningful-legislation-avs.

[141] The previously released draft bill includes sections on federal, state and local roles, exemptions, rulemakings, FAST Act testing expansion, advisory committees and definitions. See https://www.mema.org/draft-bipartisan-driverless-car-bill-offered-house-panel.

[142] Maggie Miller, ‘Congress makes renewed push on self-driving cars bill,’ The Hill (17 February 2021), available at https://thehill.com/policy/technology/539063-congress-makes-renewed-push-on-self-driving-cars-bill.

[143] House Committee on Energy and Commerce, Re: Hearing on ‘Promises And Perils: The Potential Of Automobile Technologies’ (18 May 2021), available at https://energycommerce.house.gov/committee-activity/hearings/hearing-on-promises-and-perils-the-potential-of-automobile-technologies.

[144] US Dep’t of Transp, Automated Vehicles Comprehensive Plan (January 2021), available at https://www.transportation.gov/sites/dot.gov/files/2021-01/USDOT_AVCP.pdf.

[145] US Dep’t of Transp, Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0 (January 2020), available at https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/360956/ensuringamericanleadershipav4.pdf.

[146] US Dep’t of Transp, ‘Preparing for the Future of Transportation: Automated Vehicles 3.0’ (October 2018), available at https://www.transportation.gov/sites/dot.gov/files/docs/policy-initiatives/automated-vehicles/320711/preparing-future-transportation-automated-vehicle-30.pdf.

[147] Its goal of modernising the regulatory environment includes plans to streamline paths to deployment of ADS technologies (including issuing exemptions and waivers where appropriate) and remove unnecessary barriers in the Federal Motor Vehicle Safety Standards (FMVSS) and Federal Motor Carrier Safety Regulations (FMCSRs), while also developing innovative safety performance oversight models.

[148] US Dep’t of Transp, NHTSA Issues First-Ever Proposal to Modernize Occupant Protection Safety Standards for Vehicles Without Manual Controls, available at https://www.nhtsa.gov/press-releases/adapt-safety-requirements-ads-vehicles-without-manual-controls.

[150] See Framework for Automated Driving System Safety, 85 Fed Reg 78058 (3 December 2020), available at https://www.federalregister.gov/documents/2020/12/03/2020-25930/framework-for-automated-driving-system-safety.

[151] The proposed rulemaking identifies ‘four primary functions’ of the ADS that the NHTSA will consider: (1) ‘how the ADS receives information about its environment through sensors’; (2) ‘how the ADS detects and categorizes other road users…infrastructure…and conditions’; (3) ‘how the ADS analyzes the situation, plans the route it will take on the way to its intended destination, and makes decisions on how to respond’; and (4) ‘how the ADS executes the driving functions necessary to carry out that plan…through interaction with other parts of the vehicle.’ See Framework for Automated Driving System Safety, 85 Fed Reg 78058 (3 December 2020), available at https://www.federalregister.gov/documents/2020/12/03/2020-25930/framework-for-automated-driving-system-safety.

[152] See NHTSA, Standing General Order on Crash Reporting for Levels of Driving Automation 2-5, available at https://www.nhtsa.gov/laws-regulations/standing-general-order-crash-reporting-levels-driving-automation-2-5.

[153] In Washington, Governor Jay Inslee signed into law HB 1325, a measure that will create a regulatory framework for personal delivery devices (PDDs) that deliver property via sidewalks and crosswalks (eg, wheeled robots). See 2019 Wash Sess Laws, Ch 214. Washington is now the eighth US state to permit the use of delivery bots in public locations. The other states are Virginia, Idaho, Wisconsin, Florida, Ohio, Utah and Arizona. On 27 March 2020, Governor Inslee signed HB 2676 into law, which established minimum requirements for AV testing in Washington. The requirements include liability insurance, advance notification to local and state law enforcement, and annual incident reporting. Notably absent is a provision regarding the presence of a human operator. See 2020 Wash Sess Laws, Ch 182. Pennsylvania, which recently passed legislation creating a commission on ‘highly automated vehicles’, has proposed a bill (still in committee) that would authorise the use of an autonomous shuttle vehicle on a route approved by the Pennsylvania Department of Transportation. HB 1078, 2019–2020 Reg Sess (Pa 2019).

[154] On 13 June 2019, Florida Governor Ron DeSantis signed CS/HB 311: Autonomous Vehicles into law, which went into effect on 1 July 2019. CS/HB 311 establishes a statewide statutory framework, permits fully automated vehicles to operate on public roads, and removes obstacles that hinder the development of self-driving cars. See, eg, ‘Governor Ron DeSantis Signs CS/HB 311: Autonomous Vehicles’ (13 June 2019), available at https://www.flgov.com/2019/06/13/governor-ron-desantis-signs-cs-hb-311-autonomous-vehicles/.

[155] For testing both with and without drivers, applicants must provide information to Cal DOT and maintain a minimum of US$5 million in insurance. See Insurance Institute for Highway Safety, ‘Autonomous vehicle laws’ (July 2021), available at https://www.iihs.org/topics/advanced-driver-assistance/autonomous-vehicle-laws.

[156] State of California Department of Motor Vehicles, Autonomous Light-Duty Motor Trucks (Delivery Vehicles), available at https://www.dmv.ca.gov/portal/vehicle-industry-services/autonomous-vehicles/california-autonomous-vehicle-regulations/. The DMV’s regulations continue to exclude the autonomous testing or deployment of vehicles weighing more than 10,001 pounds.

[157] See California Public Utilities Commission, Press Release, ‘CPUC Launches New Autonomous Vehicle Programs’ (19 November 2020), https://docs.cpuc.ca.gov/PublishedDocs/Published/G000/M351/K623/351623457.PDF.

[158] SB 500, 2021-2022 Reg Sess (Cal 2021).

[159] SB 570, 2021-2022 Reg Sess (Cal 2021).

[160] See Robert Seamans, ‘Autonomous vehicles as a “killer app” for AI’, Brookings (22 June 2021), available at https://www.brookings.edu/research/autonomous-vehicles-as-a-killer-app-for-ai/; SB 66, 2021-2022 Reg Sess (Cal 2021).

[161] See NCSL, ‘Autonomous Vehicles State Bill Tracking Database’ (8 July 2021), https://www.ncsl.org/research/transportation/autonomous-vehicles-legislative-database.aspx.

[162] See SF 2912 (Minn 2019) in Minnesota, and SB 5376, 66th Leg, Reg Session (Wash 2019) in Washington.

[163] New York Privacy Act, A680/S6701, Reg Sess (NY 2021).

[164] ‘Meaningful human review’ is defined in the bill as ‘review or oversight by one or more individuals who (1) are trained in the capabilities and limitations of the algorithm at issue and the procedures to interpret and act on the output of the algorithm, and (2) have the authority to alter the automated decision under review.’ The previous version of the bill, introduced in 2020, limited entities’ abilities to make decisions based solely on profiling with legal effects for the consumer. New York Privacy Act, A8526/S5642, Reg Sess (NY 2020). There, ‘profiling’ was broadly defined as ‘automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person. In particular, to analyse or predict aspects concerning the natural person’s economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements.’ If an entity engaged in profiling, it had to disclose that fact, along with ‘information about the logic involved, and the significance and envisaged consequences of the profiling.’ The new version of the bill, however, does not include any reference to ‘profiling.’ Instead, the new bill only limits predictions of consumer behaviour as they relate to targeted advertising, which does not include advertising ‘based on the context of the consumer’s current search query or visit to a website or online application.’ Thus, while the state legislation discussed throughout this section may pose some obstacles for those who wish to use AI with consumer data, the reintroduced legislation shows some major concessions that could negatively affect Americans whose data is collected and used by regulated entities.

[165] On the development side, the New York Privacy Act requires entities to ‘describe[] and evaluate[] the objectives and development of the automated decision-making processes including the design and training data used to develop the automated decision-making process, how the automated decision-making process was tested for accuracy, fairness, bias and discrimination.’ On the impact side, the bill requires entities to ‘assess[] whether the automated decision-making system produces discriminatory results on the basis of a consumer’s or class of consumers’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability.’

[166] Virginia Consumer Data Protection Act, S.B. 1392 § 59.1-576. The assessment is required for specified uses of personal data: (1) processing personal data for targeted advertising, (2) selling personal data, (3) using personal data for profiling that poses a foreseeable risk, (4) processing sensitive data, and (5) any processing activities involving personal data that present a heightened risk of harm to the consumer. A data controller assesses such uses of personal data by identifying and weighing the direct and indirect benefits flowing from the processing to the controller, the consumer, the public and other stakeholders against the potential risks to the rights of consumers. The balancing can take into account any mitigating safeguards that reduce the risk of harm. The assessment also considers the use of de-identified data, the reasonable expectations of consumers, the processing context, and the relationship between the consumer whose personal data is being processed and the controller processing that data.

[167] CRS §6-1-1306(1)(a)(I).

[168] CRS §6-1-1306(1)(a)(I).

[169] CPRA Section 21, adding Cal. Civ. Code §1798.185(a)(16).

[170] CPRA Section 14, adding Cal. Civ. Code §1798.140(z).

[171] See, eg, GDPR article 22, and Recital 71 (‘the data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention’).

[172] SB 6281, 66th Leg, Reg Session (Wash 2019), available at http://lawfilesext.leg.wa.gov/biennium/2019-20/Pdf/Bills/Senate%20Bills/5376.pdf.

[173] For a related analysis on how GDPR may hinder AI made in Europe, see Ahmed Baladi, ‘Can GDPR Hinder AI Made in Europe?’, Cybersecurity Law Report (10 July 2019) available at https://www.gibsondunn.com/can-gdpr-hinder-ai-made-in-europe.

[174] The CPRA’s broad definition of ‘personal information’ further contributes to this issue. The CPRA generally defines personal information as ‘information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.’ This definition is slightly narrower than that of the predecessor CCPA, which omitted each use of the word ‘reasonably’ now found in the CPRA’s definition. Still, the CPRA also introduces a new class of information called ‘sensitive personal information’, which includes personal information that reveals, for example, a person’s social security number, financial account information, precise geolocation, ‘a consumer’s racial or ethnic origin, contents of certain communications, religious or philosophical beliefs, or union membership,’ and genetic information. It also covers the processing, collection and analysis of certain information, such as biometric information processed for the purpose of identifying the individual, and personal information collected and analysed concerning a consumer’s health, sexual orientation or sex life. Note, however, that certain requirements present in the GDPR but absent from the CPRA and CCPA make the risk of stifling AI development less pronounced under the California regime. One example is the GDPR’s requirement of a legal basis for processing. This requirement is likely to inhibit development even more, because obtaining consent, or establishing legitimate interests, for AI technologies may be difficult, particularly where (1) it may be unknown how exactly the technology works, (2) the technology may not have a clear use at the company, and (3) it may still be in the developmental stages. The same is true of the GDPR’s data minimisation principles: while data minimisation may be assumed under the CPRA and its predecessor CCPA, it is not explicitly required, and minimising data can be fundamentally at odds with AI development. Under the VCDPA, personal data is defined as ‘any information that is linked or reasonably linkable to an identified or identifiable natural person,’ and ‘de-identified data or publicly available information’ is excluded from ‘personal data’.

[175] See, eg, Tony Romm, ‘Privacy Activist in California launches new ballot initiative for 2020 election’, The Washington Post (24 September 2019), available at https://www.washingtonpost.com/technology/2019/09/25/privacy-activist-california-launches-new-ballot-initiative-election/. The California Privacy Rights Act (CPRA) obtained enough signatures to be placed on the California ballot for voters in November 2020. While the CPRA imposes additional obligations on businesses, it also extends the business-to-business and employment-related data exemptions. See Geoffrey A Fowler, ‘The Technology 202: Privacy advocates battle each other over whether California’s Proposition 24 better protects consumers’, The Washington Post.

[176] Under the Virginia Consumer Data Protection Act (VCDPA), a company must process or control the personal data of at least 100,000 Virginia residents before being subject to the Act’s requirements. Alternatively, a company that derives over 50 per cent of its gross revenue from the sale of personal data and controls or processes the personal data of at least 25,000 Virginia residents is also subject to the VCDPA.

[177] Office of US Senator Kirsten Gillibrand, Press Release, Gillibrand Introduces New And Improved Consumer Watchdog Agency To Give Americans Control Over Their Data (17 June 2021), https://www.gillibrand.senate.gov/news/press/release/gillibrand-introduces-new-and-improved-consumer-watchdog-agency-to-give-americans-control-over-their-data.

[178] Data Protection Act of 2021, S. 2134, 117th Cong. §9 (2021). ‘Personal data’ is defined under the proposed legislation as ‘electronic data that, alone or in combination with other data—(A) identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular individual, household, or device; or (B) could be used to determine that an individual or household is part of a protected class.’ Data Protection Act of 2021, S. 2134, 117th Cong. §2(16) (2021).

[179] Data Protection Act of 2021, S. 2134, 117th Cong. §2(3) (2021).

[180] Data Protection Act of 2021, S. 2134, 117th Cong. §§2(11), (12) (2021).

[181] In addition to the Data Protection Act of 2021, various federal bills have been introduced over the years that are also likely to affect AI companies, including the Consumer Data Privacy and Security Act of 2020 (CDPSA), S. 3456, 116th Cong (2020). The CDPSA proposes a comprehensive data privacy framework that incorporates concepts from the CCPA and GDPR, and covers obligations with respect to consent, transparency and legitimate means for processing data. The Consumer Online Privacy Rights Act (COPRA), proposed in December 2019 and also a comprehensive data protection law, specifically imposes obligations on algorithmic decision-making, including annual impact assessments. COPRA, S. 2968, 116th Cong (2019). Both could hinder the collection and use of information for building AI algorithms.

[182] The EEOC enforces federal laws protecting job applicants and employees from discrimination based on protected categories (including race, colour, religion, sex, national origin, age, disability and genetic information), including Title VII of the Civil Rights Act of 1964, 42 USC section 2000e et seq; the Equal Pay Act of 1963, 29 USC section 206(d); the Age Discrimination in Employment Act of 1967, 29 USC sections 621–634; the Rehabilitation Act of 1973, 29 USC section 701 et seq; and the Civil Rights Act of 1991, S. 1745, 102nd Cong (1991).

[183] See Alexandra Reeves Givens, Hilke Schellmann and Julia Stoyanovich, ‘We Need Laws to Take on Racism and Sexism in Hiring Technology’, New York Times (17 March 2021) (discussing the biases built into technologies such as resume-scanning algorithms and facial recognition interview software, which primarily impact marginalised communities).

[184] Chris Opfer and Ben Penn, ‘Punching In: Workplace Bias Police Look at Hiring Algorithms’, Bloomberg Law (28 October 2019).

[185] An FTC blog post warned companies that using AI that results in discrimination against a protected class could result in an FTC ‘complaint alleging violations of the FTC Act’ and any other relevant federal law for which the FTC has enforcement authority. Elisa Jillson, ‘Aiming for truth, fairness, and equity in your company’s use of AI’, FTC Blog (19 April 2021, 9:43 AM), https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.

[186] Kamala D Harris, Patty Murray and Elizabeth Warren, Letter to US Equal Employment Opportunity Commission (17 September 2018), available at https://www.scribd.com/embeds/388920670/content#from_embed. US Senators Kamala Harris, Patty Murray and Elizabeth Warren probed the EEOC in a September 2018 letter requesting that the Commission draft guidelines on the use of facial recognition technology in the workplace (eg, for attendance and security) and in hiring (eg, for emotional or social cues presumably associated with the quality of a candidate). The letter cites various studies showing that facial recognition algorithms are significantly less accurate for darker-skinned individuals, and discusses legal scholars’ views on how such algorithms may ‘violate workplace anti-discrimination laws, exacerbating employment discrimination while simultaneously making it harder to identify or explain’ in a court, where such violations may be remediated. Similarly focused letters were sent to the FTC and FBI by varying groups of senators. Kamala D Harris, Cory A Booker and Cedric L Richmond, Letter to Federal Bureau of Investigation (17 September 2018), available at https://www.scribd.com/embeds/388920671/content#from_embed; Kamala D Harris, Richard Blumenthal, Cory A Booker and Ron Wyden, Letter to Federal Trade Commission (17 September 2018), available at https://www.scribd.com/embeds/388920672/content#from_embed. Further, in late 2019, senators sent letters to the FTC and Centers for Medicare and Medicaid Services questioning their efforts to investigate healthcare companies regarding biased decision-making, including relating to the prioritisation of healthcare. See, eg, Tom Simonite, ‘Senators Protest a Health Algorithm Biased Against Black People’, Wired (3 December 2019), https://www.wired.com/story/senators-protest-health-algorithm-biased-against-black-people/. US Senators Elizabeth Warren and Doug Jones also sent a letter in June 2019 to various federal financial regulators (the Federal Reserve, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency and Consumer Financial Protection Bureau) regarding the use of AI by financial technology companies that has resulted in discriminatory lending practices. The Senators requested answers to various questions to ‘help [them] understand the role that [the agencies] can play in addressing FinTech discrimination’. Elizabeth Warren and Doug Jones, Letter to The Board of Governors of the Federal Reserve, the Federal Deposit Insurance Corporation, The Office of the Comptroller of the Currency, and The Consumer Financial Protection Bureau (10 June 2019), available at https://www.warren.senate.gov/imo/media/doc/2019.6.10%20Letter%20to%20Regulators%20on%20Fintech%20FINAL.pdf. These efforts have continued in 2021, as virtual interviewing has increased during the pandemic. Senator Michael Bennet, for example, sent a letter to the EEOC requesting information on the EEOC’s authority to investigate and research AI technology and to produce guidance for companies using such technology in the hiring process. Michael Bennet, Cory Booker, Sherrod Brown, Elizabeth Warren, Catherine Cortez Masto, Christopher A. Coons, Ron Wyden, Tina Smith, Chris Van Hollen and Jeffrey A. Merkley, Letter to Equal Employment Opportunity Commission (8 December 2020), available at https://www.bennet.senate.gov/public/_cache/files/0/a/0a439d4b-e373-4451-84ed-ba333ce6d1dd/672D2E4304D63A04CC3465C3C8BF1D21.letter-to-chair-dhillon.pdf.

[187] Data Protection Act of 2021, S. 2134, 117th Cong. § 5(b)(1) (2021). Senator Gillibrand’s proposed legislation, as discussed in more detail above, would also create a research unit to assess and minimise discriminatory data use – including assessing the development, impact, and effect of using AI. Data Protection Act of 2021, S. 2134, 117th Cong. §§5(b)(1), (2) (2021).

[188] See, eg, Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters (10 October 2018) (a hiring tool from Amazon was found to incorporate and extrapolate a pre-existing bias towards men, resulting in the penalisation of resumes referencing women-related terms and institutions (eg, all-women colleges, the word ‘women’s’)). The tool has since been decommissioned. See also Tom Simonite, ‘Senators Protest a Health Algorithm Biased Against Black People’, Wired (3 December 2019).

[189] For example, at the end of 2020, New York placed a two-year moratorium on the use of facial recognition in schools because of the biases the technology introduces. See Governor of New York Press Office, Press Release, Governor Cuomo Signs Legislation Suspending Use and Directing Study of Facial Recognition Technology in Schools (22 December 2020). Further, a New York City bill introduced in 2021 would require companies that use automated employment-decision tools to decide compensation or screen candidates to disclose this to candidates. It would also require the technology companies producing the software to run yearly audits to ensure the technology is not biased. See Tom Simonite, ‘New York City Proposes Regulating Algorithms Used in Hiring’, Wired (8 January 2021), available at https://www.wired.com/story/new-york-city-proposes-regulating-algorithms-hiring/.

[190] Dakota Foster, ‘Antitrust Investigations have Deep Implications for AI and National Security’, The Brookings Institution (2 June 2020), available at https://www.brookings.edu/techstream/antitrust-investigations-have-deep-implications-for-ai-and-national-security/. See also Nitisha Tiku, ‘Big Tech: Breaking Us Up Will Only Help China’, Wired (23 May 2019), available at https://www.wired.com/story/big-tech-breaking-will-only-help-china/. Starting in early 2021, the Chinese government began enforcing its antitrust laws more aggressively against some of its large technology companies, issuing significant fines against Tencent, Baidu and Alibaba. See Zheping Huang and Coco Liu, ‘Tencent and Baidu Fined by Antitrust Regulator For Previous Deals’, Bloomberg (11 March 2021), available at https://www.bloomberg.com/news/articles/2021-03-12/tencent-baidu-fined-by-antitrust-regulator-for-previous-deals. However, some commentators suggest that these antitrust actions may actually spur additional AI innovation in China rather than hindering it, since the fines are not designed to discourage mergers but instead to align the companies with the Chinese government’s innovation goals. See Angela Huyue Zhang, ‘China is leaning into antitrust regulation to stay competitive with the US’, Fortune (8 February 2021), available at https://fortune.com/2021/02/08/china-antitrust-tech-alibaba-tencent-billionaires/.

[191] US House Committee on The Judiciary, Press Release, Judiciary Antitrust Subcommittee Investigation Reveals Digital Economy Highly Concentrated, Impacted by Monopoly Power (6 October 2020), available at https://judiciary.house.gov/news/documentsingle.aspx?DocumentID=3429; Rep. Jerrold Nadler and Rep. David N. Cicilline, Subcommittee on Antitrust, Commercial and Administrative Law of the Committee on the Judiciary, Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations, available at https://judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf?utm_campaign=4493-519.

[192] Danielle Abril, ‘Facebook and Amazon grilled over history of aggressive competitive practices at antitrust congressional hearing’ (29 July 2020).

[193] John D. McKinnon, ‘Google, Facebook Pressure Falls Short as Antitrust Measures Advance in House Committee,’ The Wall Street Journal (24 June 2021), available at https://www.wsj.com/articles/google-facebook-pressure-falls-short-as-major-antitrust-measures-advance-in-house-committee-11624505607; see also HR 3460, 117th Cong (2021); HR 3816, 117th Cong (2021); HR 3825, 117th Cong (2021); HR 3826, 117th Cong (2021); HR 3843, 117th Cong (2021); HR 3849, 117th Cong (2021).

[194] Sadie Gurman, ‘Barr Strives to Keep Justice Moving Amid Coronavirus Crisis,’ The Wall Street Journal (23 March 2020), available at https://www.wsj.com/articles/barr-strives-to-keep-justice-moving-amid-coronavirus-crisis-11584955802.

[195] Justice Department Reviewing the Practices of Market-Leading Online Platforms, Department of Justice, Office of Public Affairs, Press Release No. 19-799 (23 July 2019), available at https://www.justice.gov/opa/pr/justice-department-reviewing-practices-market-leading-online-platforms.

[196] Ben Brody and Daniel Stoller, ‘Facebook Acquisitions Probed by FTC in Broad Antitrust Inquiry’, Bloomberg (1 August 2019), available at https://www.bloomberg.com/news/articles/2019-08-01/facebook-acquisitions-probed-by-ftc-in-broad-antitrust-inquiry; Federal Trade Commission, ‘FTC to Examine Past Acquisitions by Large Technology Companies’, (11 February 2020), https://www.ftc.gov/news-events/press-releases/2020/02/ftc-examine-past-acquisitions-large-technology-companies.

[197] Promoting Competition in the American Economy, 86 Fed Reg 36,987 §5(h)(i) and (iv).

[198] Brent Kendall and Ryan Tracy, ‘Big Tech Adversary Poised to Take Assertive FTC Antitrust Role,’ The Wall Street Journal (23 May 2021), available at https://www.wsj.com/articles/big-tech-adversary-poised-to-take-assertive-ftc-antitrust-role-11621785601; Lauren Hirsch and David McCabe, ‘Biden to Name a Critic of Big Tech as the Top Antitrust Cop,’ The New York Times (20 July 2021), available at https://www.nytimes.com/2021/07/20/business/kanter-doj-antitrust.html.

[199] John D. McKinnon, ‘FTC Vote to Broaden Agency’s Mandate Seen as Targeting Tech Industry,’ The Wall Street Journal (1 July 2021), available at https://www.wsj.com/articles/ftc-vote-to-broaden-agencys-mandate-seen-as-targeting-tech-industry-11625169512?mod=article_inline.

[200] See, eg, Irina Ivanova, ‘Why Big Tech’s big breakup may never come’, CBS News (4 June 2019), available at https://www.cbsnews.com/news/feds-eye-google-facebook-amazon-apple-for-antitrust-issues.

[201] For example, it is expected that the government will face difficulties with the fast pace of technological change, with establishing that each company individually constitutes a monopoly (rather than considering them in combination), and with simply being up against several of the largest companies in the world. See, eg, Jon Swartz, ‘Four reasons why antitrust actions will likely fail to break up Big Tech’, MarketWatch (15 June 2019), available at https://www.marketwatch.com/story/breaking-up-big-tech-is-a-big-task-2019-06-10.

[202] Suspension of Entry of Immigrants Who Present a Risk to the United States Labor Market During the Economic Recovery Following the 2019 Novel Coronavirus Outbreak, 85 Fed Reg 23,441 (22 April 2020).

[203] Suspension of Entry of Immigrants and Nonimmigrants Who Present a Risk to the United States Labor Market During the Economic Recovery Following the 2019 Novel Coronavirus Outbreak, 85 Fed Reg 38,263 (22 June 2020).

[204] Suspension of Entry of Immigrants and Nonimmigrants Who Continue to Present a Risk to the United States Labor Market During the Economic Recovery Following the 2019 Novel Coronavirus Outbreak, 86 Fed Reg 417 (31 December 2020).

[205] Daniel Costa, ‘Trump executive order to suspend immigration would reduce green cards by nearly one-third if extended for a full year,’ Economic Policy Institute Working Economics Blog (23 April 2020), available at https://www.epi.org/blog/trump-executive-order-to-suspend-immigration-would-reduce-green-cards-by-nearly-one-third-if-extended-for-a-full-year/.

[206] See Michelle Hackman, ‘Businesses Worry About Biden’s Silence on Work-Visa Ban,’ The Wall Street Journal (10 February 2021), available at https://www.wsj.com/articles/businesses-worry-about-bidens-silence-on-work-visa-ban-11612965607 (showing that 188,123 H-1B visas were approved in FY 2019, compared to 124,983 in FY 2020, a decrease of approximately one-third).

[207] Michelle Hackman, ‘Biden Administration to Allow Work-Visa to Expire,’ The Wall Street Journal (31 March 2021), available at https://www.wsj.com/articles/biden-administration-to-allow-work-visa-ban-to-expire-11617204628.

[208] Strengthening Wage Protections for the Temporary and Permanent Employment of Certain Aliens in the United States, 85 Fed Reg 63,872 (8 October 2020); Strengthening Wage Protections for the Temporary and Permanent Employment of Certain Aliens in the United States, 86 Fed Reg 3,608 (14 January 2021); Michelle Hackman, ‘Trump Administration Announces Overhaul of H-1B Visa Program,’ The Wall Street Journal (6 October 2020), available at https://www.wsj.com/articles/trump-administration-announces-overhaul-of-h-1b-visa-program-11602017434.

[209] Michelle Hackman, ‘Federal Judge Strikes Down Trump’s H-1B Visa Rules on Highly Skilled Foreign Workers,’ The Wall Street Journal (1 December 2020), available at https://www.wsj.com/articles/federal-judge-strikes-down-trumps-h-1b-visa-rules-on-highly-skilled-foreign-workers-11606871592; Stuart Anderson, ‘Trump’s H-1B Visa Wage Rule Is Dead: What’s Next?’ Forbes (1 July 2021), available at https://www.forbes.com/sites/stuartanderson/2021/07/01/trumps-h-1b-visa-wage-rule-is-dead-whats-next/?sh=416d16cc4a21.

[210] See Department of Labor, Announcements, ‘June 29, 2021. OFLC Announces Updates to Implementation of the Final Rule Affecting Wages for H-1B and PERM Workers; District Court’s Order Vacating Final Rule,’ available at https://www.dol.gov/agencies/eta/foreign-labor/news; Request for Information on Data Sources and Methods for Determining Prevailing Wage Levels for the Temporary and Permanent Employment of Certain Immigrants and Non-Immigrants to the United States, 86 Fed Reg 17,343 (2 April 2021); Genevieve Douglas, ‘Labor Department Seeking H-1B Wages Input as Trump Rule Waits,’ Bloomberg Law (1 April 2021), available at https://news.bloomberglaw.com/daily-labor-report/labor-department-seeking-h-1b-wages-input-as-trump-rule-waits.

[211] Angus Loten, ‘Tech Executives Look to New Administration to Undo Rules Blocking Talent,’ The Wall Street Journal (19 January 2021), available at https://www.wsj.com/articles/tech-executives-look-to-new-administration-to-undo-rules-blocking-talent-11611096393.

[212] Jeffrey Mervis, ‘More restrictive U.S. policy on Chinese graduate student visas raises alarm,’ Science (11 June 2018), available at https://www.sciencemag.org/news/2018/06/more-restrictive-us-policy-chinese-graduate-student-visas-raises-alarm.

[213] Suspension of Entry as Non-immigrants of Certain Students and Researchers From the People’s Republic of China, 85 Fed Reg 34,353 (4 June 2020).

[214] Eamon Barrett, ‘Biden lifts COVID-19 travel restrictions for Chinese students, providing a lifeline to U.S. colleges,’ Fortune (28 April 2021), available at https://fortune.com/2021/04/28/biden-covid-19-visa-travel-restrictions-chinese-students-china-international-us-colleges-universities/.

[215] MacroPolo, ‘The Global AI Talent Tracker’, available at https://macropolo.org/digital-projects/the-global-ai-talent-tracker/.

[216] Paul Mozur and Cade Metz, ‘A U.S. Secret Weapon in A.I.: Chinese Talent’, The New York Times (9 June 2020), available at https://www.nytimes.com/2020/06/09/technology/china-ai-research-education.html.
