United States: Artificial Intelligence

Introduction

Over the past several years, lawmakers and government agencies have sought to develop artificial intelligence (AI) strategies and policies with the aim of balancing the tension between protecting the public from the potentially harmful effects of AI technologies and encouraging innovation and competitiveness. As AI technologies become increasingly commercially viable, one of the most interesting challenges lawmakers face in the governance of AI is determining which issues can safely be left to ethics (in the form of informal guidance or voluntary standards) and which rules should be codified in law.[1]

The first half of 2019 saw a surge in debate about the role of governance in the AI ecosystem and the gap between technological change and regulatory response in the digital economy. In the United States, this trend manifested in particular in calls for regulation of certain ‘controversial’ AI technologies or use cases, which in turn have emboldened lawmakers to take fledgling steps to control the scope of AI and automated systems in the public and private sectors. While it remains too soon to herald the arrival of a comprehensive federal regulatory strategy in the United States, a number of recent high-profile draft bills address the role of AI and how it should be governed at the federal level, and US state and local governments are already pressing forward with concrete legislative proposals regulating the use of AI. Likewise, the European Union has taken numerous steps to demonstrate its commitment to the advancement of AI technology through funding,[2] while simultaneously pressing companies and governments to develop ethical applications of AI.[3]

Similarly, US federal, state and local government agencies are beginning to show a willingness to take concrete positions along the spectrum between informal guidance and binding law, resulting in a variety of policy approaches to AI regulation – many of which eschew informal guidance and voluntary standards in favour of outright technology bans. We should expect that high-profile or contentious AI use cases or failures will continue to generate public support for, and ultimately trigger, accelerated federal and state action.[4] For the most part, the trend among US regulators in favour of more individualised and nuanced assessments of how best to regulate AI systems according to their end uses has been welcome. Even so, there is an inherent risk that reactionary legislative responses will result in a disharmonious, fragmented national regulatory framework. Such developments will yield important insights into what it means to govern and regulate AI over the coming year.

Further, as the use of AI expands into different sectors and the need for data multiplies, legislation that traditionally has not focused on AI is starting to have a growing impact on AI technology development. This impact can be seen in areas such as privacy, discrimination, and antitrust laws. While some of these areas may help alleviate some of the ethical concerns AI engenders (eg, eliminating bias), others may unnecessarily inhibit development and make it difficult to operate (eg, complying with consumer deletion requests under privacy laws).

The following section in this chapter will discuss the general regulatory framework of AI technology in the United States, contrasting the approach with other jurisdictions that have invested in AI research and development where appropriate, and will highlight differences in how AI technology is regulated by use in various key sectors.

The final section in this chapter will discuss certain areas of existing and proposed legislation and policies that may distinctly affect AI technologies and companies, even though they are not directly targeting them, and what effects may result.

AI-specific regulations and policies – existing and proposed

Legislation promoting and evaluating AI ethics and development

By early 2019, despite its position at the forefront of commercial AI innovation, the United States still lacked an overall federal AI strategy and policy.[5] By contrast, observers noted other governments’ concerted efforts and considerable expenditures to strengthen their domestic AI research and development,[6] particularly China’s plan to become a world leader in AI by 2030.[7] These developments abroad prompted many to call for a comprehensive government strategy and similar investments by the US government to ensure its position as a global leader in AI development and application.[8]

The federal government thus began to prioritise both the development and regulation of AI technology. On 11 February 2019, President Donald Trump signed an executive order (EO) creating the ‘American AI Initiative’,[9] intended to spur the development and regulation of AI and fortify the United States’ global position by directing federal agencies to prioritise investments in research and development of AI.[10] The EO, which was titled ‘Maintaining American Leadership in Artificial Intelligence,’ outlined five key areas: research and development,[11] ‘unleashing’ AI resources,[12] establishing AI governance standards,[13] building an AI workforce,[14] and international collaboration and protection.[15] The AI Initiative is coordinated through the National Science and Technology Council (NSTC) Select Committee on Artificial Intelligence (Select Committee).

The full impact of the AI Initiative is not yet known: while it sets some specific deadlines for agencies to formalise plans under the direction of the Select Committee, the EO is not self-executing and is generally thin on details. The long-term impact will therefore lie in the actions recommended and taken as a result of those consultations and reports, not in the EO itself.[16] Moreover, although the AI Initiative is designed to dedicate resources and funnel investments into AI research, the EO does not set aside specific financial resources or provide details on how available resources will be structured.[17] On 19 March 2019, the White House launched ai.gov as a platform to share AI initiatives from the Trump administration and federal agencies. These initiatives track the key points of the AI EO, and ai.gov is intended to function as an ongoing press release.[18]

A couple of months later, on 11 April 2019, the Growing Artificial Intelligence Through Research (GrAITR) Act was introduced to establish a coordinated federal initiative aimed at accelerating AI research and development for US economic and national security and closing the existing funding gap.[19] The Act would create a strategic plan to invest US$1.6 billion over 10 years in research, development and application of AI across the private sector, academia and government agencies – including the National Institute of Standards and Technology (NIST), the National Science Foundation and the Department of Energy – aimed at helping the United States catch up to other countries, including the United Kingdom, which are ‘already cultivating workforces to create and use AI-enabled devices’. The bill has been referred to the House Committee on Science, Space, and Technology.

A companion bill to GrAITR, the Artificial Intelligence Government Act, would attempt to create a national, overarching strategy ‘tailored to the US political economy’ for developing AI, with a US$2.2 billion federal investment over the next five years.[20] The Act would task branches of the federal government with using AI where possible in the operation of their systems. Specifically, it would establish a national office to coordinate AI efforts across the federal system, request that NIST establish ethical standards, and propose that the National Science Foundation set educational goals for AI and STEM learning.[21] The draft legislation complements the formation of the bipartisan Senate AI Caucus in March 2019 to address transformative technology with implications spanning a number of fields, including transportation, healthcare, agriculture, manufacturing and national security.[22]

More recently, Congress has expressed the need for ethical guidelines and labour protection to address AI’s potential for bias and discrimination. In February 2019, the House introduced Resolution 153 with the intent of ‘[s]upporting the development of guidelines for ethical development of artificial intelligence’ and emphasising the ‘far-reaching societal impacts of AI’ as well as the need for AI’s ‘safe, responsible and democratic development.’[23] Similar to California’s adoption last year of the Asilomar Principles[24] and the OECD’s recent adoption of five ‘democratic’ AI principles,[25] the House Resolution provides that the guidelines must be consonant with certain specified goals, including ‘transparency and explainability’, ‘information privacy and the protection of one’s personal data’, ‘accountability and oversight for all automated decisionmaking’, and ‘access and fairness’. This Resolution puts ethics at the forefront of policy, which differs from other legislation that considers ethics only as an ancillary topic. Yet, while this resolution signals a call to action by the government to come up with ethical guidelines for the use of AI technology, the details and scope of such ethical regulations remain unclear.

Further, the proposed AI JOBS Act of 2019, introduced on 28 January 2019, would authorise the Department of Labor to work with businesses and educational institutions to produce a report analysing the future of AI and its impact on the American labour landscape.[26] Similar to the House resolution on ethics, this Act reflects federal recognition of the threat posed by the introduction of AI technology; however, it gives no indication of what actions the federal government might take to protect labour.

Regulation of AI technologies and algorithms

There are no presently enacted federal regulations that specifically apply to AI technology. However, two proposed pieces of legislation seek to do so. The Bot Disclosure and Accountability Act, first introduced on 25 June 2018 and reintroduced on 16 July 2019, would require the Federal Trade Commission (FTC) to promulgate regulations requiring digital platforms to publicly disclose their use of any ‘automated software program or process intended to replicate human activity online’.[27] It would also prohibit political candidates or parties from using such automated software programs to share or disseminate any information targeting political elections. The Act hands the task of defining ‘automated software program’ to the FTC, leaving wide latitude for interpretation beyond the narrow anti-bot purpose for which the bill is intended. Some commentators have even argued that the bill goes too far, regulating otherwise protected free speech and expression in violation of constitutional rights.

On 10 April 2019, a number of Senate Democrats introduced the Algorithmic Accountability Act, which ‘requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans.’[28] The bill stands to be Congress’s first serious foray into the regulation of AI and the first legislative attempt in the United States to impose regulation on AI systems in general, as opposed to regulating a specific technology area, such as autonomous vehicles. While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington’s stance amid growing public awareness of AI’s potential to create bias or harm certain groups.[29] The bill casts a wide net, such that many technology companies would find common practices falling within its purview. The Act would regulate not only AI systems but also any ‘automated decision system’, which is broadly defined as any ‘computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers’.[30] Additional regulations will be needed to give these key terms meaning, but the bill is a harbinger of AI regulation that identifies areas of concern.

The bill reflects a step back from the previously favoured approach of industry self-regulation, since it would force companies to actively monitor their use of any potentially discriminatory algorithms. Although it does not provide for a private right of action or enforcement by state attorneys general, it would give the Federal Trade Commission the authority to enforce and regulate these audit procedures and requirements. Further congressional action on this subject can certainly be anticipated.

At the state level, California passed a bill in September 2018, the ‘Bolstering Online Transparency Act’,[31] which was the first of its kind and (similar to the federal bot bill) is intended to combat malicious bots operating on digital platforms. This state law does not attempt to ban bots outright, but requires companies to disclose whether they are using a bot to communicate with the public on their internet platforms. The law went into effect on 1 July 2019.

In May 2019, Illinois passed the Artificial Intelligence Video Interview Act, which limits an employer’s ability to incorporate AI into the hiring process.[32] Employers must meet certain requirements to use AI technology in hiring, including obtaining informed consent by explaining how the AI works and what characteristics the technology examines, and must delete any video content within 30 days. However, the bill does not define what ‘AI’ means, and the informed consent provisions are considered vague and open to wide interpretation.

National security and military use

In the last few years, the US federal government has been very active in coordinating cross-agency leadership and planning to bolster continued research and development of artificial intelligence technologies for use by the government itself. Along these lines, a principal focus of a number of key legislative and executive actions has been the growth and development of such technologies for national security and military uses. With the passage of the John S. McCain National Defense Authorization Act for 2019 (the 2019 NDAA),[33] the National Security Commission on Artificial Intelligence was established to study current advancements in artificial intelligence and machine learning and their potential application to national security and military uses.[34] In addition, in response to the 2019 NDAA, the Department of Defense created the Joint Artificial Intelligence Center (JAIC) as a vehicle for developing and executing an overall AI strategy, and named its director to oversee the coordination of this strategy for the military.[35] While these actions clearly indicate an interest in ensuring that advanced technologies like AI also benefit the US military and intelligence communities, the limited availability of funding from Congress may hinder the ability of these newly formed entities to fully accomplish their stated goals.

Still, the JAIC is becoming the key focal point for the Department of Defense (DOD) in executing its overall AI strategy. As set out in a 2018 summary of AI strategy provided by the DOD,[36] the JAIC will work with the Defense Advanced Research Projects Agency (DARPA),[37] various DOD laboratories, and other entities within the DOD to not only identify and deliver AI-enabled capabilities for national defence, but also to establish ethical guidelines for the development and use of AI by the military.[38]

The JAIC’s efforts to be a leader in defining ethical uses of AI in military applications may further prove challenging because one of the most hotly debated uses of AI is in connection with autonomous weaponry.[39] Even indirectly weaponised uses of AI, such as Project Maven, which utilised machine learning and image recognition technologies to improve real-time interpretation of full-motion video data, have been the subject of hostile public reaction and boycott efforts.[40] Thus, while time will tell, the tension between the confidentiality that may be needed for national security and the desire for transparency with regard to the use of AI may be a difficult line for the JAIC to walk.[41]

Healthcare

Unsurprisingly, the use of AI in healthcare inspires some of the most exciting prospects and, given the potential risks, some of the deepest trepidation.[42] As yet, there are few regulations directed specifically at AI in healthcare, but regulators have recently acknowledged that existing frameworks for medical device approval are not well suited to AI-related technologies. The US Food and Drug Administration (FDA) has therefore proposed a specific review framework for AI-related medical devices, intended to create a pathway for innovative and life-changing AI technologies while maintaining the FDA’s patient safety standards.

The FDA recently published a discussion paper – ‘Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)’ – offering that new framework for regulating health products using AI and seeking comment. The paper notes that one of the primary benefits of using AI in a SaMD product is the product’s ability to continuously update in light of an infinite feed of real-world data, which presumably will lead to ‘earlier disease detection, more accurate diagnosis, identification of new observations or patterns on human physiology, and development of personalized diagnostics and therapeutics’.[43] But the current review system for medical devices requires pre-market review of the device and, depending on their significance, of any subsequent modifications.[44] If AI-based SaMDs are intended to constantly adjust, the FDA posits that many of these modifications will require pre-market review – a potentially unsustainable framework in its current form. The paper instead proposes an initial pre-market review for AI-related SaMDs that anticipates the expected changes, describes the methodology, and requires manufacturers to provide certain transparency and monitoring, as well as updates to the FDA about the changes that in fact resulted, in accordance with the information provided in the initial review. The FDA published the paper on 2 April 2019 and requested comments by 3 June 2019 on various issues, including whether the categories of modifications described are ones that would require pre-market review, how to define ‘Good Machine Learning Practices’, and in what ways a manufacturer might ‘demonstrate transparency’. Additional discussion and guidance are expected following the FDA’s review of the comments.[45]

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) incorporates a Privacy Rule[46] that may also unintentionally hinder AI development. For example, one of the basic tenets of the Privacy Rule is that use and disclosure of protected health information should be limited to the ‘minimum necessary’ to carry out the particular transaction or action.[47] While there are innumerable ways in which AI could be used (and its use may form part of treatment, an enumerated exception), such limitations on use can affect the ability to develop AI related to healthcare.[48]

Facial recognition and other biometric surveillance technologies

Perhaps no single area of application for artificial intelligence technology has sparked as fervent an effort to regulate or ban its use in the United States as has the adoption of facial recognition technology by law enforcement and other public officials.[49] Like other biometric data, data involving facial geometries and structures is often considered some of the most personal and private data about an individual, leading privacy advocates to urge extra care in protecting against unauthorised or malicious uses. As a result, many public interest groups and other vocal opponents of facial recognition technology have been quick to raise alarms about problems with the underlying technology as well as potential or actual misuse by governmental authorities.[50] While much of the regulatory activity to date has been at the local level, momentum is also building for additional regulatory actions at both the state and federal levels.[51]

Indeed, municipal and city governments have been the ones to take up the banner and adopt ordinances governing the use of facial recognition software by police and local officials. San Francisco was the first city to enact an outright ban on the use of facial recognition information by public officials, including law enforcement; at the time of writing, three additional cities have approved comparable bans on the technology, and similar bans remain under consideration in other cities.[52]

However, the current lack of US state and federal restrictions on facial recognition technology noted above may soon change. For example, a federal bill introduced in the Senate by Senator Roy Blunt, the ‘Commercial Facial Recognition Privacy Act of 2019,’ would, if approved, preclude the commercial use of facial recognition technology for the tracking and collection of data relating to consumers absent consent.[53] Similarly, several states are currently considering legislation that would either ban or at least restrict the use of facial recognition software and information derived from such technology at the state level.[54]

In addition, other states have also enacted more general biometric data protection laws that are not limited to facial recognition, but which nevertheless regulate the collection, processing and use of an individual’s biometric data (which, at least in some cases, includes facial geometry data). At the time of writing, Illinois, Texas and Washington have all enacted legislation directed to providing specific data protections for their residents’ biometric information.[55] Only the Illinois Biometric Information Privacy Act provides for a private right of action as a means of enforcement.[56] In addition, once it goes into effect, the California Consumer Privacy Act will also extend its protections to an individual’s biometric information, including that used in facial recognition technology.[57] Still other states have included biometric data privacy as part of their data breach laws, or are currently considering the adoption of more general privacy bills that would include protection of biometric information.[58]

Autonomous vehicles and the automobile industry

There was a flurry of legislative activity in Congress in 2017 and early 2018 towards a national regulatory framework for autonomous vehicles. The US House of Representatives passed the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (SELF DRIVE) Act[59] in September 2017, but its companion bill, the American Vision for Safer Transportation through Advancement of Revolutionary Technologies (AV START) Act,[60] stalled in the Senate as a result of holds from Democratic senators who expressed concerns that the proposed legislation remained underdeveloped in that it ‘indefinitely’ pre-empts state and local safety regulations even in the absence of federal standards.[61] At the time of writing, the bill has not been reintroduced since expiring with the close of the 115th Congress last December, and, even if efforts to reintroduce it are ultimately successful, the measure may not be enough to assuage safety concerns as long as it lacks an enforceable federal safety framework.

In practice, therefore, autonomous vehicles (AVs) continue to operate largely under a complex patchwork of state and local rules, with tangible federal oversight limited to the US Department of Transportation’s (DoT) informal guidance. On 3 October 2018, the DoT’s National Highway Traffic Safety Administration (NHTSA) released its road map on the design, testing and deployment of driverless vehicles: ‘Preparing for the Future of Transportation: Automated Vehicles 3.0’ (AV 3.0).[62] AV 3.0 reinforces that federal officials are eager to take the wheel on safety standards and that any state laws on automated vehicle design and performance will be pre-empted. But the thread running throughout is a commitment to voluntary, consensus-based technical standards and the removal of unnecessary barriers to the innovation of AV technologies.

During 2019, several federal agencies announced proposed rule-making to facilitate the integration of autonomous vehicles onto public roads. In May 2019, in the wake of a petition filed by General Motors requesting temporary exemption from Federal Motor Vehicle Safety Standards (FMVSSs) which require manual controls or have requirements that are specific to a human driver,[63] NHTSA announced that it was seeking comments about the possibility of removing ‘regulatory barriers’ relating to the introduction of automated vehicles in the United States.[64] It is likely that regulatory changes to testing procedures (including pre-programmed execution, simulation, use of external controls, use of a surrogate vehicle with human controls and technical documentation) and modifications to current FMVSSs (such as crashworthiness, crash avoidance and indicator standards) will be finalised in 2021.

Meanwhile, legislative activity at the US state level is stepping up to advance the integration of autonomous vehicles.[65] State regulations vary significantly, ranging from permitting testing only under specific and confined conditions to, at the more permissive end, allowing AVs to be tested and operated with no human behind the wheel. Some states, such as Florida, take a generally permissive approach to AV regulation in that they do not require a human driver to be present in the vehicle.[66] California is considered to have the most comprehensive body of AV regulations, permitting testing on public roads and establishing its own set of regulations just for driverless testing.[67] In April 2019, the California DMV published proposed AV regulations that would allow the testing and deployment of autonomous motor trucks (delivery vehicles) weighing less than 10,001 pounds on California’s public roads.[68] In the California legislature, two new bills related to AVs have been introduced: SB 59[69] would establish a working group on autonomous passenger vehicle policy development, while SB 336[70] would require transit operators to ensure that certain automated transit vehicles are staffed by employees. A majority of states either dictate that manufacturers are not responsible for AV crashes unless defects were present at the time of the crash (eg, DC), or that AV crash liability is subject to applicable federal, state or common law.[71] However, some US states have established provisions for liability in the event of a crash.[72]

As to local control, some states expressly forbid local governments from prohibiting pilot programmes within the state (eg, Oklahoma,[73] Georgia, Texas, Illinois, Tennessee and Nevada), while others are less restrictive and merely dictate that companies looking to start pilot AV programmes should inform municipalities in writing (eg, California).[74]

Non-AI specific regulation likely affecting AI technologies

Data privacy

Following the General Data Protection Regulation (GDPR) in Europe, and various high-profile privacy incidents over the past few years, lawmakers at both the state and federal level are proposing privacy-related bills at a record rate. Among these are the California Consumer Privacy Act (CCPA), taking effect on 1 January 2020, and the New York Privacy Act, which lost steam mid-way through 2019. Although most such bills are not specific to AI technologies, some include provisions related to automated decision-making, and most have the capacity to greatly affect – and unintentionally stifle the progress of – AI technologies.

Legislation proposed in Minnesota and Washington would regulate certain AI technologies directly.[75] For example, Minnesota’s pending comprehensive privacy law includes a specific right for a consumer to request information if ‘the processing [of personal data] is carried out by automated means,’ and limits an entity’s ability to make ‘a decision based solely on profiling which produces legal effects concerning such consumer,’ absent consent or particular circumstances. Here, ‘profiling’ is broadly defined as ‘automated processing of personal data that consists of using personal data to evaluate certain personal aspects relating to a natural person, including analyzing or predicting aspects concerning the natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’ If an entity engages in profiling, it must disclose that fact, along with ‘information regarding the logic involved, and the significance and potential consequences of the profiling.’ Companies using AI technologies may find solace in a potentially significant carve-out through the use of the term ‘solely’; as with the GDPR, if there is human intervention at some point in the decision-making process, then the regulation is not invoked.[76] Washington’s proposed Privacy Act includes very similar provisions, and as additional states develop their own legislation, this framework may well continue to spread.[77]

Further, broadly applicable privacy laws – even without provisions specific to AI – are often fundamentally at odds with AI, and are likely to generate headaches for companies developing and using AI technologies.[78] At bottom, AI technologies require large data sets, and those data sets are likely to contain some elements of personal information. This may trigger an avalanche of requirements under privacy laws.[79] For example, as the first and broadest privacy act in the United States, the CCPA allows consumers to request that businesses delete personal information without explanation. For an AI data system, deletion may not only be impossible but, to the extent it is possible, may result in skewed decision-making, a risk to the AI technology’s integrity. While the CCPA includes several exceptions to this general right (including for reasons of security, transactions, public interest research or internal use aligned with the consumer’s expectations), it is unclear how these exceptions will be applied and whether an entity can rely on them as broad permission to include data in its data sets. The uncertainty is compounded by the fact that implementing regulations from the California Attorney General’s office have yet to be released,[80] several pending amendments to the CCPA await approval by the Governor, and a proposed ballot initiative could overhaul the CCPA in 2021.[81]

Further, the CCPA’s security, transparency and sale provisions may pose additional concerns for AI development. The CCPA is set to be enforced in large part by the California Attorney General, but it also provides a private right of action based on certain data breaches. With more data comes more vulnerability, making AI companies particularly at risk of litigation over cybersecurity incidents. Further, consumers’ right to transparency about what data is collected, how it is used and where it originates may simply be impossible to satisfy for potentially ‘black box’ AI algorithms. Understanding what data is collected may be feasible, but disclosing how it is used, and in what detail, poses complex issues for AI. And even where disclosure is feasible, companies may face a conflict between protecting trade secrets by not disclosing exactly how such information is used and complying with consumer notification requirements under privacy regulations, where the acceptable level of detail is yet undefined.

On the other hand, these privacy laws may not apply in various circumstances. For example, companies that hold the data of 50,000 consumers or fewer, that have limited revenues, or that are not-for-profit may not be subject to the CCPA. While this may allow freer rein for start-ups, it may have the unintended consequences of decreasing protection (as a result of less sophisticated security systems) and increasing the potential for bias (smaller data sets and less mature anti-bias systems). Also, proposed privacy laws generally do not apply to aggregated or de-identified data. While those definitions can at times be stringent, AI uses of data may fall outside the scope of these regulations simply because, in the form in which it is processed, the data may not actually constitute ‘personal information’. That said, given the scope of information potentially used by AI technologies, and the increasing ease of identifying a person from various data points, companies focusing on AI technologies are likely to be affected by privacy laws.

This wave of proposed privacy-related federal and state regulation is likely to continue, potentially affecting AI technologies even where provisions do not directly apply to automated processes. As a result, companies involved in this area are certain to be focused on these issues in the coming months and on how to balance these requirements with further development.[82]

Discrimination

While the federal laws the United States Equal Employment Opportunity Commission (EEOC) enforces[83] – and its guidelines – have not changed, AI is increasingly recognised as a new medium for employment discrimination.

Indeed, US Senators Kamala Harris, Patty Murray and Elizabeth Warren probed the EEOC in a September 2018 letter requesting that the Commission draft guidelines on the use of facial recognition technology in the workplace (eg, for attendance and security) and in hiring (eg, for assessing emotional or social cues presumed to be associated with the quality of a candidate).[84] The letter cites various studies showing that facial recognition algorithms are significantly less accurate for darker-skinned individuals,[85] and discusses legal scholars’ views on how such algorithms may ‘violate workplace anti-discrimination laws, exacerbating employment discrimination while simultaneously making it harder to identify or explain’ in a court, where such violations may be remedied. Similarly focused letters were sent to the FTC and FBI by varying groups of senators.[86]

US Senators Warren and Doug Jones also sent a letter in June 2019 to various federal financial regulators (the Federal Reserve, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency and Consumer Financial Protection Bureau) regarding uses of AI by financial technology companies that have resulted in discriminatory lending practices. The Senators requested answers to various questions to ‘help [them] understand the role that [the agencies] can play in addressing FinTech discrimination’.[87]

These concerns stem from an already realised dilemma: various human resources and financial lending tools have proved susceptible to inadvertent bias. For example, a hiring tool from Amazon was found to incorporate and extrapolate a pre-existing bias towards men, penalising résumés that referenced women-related terms and institutions (eg, all-women colleges, the word ‘women’s’).[88] And a recent Haas School of Business (UC Berkeley) study found that algorithmic scoring by Fannie Mae and Freddie Mac resulted in higher interest rates being charged to Latinx and African-American borrowers, which could violate US anti-discrimination law, including the Fair Housing Act.[89]

As a result of the recent focus on the potential for discrimination and bias in AI, we may see anti-discrimination laws invoked more frequently against AI-focused technologies – and potentially additional regulations proposed.

Antitrust

Government agencies are showing an increasing willingness to scrutinise the business practices of large technology companies on antitrust issues. While this scrutiny does not target AI directly, these large technology companies are often the ones doing a significant amount of the work of building, utilising and testing AI, and any threatened ‘break-up’ of such companies could adversely affect their ability to continue building AI technologies. Moreover, centralised concentrations of data – potentially a problem in the antitrust world – may actually promote AI development, given the need for large data sets in developing machine learning systems.

In July 2019, the Department of Justice announced that its Antitrust Division would review ‘whether and how market-leading online platforms have achieved market power and are engaging in practices that have reduced competition, stifled innovation or otherwise harmed consumers.’[90] The Federal Trade Commission also is reportedly investigating online platforms,[91] and the House Judiciary Committee opened a bipartisan investigation into competition in digital markets, which includes holding hearings and subpoenaing documents.[92] In justifying these investigations, proponents often cite criticisms of the advertising practices (which often include AI technologies) of large companies such as Google and Amazon, and the resulting extraordinary influence these large companies have on communications and commerce.[93] On the other hand, because this would be a new area for the application of antitrust laws, critics expect that the federal government will face various challenges in this context.[94]

Earlier in the year, the Federal Trade Commission’s Bureau of Competition also considered potential ways in which the use of AI technology might violate competition laws. For example, in discussing the launch of an FTC Technology Task Force, the Director of the Bureau of Competition, Bruce Hoffman, noted the myriad ways AI could itself pose antitrust concerns: (1) AI could collude on its own, explicitly agreeing on price, output and other indicators often left to market forces; (2) machines may independently reach an ‘oligopoly outcome’ more consistently, even if they do not collude; (3) AI could monitor market and competitor activity much more effectively and quickly than humans, which could allow machines to identify and eliminate competitive threats in ways the human mind cannot conceive; and (4) a broad category of feared unknowns.[95] Director Hoffman further stated that the FTC wanted to be careful not to regulate without a ‘fact-based, theoretical framework’, but clearly recognised the possibility that AI will be subject to antitrust regulation as its use continues to increase.[96] As a result, as these efforts continue, AI growth may be inhibited by antitrust laws both indirectly, by affecting the companies that develop the technologies, and directly, through regulation and investigation targeting AI technologies.

Conclusion

Discussion around regulating AI technologies has grown immensely over the last few years, resulting in a multitude of proposals across sectors from local, state and federal legislatures. The laws that have passed, and the pending regulations and policies, raise significant questions about whether, when and how AI technology should be regulated, including whether the technology must mature further before its potential effects can be understood and addressed adequately, without overzealous regulation undermining the United States’ position as a world leader in AI. Similarly, non-AI-specific laws, including privacy regulation, may have an unintentional disparate impact on AI technologies, given their need for data. The next few years will prove exceedingly interesting with respect to the regulation of AI as companies continue to incorporate AI across business lines, and as laws continue to develop and affect AI, directly and indirectly. Given the fast-paced nature of these developments, it is expected that even between the drafting of this chapter and its publication, the landscape of this sector will change dramatically.

The authors would like to acknowledge and thank Virginia Baldwin, Zak Baron, Allie Begin, and Iman Charania for their assistance in compiling the underlying research for this chapter.

