Legal Update

Oct 30, 2023

President Biden Signs Executive Order Setting Forth Broad Directives for Artificial Intelligence Regulation and Enforcement

Click for PDF

Seyfarth synopsis: President Biden’s Executive Order on artificial intelligence sets forth his vision for America to continue leading in AI innovation while also addressing risks associated with the use of AI. While much of the document delves into cutting-edge safety issues with national security implications, there are many provisions in the EO that have broad ramifications for companies generally, and employers specifically. The Order mandates greater coordination by civil-rights agencies on AI issues, emphasizes worker protections, and instructs the Department of Labor to guide federal contractors regarding AI-driven hiring practices. It marks the strategic emphasis on the government’s internal standards for AI governance and AI risk management, and towards articulating and implementing “required minimum risk-management practices” for AI applications that “impact people’s rights or safety.” The Executive Order’s emphasis on security assessments of AI systems are also set to influence AI risk management and safety dialogues across various sectors, all with significant implications in the labor and employment domain.

On October 30, 2023, President Biden signed a lengthy and far-reaching Executive Order regarding artificial intelligence that outlines the Administration’s vision for the federal government’s AI regulation and enforcement activities in the years to come. While some early media coverage of the EO has focused on what the EO says about identifying and managing AI risks related to national security, national public health and safety, and steps to develop watermarking standards to identify and label AI-generated content, the EO represents a comprehensive government-wide approach to addressing AI. The EO sets in motion action from multiple Departments and independent agencies, with the aim towards harnessing the benefits of AI and maintaining American leadership in innovation, while addressing risks associated with the use of AI.

While some of today’s EO targets cutting-edge AI models developed by leading American AI companies, and delves into national security and national safety issues with global geopolitical implications, companies across multiple industries should pay close attention to some of the specific actions initiated by President Biden in today’s Executive Order. Many of these actions will be shaping discussions regarding AI safety and bias for the foreseeable future.

In the upcoming days, Seyfarth will continue to provide updates on various areas targeted for Executive Branch action in AI, which may include, for example, intellectual property, privacy, cybersecurity, health care, labor relations, and other areas. Please continue to join us as we help companies better understand both risks associated with AI and, as President Biden put it on Monday, the “incredible opportunities” from the “most consequential technology” of our time.

Our update today focuses on issues we have identified as being of particular interest to employers.

1. Employers Should Expect Ongoing Coordination on AI Enforcement From Civil-Rights Agencies.

In the statement made today before signing the Executive Order, President Biden emphasized how AI is “all around us” and that AI presented an “incredible opportunity”. President Biden also spoke of the risks of AI. This theme – that AI presents both opportunities and risks – is present throughout the Executive Order.

On the risk side of things, President Biden’s Executive Order on AI is clear in its mandate to agencies charged with enforcing civil rights laws, directing them to make “comprehensive use of their respective authorities” to address potential civil-rights harms arising out of the use of AI, including “issues related to AI and algorithmic discrimination”. The Executive Order further directs the Attorney General to “coordinate with and support agencies in their implementation and enforcement of existing Federal laws to address civil rights and civil liberties violations and discrimination related to AI” and directs agencies to coordinate “on best practices for investigating and prosecuting civil rights violations related to AI” and to provide further inter-agency training and technical assistance.

While some of these efforts are undoubtedly targeted at developing enforcement capacity at the various civil-rights enforcement agencies, the focus and ongoing coordination between federal enforcement agencies regarding the civil-rights implications of AI is not new, and the Executive Order builds on existing initiatives at the EEOC, OFCCP and other agencies.

In April 2023, the leaders of the EEOC, Department of Justice Civil Rights Division, the CFPB, and the FTC issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” emphasizing that their existing legal authorities applied to the use of AI. More recently, in the EEOC’s Strategic Enforcement Plan for FY24-28, released in September 2023, the Commission expressed its intent to continue prioritizing its enforcement efforts on discriminatory recruitment and hiring practices, with a special emphasis on employers’ use of AI. Likewise, OFCCP has also increased its focus on the use of artificial intelligence technologies. In August 2023, the Agency released its expanded supply and service audit scheduling letter to include a request for “information and documentation of policies, practices, or systems used to recruit, screen, and hire, including the use of artificial intelligence, algorithms, automated systems or other technology-based selection procedures.”

Section 8 of the Executive Order discusses protecting consumers, patients, passengers, and students. It encourages independent regulatory agencies responsible for laws and federal programs relating to those stakeholders “to consider using their full range of authorities” to address AI risks.

For employers, the implications are indirect but noteworthy. In its directives in Section 8 of the Executive Order, the Biden Administration encourages agencies to consider rulemaking as well as guidance “clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models.”

The need to monitor third-party AI services and concepts of transparency and explainability are common themes in AI risk management discussions. While the EEOC did not receive a direct mandate to incorporate these concepts, the overarching themes in AI regulation will very well shape the discourse and future expectations in employment contexts.

2. The Biden Administration Remains Focused on Worker Protections

President Biden’s Executive Order on AI further emphasizes his administration’s focus on labor issues, and it contains multiple provisions regarding the responsible development and use of AI in order to support American workers. Specifically, the Executive Order states that AI “should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions,” but instead be developed to improve the lives of workers through such innovative technology.

The Order also focuses on labor-market disruptions from AI, and directs the Chairman of the Council of Economic Advisers to prepare and submit a report to the President on the labor-market effects of AI, by April 2024. Among other things, the report is to identify ways to strengthen and expand education and training opportunities for pathways to occupations related to AI.

Relatedly, by April 2024, the Secretary of Labor is tasked with consulting with labor unions, workers, and other outside entities, to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.”

One of the harms to workers that the Biden Administration has highlighted is the potential harm from automated systems that monitor workers. The Executive Order asserts, “AI should not be deployed in ways that undermine rights, worsen job quality, [or] encourage undue worker surveillance”.  The White House’s fact sheet discussing the Executive Order cites “the dangers of increased workplace surveillance, bias, and job displacement,” and the Executive Order itself also directs the Secretary of Labor to issue guidance “to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with” the requirements of the FLSA.

In the context of disabilities, including in the workplace, the Executive Order mentions specific technology often associated with automated monitoring. It specifically identifies the risk of unequal treatment of people with disabilities “from the use of biometric data like gaze direction, eye tracking, gait analysis, and hand motions” and encourages the Architectural and Transportation Barriers Compliance Board to issue technical assistance and recommendations on the risks and benefits of AI in using biometric data as an input.

3. OFCCP and Federal Contractors’ Use of AI

The EO directs that within a year, the Secretary of Labor “shall” publish guidance to federal contractors regarding “nondiscrimination in hiring involving AI and other technology-based hiring systems.”  Accordingly, Federal contractors in particular should expect additional guidance from OFCCP clarifying how it will be applying existing guidance, such as the 1978 Uniform Guidelines on Employee Selection Procedures, to cutting-edge hiring technology.   The guidance to date has been limited and simply provides that the Uniform Guidelines apply to even the most sophisticated technologies. 

Employers not directly subject to OFCCP’s audits should still pay close attention to what the Department of Labor does here, because there is always the potential for the Department of Labor’s guidance to be held out as a broader standard that employers everywhere might incorporate in their practices. 

4. The way the Federal government thinks about AI risk will influence the way private companies think about AI risk.

Employers should pay close attention to what the federal government does with respect to its own use of artificial intelligence, and the way that federal agencies will be directed to think about AI risk as they continue their own AI journeys. These steps include straightforward risk-management practices such as directing agencies to appoint a “Chief Artificial Intelligence Officer” who is responsible for both promoting AI innovation in their agency as well as managing risks from their agency’s use of AI, and things like directing the Office of Personnel Management to develop guidelines on the use of generative AI for work by the Federal workforce.

With respect to generative AI, generally, employers should also pay attention to how the Executive Order takes a balanced approach to the use of generative AI within the federal workforce. It “discourages” agencies from imposing bans, or wholesale blocking the use, of generative AI. The executive order also encourages agencies to put “appropriate safeguards in place” and to “employ risk-management practices, such as training … staff on proper use, protection, dissemination, and disposition of Federal information; negotiating appropriate terms of service with vendors; implementing measures designed to ensure compliance with record-keeping, cybersecurity, privacy, and data protection requirements; and deploying other measures to prevent misuse of Federal Government information in generative AI.” As federal agencies begin operationalizing these concepts further, private-sector employers should examine how their own practices align with (or may vary from) these practices.

And with respect to the federal government’s own use of AI, President Biden’s Executive Order also directs the Office of Management and Budget to issue guidance to federal agencies regarding “required minimum risk-management practices” for AI applications that “impact people’s rights or safety.” This guidance is to include key concepts from the AI Risk Management Framework by the National Institute of Standards and Technology (NIST), such as “conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI.” As the federal government intensifies its internal focus on these risk-management principles – both in its own operations and as a buyer of AI – we anticipate a corresponding rise in their significance within the private sector. This will especially hold true for companies leveraging AI in HR decision-making processes.

Speaking of NIST, the Executive Order places a lot of responsibility on NIST. Employers should pay special attention to the directive that NIST draft "guidelines and best practices, aiming to promote consensus industry standards, for the development and deployment of AI systems that are safe, secure, and trustworthy." This directive is important for employers who are either currently utilizing AI in their HR operations or contemplating its integration. Notably, NIST is tasked with establishing guidelines and benchmarks for the evaluation and auditing of AI capabilities, emphasizing areas where AI might pose potential hazards. While some of the risks mentioned by the Executive address national security and biosecurity concerns, we believe that working towards creating standards to evaluate risks related to unlawful bias and employment-discrimination is firmly within the responsibilities assigned to NIST by the Executive Order.

5. Security testing and “red teaming” of AI will become more-relevant for employers.

Some of the headline-grabbing portions of President Biden’s Executive Order involve President Biden’s invocation of the Defense Production Act to require American tech giants developing the largest and most-sophisticated AI models to disclose the results of certain “red teaming” security testing exercises. While some of the test subjects may sound like they were drawn from science fiction –the Executive Order refers to testing whether AI models might have “the possibility for self-replication or propagation” – even employers who aren’t developing cutting-edge AI models should still pay attention to the broad scope of the “red teaming” of AI models.

Crucially for employers, the Executive Order’s definition of “AI red-teaming” explicitly includes tests for potentially discriminatory output from an AI system. The Executive Order’s full definition of “AI red-teaming” is

a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.  Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.

Earlier this summer, President Biden’s chief AI advisor, OSTP Director Arti Prabhakar, addressed the DEF CON security conference, where OSTP and other federal agencies sponsored a wide-scale “generative AI red teaming” exercise. In this exercise, thousands of volunteers engaged in the adversarial testing of large language models, seeking, among other things, to induce the language models to generate outputs revealing explicit or implicit bias, misinformation, or directions for illegal activities. Director Prabhakar encouraged the security researchers, noting that it was important for developers and the government to “understanding how these systems break so [they] can keep getting better.” Given its prominent place in President Biden’s Executive Order, the concept of “AI red-teaming” will gain further attention. It will be an important concept for many AI applications, and not just the most advanced ones, so employers should evaluate all of their AI safety and security practices, including testing and validation, in light of this heightened emphasis.

Conclusion

President Biden’s far-reaching Executive Order on AI will dominate discussion about AI regulation and enforcement for some time to come. We plan to provide additional updates as we dive more deeply in this comprehensive EO.  For additional information, we encourage you to contact the authors of this article,  a member of Seyfarth’s People Analytics team, or any of Seyfarth’s attorneys.