10 Steps To Reduce Risks From AI Employment Tools

(December 18, 2023, 5:18 PM EST) --
Gerard O’Shea
Joseph Lockinger
Steven Zuckerman
On Oct. 30, the Biden administration issued a sweeping executive order on artificial intelligence, representing the most comprehensive guidance to federal agencies and AI developers to date.

The "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" directs the federal government to closely examine AI's impact and "harness AI for good," while mitigating risks stemming from "fraud, discrimination, bias, and disinformation."

The executive order does not create any immediate mandates for employers using AI tools in connection with their employees, including as part of the employment decision-making process — that is, decisions on who to hire, fire or promote. Instead, it sets a road map for executive agencies to issue guidance concerning the use of AI tools that will directly affect employers, software developers and federal contractors in the months to come.

While AI offers seemingly endless potential benefits in the workplace — including improvements in efficiency, cost-cutting and innovation — employers must balance those benefits with the legal risks of using AI tools in employment processes. As the executive order warns employers, "AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions."

To meet this goal, employers using AI tools should align their use with existing guidance that has been issued by executive agencies and must also attempt to keep up with regulatory and technological developments in this rapidly evolving space.

We've outlined a 10-step plan for employers using AI-based tools in employment decision-making and related processes to best mitigate the risks of using such tools and prepare for future AI regulation stemming from the executive order.

We've focused on the broad umbrella of tools powered by AI, including those using machine learning or natural language processing, that are used in employment processes. These tools have become a central focus of agencies such as the U.S. Equal Employment Opportunity Commission because some incorporate algorithmic decision-making at different stages of the employment process.

1. Identify the technology.

As a preliminary matter, employers need to identify the AI technology already used in their workplaces, how it is being used, and what technologies they may want to implement in the future. According to EEOC Chair Charlotte Burrows, more than 80% of employers use AI in some of their employment decision-making processes, but many employers might not realize the ubiquity and broad scope of tools that use AI technologies.

For example, AI technology may be used in sourcing and screening candidates; interviewing; onboarding; performance management; succession planning; talent management; diversity, equity and inclusion activities; and employee engagement/workplace monitoring. Some examples include resume scanners, keyloggers and software that takes screenshots or webcam photos during the workday, virtual training programs, virtual assistants or chatbots, video interviewing software, "job fit" or "cultural fit" testing software, and trait- or ability-revealing applicant gaming systems.

In identifying AI technology, employers should be mindful of the order's particular focus on tools used to "monitor or augment employees' work," which includes the employee-monitoring software mentioned above. For example, the order directs the secretary of the U.S. Department of Labor to issue guidance clarifying that employers using such tools need to comply with the Fair Labor Standards Act and other laws requiring that employees be compensated for all hours worked.

If an AI tool records keystrokes or calls made outside an employee's scheduled work hours, and the employer fails to pay an hourly or overtime-eligible employee for that time, the practice could raise issues under the FLSA.
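To make that compliance point concrete, the Python sketch below flags recorded activity that falls outside an hourly employee's scheduled shift so payroll can review it for compensability. It is a minimal illustration only; the data model and field names are our own assumptions, not any vendor's actual monitoring API.

from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class Activity:
    employee_id: str
    start: datetime
    end: datetime

def off_hours_minutes(activity: Activity, shift_start: time, shift_end: time) -> float:
    """Minutes of recorded activity falling outside the scheduled shift."""
    day = activity.start.date()
    sched_start = datetime.combine(day, shift_start)
    sched_end = datetime.combine(day, shift_end)
    total = (activity.end - activity.start).total_seconds()
    # Overlap of the activity with the scheduled shift window.
    overlap_start = max(activity.start, sched_start)
    overlap_end = min(activity.end, sched_end)
    overlap = max((overlap_end - overlap_start).total_seconds(), 0)
    return (total - overlap) / 60

# A call logged from 5:30 to 6:05 p.m. against a 9-to-5 shift: 35 off-hours minutes.
call = Activity("E123", datetime(2023, 12, 1, 17, 30), datetime(2023, 12, 1, 18, 5))
print(off_hours_minutes(call, time(9, 0), time(17, 0)))  # 35.0

Any nonzero result should be routed to payroll for review rather than silently discarded, since off-the-clock work by nonexempt employees is generally compensable.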

In addition, the order focuses on worker displacement caused by using AI tools that improve efficiency and automation. These potential efficiency gains are of significant interest to the business community, which is currently testing whether they can actually be achieved in practice.

For example, a working paper released in September by the Harvard Business School and the Boston Consulting Group found that the use of generative AI improved an employee's work performance by more than 40%, compared with a control group that did not use the technology.

Such improvements are not without risks, however. The order directs the secretary of labor to submit a report to the White House by the end of April 2024 analyzing executive agencies' ability to "support workers displaced by the adoption of AI and other technological advancements," and to develop and publish principles and best practices for employers relating to job displacement.

2. Understand the role of human oversight.

It is critical for employers to understand the role of human oversight in the use of AI tools. Employers should ensure that a tool does not replace human judgment and that, where tools are used in the employment decision-making process, any final decisions continue to be made by HR or management.

Human oversight is not only advisable from a legal perspective, but it also may mitigate distrust and employee morale issues arising from concerns of overreliance on AI technologies in employment decision making. Developing and deploying trustworthy AI technologies is a fundamental principle of the order, which broadly proclaims that both developers and users of AI should be held accountable to standards protecting against "unlawful discrimination and abuse," as "only then can Americans trust AI to advance civil rights, civil liberties, equity, and justice for all."

3. Vet vendors, tools and data.

As the order demonstrates, the government is increasingly focused on preventing discrimination and bias resulting from the use of AI in the workplace. For example, the order tasks the attorney general's office with coordinating with and supporting agencies in enforcing anti-discrimination laws implicated by AI use, including algorithmic discrimination.

In addition, the order focuses on developers' responsibilities in creating trustworthy, bias-free AI tools. For example, the order requires the National Institute of Standards and Technology to develop guidelines and best practices for developers in conducting "red-team testing" or "AI red-teaming," which it defines as a "structured testing effort to find flaws and vulnerabilities in an AI system." These flaws and vulnerabilities include "harmful or discriminatory outputs from an AI system."
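While NIST's red-teaming guidelines are still forthcoming, one simple probe in that spirit is a matched-pair test: score applicant profiles that are identical except for a field correlated with a protected characteristic, and compare the outputs. The Python sketch below illustrates the idea; score_candidate is a hypothetical stand-in for a vendor's scoring function, and the tolerance threshold is our own assumption, not a legal standard.

def score_candidate(profile: dict) -> float:
    # Placeholder: in practice this would call the vendor tool under test.
    return 0.72

base = {"years_experience": 7, "degree": "BS", "skills": ["python", "sql"]}
probe_names = ["Emily Walsh", "Lakisha Washington"]  # paired-audit-style names

scores = {name: score_candidate({**base, "name": name}) for name in probe_names}
spread = max(scores.values()) - min(scores.values())
if spread > 0.05:  # tolerance is an illustrative choice
    print(f"Flag for review: score spread of {spread:.2f} across matched profiles")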

In light of these broad directives focusing on preventing discriminatory outputs, employers should keep the below steps in mind when vetting AI vendors, tools and the data used by such tools.

Vet the vendor.

The explosion of AI tools designed for use in employment processes means that vendors will need to be carefully vetted as a threshold matter. Employers may want to consider whether the vendor's developers receive any training on detecting and preventing bias in the design and implementation of such tools.

Employers also may wish to consider whether vendors regularly consult with diversity, labor or other outside experts to address and prevent bias issues. Employers should be wary of any claims of "bias-free" or "EEOC-compliant" tools, as these representations have no legal effect.

In addition, employers should take a close look at any purchasing contracts made with vendors, with particular focus on how potential liability in connection with the tool's use will be allocated.

Vet the tool.

Employers should thoroughly vet any tool they wish to implement, including learning how the tool works and how it makes any recommendations or conclusions. As a preliminary matter, employers should consider the tool's track record, including whether and for how long the tool under consideration has been used by other employers, and the purposes for which it has been utilized.

An important part of onboarding any new AI tool should be gathering sufficient information related to functionality to be able to explain to employees or applicants the role the tool might play in the employment decision-making process.

Employers also should understand whether and how the tool was designed to account for individuals with physical and mental disabilities. It's important to ask whether any interface is accessible to individuals with disabilities, whether materials presented are available in alternative formats, and whether vendors attempted to determine if using an algorithm disadvantages individuals with disabilities, such as when characteristics measured by the tool are correlated with certain disabilities.

Some tools may improperly screen out individuals with disabilities, including visual disabilities,[1] and employers should ask vendors how the tool mitigates or provides accommodations for that issue. In addition, improper screening out can occur if tools, such as chatbots, are programmed to reject all applicants who have gaps in their employment history, as such gaps may have resulted from disability-based reasons.

Vet the data.

Understanding the data that the AI tool has been trained on is a critical part of vetting any AI tool.

Prior to using a tool, employers should mitigate any risk that the tool serves as a proxy for impermissible discrimination. Such discrimination can occur where the data a tool is trained on is itself biased — as might occur if there is a lack of diversity in an employer's existing employee population — which can lead to potentially biased results. Employers also may consider what statistical analyses have been run on the tools and how such analyses were selected.
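One analysis employers can ask vendors about is the EEOC's longstanding four-fifths rule of thumb, under which a selection rate for one group that is less than 80% of the rate for the most-selected group is treated as preliminary evidence of adverse impact. The Python sketch below runs that check on hypothetical applicant counts; it is a screening heuristic, not a substitute for a full statistical analysis.

# The EEOC "four-fifths" rule of thumb: a group's selection rate below 80%
# of the highest group's rate warrants closer review. Counts are hypothetical.
selections = {"group_a": (48, 100), "group_b": (30, 100)}  # selected, applicants

rates = {group: sel / total for group, (sel, total) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, impact ratio {impact_ratio:.2f} [{flag}]")
# group_a: rate 0.48, impact ratio 1.00 [ok]
# group_b: rate 0.30, impact ratio 0.62 [REVIEW]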

4. Assemble the right team.

A joint statement released in April by the Consumer Financial Protection Bureau, U.S. Department of Justice, Equal Employment Opportunity Commission and Federal Trade Commission asserted:

Many automated systems are "black boxes" whose internal workings are not clear to most people and, in some cases, even the developer of the tool. This lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.[2]

The order echoes this black box sentiment by stressing that all workers, including unionized ones, "need a seat at the table" in developing and deploying AI technology. To mitigate against this asserted black box problem in the workplace, employers should ensure that they assemble a multidisciplinary team tasked with implementing and monitoring any AI tool.

This team should comprise not only members from human resources, legal, communications, marketing and DEI functions, but also members of IT, including those with backgrounds in software or data engineering. Assembling the right team with the appropriate experience will help ensure all players understand, are aligned on and are able to explain the business goals tied to using AI tools and how the tool reaches particular decisions or predictions.

This alignment will be key in responding to employee and applicant questions regarding the use of AI tools in employment processes. Employers may want to designate a team member tasked with monitoring trends and technology developments in this evolving space.

5. Know the applicable laws.

The order makes clear that federal government AI regulation and guidance are imminent. In the meantime, however, employers using such technologies are already subject to numerous federal and state anti-discrimination, intellectual property, cybersecurity and data privacy laws.

In the employment space in particular, federal anti-discrimination laws — including the Americans with Disabilities Act, the Age Discrimination in Employment Act and Title VII of the Civil Rights Act — collectively prohibit both disparate treatment discrimination and disparate impact discrimination.

Disparate treatment is intentional discrimination against members of a protected class, whereas disparate impact discrimination arises from facially neutral policies that, in practice, disproportionately harm members of a protected class.

In addition, U.S. states and local jurisdictions may impose more protective anti-discrimination laws. The use of AI tools in certain instances can trigger compliance risks under other federal employment laws, such as the National Labor Relations Act and the Fair Credit Reporting Act.

Federal contractors in particular should take special care, as the order requires the labor secretary to issue guidance regarding "nondiscrimination in hiring involving AI and other technology-based hiring systems" by October 2024. This focus on federal contractors follows the Office of Federal Contract Compliance Programs' recent revisions to its scheduling letter and itemized listing to require employers to provide information and documents relating to the use of "artificial intelligence, algorithms, automated systems or other technology-based selection procedures."[3]

Some jurisdictions specifically regulate certain AI technologies used in the workplace. For example, New York City began enforcing its Automated Employment Decision Tools, or AEDT, Law in July. The statute imposes several requirements for employers that use a qualifying AEDT, including conducting an independent bias audit of the AEDT and making available certain information about data collected by the tool.[4]

The Illinois Artificial Intelligence Video Interview Act also imposes notification, consent, deletion and reporting requirements for jobs based in Illinois. Maryland H.B. 1202 similarly requires applicant consent for employers to use facial recognition technology during preemployment job interviews.

Employers navigating this complex area will need to ensure that their use of AI tools complies with all applicable laws.

6. Have appropriate policies in place.

Employers should consider whether to implement policies identifying and addressing appropriate use of AI technologies in employment processes. In a policy, employers should be transparent about how the tool operates, what data is being used and how — if at all — the tool assists with decision-making processes.

With clear language identifying how such tools are used, employees and applicants can be better informed, and employment decisions such as hiring and promotion can be perceived as more fair. Any applicable policies should be communicated and updated regularly.

7. Implement training and education.

Any AI use policies should be communicated to employees, preferably through training and education programs.

Management-level employees also should receive education and training on AI tools, including applicable legal requirements regulating the use of such tools, the potential for tools to perpetuate bias or discrimination if used improperly, the importance of human oversight, and concerns regarding incorrect or misleading outputs.

8. Ensure accommodations are available.

Employers using AI tools should prepare their managers and HR teams to recognize and evaluate accommodation requests from applicants and employees. Some laws, such as New York City's AEDT law, do not explicitly require that employers provide accommodations, only that individuals receive notice that they may request an accommodation or an alternative selection process. Accommodations are nevertheless required under the federal ADA and under the New York City and state human rights laws.

The EEOC[5] and the White House[6] have cautioned employers not to screen out candidates by failing to provide applicants with human alternatives to AI tools.

9. Conduct regular testing and audits.

Once deployed, AI tools should be regularly evaluated and monitored to ensure that business objectives for using the tool continue to be met, that the tool operates in a fair and unbiased manner, and that any needed adjustments are made. Even if a tool was audited before deployment, employers should continue to conduct such audits at least annually, as implementing the tool in any particular workforce can raise unforeseeable issues.

For instance, the White House's Blueprint for an AI Bill of Rights suggests that automated systems be regularly monitored for "algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not accounted for during the pre-deployment testing, changes to the system after deployment, or changes to the context of use or associated data."[7]
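One concrete way to watch for the post-deployment changes the blueprint describes is to compare the tool's current score distribution against its pre-deployment baseline. The Python sketch below uses the population stability index, a common drift metric; the bin count and the 0.25 alert threshold are conventional choices, not requirements drawn from the blueprint or the order.

import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population stability index over equal-width score bins.
    Values above roughly 0.25 are commonly treated as significant drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate score range

    def bin_shares(scores: list[float]) -> list[float]:
        counts = [0] * bins
        for s in scores:
            idx = min(max(int((s - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Laplace smoothing so empty bins don't produce log(0).
        return [(c + 1) / (len(scores) + bins) for c in counts]

    b, c = bin_shares(baseline), bin_shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Usage: recompute on a schedule (e.g., quarterly) and alert on drift.
baseline_scores = [0.2, 0.4, 0.5, 0.6, 0.8]
current_scores = [0.6, 0.7, 0.8, 0.9, 0.9]
if psi(baseline_scores, current_scores) > 0.25:
    print("Score distribution has drifted; trigger a fresh bias audit.")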

10. Pay attention to the rapidly evolving AI landscape.

Employers, especially those operating in multiple jurisdictions, need to stay up to date on potential laws and regulations regarding AI in employment processes.

For example, new laws similar to — or even more stringent than — NYC's AEDT law have been proposed in New York state, New Jersey, California, Massachusetts and the District of Columbia, while other states have created task forces to advise on and propose regulations governing such tools in the employment context. In California, a new state agency — the California Privacy Protection Agency — is tasked with addressing automated decision-making technology.

In addition to new federal guidance stemming from the order, employers can expect to see increased coordination among federal agencies in addressing discrimination issues relating to AI use, building on the interagency efforts of the EEOC, FTC, DOJ and CFPB earlier this year. This may be welcome news for employers challenged with complying with the current patchwork of AI laws.

Employers should also keep an eye on regulations and guidance affecting the federal government's own use of AI. Although directed at federal agencies, this guidance may provide others a framework of best practices for using certain AI technologies, including periodic accountability reviews, AI impact assessments, testing the tool's performance in a real-world context, independent evaluation through an AI oversight board or similar third party, and adequate human training and assessment so that AI operators can interpret and act on outputs and combat any risks.

Conclusion

AI tools will continue to revolutionize the workplace. Employers should keep on top of these rapid developments and implement best practices for mitigating legal risk in using such tools.



Gerard O'Shea is a partner, Joseph Lockinger is special counsel and Steven Zuckerman is an associate at Cooley LLP.

Cooley resource attorney Anna Matsuo contributed to this article.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] https://www.eeoc.gov/laws/guidance/visual-disabilities-workplace-and-americans-disabilities-act.

[2] https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems.

[3] https://www.dol.gov/agencies/ofccp/manual/fccm/figures-1-6/figure-f-3-combined-scheduling-letter-and-itemized-listing.

[4] https://www.cooley.com/news/insight/2023/2023-05-15-nyc-issues-final-regulations-on-automated-employment-decision-tools-law.

[5] https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.

[6] https://www.whitehouse.gov/ostp/ai-bill-of-rights/#human.

[7] https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

For a reprint of this article, please contact reprints@law360.com.
