With 88% of life sciences organizations using or planning to use AI in recruitment and/or hiring, AI regulation is a priority for the industry.
The future is already here. According to a survey of life sciences recruiters conducted this year by BioSpace, 28% were already using AI in recruitment and/or hiring, and 60% of organizations planned to use AI to assist with these activities in the future. As the rapid advance of AI has sparked a flood of coverage and no small measure of concern, legislators are now increasingly looking to regulate its use in employment decision-making.
While the federal government has not enacted AI-specific legislation, the Equal Employment Opportunity Commission (EEOC) issued guidance in May 2023 addressing AI in the hiring process. The guidance focuses on AI compliance with Title VII of the Civil Rights Act and the Americans with Disabilities Act (ADA), and states that employers may be held liable if the algorithms they adopt discriminate or if their assessments of those tools for discrimination are inadequate. Prior to this guidance, the EEOC, the Federal Trade Commission and other regulatory agencies issued an open letter stating their intent to use their existing authorities to protect individual rights from potential violations stemming from the use of AI. At the time of this publication, the guidance remains the extent of federal action; no federal law has been enacted.
State legislators are taking a firmer approach, both proposing and enacting laws to prevent AI discrimination and bias in the hiring process. While state proposals addressing AI increased in 2023, only Illinois, Maryland and New York have rules requiring candidates' consent for the use of AI during hiring. Other states have proposed AI regulations with limited success.
Goli Mahdavi, a data privacy and technology attorney at Bryan Cave Leighton Paisner LLP, is actively monitoring the situation. “There’s been so much legislative activity this year, and it’s hard for companies to keep up. We see a lot of attention being paid to hiring because AI decisions in the hiring context have very real consequences, right?”
Among the developments is the recent failure of proposed bills. Notably, California did not pass a broad AI bill, Mahdavi explained. Interestingly, even states with previously enacted legislation are seeing bills fail. The most recent measure proposed in Illinois, the first state to enact a law specifically addressing AI, did not pass. New York City, whose AI employment law took effect in July, has likewise seen bills fail in 2023 that would have expanded on the enacted law. This suggests that state regulators are still working out the best ways to protect their constituents from potential AI bias.
Given the complexity of protecting individual rights while industries adopt AI, it is understandable that legislators will keep trying. Liz Nguyen, talent & culture advisor and former senior vice president and head of talent & culture for Surrozen, noted that beyond demographic factors, candidates can also face bias stemming from a lack of technical know-how. “Right now, the burden unfortunately lies on candidates as formatting resumes and using keywords become especially important,” she said. “Quality candidates without AI knowledge or guidance may fall through the cracks and not be considered.”
The Future
Employers should expect a flurry of evolving AI legislation focused on bias and discrimination in hiring in the coming years. At both the federal and state levels, legislators have recognized that AI is not infallible and is only as good as its training and available data, and existing guidance and proposed bills alike hold employers liable for biased AI processes.
Mahdavi highlighted New York City’s Local Law 144 as a blueprint for other legislation. “The first in the nation, Local Law 144 requires the performance of a bias audit, providing job candidates and employees with notice and a host of other things.” Laws with many of the same requirements have been proposed in New York, New Jersey and Washington, DC, she noted.
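To make the audit requirement concrete, below is a minimal sketch of an impact-ratio calculation, the selection-rate comparison at the heart of a Local Law 144-style bias audit. The category names, counts and the 0.8 flag threshold (borrowed from the EEOC’s traditional four-fifths rule) are illustrative assumptions, not the law’s prescribed methodology.

```python
# Illustrative sketch of an impact-ratio calculation, the core metric in a
# Local Law 144-style bias audit. Category names, counts and the 0.8
# threshold (the EEOC "four-fifths rule") are assumptions for demonstration.

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps each demographic category to (selected, total applicants)."""
    selection_rates = {
        category: selected / total
        for category, (selected, total) in outcomes.items()
        if total > 0
    }
    # Impact ratio = a category's selection rate / the highest selection rate.
    top_rate = max(selection_rates.values())
    return {c: rate / top_rate for c, rate in selection_rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (candidates advanced, candidates screened)
    data = {"Group A": (48, 120), "Group B": (30, 110), "Group C": (12, 60)}
    for category, ratio in impact_ratios(data).items():
        flag = "  <- below 0.8, review for disparate impact" if ratio < 0.8 else ""
        print(f"{category}: impact ratio {ratio:.2f}{flag}")
```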
The EEOC is at the forefront of this issue, Mahdavi said, and is actively training staff on how to detect algorithmic bias. With both New York and the EEOC offering blueprints, employers should start preparing now. Mahdavi suggested first having compliance and legal departments engage with their human resources and procurement teams to assess the use of AI-related tools in hiring and retention decisions. “After that, develop an inventory of these tools and a deep understanding of what your human resource information system is and what are all of the different tools that are made available in these third-party systems.” She cautioned that this is not where it ends: “Review the inventory regularly because of the rapid AI adoption rate. Vendors will release new product features, and companies may unwittingly be using AI tools without being aware of it.” This dovetails with vendor due diligence: employers should start asking about vendors’ auditing processes and the logic behind how each tool functions.
While we cannot predict exactly what employers will face until legislation is enacted at both the federal and state levels, the New York City and EEOC actions provide some indication. Liability will be a major concern, so preparing now is imperative for employers to safeguard themselves and their employees. “At the core,” Mahdavi emphasized, “it is really around making sure that these tools are used in a fair way, making sure that there’s some check to make sure that there isn’t any sort of a disparate impact when these tools are used in practice.”
Bryan Cave Leighton Paisner LLP maintains a state-by-state AI legislation snapshot tracker on its website.
State AI Legislation Activity - September 2023

| State | Bill | Date | Status | Description |
|---|---|---|---|---|
| D.C. | Stop Discrimination by Algorithms Act of 2023 (B25-0114) | February 2, 2023 | Proposed | Makes it unlawful for a DC business to make algorithmic decisions based on characteristics including race, religion, sex, gender, nationality and gender identity. Specifies a civil penalty of up to $10,000 per violation. |
| CA | Automated Decision Tools (AB 331) | January 30, 2023 | Failed | Requires impact assessments for automated decision-making tools used in employment, education, housing, healthcare, utilities, family planning, financial services and the criminal justice system. |
| IL | HB 3773 | February 17, 2023 | Failed | Restricts employers from using race, or ZIP codes as a proxy for race, in automated hiring decisions. |
| MA | An Act Preventing a Dystopian Work Environment (H.1873) | February 16, 2023 | Proposed | Requires employers to give notice before using automated decision systems and to assess all such decisions; decisions based on bad data must be rectified. |
| NJ | A4909 | December 5, 2022 | Proposed | Requires that candidates be notified of AI screening within 30 days of applying and requires regular bias audits of automated hiring tools. |
| NY | S5641 | March 10, 2023 | Failed | Similar to the state’s labor law; establishes criteria for automated decision tools used in the hiring process. |
| NY | A7858 | July 7, 2023 | Proposed | Amends the labor law to require employers to provide notice of the use of AI tools. |
| VT | H114 | January 25, 2023 | Proposed | Restricts the use of automated decision systems in employment decisions as well as the electronic monitoring of employees; requires human oversight for any AI system. |
Lori Ellis is the head of insights at BioSpace. She analyzes and comments on industry trends for BioSpace and clients. Her current focus is on the ever-evolving impact of technology on the pharmaceutical industry. You can reach her at lori.ellis@biospace.com. Follow her on LinkedIn.