Many employers now use artificial intelligence (AI) in the hiring process, prompting new guidance on how employment law applies to AI. In the employment context, using AI typically means the employer relies at least partly on a computer's own analysis of data to determine which criteria to apply when making recruitment decisions. AI may include machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems.
Using AI in the Hiring Process
According to a 2022 survey conducted by the Society for Human Resource Management, about 1 in 4 organizations use automation and/or AI to assist with employment-related activities such as recruitment and hiring. AI tools used in employment decision-making include chatbots that help applicants complete the application process, algorithms that screen resumes and predict job performance, and facial recognition tools used in interviews to assess a candidate's attention span.
For employers, these tools may be an efficient and effective way to recruit the most suitable talent. However, federal, state and local governments are increasingly focused on the potential for these tools to discriminate, for example by screening out candidates based on gender, race, color, gender identity, religion, pregnancy status, disability or age, in violation of anti-discrimination law.
EEOC Guidance on Employment Law
The Equal Employment Opportunity Commission (EEOC) has developed guidance for employers on how to lawfully use AI in the hiring process. The EEOC warns that using AI to make employment decisions, such as hiring new employees, may violate Title VII if individuals who are members of protected classes are adversely affected. The EEOC gives the following examples of AI technology that could lead to a Title VII violation:
- using scanners that prioritize applications using particular keywords;
- monitoring software that rates employees based on their keystrokes or other factors;
- using virtual assistants or chatbots that ask candidates about their qualifications and reject those who do not meet pre-defined requirements;
- using video software that assesses candidates based on their facial expressions or speech patterns;
- using software that can provide “job fit” scores for applicants based on certain preferred features such as their personalities, aptitudes, cognitive skills or cultural fit for the job.
Employers may also bear legal responsibility under Title VII even when the decision-making tools are designed or administered by an outside software vendor or other third party and a discriminatory selection procedure takes place. The employer cannot shift liability to a third party that developed or administered the discriminatory testing procedures.
State AI Regulations
Some states and cities are working on bills and regulations addressing AI and employment law. On July 5, 2023, New York City began enforcing its automated employment decision tools law, which regulates an employer's use of artificial intelligence.
City-based employers and employment agencies that choose to use an automated employment decision tool (AEDT) are now required to retain an independent auditor to conduct a bias audit and to meet certain notice requirements. The law defines an AEDT as any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that produces a simplified output, such as a score or recommendation, and that substantially assists or replaces discretionary decision-making in the hiring or promotion process.
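To give a rough sense of what a bias audit measures, the sketch below computes selection-rate impact ratios across demographic groups using the EEOC's long-standing "four-fifths" rule of thumb, under which a group's selection rate below 80% of the highest group's rate is commonly treated as evidence of possible adverse impact. This is an illustration with hypothetical numbers, not the specific audit methodology prescribed by the New York City law.

```python
# Illustrative only: a simplified impact-ratio check inspired by the
# EEOC "four-fifths" rule of thumb. The group names and counts below
# are hypothetical; the actual bias-audit methodology is set out in
# the city's implementing rules, not reproduced here.

def impact_ratios(groups):
    """Each group's selection rate divided by the highest group's rate.

    groups maps a group label to a (selected, total_applicants) pair.
    """
    rates = {g: selected / total for g, (selected, total) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical applicant data: (selected, total applicants) per group
groups = {"group_a": (48, 80), "group_b": (24, 60)}

for group, ratio in impact_ratios(groups).items():
    # Ratios under 0.8 fall below the four-fifths threshold
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_a is selected at a 60% rate and group_b at 40%, so group_b's impact ratio is about 0.67, below the four-fifths threshold, which would warrant closer review of the tool.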
Speak With an Employment Law Attorney
If you believe a potential employer used AI to make a recruitment decision and you were adversely affected, an attorney may be able to help with a claim. Complete a Free Case Evaluation today to get in touch with an independent, participating attorney who subscribes to the website.