Emerging Liability for AI-Driven Hiring Tools: Key Developments in Mobley v. Workday, Inc.
Employers using artificial intelligence (AI) in hiring and talent acquisition should be aware of Derek L. Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL, a case currently pending in the Northern District of California. This litigation tests the boundaries of traditional employment discrimination statutes in the context of AI decision-making and raises novel questions about the viability of class and collective actions targeting AI-based applicant screening tools. Recent developments, including preliminary certification of an age-based collective, have allowed the plaintiffs to advance claims that algorithmic hiring systems can violate Title VII, the Age Discrimination in Employment Act (ADEA), the Americans with Disabilities Act (ADA), and the California Fair Employment and Housing Act (FEHA).
The Parties and Nature of the Dispute
The defendant, Workday, Inc., provides human resource management and applicant-screening services on a subscription basis to employers across numerous industries. Workday’s platform reportedly processes millions of job requisitions and hundreds of millions of applications annually.
The lead plaintiff, Derek L. Mobley, is an African American male over the age of forty who alleges that he suffers from depression and anxiety, conditions he characterizes as disabilities under the ADA Amendments Act of 2008 (ADAAA). Mobley alleges that, since at least 2017, he has applied for more than one hundred positions with employers that use Workday's platform and was rejected every time. Three additional named plaintiffs have joined the suit, each bringing claims based on their own rejections through Workday-powered hiring systems.
The Challenged Practices
The Mobley plaintiffs challenge Workday’s use of AI, machine learning, and other algorithmic decision-making tools to screen, evaluate, rank, and reject job applicants. The plaintiffs allege that Workday’s screening tools function as decision-making systems that materially influence, and in many cases determine, whether an applicant is advanced or rejected for employment opportunities. Workday denies that it is responsible for making employment decisions for individual applicants.
The plaintiffs specifically allege that Workday's discriminatory conduct arises from its policy and practice of using automated, algorithm-driven systems, instead of individualized human review, to evaluate and screen applicants. The plaintiffs further allege that because these systems rely on historical data and statistical modeling, they reproduce and amplify disparities embedded in prior hiring decisions and labor market structures. The plaintiffs assert that bias may enter the system at multiple points, including the composition of training data, the selection and weighting of input variables, the design of model objectives, and the evaluation criteria used to judge a candidate's "fit."
The plaintiffs argue that even where protected characteristics such as race, age, sex, or disability are not explicitly provided to the system, they can be inferred through correlated variables. For example, employment gaps may reflect disability or health conditions, years of experience may serve as a proxy for age, educational background and institutional affiliations may correlate with race, and other application data may reflect gender or caregiving responsibilities. Through these proxy variables, plaintiffs claim, the system can reproduce discriminatory outcomes without explicitly classifying applicants by protected characteristics.
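The proxy-variable mechanism the plaintiffs describe can be illustrated with a simplified, hypothetical sketch on synthetic data. Nothing below reflects Workday's actual system; the applicant pool, the experience cutoff, and the screening rule are all invented for illustration. The sketch shows how a screening rule that never sees age, only a correlated variable (years of experience), can still produce sharply different selection rates for applicants over and under forty, measured against the EEOC's "four-fifths" rule of thumb for adverse impact (29 C.F.R. § 1607.4(D)):

```python
import random

random.seed(0)

# Hypothetical synthetic applicant pool: age, plus a proxy variable
# (years of experience) that correlates strongly with age.
applicants = []
for _ in range(1000):
    age = random.randint(22, 65)
    experience = max(0, age - 22 + random.randint(-3, 3))
    applicants.append({"age": age, "experience": experience})

def advance(candidate):
    # Invented screening rule that never consults age: reject candidates
    # with "too much" experience, mimicking a model fit to historically
    # age-skewed hiring data.
    return candidate["experience"] <= 15

over_40 = [a for a in applicants if a["age"] >= 40]
under_40 = [a for a in applicants if a["age"] < 40]

rate_over = sum(advance(a) for a in over_40) / len(over_40)
rate_under = sum(advance(a) for a in under_40) / len(under_40)

# Four-fifths rule of thumb: a selection rate below 80% of the
# most-favored group's rate suggests adverse impact.
impact_ratio = rate_over / rate_under
print(f"selection rate, 40 and over: {rate_over:.2f}")
print(f"selection rate, under 40:    {rate_under:.2f}")
print(f"impact ratio:                {impact_ratio:.2f}")
```

On this synthetic data the impact ratio falls far below 0.8 even though age is never an input, which is the essence of the plaintiffs' proxy-discrimination theory.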
The Claims Presented and Procedural Status
The claims currently presented (which have survived motions to dismiss) are for disparate impact discrimination based on race and sex under Title VII, disability under the ADAAA, age under the ADEA, and race, gender, and age under the FEHA. Earlier in the case, Workday argued that it was not subject to liability under these employment statutes because it does not qualify as an “employer” for the job applicants. However, the Court permitted the claims to proceed on the basis that the operative complaint adequately alleged that Workday is an agent of its client-employers and therefore falls within the definition of “employer” under Title VII, the ADEA, and the ADA. On May 16, 2025, the Court granted preliminary certification of a collective action on the ADEA claim.
Takeaways for Employers
Mobley v. Workday presents a cautionary tale for employers and vendors deploying AI-driven hiring tools and further highlights the existing tensions in this area of the law.
The Executive Branch's stance on AI and disparate impact litigation shifted between the Biden and Trump administrations. Guidance issued by the EEOC in May 2023 instructed employers to ensure that any AI-based models they used did not disproportionately screen out applicants in protected classes and emphasized that employers could be held liable for the disparate impact caused by AI hiring tools. Further, in Mobley, the EEOC filed amicus briefs in support of the plaintiffs' positions, particularly that Workday qualified as an "employer" under the applicable statutes.
Since then, the May 2023 EEOC guidance concerning AI has been removed from the EEOC's website, and on April 23, 2025, Executive Order 14281 was issued, instructing federal agencies to eliminate or deprioritize disparate impact theories of liability. The EEOC has since deprioritized disparate impact cases, such as those presented in Mobley. Individual litigants, however, remain free to pursue such claims.
In the wake of changing executive priorities, an increasing number of states, including California, Illinois, Colorado, and Texas, have passed legislation aimed at employers' use of AI in hiring and management tools. The Mobley case illustrates, however, that individuals may not need a new, AI-specific private cause of action: traditional federal and state anti-discrimination statutes may themselves support such claims. As the use of AI becomes more prevalent in the workplace and in hiring, the Mobley case may become a model for similar litigation in the absence of federal or state legislation governing the use of AI in the hiring process.
Members of Maynard Nexsen’s Employment and Labor Law Team are continuing to monitor these litigation trends and emerging theories of liability in connection with employers’ use of AI in the workplace. For additional information about these topics, contact any member of Maynard Nexsen’s Employment and Labor Law Team.
About Maynard Nexsen
Maynard Nexsen is a nationally ranked, full-service law firm with more than 600 attorneys nationwide, representing public and private clients across diverse industries. The firm fosters entrepreneurial growth and delivers innovative, high-quality legal solutions to support client success.