Scale AI is facing its third lawsuit over alleged labor practices in just over a month, this time from workers who claim they suffered psychological trauma from reviewing disturbing content without adequate safeguards.
Scale, which was valued at $13.8 billion last year, relies on workers it classifies as contractors to perform tasks such as rating AI model responses.
Earlier this month, a former worker filed a lawsuit alleging she was effectively paid less than the minimum wage and misclassified as a contractor. A complaint raising similar allegations was also filed in December 2024.
The latest complaint, filed January 17 in the Northern District of California, is a class action that centers on the psychological harm allegedly suffered by six people who worked on Outlier, Scale's platform.
The plaintiffs allege they were forced to write disturbing prompts about violence and abuse, including child abuse, without adequate psychological support, and that they faced retaliation when they sought mental health counseling. They say they were misled about the nature of the job during recruitment and developed mental health conditions, including post-traumatic stress disorder (PTSD), as a result of their work. They are seeking the creation of a medical monitoring program and new safety standards, along with unspecified damages and attorneys' fees.
One of the plaintiffs, Steve McKinney, was also the lead plaintiff in a separate complaint filed against Scale in December 2024. The same firm, Clarkson Law Firm of Malibu, California, represents the plaintiffs in both cases.
Clarkson Law Firm previously filed a class-action lawsuit against OpenAI and Microsoft over their alleged use of stolen data, which was dismissed after a district judge criticized its length and content. Referencing that case, Scale AI spokesperson Joe Osborne criticized Clarkson Law Firm and said Scale planned to "vigorously defend itself."
"Clarkson Law Firm has previously unsuccessfully brought legal actions against innovative technology companies, but those actions were summarily dismissed in court. A federal court judge found that one of their previous complaints was "unnecessarily lengthy," and contained “largely irrelevant, distracting or redundant information,” Osborne told TechCrunch.
Osborne said Scale complies with all laws and regulations and has "numerous safeguards" in place to protect its contributors, including the ability to opt out at any time, advance notice of sensitive content, and access to health and wellness programs. He added that Scale does not take on projects that may involve child sexual abuse material.
In response, Glenn Danas, a partner at Clarkson Law Firm, told TechCrunch that Scale AI has been "forcing employees to watch horrific and violent content to train these AI models" and has failed to ensure a safe workplace.
“We must hold big tech companies like Scale AI accountable, otherwise workers will continue to be exploited to train this unregulated technology for profit,” Danas said.