
Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk

Meta has temporarily halted all work with Mercor, a data contracting firm, while it investigates a significant security breach at the startup, according to sources familiar with the matter. The suspension is open-ended, and other AI research organizations are likewise reassessing their engagements with Mercor as the full extent of the incident becomes clear. Mercor is one of several companies that leading AI labs rely on to produce customized training datasets using large networks of human contractors. These datasets are typically confidential because of their strategic importance in developing competitive AI products such as ChatGPT and Claude Code; labs guard them closely, since exposure could reveal critical aspects of their training methods to rivals at home and abroad. It remains unclear whether the compromised data offers any substantial advantage to competitors.

OpenAI confirmed that it has not paused its ongoing projects with Mercor but said it is actively investigating whether any proprietary training data leaked, emphasizing that user data was unaffected by the incident. Anthropic has yet to comment. Mercor formally notified its employees of the security issue on March 31, describing it as a widespread cyber event affecting numerous organizations globally. Internal communications show that contractors assigned to Meta-related work cannot log hours until project activities potentially resume, effectively leaving many without work in the interim; the company is reportedly seeking alternative projects for those affected. Contractors have been given little explanation for the suspension of Meta initiatives. A message posted in the Slack channel of one Meta-specific project, aimed at improving AI verification of information, said only that Mercor is reevaluating the scope of the work.

The attacker, identified as TeamPCP, is believed to have compromised two versions of the AI API tool LiteLLM by distributing tainted updates. Because LiteLLM is integrated into numerous companies and services, the breach potentially reached thousands of targets, including major AI players, underscoring the sensitivity of the exposed data. Mercor and its rivals, including Surge, Handshake, Turing, Labelbox, and Scale AI, keep their work for AI labs strictly confidential, often using internal codenames, and their executives rarely discuss specifics publicly. Complicating matters, a group using the well-known Lapsus$ alias announced on Telegram and similar forums that it possesses a substantial amount of Mercor's data, including extensive databases, source code, and video files. Cybersecurity experts suspect this is an opportunistic use of the Lapsus$ identity, however, and Mercor's confirmation of the LiteLLM link points to TeamPCP or an associated actor as the true culprit.

TeamPCP has been increasingly active in a broader supply-chain hacking campaign in recent months, gaining notoriety for distributing compromised software updates and for pairing data extortion with collaborations with ransomware groups such as Vect. The group has also ventured into apparently politically motivated activity, deploying a destructive data-wiping worm, known as CanisterWorm, that specifically targets cloud instances configured with Farsi language settings or Iranian time zones. Security experts describe TeamPCP as primarily financially driven, though the extent of any political motive remains ambiguous. Analysis of dark web data purportedly from the Mercor breach shows no verifiable link to the original Lapsus$ group, underscoring how fluid cybercrime branding has become in recent attacks.
