Facebook and Instagram parent Meta sues maker of Chinese deepfake app that uses AI to remove clothes from photos, turning them into nudes
Meta has filed a lawsuit in Hong Kong against Joy Timeline HK Limited, the Chinese company behind the AI-powered app "CrushAI," which generates deepfake nude images from photos of clothed individuals. The legal action, reported by CBS News, seeks to ban the company from advertising its services on Meta’s platforms, including Facebook and Instagram.
The lawsuit highlights Facebook and Instagram parent Meta’s efforts to combat non-consensual intimate imagery, a growing concern in the age of advanced AI technologies. According to court documents, Joy Timeline made "multiple attempts" to bypass Meta’s ad review process by using deceptive tactics, such as inoffensive imagery, to promote its app. Meta stated, "This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it."
Dangerous apps with alarming consequences
CrushAI is not the first app of its kind. Similar "nudify" apps have previously evaded ad filters on major social media platforms, including Meta’s, by exploiting loopholes in content moderation systems. In response, Meta has collaborated with external experts and internal teams to enhance its detection capabilities, expanding the list of safety-related terms, phrases, and emojis its systems are trained to flag.
The implications of such software are alarming, as it enables anyone to create fake nude images without the subject’s consent, posing significant privacy and ethical concerns. Meta has a strict policy against non-consensual intimate imagery and has consistently removed ads for "nudify" apps from its platforms, as previously confirmed to CBS News.
Recently, Meta announced a new partnership with the Tech Coalition’s Lantern Program, an initiative focused on tracking websites and services that violate child safety regulations. Through this collaboration, Meta will share information with other tech companies about apps, sites, or entities that breach its policies, aiming to curb the spread of harmful AI-driven content industry-wide.
"We’ll continue to take necessary steps — which could include legal action — against those who abuse our platforms like this," Meta reaffirmed in a statement.
The case underscores the broader challenge facing tech giants as they grapple with regulating AI technologies that can be weaponized to exploit and harm users. As of now, Joy Timeline HK Limited has not publicly responded to the lawsuit. The outcome of this legal battle could set a precedent for how platforms address similar AI-driven abuses in the future.