
World's Top AI Companies Unite to Safeguard Children Online

In a monumental move toward enhancing child safety on the internet, leading artificial intelligence (AI) companies, including OpenAI, Microsoft, Google, Meta, and several others, have joined forces to combat the dissemination of child sexual abuse material (CSAM). This collaborative effort, spearheaded by the child-safety nonprofit Thorn and the responsible-tech organization All Tech Is Human, signifies a pivotal step in leveraging technology responsibly to protect vulnerable populations.

Industry Pledge:

The core objective of the initiative is to prevent AI tools from being exploited to generate and circulate CSAM. By committing to the pledge, the companies aim to curb the proliferation of sexually explicit content involving minors across online platforms. According to Thorn, the pledges set a precedent for the industry and mark a substantial advance in shielding children from online exploitation facilitated by generative AI.

Alarming Statistics:

Highlighting the urgency of collective action, Thorn revealed that more than 104 million files of suspected CSAM were reported in the United States in 2023 alone. Without concerted efforts to address the issue, the proliferation of generative AI could compound the challenges faced by law enforcement agencies, making it even harder to identify and protect real victims.

Safety by Design:

Accompanying this pledge is the release of a comprehensive paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse,” authored by Thorn and All Tech Is Human. This document outlines strategic recommendations for various stakeholders, including AI developers, search engine providers, social media platforms, and hosting companies, to proactively mitigate the risks posed by generative AI.

Key Recommendations:

One crucial recommendation emphasizes the careful selection of training data sets, advocating the exclusion both of CSAM and of adult sexual material. Given the propensity of generative AI to conflate the two concepts, such precautions are essential to prevent models from inadvertently synthesizing harmful content. A common building block of this kind of curation is sketched below.
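To make the idea concrete, here is a minimal sketch of one such building block: screening a training corpus against a list of hashes of known prohibited files before any model sees it. The file paths and function names are hypothetical, and this is not the method prescribed in the paper; production systems typically rely on perceptual hash lists maintained by organizations such as NCMEC or Thorn rather than the exact SHA-256 matching shown here, which only catches byte-identical copies.

```python
import hashlib
from pathlib import Path

def load_blocklist(path: str) -> set[str]:
    """Load hex-encoded SHA-256 digests of known prohibited files, one per line."""
    return {
        line.strip().lower()
        for line in Path(path).read_text().splitlines()
        if line.strip()
    }

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def filter_dataset(image_dir: str, blocklist: set[str]) -> list[Path]:
    """Return only the files whose digests are absent from the blocklist."""
    return [
        p for p in Path(image_dir).rglob("*")
        if p.is_file() and sha256_of(p) not in blocklist
    ]

# Hypothetical usage; both paths are placeholders.
# clean_files = filter_dataset("raw_corpus/", load_blocklist("known_bad_hashes.txt"))
```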

Platform Responsibilities:

Additionally, social media platforms and search engines are urged to remove links to websites and applications that facilitate the dissemination of illicit images of minors. Left unaddressed, an influx of AI-generated CSAM would worsen the "haystack problem," burying law enforcement under a volume of material that makes it harder to identify and reach real victims of child sexual abuse.

Addressing Technological Challenges:

Rebecca Portnoff, Thorn's vice president of data science, underscored the importance of acting early to mitigate the harms associated with generative AI. Some companies have already begun separating content involving minors from adult material within their data sets. Others watermark AI-generated content to make it identifiable, but watermarks and metadata offer only partial protection, as both can be easily removed.
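To illustrate why metadata-based marks are so fragile, here is a minimal sketch using the Pillow imaging library: a provenance tag stored in a PNG text chunk survives only until the file is re-saved without it. This is an illustrative toy, not how any particular company's watermarking works; more robust pixel-level watermarks face analogous, if harder, removal attacks.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Tag a stand-in "generated" image as AI-made via a PNG text chunk.
img = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("tagged.png", pnginfo=meta)

print(Image.open("tagged.png").text)    # {'ai_generated': 'true'}

# One routine re-save that omits the metadata silently discards the tag,
# which is why metadata-only provenance marks are easy to defeat.
Image.open("tagged.png").save("stripped.png")
print(Image.open("stripped.png").text)  # {}
```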
