TikTok announced on Thursday that it will begin labeling images and videos uploaded to its platform that were generated using artificial intelligence, employing a digital watermarking technology called Content Credentials.
In light of concerns raised by researchers about the potential misuse of AI-generated content, particularly in the context of U.S. elections this fall, TikTok has joined a coalition of 20 tech companies committed to combating such misuse.
While TikTok already identifies AI-generated content produced within its app, this new initiative will extend labeling to videos and images created outside of the platform.
Adam Presser, head of operations and trust and safety at TikTok, explained, “We also have policies that prohibit realistic AI that is not labeled, so if realistic AI-generated content appears on the platform, then we will remove it as violating our community guidelines.”
The Content Credentials technology was developed by the Coalition for Content Provenance and Authenticity, which was co-founded by Adobe, Microsoft, and others. It is now available for other companies to adopt, including OpenAI, the creator of ChatGPT.
Major platforms such as YouTube (owned by Google) and Meta Platforms (which owns Instagram and Facebook) have also committed to implementing Content Credentials.
For the system to function effectively, both the creator of the AI tool used to generate content and the platform distributing the content must agree to adhere to the industry standard.
For instance, when an image is generated using OpenAI’s Dall-E tool, OpenAI attaches a watermark and embeds provenance metadata in the file, which can also reveal whether the file has since been tampered with. If such an image is uploaded to TikTok, it will automatically receive an AI-generated label.
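The two-sided handshake described above can be sketched in miniature: one party embeds a provenance record at generation time, and the other reads it at upload time. This is a purely illustrative model, not the real C2PA manifest format or any actual OpenAI or TikTok API; all names here (`MediaFile`, `generate_image`, `label_on_upload`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MediaFile:
    """Illustrative stand-in for an image file with embedded metadata."""
    pixels: bytes
    provenance: dict = field(default_factory=dict)

def generate_image(prompt: str) -> MediaFile:
    """Stand-in for an AI tool that attaches a provenance record."""
    image = MediaFile(pixels=b"...rendered pixels...")
    image.provenance = {
        "claim_generator": "example-ai-tool",  # which tool produced the file
        "source_type": "ai-generated",         # flag the platform looks for
    }
    return image

def label_on_upload(file: MediaFile) -> str:
    """Stand-in for a platform that reads the record and applies a label."""
    if file.provenance.get("source_type") == "ai-generated":
        return "AI-generated"
    return "unlabeled"

print(label_on_upload(generate_image("a cat in a spacesuit")))  # AI-generated
```

The key point the sketch captures is that labeling only works when both sides cooperate: if the generator omits the record, or the platform never checks for it, the upload arrives unlabeled.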
TikTok, a subsidiary of China’s ByteDance, boasts 170 million users in the U.S. In response to recent legislative efforts requiring ByteDance to divest TikTok or face a ban, both TikTok and ByteDance have filed lawsuits, arguing that such measures violate the First Amendment.