Will TikTok’s new content labels really combat AI misinformation? | KrASIA

Written by Jeff Chay | 3 mins read

The automatic labeling of AI-generated content on TikTok is a small step in an uphill battle against the misuse of AI.

TikTok has implemented a new system for automatically flagging artificial intelligence-generated content. The short video platform is now testing the system, which uses digital watermarks to identify and label AI-generated content uploaded to it, making TikTok the first in its industry to adopt such technology.

In tandem, TikTok has also introduced a new feature for creators to inform their followers each time they post AI-generated content. To enhance transparency, the platform’s AI effects will also be clearly labeled with “AI” in their names and corresponding effect labels. Guidelines have been shared with TikTok Effect House creators to ensure consistency in labeling.

Meanwhile, TikTok’s Chinese counterpart, Douyin, is working on a similar initiative in China. It published a standard for adding labels and metadata to AI content in May 2023, but the standard has yet to reach industry-wide adoption.

The digital watermarking standard employed by TikTok is provided by the Coalition for Content Provenance and Authenticity (C2PA). Known as “Content Credentials,” it embeds important metadata into images and videos. Such information may include details about where a piece of content originated, whether it has been edited, or what tools or AI were used to create or modify it.

Image: Content Credentials verification results for an image of a teacup generated using Adobe’s Firefly software.

In addition to using them to identify AI-generated content uploaded to TikTok from outside sources, TikTok will also add Content Credentials to videos created on its own platform. This means that those labels will persist if AI-generated content created on TikTok is downloaded and shared elsewhere.

Content Credentials were designed to be tamper-evident, meaning that any undue changes will be visibly recorded in the metadata. Moreover, removing or altering these credentials would invalidate the cryptographic signatures used to verify authenticity. As such, they provide platforms with a simple and reliable means of verifying the provenance of online material.
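The tamper-evidence described above can be sketched in miniature. The snippet below is an illustrative Python sketch only: it uses a symmetric HMAC over serialized metadata, whereas the actual C2PA specification uses certificate-based digital signatures over a structured manifest, and all names here (`SIGNING_KEY`, the metadata fields) are invented for the example. The point it demonstrates is the same: any edit to the signed metadata invalidates the signature.

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration; real Content Credentials are signed
# with private keys tied to certificates issued to C2PA implementers.
SIGNING_KEY = b"issuer-private-key"

def sign_metadata(metadata: dict) -> str:
    """Sign a canonical serialization of the provenance metadata."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(metadata: dict, signature: str) -> bool:
    """Verification fails if the metadata no longer matches the signature."""
    return hmac.compare_digest(sign_metadata(metadata), signature)

credentials = {
    "generator": "ExampleAI v1",  # tool that produced the asset
    "ai_generated": True,
    "edits": [],
}
sig = sign_metadata(credentials)

# A malicious actor strips the AI flag; the old signature no longer verifies.
tampered = dict(credentials, ai_generated=False)
```

Because verification recomputes the signature from the metadata itself, stripping or rewriting the AI label without access to the signing key is immediately detectable.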

Previous iterations of digital watermarking have largely relied on users’ own skepticism to prompt verification. Whether TikTok’s content labels overcome that limitation will therefore depend on how visible they are to viewers.

The use of these watermarks to detect AI-generated content is, however, limited to material created with tools from C2PA members, such as OpenAI, Adobe Creative Cloud, and Midjourney. Content generated using AI tools outside the coalition could therefore escape detection.

Moreover, Content Credentials are not invulnerable to manipulation by malicious actors. According to a study by the University of Maryland, applying Gaussian noise to distort an image’s watermark pattern can effectively bypass such a detection algorithm. Screenshots of AI-generated images also do not retain the original version’s metadata.
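The fragility exploited in noise-based attacks can be illustrated with a toy pixel-domain watermark. The sketch below is a deliberately simplified assumption, not C2PA’s metadata-based scheme or the Maryland study’s exact setup: it hides watermark bits in pixels’ least significant bits, then shows that mild Gaussian noise scrambles the recovered bits.

```python
import random

def embed_lsb_watermark(pixels, bits):
    # Write each watermark bit into a pixel's least significant bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb_watermark(pixels, n_bits):
    # Read the watermark back out of the least significant bits.
    return [p & 1 for p in pixels[:n_bits]]

def add_gaussian_noise(pixels, sigma=2.0, seed=7):
    # Perturb each 8-bit pixel with Gaussian noise, clamped to [0, 255].
    rng = random.Random(seed)
    return [min(255, max(0, round(p + rng.gauss(0, sigma)))) for p in pixels]

# Deterministic toy "image" and watermark for the demonstration.
rng = random.Random(0)
watermark = [rng.randrange(2) for _ in range(32)]
pixels = [rng.randrange(256) for _ in range(32)]

marked = embed_lsb_watermark(pixels, watermark)
noisy = add_gaussian_noise(marked)
```

Extraction from `marked` recovers the watermark exactly, while extraction from `noisy` does not, even though the noise is visually negligible; this is the general weakness of pixel-level watermark patterns that distortion attacks exploit.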

The embedding of personally identifiable information into content metadata could also possibly renew concerns about user privacy on TikTok. The app has faced criticism in the past for its data collection practices and purported links to the Chinese government.

Even so, in a bid to stem the tide of AI misinformation online, numerous tech giants including Meta and Google have announced plans to implement C2PA’s Content Credentials.

Meta will begin applying “Made with AI” labels in accordance with C2PA standards in May to AI-generated videos, images, and audio posted on Facebook and Instagram. This expands on its initial policy from February which addressed only a narrow slice of digitally-altered videos.

Google joined the C2PA steering committee in February and will explore incorporating Content Credentials into platforms like YouTube and Google Images. At the same time, it is developing its own digital watermarking toolkit, SynthID, for identifying AI-generated images, audio, text, and videos.

While TikTok’s watermark-based system remains far from a perfect solution, it could be a useful indicator that works in tandem with a growing wave of skepticism toward what people see online. X taps into that same undercurrent of doubt from the opposite direction, eschewing labeling practices in favor of user-created “Community Notes” to highlight false or misleading information on its platform.

The bottom line is that creating a truly reliable system to authenticate AI-generated content will take time. In the meantime, schemes like Content Credentials which provide a layer of scrutiny over the use of AI are an important step in the right direction.

