AI-generated images have become realistic enough that you can’t always tell them apart from real photos, and that poses a problem for social media sites, where viral images can quickly spread misinformation. To address it, Meta will soon label AI-generated images posted to Facebook, Instagram, and Threads, letting us know, for instance, that an image of a llama on a surfboard was made with AI. While you can probably guess that llamas haven’t taken up surfing, other AI-generated images are harder to suss out and can trick you into believing things that aren’t true, such as fake images of celebrities, politicians, and other public figures. With this new feature, such images will be labeled so we can make better judgment calls about what we see online.
Meta already labels images generated with its own Meta AI image generator with “Imagined with AI,” but doing this with other AI image generators is a bit more difficult. Meta’s AI-generated images include a visible watermark indicating they were generated with AI, as well as invisible watermarks and metadata that will allow others to identify them. Soon, Meta should be able to identify other AI-generated images that follow similar industry-standard labeling practices. We should see labels on AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock in the coming months.
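To make that concrete: industry approaches like the C2PA standard work by attaching provenance metadata to the image file itself, which any platform can then read. The sketch below is a simplified illustration of that idea, not Meta’s actual mechanism — it ignores invisible watermarks and cryptographic signing, and the `ai_generated` key is hypothetical. It uses the Pillow library to write and read a PNG text chunk.

```python
from io import BytesIO

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_as_ai_generated(img: Image.Image) -> bytes:
    """Embed a simple provenance note in a PNG text chunk and return the file bytes."""
    meta = PngInfo()
    # Hypothetical key for illustration; real systems use standards such as
    # C2PA manifests or the IPTC DigitalSourceType field.
    meta.add_text("ai_generated", "true")
    buf = BytesIO()
    img.save(buf, format="PNG", pnginfo=meta)
    return buf.getvalue()


def is_tagged_ai(data: bytes) -> bool:
    """Check whether the embedded provenance note is present."""
    img = Image.open(BytesIO(data))
    return img.info.get("ai_generated") == "true"


sample = Image.new("RGB", (8, 8), "white")
data = tag_as_ai_generated(sample)
print(is_tagged_ai(data))  # True
```

The catch, and the reason Meta needs the self-disclosure rules described below, is that metadata like this is easy to strip — which is why real provenance systems pair it with invisible watermarks embedded in the pixels themselves.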
But while AI-generated images are starting to adopt this technology, AI-generated audio and video don’t have similar standards to help viewers identify them. Because Meta can’t automatically label such content as AI-generated, it’s adding a feature that lets people mark their own content as AI-generated. Disclosing this kind of content will be required, and Meta “may apply penalties” to accounts that don’t properly label AI-generated audio or video.
This comes shortly after Meta’s independent Oversight Board called the company’s policies on manipulated media “incoherent” and allowed a manipulated video of U.S. President Biden to remain online. Meta’s current policy only applies to media generated or modified by AI that makes people say things they did not actually say. The Oversight Board suggested revising the policy to cover manipulated media whether or not AI was involved, as well as labeling manipulated content so viewers understand what they’re seeing. Meta’s new rules add labeling similar to what the board suggested but continue to limit it to AI-generated content, and it’s unclear whether these new rules would have applied to the video in question. Still, labeling AI-generated content is a big step toward preventing the spread of misinformation.
Nick Clegg, Meta’s President of Global Affairs, says he expects AI-generated content will become “increasingly adversarial” in the coming years as individuals and organizations intentionally attempt to deceive others via manipulated media. Meta’s social media platforms, which have billions of active users, are the perfect place to spread misinformation. These new tools, assuming they work as described, will help us all identify AI-generated images and make our own decisions about their content.
Look for AI-generated audio, video, and photos to be labeled on Facebook, Instagram, and Threads soon. And if you upload any AI-generated content, be sure to label it appropriately so Meta doesn’t take action against your account.
[Image credit: Meta]
Elizabeth Harper is a writer and editor with more than a decade of experience covering consumer technology and entertainment. In addition to writing for Techlicious, she's Editorial Director of Blizzard Watch and is published on sites all over the web, including Time, CBS, Engadget, The Daily Dot and DealNews.