Facebook Advertisers Must Label Manipulated Images

by John Lister

New Facebook and Instagram rules say political advertisers must now tell users if they've digitally manipulated images or video. The platforms continue to ban misleading "deepfake" videos outright.

The new rules from parent company Meta take effect from the start of 2024. They apply to any advertisement classed as political, electoral or covering a social issue. The rules only cover "photorealistic" images and video and "realistic sounding audio," meaning illustrations or cartoons are excluded.

While Meta didn't explicitly address the timing, it's likely the changes follow concerns about misleading information online during a series of elections around the world scheduled for 2024.

Cropping Still OK

Under the rules, advertisers must disclose when they've digitally created or altered an image (for example, using AI) and it:

  1. Depicts somebody saying or doing something they didn't do.

  2. Depicts somebody who doesn't exist.

  3. Depicts an event that didn't happen.

  4. Falsely appears to be a real depiction of an event (whether or not it happened).

  5. Alters real footage of an event.

The rules don't apply to minor digital alterations such as cropping or color correction unless they make a "consequential or material" change to the apparent content of the image. (Source: facebook.com)

Ads Could Be Banned

When advertisers make this disclosure, Facebook and Instagram will attach a note to the ad when it's displayed. The note will also appear in Facebook's library of political ads, which is designed to provide open information about who is behind ads, for example by revealing how a political group targets its message to different audiences.

If an advertiser doesn't make the disclosure (and Meta spots the violation), the company will block the ad. Repeated failures to disclose may lead to penalties against the advertiser.

It's safe to say the policy has some significant limitations. It applies only to advertising content, not to misleading material posted by an "ordinary user."

The BBC notes that existing Facebook and Instagram rules already ban any user from posting digitally manipulated videos that "would likely mislead an average person to believe a subject of the video said words that they did not say." (Source: bbc.co.uk)

What's Your Opinion?

Is labeling digitally altered and AI-generated multimedia sufficient? Should Meta ban such ads rather than simply require a label? Will bad actors find ways round the rules?
