Generative AI tools are all the rage in the ad world—but there are some places where they won’t be an option.
Meta is restricting AI usage in political advertisements on its platform beginning next year, the company announced this week via blog post. The new measures entirely ban advertisers from using Meta’s own generative AI tools to create political or social issue-focused ads.
Label it: However, advertisers can still use third-party AI tools to generate content for ads, as long as the use of AI is disclosed in specific cases. Those cases include when an existing real-life person is depicted saying or doing something that they did not, when a realistic-looking person or incident that didn’t happen is depicted, when footage of a real event is altered, or when fake imagery of a real event is created.
There are some carve-outs. If AI alterations are deemed “inconsequential or immaterial to the claim, assertion, or issue raised in the ad,” they do not need to be disclosed.
If an ad is tagged as digitally altered, Meta will “add information on the ad” in the form of a tag that appears if users click on the ad. If Meta determines that an advertiser failed to disclose AI use in an ad, that ad will be rejected, and repeat offenses may result in penalties against the advertiser. Meta did not respond to requests for additional information on the nature of the penalties or how Meta will determine whether ads contain AI-generated content.
The policy will be enforced globally starting next year.
The new policy from Meta comes amid a surge of interest in generative AI. Last month, Meta gave advertisers access to more AI-supported text and image tools on its Ads Manager platform, and this week, Google added generative AI tools to its own ad platform.
Meta isn’t the only Big Tech player assessing its political ad policies ahead of 2024, which is expected to see the highest political ad spend of all time. Microsoft also unveiled a set of “election protection commitments” this week, announcing Content Credentials as a Service, a tool that will enable groups like political campaigns to attach information to digital content. Meanwhile, earlier this year, Microsoft’s ad-tech platform, Xandr, announced an outright ban on political ads.
Meta’s new policy may serve as a foundation for a conversation around ethical AI usage, said Dan Lowden, CMO of tech firm Blackbird.AI, which works with marketers on brand safety.
“The new policy is a good first step to start this conversation around how brand advertisers need to be thinking about the creation of ads and the creation of campaigns and the like that will somehow be impacted by the election,” he said. “I think the challenge will be that it puts a lot of the onus on brand advertisers around determining what’s being created by AI and what’s not.”
That conversation is already happening on a federal level: The Federal Election Commission has taken steps toward regulating AI-generated deepfakes in political ads, and last month, President Biden issued an executive order calling for ethical AI development.
Ready or not, AI is already taking American politics by storm: Earlier this year, the Republican National Committee and a super PAC supporting Florida Gov. Ron DeSantis’s presidential campaign published ads with AI-generated content. It’s only the beginning, Lowden said.
“I honestly believe over time, everything will leverage AI from a creation perspective, and the challenge with that is it blurs the line,” he said.