Meta to require advertisers to disclose AI alterations
Advertisers must acknowledge if their adverts portray real people as doing or saying something that they did not, or if they digitally produce a real-looking person
08 November 2023 - 19:25
by Agency Staff
Meta Platforms said advertisers will from 2024 have to disclose when AI or other digital methods are used to alter or create political, social or election-related advertisements on Facebook and Instagram.
Meta, the world’s second-biggest platform for digital adverts, said in a blog post on Wednesday it would require advertisers to disclose if their altered or created adverts portray real people as doing or saying something that they did not, or if they digitally produce a real-looking person.
The company will also ask advertisers to disclose if those adverts show events that did not take place, alter footage of a real event, or even depict a real event without the true image, video, or audio recording of the actual event.
The policy updates, including Meta’s earlier announcement on barring political advertisers from using generative AI advertising tools, come a month after the Facebook owner said it was expanding advertisers’ access to AI-powered advert tools that can instantly create backgrounds, image adjustments and variations of ad copy in response to simple text prompts.
Alphabet’s Google, the biggest digital advertising company, announced the launch of similar image-customising generative AI advert tools last week and said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.
Lawmakers in the US have been concerned about the use of AI to create content that falsely depicts candidates in political advertisements to influence federal elections, with a slew of new “generative AI” tools making it cheap and easy to create convincing deepfakes.
Meta has already been blocking its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures, and its top policy executive, former UK deputy prime minister Nick Clegg, said last month that the use of generative AI in political advertising was “clearly an area where we need to update our rules”.
The company’s new policy won’t require disclosures when the digital content is “inconsequential or immaterial to the claim, assertion, or issue raised in the advert”, including image size adjusting, cropping an image, colour correction, or image sharpening, it said.
Reuters