Google announced today that it will require political advertisers to prominently disclose when they use AI to create their ads. The new policy will take effect in November and will apply to all political ads that feature "synthetic content" that depicts "realistic-looking people or events."
This includes ads that use AI to make someone look as if they're saying or doing something they never did, as well as ads that change the footage of an actual event (or fabricate a realistic-looking one) to create a scene that never happened.
The disclosures must be "clear and conspicuous" and must appear in a "prominent" location on the ad. They must also state that the ad contains "synthetic content" and that the content has been "digitally altered or generated."
Google says the new policy is necessary to protect voters from being misled by AI-generated content in ads, and that it is consistent with the company's broader efforts to combat the spread of misinformation.
The change is likely to reshape how political campaigns use AI. Until now, campaigns have been able to produce AI-generated ads that are difficult to distinguish from real footage, which has made it easier to spread misinformation and manipulate voters. Requiring a disclosure makes that tactic harder to deploy quietly, and it should prompt voters to look more critically at the political ads they see online.
The policy is a welcome step from Google and a sign that the company is taking misinformation seriously; it should also bolster the integrity of the political process.
Google is not the only company moving in this direction. Facebook has announced that it will label political ads that have been edited or manipulated, and Twitter has said it is working on a way to label political ads that use AI.
These efforts are a positive development, but they are not enough on their own. Tech companies need to work together on a comprehensive strategy to combat misinformation, and to work with governments and other stakeholders to raise awareness of the issue and develop solutions.
The use of AI in political advertising is new and evolving, and its risks need to be understood and mitigated. Google's disclosure requirement is a step in the right direction toward protecting voters.