The move reflects a growing trend among tech companies to address the rise of AI-generated content and provide users with more transparency about how the technology may influence what they see.
In the post, Google said it will also highlight when an image is composed of elements from different photos, even if non-generative features are used. For example, Pixel 8's Best Take and Pixel 9's Add Me combine images taken close together in time to create a blended group photo.
"This work is not done, and we'll continue gathering feedback and evaluating additional solutions to add more transparency around AI edits," Fisher wrote.
This isn't the first time Google has rolled out ways to inform users about AI use. In July, the company announced a feature called About This Image that works with Circle to Search on phones and with Google Lens on iOS and Android, providing context about how a photo has been used or created.
OpenAI, Adobe, Microsoft, Apple and Meta are also experimenting with technologies that help people identify AI-edited images. In July, Meta announced plans to rename the labels it applies to social media posts suspected of having been manipulated with AI tools, displaying "AI info" alongside a post instead of "Made with AI." The change aims to give users access to more specific information about how AI tools were used, rather than only labeling photos as AI-generated.
Meanwhile, Apple's upcoming Apple Intelligence features, which let users create new emoji, edit photos and generate images using AI, are expected to embed metadata in each image for easier AI identification.
Source: cnet.com