Google has released a digital watermark for AI-generated images produced by its text-to-image service Imagen, becoming the first generative AI provider to do so.

The SynthID watermark remains detectable even after the generated image has been edited, and embedding it does not compromise image quality.

According to Google, SynthID uses two deep learning models: one that embeds the watermark into an image and one that identifies it. The system also reports several levels of identification confidence, advising users to treat possibly generated images with caution.
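As a rough illustration of how such confidence levels might be presented to a user, the sketch below maps a raw detector score to a handful of bands. The detector score, thresholds and wording here are hypothetical; Google has not published SynthID's internals or a public API.

```python
# A minimal sketch of surfacing watermark-detection confidence bands.
# The thresholds and band names are illustrative assumptions, not Google's.

def classify_watermark_score(score: float) -> str:
    """Map a raw detector score in [0, 1] to a human-readable confidence band."""
    if score >= 0.9:
        return "Watermark detected - image likely generated by Imagen"
    if score >= 0.5:
        return "Possible watermark - treat this image with caution"
    return "No watermark detected"

if __name__ == "__main__":
    for score in (0.95, 0.6, 0.1):
        print(f"score={score:.2f}: {classify_watermark_score(score)}")
```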

Unlike metadata, SynthID is embedded directly in the pixels of the image itself.

Traditional metadata is stored within the image file, meaning it can easily be deleted or manipulated. 

SynthID therefore allows an AI-generated image to remain identifiable even after its metadata has been lost or tampered with.
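To show how fragile file-level metadata is by contrast, the sketch below uses the Pillow library to re-encode an image, which silently discards its EXIF tags. The file names are placeholders, and this is not SynthID's mechanism; it only illustrates why a pixel-level mark is harder to lose than metadata.

```python
# Demonstrates how re-saving an image drops its file metadata,
# whereas information embedded in the pixels is carried across.
from PIL import Image

img = Image.open("generated.png")  # placeholder file name
print("EXIF tags before:", dict(img.getexif()))

# Re-encoding the pixel data without passing exif= discards the metadata.
img.save("stripped.png")

print("EXIF tags after:", dict(Image.open("stripped.png").getexif()))
```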

Google frames the release of SynthID as part of its commitment to developing AI safely and responsibly.

Whilst it admits that SynthID is not “foolproof” against extreme image manipulation, Google believes the watermark will help prevent generative AI from contributing to misinformation without hindering user creativity.

GlobalData’s 2023 thematic report into tech regulation found that AI was the most mentioned technology in social media posts about regulatory frameworks.

According to GlobalData's social media database, tech regulation has been a major discussion point for social media users over the last 12 months. 

However, the analyst firm also notes that net sentiment on the theme has gradually declined over the last five years, suggesting that users are increasingly dissatisfied with current efforts to regulate AI and big tech.

GlobalData currently records a net sentiment of 0.46 for posts on tech regulation, down from the 0.69 recorded in 2019.

The EU’s AI Act, which the European Parliament approved on 14 June 2023, sets strict transparency requirements that generative AI providers will need to adhere to. Whilst these requirements mainly oblige companies to disclose summaries of the copyrighted data used to train their models, watermarks that can easily identify generated content will help address the same concerns.

Google has stated that SynthID could be expanded to other companies’ AI models.

Currently, the company is still gathering user feedback on SynthID, with the ambition that the watermarking technology will eventually be adopted across wider society.
