- OpenAI says it is adding a new digital watermark to DALL-E 3 images.
- The company admits that the solution isn’t perfect as it can be easily removed.
- There are growing concerns about the spread of misinformation generated by artificial intelligence in elections.
A watermark from the Coalition for Content Provenance and Authenticity (C2PA) will be added to AI-generated images, the company said in a blog post on Tuesday. OpenAI believes the use of watermarks will help increase public trust in digital information.
Users can check whether an image was produced by artificial intelligence using the Content Credentials Verify website. The company noted that some media organizations are also using the standard to verify the origin of content.
However, OpenAI admits that this method is not a perfect solution, as the watermark can be easily removed.
Ahead of this year's elections, concerns are growing about the spread of misinformation, especially AI-generated audio, images, and videos.
As billions of people head to the polls this year, voters are already encountering AI-generated content, including robocalls impersonating Joe Biden and fake video ads featuring Rishi Sunak. Apparent deepfakes of Taylor Swift also made headlines last month, sparking international condemnation and legislative action.
Meta has also signaled that it intends to crack down on the spread of AI-generated content. The company said Tuesday it plans to add labels to AI-generated images on Facebook, Instagram, and Threads.
Not a “silver bullet”
OpenAI knows these plans aren’t perfect.
The company said C2PA metadata is now included in images produced with the web version of DALL·E 3, and it plans to extend the rollout to mobile users by February 12.
However, OpenAI said metadata is not “a panacea for provenance issues,” noting that it can easily be deleted accidentally or intentionally.
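The weakness OpenAI describes follows from how C2PA works: the provenance record is stored in the image file's metadata, not in the pixels themselves, so any tool that re-encodes an image without copying its metadata silently discards the credential. The toy sketch below (plain Python with hypothetical dict-based "images", not OpenAI's or C2PA's actual implementation) illustrates the failure mode:

```python
# Illustrative sketch only: C2PA-style credentials live alongside the
# pixel data as metadata. Re-saving an image without copying that
# metadata -- which many editors, messengers, and social platforms do
# by default -- drops the provenance record while the pixels survive.

def reencode(image: dict, keep_metadata: bool = False) -> dict:
    """Simulate re-saving an image file; metadata is lost by default."""
    out = {"pixels": image["pixels"]}
    if keep_metadata:
        out["metadata"] = dict(image.get("metadata", {}))
    return out

original = {
    "pixels": b"\x89PNG...",  # placeholder pixel bytes
    "metadata": {"c2pa": {"issuer": "OpenAI", "tool": "DALL-E 3"}},
}

stripped = reencode(original)
print("c2pa" in stripped.get("metadata", {}))  # → False: provenance gone
print(stripped["pixels"] == original["pixels"])  # → True: image unchanged
```

This is why OpenAI pairs the metadata with a visible CR watermark and why researchers treat metadata-only provenance as easy for malicious actors, or even ordinary screenshots, to defeat.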
Labeling AI-generated content has proven difficult. Research has found that most forms of digital watermarks have weaknesses that malicious actors can easily exploit.
Early attempts to build systems for detecting signs of AI in written content proved largely unsuccessful; OpenAI quietly shut down its own detection service due to accuracy issues.
Representatives for OpenAI did not immediately respond to Business Insider’s request for comment outside regular business hours.
Axel Springer, the parent company of Business Insider, has struck a global deal to have OpenAI train its models based on reports from its media brands.