Yesterday, TikTok showed me a deepfake video of Timothée Chalamet sitting on Leonardo DiCaprio’s lap, and yes, I immediately thought, “As good as this stupid video is, imagine how bad election misinformation could be.” OpenAI must have been thinking the same thing, because today it updated its policies to start addressing the issue.
The Wall Street Journal noticed the new policy changes, which were first posted on the OpenAI blog. Users and makers of ChatGPT, DALL·E, and other OpenAI tools are now prohibited from using those tools to impersonate candidates or local governments, to run campaigning or lobbying operations, or to discourage voting or misrepresent the voting process.
OpenAI is also rolling out a digital credential system that will encode an image’s origin directly into the file, making it much easier to identify AI-generated images without having to hunt for odd hands or unusual accessories.
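If you’re curious what “encoding an image’s origin” actually means in practice, here’s a minimal, hypothetical Python sketch of the general idea behind provenance credentials: the generator signs a small manifest describing the image, and anyone can later check that the image is unmodified and AI-generated. This is not OpenAI’s actual implementation, and real systems use asymmetric signatures with metadata embedded in the file itself; the shared-key HMAC and the function names here are just assumptions to keep the illustration short.

```python
# Simplified, illustrative sketch of a provenance credential.
# Not OpenAI's real system; real standards embed a signed manifest
# in the image file and use public-key cryptography.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-held-by-the-image-generator"  # hypothetical key


def attach_credential(image_bytes: bytes, generator: str) -> dict:
    """Build a signed manifest recording where the image came from."""
    manifest = {
        "generator": generator,  # e.g. the tool that made the image
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_credential(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image matches its manifest and the signature is valid."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(image_bytes).hexdigest() != claimed["sha256"]:
        return False  # image was altered after the credential was issued
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])


image = b"...raw image bytes..."
cred = attach_credential(image, "image-generator")
print(verify_credential(image, cred))         # True: untouched original
print(verify_credential(image + b"x", cred))  # False: tampered image
```

The upshot: a valid credential tells you an image came from a known generator and hasn’t been edited since, which is a far more reliable signal than eyeballing fingers.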
OpenAI’s tools will also begin directing U.S. voting questions to CanIVote.org, which is one of the best authorities on the web for where and how to vote in the U.S.
But all of these tools are still being rolled out, and they rely heavily on users reporting bad actors. Given that AI is itself a fast-moving technology that regularly surprises us with both beautiful poetry and outright lies, it’s unclear how effective these measures will be in combating misinformation during election season. For now, your best bet is to keep exercising media literacy. That means questioning every piece of news or imagery that seems too good to be true, and if ChatGPT tells you something completely crazy, at least running a quick Google search before you believe it.