Here’s OpenAI’s big plan to combat election misinformation

Yesterday TikTok presented me with what appeared to be a deepfake of Timothée Chalamet sitting in Leonardo DiCaprio’s lap, and yes, I did immediately think, “if this stupid video is that good, imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing, and today it updated its policies to begin to address the issue.

The Wall Street Journal noted the policy changes, which were first published to OpenAI’s blog. People using or building on ChatGPT, DALL-E, and other OpenAI tools are now forbidden from using them to impersonate candidates or local governments, and they cannot use OpenAI’s tools for campaigns or lobbying, either. Users are also not permitted to use OpenAI tools to discourage voting or misrepresent the voting process.

In addition to firming up its policies on election misinformation, OpenAI also plans to incorporate the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials into images generated by DALL-E “early this year.” Microsoft, Amazon, Adobe, and Getty are also currently working with C2PA to combat misinformation spread through AI image generation.

The digital credential system would encode images with their provenance, effectively making it much easier to identify artificially generated images without having to look for weird hands or exceptionally swag fits.
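
To make that idea a little more concrete, here’s a minimal sketch of what checking for embedded provenance data could look like. It only detects whether a JPEG appears to carry a C2PA manifest (which the spec embeds in APP11/JUMBF marker segments); it does not validate any cryptographic signatures, and the script and its function name are illustrative, not part of any OpenAI or C2PA SDK.

```python
import struct
import sys


def has_c2pa_manifest(path):
    """Scan a JPEG's marker segments for an APP11 (0xFFEB) segment that
    carries a C2PA/JUMBF manifest. Detects presence only -- it does not
    verify the manifest's cryptographic signatures."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # SOI marker missing; not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with the marker structure
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI or start-of-scan: no more metadata segments
            break
        # Segment length is big-endian and includes the two length bytes themselves
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 segment with a C2PA label
            return True
        i += 2 + length
    return False


if __name__ == "__main__":
    target = sys.argv[1]
    print("C2PA manifest found" if has_c2pa_manifest(target) else "No C2PA manifest found")
```

In practice you’d hand verification off to a proper C2PA implementation, which also checks the signing chain; the point of the sketch is just that the provenance travels inside the file itself rather than relying on anyone eyeballing the pixels.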

OpenAI’s tools will also begin directing voting questions in the United States to CanIVote.org, which tends to be one of the best authorities on the internet for where and how to vote in the U.S.

But all these tools are still in the process of being rolled out, and they’re heavily dependent on users reporting bad actors. Given that AI is itself a rapidly changing tool that regularly surprises us with wonderful poetry and outright lies, it’s not clear how well this will work to combat misinformation during election season. For now, your best bet will continue to be embracing media literacy. That means questioning every piece of news or image that seems too good to be true and at least doing a quick Google search if your ChatGPT query turns up something utterly wild.
