OpenAI announces plans to combat misinformation in 2024 elections

esteria.white

With elections expected to take place in more than 50 countries in 2024, the threat of disinformation is a top priority.

OpenAI, the developer of AI chatbot ChatGPT and image generator DALL-E, has announced new measures to prevent abuse and misinformation in the run-up to this year’s big elections.

In a January 15 post, the company announced that it is working with the National Association of Secretaries of State (NASS), the oldest nonpartisan professional organization of public officials in the United States, to prevent ChatGPT from being used for disinformation purposes ahead of the US presidential election in November.

For example, when asked questions about the election, such as where to vote, OpenAI’s chatbot will direct users to CanIVote.org, the authoritative website for information about voting in the United States.

“The lessons from this work will inform our approach in other countries and regions,” the firm added.

Fighting deepfakes with cryptographic watermarking

To combat deepfakes, OpenAI also announced that it will implement digital credentials from the Coalition for Content Provenance and Authenticity (C2PA) for images generated by DALL-E 3, the latest version of its AI-powered image generator.

C2PA is a project of the Joint Development Foundation, a Washington-based nonprofit organization that aims to combat misinformation and manipulation in the digital age by implementing cryptographic content provenance standards.

Its main initiatives are the Content Authenticity Initiative (CAI) and Project Origin.
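Content provenance schemes like C2PA work by cryptographically binding a signed claim (identifying the generator and a hash of the asset's bytes) to the media file, so any later edit breaks the binding. The Python sketch below is purely illustrative, not the real C2PA mechanism: it uses an HMAC over a toy JSON manifest in place of C2PA's certificate-based signatures and embedded manifest structure, but it shows the core idea.

```python
import hashlib
import hmac
import json

# Illustrative only: a real C2PA manifest is a certificate-signed structure
# embedded in the image file. This toy version binds a signature to the
# image hash so that any edit to the image invalidates the manifest.

SECRET_KEY = b"issuer-signing-key"  # stand-in for an issuer's private key


def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Create a toy provenance manifest for an image."""
    claim = {
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the image bytes are unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return manifest["claim"]["asset_sha256"] == hashlib.sha256(image_bytes).hexdigest()


image = b"\x89PNG...fake image bytes"
manifest = make_manifest(image, "DALL-E 3")
print(verify_manifest(image, manifest))          # True: intact image verifies
print(verify_manifest(image + b"x", manifest))   # False: edited image fails
```

The key property, as in C2PA proper, is that verification fails closed: a tampered image or a forged manifest both produce a mismatch rather than a silent pass.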

Several major companies, including Adobe, Microsoft, Intel and the BBC, are members of the coalition.

Finally, OpenAI said it was experimenting with a provenance classifier, a new tool for detecting images generated by DALL-E.

“Our internal testing showed promising early results, even when images were subjected to common types of edits. We plan to make it available soon to our first group of testers – including journalists, platforms, and researchers – for feedback,” the company said.

Google DeepMind has developed a similar tool to digitally watermark AI-generated images and audio with SynthID. Meta is also experimenting with a similar watermarking tool for its image generator, although Mark Zuckerberg’s company has shared little information about it.

A step in the right direction

Speaking to Infosecurity, Alon Yamin, co-founder and CEO of AI text analytics platform Copyleaks, welcomed OpenAI’s commitment to combating misinformation, but warned that it could be difficult to implement.

“As we approach this election year, considered one of the most important in recent history, not just in the United States but around the world, there are many concerns about how AI will be misused for political campaigns, and this concern is fully justified. It is therefore encouraging to see OpenAI taking the first steps to curb potential abuses of AI. But as we have seen with social media over the years, these measures can be difficult to enforce given the sheer size of the user base,” he said.

In the United Kingdom, where the next general election is expected to take place between mid-2024 and January 2025, the Information Commissioner’s Office (ICO) has launched a series of consultations on generative AI on January 15.

The first chapter of the consultation is open until March 1.
