
In a big election year, the architects of artificial intelligence are moving against its misuse

AI companies have been at the forefront of developing transformative technology. Now they are also racing to set limits on the use of artificial intelligence in a year full of important elections around the world.

Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent abuse of its tools during elections, in part by banning their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google has also said it will restrict its AI chatbot, Bard, from responding to certain election-related requests, to avoid inaccuracies. And Meta, owner of Facebook and Instagram, promised to better label AI-generated content on its platforms so voters could more easily discern which information was real and which was false.

On Friday, Anthropic, another major AI startup, joined its peers in banning its technology from being applied to political campaigns or lobbying efforts. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violate its rules. It added that it was using tools trained to automatically detect and block disinformation and influence operations.

“The history of AI implementation has also been full of surprises and unexpected effects,” the company said. “We expect that 2024 will see surprising uses of AI systems, uses that were not foreseen by the developers themselves.”

The efforts are part of a push by artificial intelligence companies to gain control over a technology they popularized as billions of people head to the polls. According to Anchor Change, a consultancy, at least 83 elections are expected around the world this year, the largest concentration for at least the next 24 years. In recent weeks, citizens of Taiwan, Pakistan and Indonesia have voted, while India, the world’s largest democracy, holds its general elections in the spring.

It’s unclear how effective restrictions on AI tools will be, especially as tech companies move forward with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Such tools could be used to produce text, sound and images in political campaigns, blurring the line between fact and fiction and raising doubts about voters’ ability to tell which content is real.

AI-generated content has already appeared in US political campaigns, provoking regulatory and legal resistance. Some state lawmakers are drafting bills to regulate AI-generated political content.

Last month, New Hampshire residents received robocall messages discouraging them from voting in the state primary, delivered in a voice that was most likely artificially generated to sound like President Biden’s. The Federal Communications Commission last week banned such calls.

“Bad actors use AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters,” FCC Chair Jessica Rosenworcel said at the time.

AI tools have also created misleading or deceptive portrayals of politicians and political topics in Argentina, Australia, Great Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan’s elections, used an AI-generated voice to declare victory while in prison.

In one of the most important election cycles in memory, the misinformation and deceptions that artificial intelligence can create could be devastating for democracy, experts say.

“We are behind the eight ball here,” said Oren Etzioni, a University of Washington professor specializing in artificial intelligence and founder of True Media, a nonprofit that works to identify online misinformation in political campaigns. “We need tools to respond to this situation in real time.”

Anthropic said in its announcement Friday that it was planning tests to identify how its Claude chatbot might produce biased or misleading content related to political candidates, policy issues and election administration. These “red team” tests, which are often used to break a technology’s safeguards to better identify its vulnerabilities, will also explore how the AI responds to malicious questions, such as requests for voter suppression tactics.

In the coming weeks, Anthropic will also launch a trial that aims to redirect U.S. users who have voting-related questions to authoritative information sources such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its AI model was not trained frequently enough to reliably provide real-time data on specific elections.

Similarly, OpenAI said last month it plans to direct people to voting information via ChatGPT, as well as label AI-generated images.

“Like any new technology, these tools come with benefits and challenges,” OpenAI said in a blog post. “They are also unprecedented, and we will continue to evolve our approach as we learn more about how our tools are used.”

(The New York Times sued OpenAI and its partner, Microsoft, in December, alleging copyright infringement of news content related to artificial intelligence systems.)

Synthesia, a start-up with an AI video generator that has been linked to disinformation campaigns, also bans the use of the technology for “news-like content,” including false, polarizing, divisive or misleading material. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia’s head of corporate affairs and policy.

Stability AI, a start-up with an image generation tool, said it has banned the use of its technology for illegal or unethical purposes, worked to block the generation of unsafe images and applied an imperceptible watermark to all images.

Even the largest technology companies have stepped in. Last week, Meta said it was working with other companies on technology standards to help recognize when content was generated with artificial intelligence. Ahead of the European Union’s parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban potentially misleading manipulated content and require users to label realistic AI creations.

Google said in December that it would also require YouTube video creators and all election advertisers to disclose altered or digitally generated content. The company said it was preparing for the 2024 elections by preventing its AI tools, such as Bard, from answering certain election-related questions.

“Like any emerging technology, artificial intelligence presents new opportunities but also challenges,” Google said. AI can help fight abuse, the company added, “but we are also preparing for how it can change the misinformation landscape.”