
In Big Election Year, A.I.’s Architects Move Against Its Misuse

Artificial intelligence companies have been at the vanguard of developing the transformative technology. Now they are also racing to set limits on how A.I. is used in a year stacked with major elections around the world.

Last month, OpenAI, the maker of the ChatGPT chatbot, said it was working to prevent abuse of its tools in elections, partly by forbidding their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would restrict its A.I. chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, which owns Facebook and Instagram, promised to better label A.I.-generated content on its platforms so voters could more easily discern what information was real and what was fake.

On Friday, Anthropic, another leading A.I. start-up, joined its peers by prohibiting its technology from being applied to political campaigning or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any users who violated its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.

“The history of A.I. deployment has also been one full of surprises and unexpected effects,” the company said. “We expect that 2024 will see surprising uses of A.I. systems — uses that were not anticipated by their own developers.”

The efforts are part of a push by A.I. companies to get a grip on a technology they popularized as billions of people head to the polls. At least 83 elections are anticipated around the world this year, the largest concentration for at least the next 24 years, according to Anchor Change, a consulting firm. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted in national elections.
