Can deepfakes be good for democracy? India tries to balance risks and rewards of AI.
India’s marathon elections – which last six weeks and involve nearly a billion voters – are offering the world a glimpse into the promise and perils of rapidly advancing artificial intelligence.
A survey by cybersecurity company McAfee found that, in the last 12 months, more than 75% of Indians online have seen some form of deepfake content – video or audio that’s been manipulated through AI technology to convincingly imitate another person.
Some of this content has arguably benefited India’s democracy, including the ruling party’s use of AI tools to make Prime Minister Narendra Modi’s speeches available in different languages. But since elections began April 19, there’s been a surge of viral videos harnessing the likenesses of politicians and Bollywood celebrities, seemingly to sway voters ahead of a constituency’s polling period.
Why We Wrote This
Indian society is scrambling to respond to an uptick of political deepfakes during critical elections. Its efforts could help build a roadmap on how democracies balance the good and the bad of artificial intelligence.
Last week, for example, a doctored video of India’s home minister promising to dismantle benefits to lower castes sparked a political firestorm and led to the arrest of several people allegedly involved in its creation.
The escalating frenzy surrounding these deepfakes prompted a lawyers’ consortium to petition the Delhi High Court to compel India’s Election Commission to ban the use of deepfake technology in political communications – a petition the court declined this week. Elsewhere in India, individuals, tech companies, and fact-checking organizations are stepping up to try to manage the crisis. Experts say their efforts could help other countries preparing for elections.
“We have to grow resilient in the face of these new tools,” says Josh Lawson, director of AI and democracy at the U.S.-based Aspen Institute. “AI tools can be used to bridge language divides and to reach a broader audience with vital civic information. But the same tools can be used to try and manipulate voters at key moments.
“Civil society groups, campaigns, regulators, and the voters all have a role in defending fact-based reality,” he adds.
India’s AI experiment
For years, Indian political parties have been embracing AI technology to improve voter outreach. During the most recent campaign season, many partnered with digital production companies to deploy highly personalized audio messages or videos – including some featuring deceased party leaders encouraging people to vote for certain candidates.
As the demand for such content grew, these companies formed a coalition to create guidelines for ethical AI usage that respects human rights, privacy, and transparency. Their standards, for example, prohibit content that defames opposition leaders and require videos to have a clear stamp noting they’re AI-generated.
Senthil Nayagam, founder of the southern Indian tech startup Muonium AI and a founding member of the coalition, says the effort fosters responsible practices and helps build public trust. “Ethical coalitions can work proactively to understand and mitigate the negative impacts of AI on society, such as job displacement, bias in AI systems, and other social and economic consequences,” says Mr. Nayagam.
Yet as free AI tools become more available, some argue that companies are not equipped to contain the technology or regulate its usage. Indeed, the recent viral deepfakes do not adhere to any ethics guidelines, and it’s this kind of overtly deceptive content that’s alarming rights groups and voters alike.
Turning discussion into action
This year, the World Economic Forum designated manipulated and falsified information as the most pressing short-term risk facing the world today. India is among the nations most concerned with the proliferation of misinformation, which is exacerbated by the widespread use of generative AI and has the potential to destabilize democracies, according to the Forum’s Global Risks Report 2024.
That report inspired seasoned political strategist Sagar Vishnoi to team up with policy consultant Pranav Dwivedi to educate public servants on the risks of deepfakes. The childhood friends both have experience with AI, but only recently did they begin discussing the immense threats generative AI technology can pose to society.
“We are living in an era where technological advancements have blurred the lines between reality and fabrication,” says Mr. Vishnoi. “After multiple brainstorming sessions, we decided to launch the training workshops.”
Their first workshop, held April 5 and 6 in the Shravasti district of the northern state of Uttar Pradesh, drew a diverse group of over 150 attendees, including dozens of police officers, district judges, and senior administrators. Many were AI novices.
The two three-hour sessions delved into the intricacies of fake news and manipulated videos, using real-life examples and potential election scenarios to illustrate their impact. The group practiced scrutinizing media for visual and auditory cues that suggest manipulation. Attendees were also taught to use subscription-based tools such as Deepware and InVID, with Mr. Dwivedi and Mr. Vishnoi emphasizing the importance of verifying information against credible sources.
The workshops – designed by Mr. Vishnoi and Mr. Dwivedi, paid for by the local government, and hosted by Mr. Dwivedi’s organization, Inclusive AI – feature legal, AI, and cybersecurity experts.
“There are so many anti-misinformation campaigns around the world, and I am trying to learn more from them,” says Mr. Vishnoi, listing off a slew of AI projects he’s following. “They are all trying to make citizens aware about the digital threat.”
Scheduling workshops with administration officials has been challenging during elections, but Mr. Vishnoi says multiple states have requested training, and the duo’s next event – with a women-focused group near Delhi – is on the books for next week. They are also inviting banks to participate in future events, given the industry’s susceptibility to financial scams.
Meanwhile, other groups are working to arm the general public with tools to combat deepfakes.
Empowering voters
Last month, the Misinformation Combat Alliance – a coalition of media, tech, and fact-checking organizations – launched the Deepfakes Analysis Unit (DAU), a resource that allows anyone to forward a piece of media to a WhatsApp number and receive an expert assessment of its authenticity.
This initiative, which supports multiple languages including English, Hindi, Tamil, and Telugu, leverages a diverse network of academics, researchers, tech platforms, and fact-checkers. The reports aren’t immediate, but while users wait, they can browse past assessments on the coalition’s website or join the DAU WhatsApp channel for updates.
During its launch, journalist and DAU head Pamposh Raina described the project as “India’s first tipline to help citizens discern between real and synthetic media” and said it will focus on “audio and video that could have the potential to mislead people on matters of public importance, and could even cause real-world harm.”
It’s a concern that resonates around the world, including in the United States, which holds elections Nov. 5.
According to a recent Polarization Research Lab report, 49.8% of Americans expect AI to have negative consequences for the safety of elections, 65.1% are worried that AI will harm personal privacy, and 50.1% anticipate elections becoming less civil due to the use of AI.
In the coming weeks and months, India could offer lessons on how to mitigate that damage – and even how to harness AI for good.
“The late date of U.S. elections means their citizens benefit from a year of learning abroad,” says Mr. Lawson, from the Aspen Institute.