With just five months until voters go to the polls, the United States may be more vulnerable to foreign disinformation aimed at influencing voters and undermining democracy than before the 2020 election, the head of the Senate Intelligence Committee said Monday.
Senator Mark Warner, a Democrat from Virginia, based his warning on several factors: improved disinformation tactics from countries like Russia; the rise of domestic candidates and groups who are also willing to spread false information; and the emergence of artificial intelligence (AI) programs that can quickly create images, audio and video that are difficult to distinguish from the real thing.
In addition, technology companies have scaled back efforts to protect users from false information, and the government's own progress in addressing the problem has been mired in debates over surveillance and censorship.
As a result, Warner said, the United States may face a greater disinformation threat ahead of the 2024 election than it did in the 2016 or 2020 presidential elections. "With 155 days until the 2024 election, we may not be as prepared as we were in 2020 under then-President Trump," Warner said.
Security officials, democracy activists and disinformation researchers observed similar activity in 2016 and 2020, and have warned for years that groups in Russia, Iran and the United States would use online platforms to spread false and polarizing content aimed at influencing the race between Trump and President Joe Biden.
Warner's assessment of U.S. vulnerability comes just weeks after senior security officials told the Intelligence Committee that the United States has greatly improved its ability to combat foreign disinformation.
However, some new challenges will make the 2024 election different from previous election cycles.
AI has already been used to generate misleading content, such as a robocall imitating Biden's voice that told New Hampshire voters not to vote in the state's primary. Deceptive AI-generated deepfakes have also appeared ahead of elections in India, Mexico, Moldova, Slovakia and Bangladesh.
Federal agencies have communicated with tech companies about disinformation campaigns, but the process has been complicated by court cases and contentious questions about the government’s role in policing political discourse.
Tech platforms have largely abandoned aggressive policies banning election misinformation. X (formerly Twitter) has dismissed most of its content moderators and taken a hands-off approach.
Last year, Google-owned YouTube changed its policy banning false election rhetoric and now allows videos arguing that the 2020 election was stolen.
Meta, which owns Facebook, WhatsApp and Instagram, prohibits posts that interfere with the electoral process and removes content tied to foreign influence operations. The company has also said it will label AI-generated content. But it has allowed political ads claiming the 2020 election was rigged, which critics said undercut its commitment.
“I’m not sure these companies have done anything meaningful other than issuing press releases,” Warner said.