Digital Truth Crisis: AI, Elections, and the Battle for Reality in 2024

Explore how AI-powered misinformation is transforming global politics in 2024. From celebrity deepfakes to election manipulation, discover the tools and tactics behind digital deception. We examine real-world cases, reveal the psychology of belief, and provide practical strategies to combat false information. Learn why traditional fact-checking isn’t enough and how societies can maintain truth in an era of artificial intelligence. Essential listening for anyone concerned about democracy in the digital age.


Transcript

This show focuses on the impact of AI-powered misinformation and disinformation on political elections. Since the 2016 US Presidential election, we’ve been podcasting about the intersection of technology, social media, and politics. Over time, we’ve seen the problem expand and the stakes rise.

Whereas in 2016 and 2020 we could only speculate about AI’s potential to transform politics, by 2024 AI-powered misinformation had seeped into the political mainstream. In September 2024, global superstar Taylor Swift posted to her 284 million followers on Instagram about her experience with AI and her subsequent endorsement of Vice President Kamala Harris for the Presidency.

This was more than a political endorsement by a popular figure. This was Taylor Swift reclaiming her identity from AI-powered misinformation.

She wrote, and I quote: “Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site. It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.” This isn’t Taylor Swift’s first encounter with fraudulent AI. Before this latest blow-up, sexually explicit and abusive deepfakes of Taylor Swift were shared widely on the Internet. How this struggle between tech power and star power eventually lands, we still don’t know.

But it’s clear that 2024 marked the year when AI became embedded in political culture on a global scale. What’s more, AI-generated misinformation isn’t a problem only for superstars. The same capabilities apply down to the individual: whether it’s misinformation originating from a vengeful spouse, an upset customer, or just a garden-variety asshole, anyone with a smartphone and malicious intent can create and distribute high-res propaganda.

So our purpose is to help you partially inoculate yourself against AI-enhanced disinformation and misinformation. We’ll start with the stakes, which are global. We’ll also explore the various types of AI output and how they’re used to direct behavior.

Then we’ll look within, to what psychology and professional magicians tell us about why people believe anything. We’ll finish by asking what Big Tech and public officials are doing to combat unreality, and finally we’ll offer a few decent rules of thumb to help you navigate a confusing mess.

Part 1: How will societies handle misinformation that’s generated and amplified by AI?

In 2024, the citizens of more than 50 countries, representing half the population of planet Earth, cast a ballot of some kind. Take a step back and think about how much of this planet runs on voting of some type. Many leaders, whether in politics, economics, or culture, derive their legitimate power or influence from some form of vote.

From a formal vote for a political candidate to an audience vote for a music idol, a market research survey, a poll and many other variations, the act of voting – for anything – has become pervasive in modern life. There are huge incentives to meddle, which provides entrée for disinformation and misinformation.

Russia used video deepfakes to attack Moldova’s pro-Western President Maia Sandu, who was falsely depicted endorsing a pro-Russian political party as well as calling for a ban on Rosehip tea. In Slovakia, audio deepfakes portrayed a leading candidate for the Liberal Party as being in favor of raising taxes on beer, which has been a bad idea politically for thousands of years.

Simultaneously, deepfakes have been used to polish the reputations of those in power, as in Indonesia, where an AI photo app encouraged followers of the incumbent candidate to take a selfie and insert their image shaking hands with the politician to circulate on social media. And to complete the circle, you now have politicians blaming AI to cast doubt on authentic images or audio that depict them in an unfavorable light.

The end result is an erosion of a shared sense of objective reality around virtually anything we experience in the public square. In the 20th century, US Senator Daniel Patrick Moynihan famously said that democracy means you have the right to your own opinions, but you do not have the right to your own facts.

However, the spread of false information, whether through doctored videos, deepfakes, or misleading content, undermines this founding premise of democratic society. So let’s turn to disinformation and misinformation as concepts. Misinformation is any false information, regardless of intent. It includes an honest mistake, a misunderstanding of a fact, or a quote taken out of context. Disinformation is created and spread intentionally to confuse and mislead. It has a specific agenda and is usually part of a coordinated campaign rather than a one-off communication. For example, you might tell people that an election is on the wrong date. Most experts would call that misinformation, because there’s no clear motive behind the false information, and it’s pretty straightforward to correct something binary like a true-or-false date.

On the other hand, there have been disinformation campaigns in the US telling voters to vote from home, which sounds reasonable but is illegal in all 50 states. It’s disinformation because it’s patently false, it’s designed to sound like voting by mail, which is legal, and its agenda, by design, is to disqualify votes and disenfranchise a group of voters.

The AI foot soldiers for disinformation and misinformation are bots, cyborgs, deepfakes, shallow fakes, and sock puppets, which we’ll explore starting with bots. Bots are autonomous programs that run social media accounts to spread content without human involvement. Bots became mainstream during the 2016 US election when nearly 1 in 5 political tweets originated from an automated account.

Cyborgs, on the other hand, spread false information with a human touch. This is where you have a human operator of a set of bot accounts who might answer an audience query or tweak the content in some way to deflect the countermeasures used by platform providers to flag fake accounts. Cyborgs are more expensive to operate than bots and tend to be part of coordinated disinformation campaigns. An important point about bots and cyborgs is they are mainly focused on distributing false information rather than authoring it.

Deepfakes, shallow fakes, and sock puppets focus more on the storytelling side of the equation. With deepfakes, you have AI-enhanced visuals and audio that depict a famous figure saying or doing something they would never do in real life. It’s false information from the get-go.

Shallow fakes, on the other hand, take information out of context to depict the speaker unfavorably. This might involve intentionally slowing down a video of a politician answering a question to make her seem like she’s slurring her words, or selecting and doctoring a short clip to depict confusion or ignorance about a question. So a deepfake creates false information, while a shallow fake twists actual information. Then there are sock puppet accounts, where a famous person uses a pseudonym to attack critics or praise themselves. Several politicians have been caught using these accounts to push their agenda while making it appear like a grassroots effort to support a candidate. Doubtless, this crude taxonomy will grow over time. But the effect is clear: voters are choking on AI-powered virtual pollution that makes it difficult to trust anything they encounter online.

But the current misinformation and disinformation mess isn’t just a technology problem. Human biology and culture play a role too. So let’s turn to the behavioral aspects that prime people to believe and act on false information.

Regardless of the technique, the psychological target for misinformation and disinformation is the gap between our intellectual beliefs about what is true or false and our emotional beliefs about what we want to be true or false. The human mind often creates cause-and-effect relationships between events even when objective evidence suggests no relationship exists. This is the playground for misinformation and disinformation: our human willingness to accept initial explanations that sound plausible even if they’re not factually true.

A case in point is the term “Deep State,” which is difficult to define but easy to believe in for large swaths of the US population. It’s too easy for people outside a given bubble to point at another group’s beliefs and say, “those people are gullible.” Deception professionals thrive on that type of bias, because certainty that your political opponents are mentally deficient opens the door to appeals aimed at your own blind spots. The difference now is that AI gives deceptive content a production value that makes it very hard to distinguish from valid content. Professional deception does more than just target our senses. It also targets our recall, because human memory is dynamic rather than static. We often think of memory as recording and retrieving data, like a computer. It’s why eyewitness testimony is given serious weight in court cases.

But neuroscience and psychology strongly suggest our memories are reconstructions and remixes of what we’ve experienced. Memories aren’t cumulative like a set of records; they’re combined with our existing biases and emotional drivers. Show the exact same image to two political partisans and it will be encoded differently in each of their memories. Which is another way of saying that we don’t remember what we experienced; we remember what we believe we experienced. The stronger the belief, the more malleable the memory. That’s a big reason why disinformation and misinformation work for advancing a political agenda. Once you get people oriented to believe something, like vaccines are dangerous, their willingness to accept information that validates that belief grows in parallel with their resistance to information that challenges it.

If you look at the causes listed on death certificates in the United States, you stand a significantly greater chance of being trampled by a cow than eaten by a shark. But if you survey people about their fears of mortal danger, they believe shark attacks are more worth worrying about. Which is why you won’t see the Discovery Channel launching a multi-million-dollar advertising campaign for Cow Week. Con men, conjurers, and professional magicians have long practiced the art of directing people’s attention to the wrong thing at the right time in order to pull off an illusion. The difference now is that professional deceivers have the power of AI to create a compelling illusion and the power of social media to distribute that illusion as broadly or as precisely as necessary to achieve their objective.

And all the while, Big Tech and policymakers are debating who should monitor and potentially arbitrate political speech. Tech and social media companies argue for self-regulation. In the US, Section 230, enacted as part of the 1996 Telecommunications Act, is quoted like Holy Scripture throughout Silicon Valley. It’s a piece of regulation that insulates the owners of networks and technology platforms from liability for the content that flows through their systems. The original intent of Section 230 in 1996 was to level the playing field between the fledgling Internet and the incumbent telecommunication providers. Fast forward more than two decades, and the market capitalization and power of Internet and social media providers dwarf those of the raw telecommunications carriers.

What’s more, the advertising-based business model that built the Internet and social media rewards content that drives engagement, not necessarily content that’s true. Whereas in 2016 most of the disinformation meddling in the US election came from the Russian state, by 2020 you had massive troll farms operated by Macedonian hackers whose purpose was to siphon advertising dollars from Google, Facebook, Twitter, and other platforms. Make no mistake: fake news is big business.

But that leaves democratic societies with a conundrum. The platform providers have the resources but not the incentives to attack the digital sludge coming across their networks. Moreover, it’s not clear whether the public interest is served by private actors policing speech in the public square.

Conversely, public authorities have the incentives but few of the resources to do the same job. And all the while, what differentiates misinformation and disinformation from good old-fashioned political spin remains a moving target.

Outside of the US, you have governments, like Australia’s, considering legislation that puts platform providers on notice: unless they take proactive measures to curb disinformation and misinformation online, the government will levy a hefty fine of up to 5% of global revenue. For their part, social media providers threaten to pull out altogether. Given that 90% of Australians have a Facebook account, this is not a trivial threat.

The bottom line is that digital pollution is becoming as real and as costly as physical pollution. Moreover, the idea of a pollution-free digital world is equally unrealistic. Democratic societies attempt to balance the benefits of new technologies or resources with the pollution they generate. Whether we’re talking about environmental regulations for power plants, safety standards for aircraft, water quality, electrical codes and the like, our modern world is replete with this balancing act between the benefits and costs surrounding innovation.

And yet, for almost 30 years, we’ve said the digital world is different. Now the impacts of AI-powered, network-delivered human nature expressing itself, good, bad, and ugly, are challenging the very foundation of democratic society: the idea of a shared sense of reality.

So what’s a citizen to do? Well, we believe that if you just try to chase the technologies behind disinformation and misinformation, you’re going to be outrun every time. It’s more important to look at how disinformation and misinformation work on the human psyche. Understand that our ability to distinguish between reality and illusion depends on two things: our belief about what reality should look like and the strength of the illusion.

Even though we say seeing is believing, neuroscience and psychology suggest we initially see what we already believe. Rather than look for extra fingers or slurred speech in a piece of political content, look more closely at the emotional outcome the content is trying to achieve. I’m not saying all negative political content is fake.

Indeed, for better or worse, a big part of politics is persuading people how wrong the other candidate is as much as how right your candidate is for the future. If something sounds too good or too bad to be true, it probably is, but even that’s not guaranteed. Truth has consistently beaten out fiction for strangeness.

After you do an emotional response check, the next steps are pretty straightforward. Check sources; consume a variety of content, including content that challenges your beliefs; and make sure your commitments to a given cause grow in stages rather than jumping all in immediately. And most of all, cultivate a healthy skepticism.

After all, I’m confident that nearly everyone hearing this has been solicited by email to split $20 million with an overseas prince who just needs a little help arranging the transaction.

If AI is going to be part and parcel of our daily life, it’s going to be part of our politics. AI is mainstreaming right at the point when people know their lives are changing rapidly and irrevocably, but they don’t have a driving narrative for why the world works the way it does. It’s this gap between what we experience and what we want to believe that provides an entrée for disinformation and misinformation. The public sector, the tech industry, and the public will be forced to examine some of their deepest-held assumptions about the digital world and its place in our politics and in our society.