
From Fake News to Fake Views


Will We Let Bot Armies Destroy Our Democracies?


The internet, and particularly social media, is largely fake.


While most people know that their Instagram and Twitter feeds and the comment sections of media sites are infested with bots, few seem to understand just how widespread this problem is, how damaging and dangerous it is, and how it’s about to get far, far worse.


In 2016, fake news took the internet by storm, convincing hundreds of millions of people to believe stories like “Pope Francis endorses Donald Trump for President”. It was highly effective because, as research showed, most people of all political stripes were fooled, and there were few mechanisms to deal with it. The scale of the problem was vast - in some countries, like Brazil, views of fake news on social media outnumbered views of real news. And its sources varied, from massive government troll operations to a budding global cottage industry in Macedonia.


Since then, a host of efforts have massively reduced the problem of outright fake news and misinformation. I’m deeply proud of having led the effort that developed the model for correction labels on Facebook and successfully advocated for their adoption. Our collective success against the vast challenge of fake news should give us hope.


But we need to start by recognizing just how vast the problem of “Fake Views” is.


The Staggering Scale of the Problem


The excellent “Bad Bot Report” by Imperva is a good place to start. About 40% of all internet traffic is created by malicious bots. This is a staggering statistic. Given estimates that the internet itself is responsible for a sizeable share of global carbon emissions, it implies that “Fake Views” alone may account for perhaps 3% of the world’s total.


Malicious bots are what they sound like. They are not bots that serve a legitimate purpose, like indexing the internet for Google. They are designed for deception, misinformation and fraud.


Social media is the epicentre of the fakery. Way back in 2020 (an eternity given how fast Fake Views are growing), a study of 12 million Instagram accounts by HypeAuditor found that 55% had large numbers of fake followers and fake engagement, rising to 66% among accounts with large followings. This fake activity was provided by the 45% of all Instagram accounts that were themselves fake.


Facebook admits that 5% of its users are bots (many experts call this an underestimate), and Twitter also admits to 5% fake accounts, though Elon Musk famously cited research suggesting the number was closer to 20%. But what matters is not how many fake accounts there are, but how much of the activity on social media - the activity that drives the algorithms and the prominence of content - is fake. All signs suggest that bots are far more active than authentic users. Fake Facebook accounts have 7 times as many friends as real ones and are nearly 100 times more likely to abuse powerful tools such as highlight tagging.


In 2022, two scholars at Washington University gained access to 6 weeks of data on Twitter accounts (big tech guards its data closely). They wrote an algorithm that could identify inorganic users with 98% accuracy. Using it, they demonstrated that, depending on the topic discussed, between 25% and 68% of all content on Twitter was produced by bots. Various studies suggest that somewhere between a third and a half of all tweets are manipulated, inorganic content.
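

To make this concrete, here is a minimal sketch of what an account-level bot classifier can look like. The features, data and model below are hypothetical illustrations; the scholars’ actual method is not public.

```python
# Illustrative sketch of an account-level bot classifier. The features and
# data below are hypothetical; the model described above is not public.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [posts_per_day, followers_per_following, account_age_days,
#            share_of_posts_that_are_retweets]
X = np.array([
    [400, 0.01,  30, 0.95],   # metronomic amplifier account
    [  5, 1.20, 900, 0.10],   # ordinary organic account
    [250, 0.05,  60, 0.90],
    [  2, 0.80, 400, 0.05],
])
y = np.array([1, 0, 1, 0])    # 1 = inorganic, 0 = organic

clf = LogisticRegression().fit(X, y)

candidate = np.array([[300, 0.02, 45, 0.92]])
print(clf.predict_proba(candidate)[0, 1])   # estimated probability of 'bot'
```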


Much of the fakery has to do with commerce, of course. One common fraud scheme involves getting fake accounts to click on paid advertising: the advertisers pay Google for each click, Google shares a portion with the site where the click occurred, and the site in turn pays the click farmer. This fraud alone is estimated by the firm Polygraph to account for 11% of all Google ad clicks and $96 billion per year in fraudulent earnings. That’s 3 times the estimated profits from global human trafficking - and it’s just one of many types of fraud.
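

To make the money flow concrete, here is a back-of-envelope sketch of the scheme’s per-click economics. Every rate and volume in it is an assumption for illustration, not a reported figure.

```python
# Back-of-envelope sketch of the per-click money flow described above.
# Every rate here is an illustrative assumption, not a reported figure.
cost_per_click = 2.00      # what the advertiser pays per click (assumed)
publisher_share = 0.68     # portion the ad network passes to the site (assumed)
clickfarm_cut = 0.50       # portion the fraudulent site pays the farm (assumed)

site_per_click = cost_per_click * publisher_share
farm_per_click = site_per_click * clickfarm_cut

fake_clicks_per_day = 100_000   # a single farm's assumed daily output
print(f"Fraudulent site earns ${site_per_click * fake_clicks_per_day:,.0f}/day")
print(f"Click farm earns ${farm_per_click * fake_clicks_per_day:,.0f}/day")
```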


Much of this activity has been driven out of click farms, mostly in developing countries where electricity and old smartphones are cheap. A click farmer will wire up hundreds of old smartphones to a single computer and then click on, like, comment on, or share posts and ads for pay. Just one such farm, raided in Thailand, was found to hold half a million SIM cards.


Unsurprisingly, organized crime has invested massively in the fake web. One network of Chinese triads has kidnapped and trafficked over 200,000 highly educated, often English-speaking citizens to work as slaves in scam shops in Myanmar, Thailand and Cambodia. Each slave runs multiple avatars that pursue romance scams and “pig butchering” operations, in which an online relationship and confidence is built with a target and then used to extract money. The officially reported losses to these scams in the US and UK alone are in the billions, but the actual number is certainly far higher, as many victims are too ashamed to report and know the police can do very little about it.


All of this has profound implications for commerce, competition, and our culture and society. For starters, the algorithms that now curate 4 hours per day (and counting) of human consciousness have been deeply shaped by fraudulent behaviour. They do not operate on an accurate picture of what humans are like and what we want. Perhaps that is part of why social media still seems so off-putting to so many. For another, our current AIs are being trained on the data of the current internet - and, at this point, by a society that has itself been profoundly shaped by fake views. That may prove to be Fake Views’ most lasting legacy.


The Threat to Democracy


After money, perhaps the second most powerful driver of fake views has been power. And this may be its most devastating impact. Extensive research shows that fake views have been deployed at scale for highly effective political influence operations.


The most absurd objection I’ve heard to this concern is that people don’t get their political views from the internet. It’s very clear that exposure to even a single piece of content - a video or an ad - can ‘swing’ a small percentage of people to a certain point of view. In my work we run randomized controlled trials with such pieces of content all the time. Political digital advertising, and digital advertising more generally, is a trillion dollar business for one reason: because it works.


But what works far more powerfully than ads is social proof. The belief that others - your peers - believe something is one of the most powerful shapers of our own beliefs and actions. This is what fake views (and comments and posts and reviews) accomplish. Humans have a powerful set of social biases, such as social conformity bias, in which our brains even try to anticipate precognitively (before conscious thought) what our ‘tribe’ thinks so that we may think the same. We are social animals, and today’s Fake Views firms have made an art form of manipulating and architecting ‘herd behaviour’.


And there is plenty of evidence that this power is being weaponized at scale for political purposes. The bot armies are particularly active at particular times, on particular issues, and in particularly powerful places. The Imperva Bad Bot Report shows that bots are far more active in the US, for example, than in countries like Sweden, and that they target particular industries like law and government. Bad bot activity on Twitter spiked in climate change discussions just before Donald Trump withdrew the US from the Paris climate agreement. And a study by The Times and Swansea University showed that 6,500 Russian-controlled Twitter bots were active in supporting Jeremy Corbyn and denigrating the Tories in the UK election.


In 2022, researchers from the US State Department reported over 1,500 YouTube channels, 562 WhatsApp groups, 62 websites and over 1,000 Twitter accounts that appeared to be artificially amplifying President Nayib Bukele’s narratives in El Salvador. This is an extraordinary amount of reach in a country of just 6 million people.


More recently, the fingerprints of bad bots have been all over surprise authoritarian populist election victories, from Marcos in the Philippines to Prabowo in Indonesia.


And such influence campaigns need not be targeted at the general public - they can be part of sophisticated efforts to attack individuals and institutions. The Women’s March after Donald Trump’s election win in 2016 offered a massive coalescing of opposition forces, but its leadership fell apart in acrimony. Only years later did an investigation by the New York Times discover that the accusations of racism and antisemitism that tore apart the march’s leadership first emerged on Twitter, from Russian-controlled bot accounts. In Kenya, bot manipulation was twinned with a sophisticated hacking and impersonation effort (in which hackers secretly wrote emails between the hacked accounts of senior journalists and other regime figures) that appeared to successfully spark election violence.


A relatively small circle of firms is responsible for much of the high-level political Fake Views activity. Many are owned and run by former Israeli national security specialists, most notoriously people like Tal Hanan and Joel Zamel. They cycle regularly through different shell companies and control vast networks of bots and avatars across the world. Everyone has heard of the Russian St. Petersburg troll factory, but fewer know about these much larger and more automated commercial operations. These firms will charge many millions of dollars to get even an unlikely candidate elected, and they typically deliver.


Digital comms leads for political campaigns tell of a battle of the bots, in which positive and negative narratives about candidates are advanced and suppressed through fake posts, comments and pile-ons, often deployed within minutes of events like presidential debates.


But it’s almost always an unfair fight, because liberal democratic politicians in most countries are extremely reluctant to use these deceptive and manipulative tools, while authoritarian populists use them with abandon. The bot armies are a weapon of mass democratic destruction, partly because only the authoritarians appear willing to use them at scale.


Why it’s all about to get much worse


As bad as the Fake Views problem is, it’s rapidly getting far, far worse.


Fake views were already largely automated, with varying degrees of sophistication. But the AI revolution offers a whole new order of sophistication. Before, simply clicking on an account could give you some sense of how genuine it was - there were many telltale signs, like oddly focused posting or artificial-sounding comments that closely mirrored many others. With AI-powered bots, whole personalities, communities and environments can be created. Fake Views has always worked on a “fake it till you make it” model: operations start out fake and gradually draw in organic users as they reach scale. With AI, the fakery is vastly more effective, and able to generate organic activity far more rapidly.


For example, AI-driven bots can message and converse at length with individual organic users, developing relationships - even, for example, romantic flirtations. They are far harder to track and disrupt, even if big tech wanted to, because unlike older bots they can mimic the word frequencies of human vocabulary and be active at the same times of day, and at the same rates, as humans. And they can learn and adapt from their interactions, becoming ever more effective in their malicious purposes.
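

To see why this defeats older defences, consider the kind of simple distributional test that used to catch metronomic bots. In this sketch (with synthetic data), a classic bot fails the test while an AI bot that mimics human timing passes it.

```python
# Sketch: a distributional test that catches older bots but not AI mimics.
# All data here is synthetic; the scales and sizes are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

human_gaps   = rng.exponential(scale=3600, size=500)   # bursty, irregular posting
old_bot_gaps = rng.normal(loc=600, scale=5, size=500)  # metronomic posting
ai_bot_gaps  = rng.exponential(scale=3600, size=500)   # timing mimics humans

# Compare each account's inter-post intervals to a known-human baseline.
print(ks_2samp(human_gaps, old_bot_gaps).pvalue)  # ~0: flagged as inorganic
print(ks_2samp(human_gaps, ai_bot_gaps).pvalue)   # typically large: passes
```

Real detection combines many such signals - vocabulary, timing, network structure - which is exactly what AI-driven accounts are now learning to satisfy simultaneously.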


All signs already point to an explosion of AI bot usage. In 2023 there was a 400% increase in scraping activity on the internet - bots gathering data about users. This is likely driven in part by the fact that training AIs to imitate users requires vast amounts of data about them.


To illustrate the potential of this kind of capability, imagine the following scenario…


Putin has long sought to weaken the West by sowing discord and acrimony and by supporting extremists on all sides of the political spectrum. For the artists of this kind of disruption, the science of politics is psychology. And if weakening the mental health, psychosocial health and resilience of individuals is core to the plan, psychology suggests that people’s primary attachment bond is the most powerful source of that resilience and health. If you can profoundly disrupt personal relationships, marriages and families, it may do more than any political campaign to weaken a society and promote general strife. Already, millions of men have fallen in love with a transparently artificial bot named Replika. How many marriages and relationships might an AI-powered bot army of avatars be able to disrupt? Already, 60% of divorce proceedings records in the US mention Facebook.


And what kinds of popular culture justifications might the bot armies promote to weaken families? This may not be just a thought experiment. We are already seeing an unprecedented, sharp and accelerating political divergence between young men and young women in Generation Z, across the entire world, starting just 5 years ago. The rise of gender tribalism - from ‘critical theory feminism’ in the MeToo movement on the left to various incarnations of a toxic masculinity movement on the right - is exactly the kind of cultural shift that Kremlin social media bot armies have been pushing for nearly a decade. Having successfully exploited and amplified racial and various other divisions in the West, they have now turned to gender.


So what do we do? Verified Identity.


One solution above all would help rein in Fake Views. It’s as fundamental as the ‘corrections’ that were introduced by Meta and others to contain fake news. That solution is verified identity. If social media users were required to validate their identity with a government-issued ID document - with unverified accounts blocked, or massively deprioritized in the algorithm - the bot armies would die overnight.
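

A minimal sketch of what such a gate could look like inside a platform’s ranking pipeline, with a hypothetical id_verified flag standing in for a real government-ID check:

```python
# Sketch of identity-gated ranking. The id_verified flag is a hypothetical
# stand-in for a real government-ID verification step.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    id_verified: bool  # True once a government-ID check has passed (assumed)

def rank_weight(account: Account, base_score: float,
                unverified_penalty: float = 0.01) -> float:
    """Massively deprioritize content from unverified accounts."""
    return base_score if account.id_verified else base_score * unverified_penalty

print(rank_weight(Account("real_person", True), 100.0))   # 100.0
print(rank_weight(Account("bot_4471", False), 100.0))     # 1.0
```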


The main objection to this has been that it removes anonymity online. But anonymity in the public space has never been anything close to a human right. Its value is not intrinsic but practical: whether it actually promotes human rights and democracy, or doesn’t. And it’s very clear that in democratic societies anonymity, and the Fake Views it fosters, is potentially fatal to democracy and rights.


For those in autocratic societies, anonymity is already critically compromised by governments’ ability to trace IP addresses and to force big tech and telecoms firms to hand over data. The illusion of anonymity is perhaps the greater danger. The rights of these citizens are best safeguarded not by purportedly anonymous accounts but by end-to-end encrypted communications that are not visible to governments or companies.


Perhaps big tech’s real objection to verified identity is that it stands to lose a huge percentage of its ‘users’ and a far larger percentage - perhaps half or more - of all activity, and that activity is its revenue model. There is a kind of conspiracy to defraud the public going on here, in which big tech takes a huge cut of all the revenue from fraudulent activity on its platforms. I’ve often thought that a class action suit from people advertising on Facebook might be the fastest way to force structural change, but such a case would not conclude quickly.


Given this, verified identity will take years to be enforced through regulation or legal action. Perhaps some enterprising new social media startups will introduce it, and big tech might adopt some halfway measures.


In the meantime, we need to build cost-effective tools that provide optional verified identity, so that an increasing percentage of users are immunized from the direct impact of bots - bots which will nonetheless continue to heavily game the algorithms and influence our information environment.


The verified identity solution isn’t a dream; it’s practical, and already used by online platforms, such as Airbnb and Upwork, that need to verify the authenticity of their users to prevent fraud and abuse. We just need big tech to join the list of companies that actually want to prevent fraud on their platforms.


What do we do in the short term?


Verified identity will take years to arrive, and in 2024 alone we could see the world pivot towards authoritarian governments powered by movements created and fuelled by fake views.


Big tech can help by urgently taking what it calls “break glass measures” to identify and downgrade fake accounts and activity. The bot networks are often interconnected - the organization I led was able to trace a huge percentage of the fake-news-spreading social media accounts in Brazil to just one political operative. Downgrading bots, tracing them to their source, and banning those users for “inauthentic activity” is an urgent need.
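

As a sketch of the kind of tracing involved: accounts that post identical content within seconds of one another can be clustered into a candidate network and followed back to their operator. The data, signal and threshold below are illustrative.

```python
# Sketch: flag accounts posting identical content within seconds of each
# other, a common signal of coordinated inauthentic activity.
from collections import defaultdict
from itertools import combinations

posts = [  # (account, text, unix_timestamp)
    ("acct_a", "Candidate X is a criminal!", 1700000000),
    ("acct_b", "Candidate X is a criminal!", 1700000003),
    ("acct_c", "Candidate X is a criminal!", 1700000005),
    ("acct_d", "Lovely weather today",       1700000010),
]

WINDOW = 30  # seconds within which identical posts count as coordinated

by_text = defaultdict(list)
for account, text, ts in posts:
    by_text[text].append((account, ts))

edges = set()
for items in by_text.values():
    for (a1, t1), (a2, t2) in combinations(items, 2):
        if abs(t1 - t2) <= WINDOW:
            edges.add((a1, a2))

print(edges)  # lockstep pairs -> a candidate bot network to trace upstream
```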


But we can’t trust big tech to do this. I think we also need to consider fighting fire with fire in the short run. What if we flooded the internet and social media with bot activity promoting love, tolerance, hope, and narratives of pluralism and healing? It wouldn’t be ethical, but it might be our best immediate option.


We might also consider transparent ‘robocop’ bots that publicly call out bot activity and alert users when they identify it.
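

A sketch of what such a bot’s core loop might look like; the platform calls are hypothetical placeholders for whatever API a real platform exposes.

```python
# Sketch of a transparent 'robocop' bot. fetch_replies and post_reply are
# hypothetical placeholders, not a real platform API.
BOT_SCORE_THRESHOLD = 0.9  # illustrative cutoff

def patrol(thread_id, classify, fetch_replies, post_reply):
    """Publicly flag replies that a classifier scores as likely inorganic."""
    for reply in fetch_replies(thread_id):
        score = classify(reply["text"])
        if score >= BOT_SCORE_THRESHOLD:
            post_reply(thread_id,
                       f"Automated notice: @{reply['author']} scores "
                       f"{score:.0%} likely-inorganic on a public classifier.")

# Demo with stubbed platform calls:
replies = [{"author": "acct_4471", "text": "Candidate X is a criminal!"}]
patrol("thread_1",
       classify=lambda text: 0.97,           # stand-in for a real model
       fetch_replies=lambda tid: replies,
       post_reply=lambda tid, msg: print(msg))
```

Crucially, such a bot would be transparent about being automated - the opposite of the deception it polices.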


Whatever we do, we need it to happen soon. Because fake views is a weapon of mass democratic destruction. And once leaders come to power through the manipulation of fake accounts, they’ll protect the fraud that put them there.
