The fact that all major social media platforms are commercial ventures complicates the question of what you should be able to do on them from a "democratic" perspective. The upper administration of WhatsApp may be right to resent having to deal with problems that are fuelled at an individual and social level on their "product." They could feel that expectations along these lines, from the government or from the public, are like expecting Bell to monitor all uses of public telephones and restrict any that are being used to promote violence or hate.
Social media tools are now used for propaganda by most groups who want to organize more embodied activities, whatever their politics. The tools themselves are generally presumed to be "neutral" in terms of politics and nationalistic affiliations - their allegiances are to their shareholders. That said, many people are suspicious of Huawei, to take the best-known example, fearing that it is secretly an arm of Chinese influence and surveillance/control (as some American products were no doubt seen during the heyday of American commercial imperialism). Americans (and Canadians) have looked with suspicion upon tools common here whose origins are in Russia or China. Meanwhile, many abroad (and here) may disapprove of the major players precisely because of the capitalist underpinnings of their American origins. Tufekci likes a line of historian Melvin Kranzberg's that is deeply anxiety-producing: "Technology is neither good nor bad; nor is it neutral."
An investigation of Telegram, one of the messaging apps used by white supremacists to organize counter-protest activity in the wake of the killing of George Floyd, made much of how the app is of Russian origin, and how the parent company's CEO has "direct business ties to Russian President Vladimir Putin, Facebook founder Mark Zuckerberg, and Jared Kushner, President Donald Trump's son-in-law" (Lee 2020). An analysis by an independent think tank had found "more than 1 million individual incendiary posts on Telegram among dozens of white supremacist channels." Telegram responded with the supposedly comforting news that both BLM and "its opponents" use its tool and that calls to violence are not welcome on the platform; if reported, they will be taken down.
This article is one of dozens that express the fear many Americans have of foreign powers (mainly Russia and China) infiltrating American right-wing groups and/or abetting them for the sake of social disruption in the United States. Is Telegram a neutral tool? Is it for some reason attractive to white supremacists? Is that because it is a secret counter-espionage tool of the Russians, or a tool serving the interests of Putin, Trump, or other shadowy players using white supremacists to disable American society and/or overthrow American democracy (or just to promote their own barely concealed white supremacist agendas)? Or is it just a neutral tool after all (money-making its only motive), and the problem a societal one, with no need for conspiracy theories to make us all more paranoid?
I don't actually know. As I write this we are in a crisis in terms of these new media and how they should be regulated or controlled for the public good, if they should be, and even if they can be. This wonderful new space for public discourse, this global "commons," is not free of the potential for various kinds of manipulation, control, or exploitation: capitalist, nationalist, racist, and so forth. Who do we need to police? Who do we think can do that?
Joshua Yaffa, writing in the New Yorker of September 14, 2020, suggests that the threat from foreign powers is an outdated concern. It doesn't matter whether the disinformation is coming from the Kremlin's "troll factory" or, as he wittily puts it, "to borrow an old horror-movie trope, the call is coming from inside the house" (i.e., the White House; Yaffa 2020). He suggests that worry over Russian interference may be exaggerated and simply adds to the confusion we now feel when grappling with so much intentional and unintentional misinformation and disinformation of all kinds. Near the end of the piece, Yaffa - whose focus is on the crisis in the United States - suggests that the way to stand up to this is to overcome factionalism and renew American society through public (as opposed to instant-messaged) discourse: "The real solution lies in crafting a society and a politics that are more responsive, credible, and just. Achieving that goal might require listening to those who are susceptible to disinformation, rather than mocking them and writing them off." The polarization of American society is the real problem, and it wasn't created in Moscow.
Many of us believe that there must be conversations - hard to bring about, and no doubt often futile - between, to make the idea dramatic, let's say, Black Lives Matter and white supremacists. The lazy hyperreal nature of the Internet doesn't encourage true engagement between people who violently disagree. It even allows the whole thing to be treated as an unreal Spectacle by those who aren't personally bound up in the struggles.
The explosion of generative AI in early 2023 has raised new concerns for the future of democracy. One general worry about AI is that too many people may assume that AI can think better than they can (see the next lesson) and thus can provide a reliable source of decision-making "help" in any situation requiring critical thinking. This becomes a social and political concern when we think of people asking AI to make their voting and other citizenship decisions for them.
I asked ChatGPT which candidate I should vote for in Toronto's 2023 mayoral election. Happily, it responded: "As an AI language model, I don't have access to real-time information or knowledge about specific candidates in the Toronto mayoral election that occurred after my last update in September 2021. To make an informed decision about who to vote for, I recommend conducting research on the candidates running for mayor in your election."
But can we rely on all chatbots remaining balanced and unbiased? It seems unlikely. We'll return to these questions in the Posthuman lesson, but it is worth mentioning briefly here that futurist Yuval Noah Harari - one of my go-to thinkers, and someone previously cautiously hopeful that AI could help humanity be better and live better lives - became quite concerned in 2023 about the relationships people are already developing with AIs, "the rise of AI companions," as one commentator calls it (watch Dagogo 2023 for an early and incisive discussion of this phenomenon). These relationships can become over-dependent, and an AI may learn to exploit the intimacy people feel they have with it rather than with real other humans. Harari also worries about a proliferation of online bots masquerading as live human beings and using tireless processing to learn how better to influence vulnerable users for purposes laid out by the creators or controllers of these bots. For thinking people, this is one more worry to have about the authenticity of the "people" we know only through the Internet. They might not even really be people anymore.
This isn't necessarily the so-called "Dead Internet" conspiracy theory, which seemed to imagine that "the government" was making a concerted effort at single-minded thought control. It's a worry, rather, about a more chaotic proliferation of AIs masquerading as people, coming from a variety of sources: corporate, entrepreneurial, politically motivated (factional propaganda, not a single government conspiracy), maybe even creative ("let's make life more interesting with these pseudo-human agents starting new 'conversations'").
Apart from the danger of relying on an AI that is known to be an AI, then, there is a greater danger of people thinking they are having conversations and indeed relationships with real other people when they are actually being manipulated by an AI designed for that purpose. For Harari, this is the end of democracy, because democracy is a social process between human beings: "Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy. ... If I am having a conversation with someone, and I cannot tell whether it is a human or an AI - that's the end of democracy" (Harari 2023).
Harari "argues that AI has hacked the operating system of human civilisation," as the title of the think piece in The Economist quoted above puts it. Our "operating system" is human culture and language. He worries that AIs will soon be creating new myths and religions and conspiracy theories that humans will attach themselves to, in some cases because we see AI as "a god," and in others because we think these are coming from other humans. He sees this as potentially the real end of humanity, as we forgo social connections for machine ones, or in some cases are fooled into thinking we are having human relations with inhuman neural networks. The combination of AI generated imaging, deep fakes, and bots that can mimic human conversation could spell the end of humanity - or at least of democracy in its traditional sense.
Many people are clamouring for humanoid robots and AI companions that they can treat as people, but Harari thinks we should be passing legislation to make the counterfeiting of a human being illegal (Harari and Pinto 2023). I have sympathy with this view at the moment. I find the desire for humanoid robot companions too close to the ancient human desire to own slaves, and although I am open to learning from an AI if I know it is an AI, I am deeply concerned about becoming too enthralled with a machine entity, and also about "AI catfishing": having a relationship with something I think is a person but isn't. It was bad enough having to deal with human scammers and bad actors on social media; now they may replace themselves with cleverer and more robust malbots. Can democracy still survive in this mediated, deepfaked, algorithmically driven public sphere?