It is not only our physical selves out in public that can be identified, observed, and recorded. Our almost constant immersion in and interaction with digital technology has given rise to a new form of monitoring that occurs largely without our being conscious of it - dataveillance. If surveillance is generally something done by the government or the police, dataveillance is mostly driven by corporations that want to profile us as consumers - though other institutions might well get hold of the data for their own purposes, and that seems to be what people concerned about dataveillance fear most.
Why are all my social media platforms so convinced that I am interested in manscaping tools lately? I did look at a beard trimmer on Amazon a couple of months ago, but I don't think I've searched for anything lately. Is something picking up on my Covid beard using image recognition algorithms? Does some algorithm deduce something about me from my browsing history that I don't know myself? ;-)
Computerized technologies can record all of our virtual activities and associate them with an electronic profile of us as users of technology. Technology now tracks far more of our actions than cameras or wiretaps ever recorded. Consider the difference between unlocking a hotel room with a physical key and with a keycard. The first is a physical interaction that leaves behind a few physical traces. There may be fingerprints on the door, metal shavings in the lock mechanism, or scratches on the lock plate. This could give a sufficiently dedicated CSI operative some information, but it doesn't say much about when the door was opened, by whom, how often, or for how long.
On the other hand, the digital interaction that opens the keycard door can be tracked easily, and presumably it is - recording the time, frequency, and duration of door interactions. Cameras in the hallways can be linked to the keycard information to provide a synchronized visual record. This is just one form of dataveillance (combined with surveillance) that is almost incidental to the convenience it provides. But the recording itself isn't dataveillance.
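To make this concrete, here is a hedged sketch of the kind of record a keycard system might keep. The field names are invented for illustration - real lock systems vary - but they log something like this:

```python
# A sketch of the kind of record a hotel keycard system might keep.
# The field names here are invented for illustration; real systems vary.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DoorEvent:
    room: str                 # which lock was used
    card_id: str              # which keycard opened it (tied to a guest account)
    opened_at: datetime       # exact timestamp of the swipe
    door_open_seconds: float  # how long the door stayed open

# Every swipe adds one row; over a stay, these rows reveal a pattern
# of comings and goings that a physical key never recorded.
log: list[DoorEvent] = [
    DoorEvent("1412", "card-7789", datetime(2023, 5, 1, 23, 47), 4.2),
    DoorEvent("1412", "card-7789", datetime(2023, 5, 2, 8, 3), 6.0),
]
```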
The analysis of the digital information that is recorded as we navigate through the computerized world is what we call "dataveillance" (Radke). Dataveillance may analyze the records of what we do, whom we communicate with, what we buy, what we read or watch, where we go, what we eat, how much we spend, what we say or type, and much more. This creates knowledge about us, and thus, many fear, power over us.
The modern office workplace often combines elements of both panoptic surveillance and dataveillance. In the office environment, an employee's computer screen and network activity can be monitored remotely to ensure they aren't on YouTube or shopping online during work hours, but also to see what they are communicating to each other and to the outside world about the work situation itself. In a call centre, computers autodial for the workers and record how long each phone conversation lasts. Everything you do is logged, and instruments more accurate than Taylor's stopwatch can later be used to evaluate your work behaviour.
In his theoretically sophisticated and sometimes darkly amusing account of Foucauldian surveillance and Baudrillardian simulation in Las Vegas, Nathan Radke* draws attention to the increase in hyperreal simulations brought about by our sense of being monitored in the workplace. One development has been a whole subculture devoted to "passing" at work - appearing productive and engaged when you are actually distracted and alienated. A similar, even more pervasive phenomenon can be seen among students in classrooms.
For the sake of workplace slacking, some clever people have actually written little computer programs that replace whatever you were actually doing on your desktop with a business-y spreadsheet or something similar. Whenever your boss walks by, you hit a keyboard shortcut and the YouTube video you were watching is replaced by serious, work-y looking stuff (a minimal sketch of such a "boss key" program follows the quotation below). Radke quotes from William Bogard's 1995 study of "hypercontrol" (controlling people through virtual and hyperreal means) to show how new forms of surveillance are countered with new forms of simulation (adding again to the unreal, hyperreal quality of our lives):
Surveillance serves to unmask appearances, while simulation can mask, pervert, disguise, hide, and distract. This can be done on a vast scale – building a simulated town over a World War II fighter plane factory – or on an insignificant scale – a student feigns interest in a lecture, even going to the lengths of scribbling in a notebook to simulate note-taking. In particular, Bogard writes of simulation countering workplace surveillance:
Working people, in fact, have always known how to reverse the poles of the control of labour using simulation. In France, they call this la perruque, “the wig”, all those ingenious ways workers have devised to trick their employers or supervisors into thinking they are working, or that make their work less burdensome.
Modern day examples of la perruque include reading personal email, viewing internet pornography or browsing online book stores, making phone calls, or in the case of the student, simply removing one’s brain from performing the required task while allowing one’s body to pantomime the correct behaviour. (Radke 2005, quoting Bogard 1995)
Radke draws our attention to the energy many of us expend in trying to thwart surveillance. Clearly we don't like it when we are watched by those who have power over us, even the modest power of the college professor!
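For the technically curious, here is what a minimal "boss key" might look like in Python, using only the standard library's tkinter toolkit. This is a toy sketch: a real boss-key utility would hook a system-wide hotkey and hide other applications, which this version doesn't attempt - it just swaps content inside its own window to show the idea.

```python
# A minimal "boss key" sketch using only Python's standard library (tkinter).
# A real one would hook a global hotkey and hide other windows; this toy
# version just swaps a "video" for a fake spreadsheet inside one window.
import tkinter as tk

root = tk.Tk()
root.title("Quarterly Report.xlsx")  # an innocuous window title

slacking = tk.Label(root, text="▶ cat video playing...", font=("Arial", 24), fg="red")
working = tk.Frame(root)

# Draw a fake spreadsheet: a grid of plausible-looking cells.
for row in range(8):
    for col in range(4):
        cell = tk.Entry(working, width=12)
        cell.insert(0, f"Q{col + 1}: {1000 + row * 137}")
        cell.grid(row=row, column=col)

slacking.pack()
hidden = False

def toggle(event):
    """Swap the video for the spreadsheet (and back) on each press of F9."""
    global hidden
    if hidden:
        working.pack_forget()
        slacking.pack()
    else:
        slacking.pack_forget()
        working.pack()
    hidden = not hidden

root.bind("<F9>", toggle)
root.mainloop()
```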
Dataveillance is the digital recording and analysis of your electronic activities. It includes the logging of technological interactions such as phone calls and bank machine use, and, perhaps most visibly and controversially, the record of what you do on the web in the form of tracking cookies, logged page requests, and search history. Dataveillance is why you may start seeing ads on Instagram for a product you were searching for through Google.
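That cross-platform ad-following works because a tracker embedded on many different sites sees the same cookie ID everywhere your browser goes. Here is a toy sketch of the principle - the names are invented, and this is not any real company's system:

```python
# A toy sketch of how cross-site tracking links your visits together.
# Nothing here is any real company's code; it just shows the principle:
# a tracker embedded on many sites sees the same cookie ID everywhere.
from datetime import datetime

tracker_log: dict[str, list[tuple[str, str]]] = {}

def log_page_request(cookie_id: str, page_url: str) -> None:
    """What a third-party tracker records each time its pixel/script loads."""
    tracker_log.setdefault(cookie_id, []).append(
        (datetime.now().isoformat(timespec="seconds"), page_url)
    )

# The same browser (cookie "abc123") shows up across unrelated sites:
log_page_request("abc123", "https://www.google.com/search?q=beard+trimmer")
log_page_request("abc123", "https://www.example-shop.com/beard-trimmers")
log_page_request("abc123", "https://www.instagram.com/")

# The tracker can now infer an interest and target ads accordingly.
for timestamp, url in tracker_log["abc123"]:
    print(timestamp, url)
```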
Dataveillance is mostly performed by corporations, and is largely an automated process in which no individual human being scrutinizes any other individual's behaviour. Most of this work is done by algorithms, and in general it is unlikely that any person working for Google, say, is interested enough in you personally to look at the records of your surfing. In any case, there are laws in place that would make that activity an actionable offense if it were discovered. People who worry a lot about dataveillance often seem to imagine that there is an employee, or perhaps a hacker, who wants to invade their personal privacy (perhaps to cyber-stalk them) or steal their identity (mainly meaning their credit card info). I think this would be a rare case, and would generally not be sanctioned by the institution doing the dataveillance. (In the 2020 "documentary" The Social Dilemma, three guys are dramatized watching an individual's web browsing in a high-tech war room and pushing buttons to use the app to influence his behaviour, but this is an outtake from Black Mirror, not a reflection of how dataveillance actually works today, or would ever be likely to work. Dataveillance of this kind is done by computer programs, not human individuals. The algorithm isn't out to get you; it doesn't judge.)
In general, of course, consumers have opted to share this information with corporations voluntarily by agreeing to Terms and Conditions - typically without reading them.
Though many people are worried about an individual stalking them or watching them through their web camera, the true dangers of dataveillance lie elsewhere. One is that corporations sell their ability to profile individual users to customers who want to influence our behaviour. As I'll discuss more in the next lesson, it seems to me that the underlying problem is that our social media are currently run as capitalist enterprises rather than as ad-free utilities. Your privacy is not at risk from prying human eyes, but you may be leaving yourself open to subtler and more targeted forms of manipulation than people ever faced from scattershot television advertising. AI and algorithms can target each of us in ways no actual human bad actor would dream of.
I personally see the problems with dataveillance, as it is understood today, as being of two kinds.
One kind is what some Big Brother or other might be able to do to you with it. Big Brother in this case is first and foremost Big Business. This includes all the negative sides of what people are now calling surveillance capitalism. That is Shoshana Zuboff's phrase for the commodification of our personal data - turning the data, or the analysis of it, into a product that can be bought and sold in the marketplace. Although this is mainly known to us through targeted advertising, other uses could theoretically be made of our commodified data, and even targeted marketing could be considered pernicious in some cases (though many people are actually happy to get personalized ads).
Some of the dangers of surveillance capitalism were suggested by the much-publicized Cambridge Analytica scandal, which broke in 2018. The company marketed itself as able to sell targeted political influence that could sway the outcome of events like the 2016 U.S. presidential election or the Brexit referendum, and public opinion about them.
A recent Amnesty International call for radical reform of Facebook and Google's data collection policies mentioned other, more basic social concerns, for instance that dataveillance as surveillance capitalism "allows all kinds of new exploitative advertising tactics such as preying on vulnerable people struggling with illness, mental health or addiction. Because these ads are tailored to us as individuals, they are hidden from public scrutiny" (Kumi Naidoo of Amnesty International, qtd. in Thibault 2019). Zeynep Tufekci spoke to the CBC about these issues in 2018, when Mark Zuckerberg was appearing before Congress, partly because of the Cambridge Analytica scandal. You might find watching the interview useful later as a review of the real concerns social and political thinkers have about dataveillance today (as opposed to comparatively far-fetched personal worries about cyberstalking and identity theft).
The other possible problem I see with dataveillance is not what Big Brother (or Big Business) wants to do to us, but what we may inadvertently do to ourselves because of it. These concerns are often discussed in terms of filter bubbles and echo chambers. If the Internet is our main space for engaging in public discourse, then dataveillance, in its attempt to helpfully show us things we may like or be interested in, may come to ensure that we see more and more media that agrees with what we already think and less and less media from other viewpoints. This may produce a kind of individualized ideological echo chamber, in which our prejudices are constantly reinforced and we never learn anything new from actual confrontation with other people's views and arguments.
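The feedback loop at the heart of this worry can be sketched in a few lines of code. This is not any platform's actual ranking algorithm - just the basic mechanism, with invented names: you see more of what you clicked, so you click more of it, and so on.

```python
# A toy sketch of how engagement-driven ranking can create a filter bubble.
# Not any real platform's algorithm; just the basic feedback loop.
from collections import Counter

interest_profile: Counter[str] = Counter()

def record_click(topic: str) -> None:
    """Dataveillance step: every click updates your inferred interests."""
    interest_profile[topic] += 1

def rank_feed(candidate_posts: list[tuple[str, str]]) -> list[str]:
    """Show posts on the topics you've engaged with most, first."""
    ranked = sorted(
        candidate_posts,
        key=lambda post: interest_profile[post[0]],  # post is (topic, text)
        reverse=True,
    )
    return [text for topic, text in ranked]

# After a few clicks on one political viewpoint...
for _ in range(5):
    record_click("viewpoint_A")
record_click("viewpoint_B")

# ...posts from that viewpoint crowd out the alternatives.
print(rank_feed([
    ("viewpoint_B", "An article challenging your views"),
    ("viewpoint_A", "An article agreeing with you"),
    ("viewpoint_A", "Another article agreeing with you"),
]))
```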
Those of us who worry about this often make an effort, at least for news, to follow several different channels so that our picture is informed by an awareness of other perspectives. For instance, apart from news sources close to my own political views and cultural background, such as The Guardian or the CBC, I also follow foreign news services coming from a dissimilar background, such as Al Jazeera and Chinese sources, as well as queer and trans commentators and Black influencers. And I even follow a few outlets I am vehemently opposed to, such as the right-wing Breitbart News or Fox News, so that I hear what they are saying too, and can investigate it if it seems important.
(If you have Twitter, you can get an estimate of your own "echo chamber" by trying this Echo Checker. It is very U.S.-focused and should be taken with a grain of salt, but it's an interesting exercise. Thanks to 2022 student Anya Hrehirchuk for making me aware of this.)
* Full disclosure and backstory: Nathan is a Humber professor and a close personal friend of mine. He was responsible for an earlier version of this course, and many of his ideas, and no doubt even some of his words, will be found remixed in this course, for which I am now solely responsible. Nathan is a superb teacher and I recommend the degree version of his elective course Conspiracy Theories (CULT 2005). If you are interested in, or concerned about, conspiracy theories I would also recommend Nathan's popular podcast: The Uncover Up.