In a book from 2016, another philosophy professor, Michael P. Lynch, looked at the state of human knowledge in the 21st century. Lynch coined the phrase "google-know" for the kind of knowing (or perhaps scare-quotes "knowing") we engage in when we use our extended minds.
Like all of us, Lynch loves and marvels at the extended "knowledge" our devices give us access to, but he also worries about how we may now be discouraged from really knowing things and thinking about them for ourselves in the old print culture way. There is a temptation to let other people do a lot of our thinking for us, and Lynch finds this worrying. You can see him making his balanced (I think) argument in this video, ironically shot in an empty university library ...
Like Lynch, I personally think understanding is an important aspect of real knowledge; and like Sanger, I tend to think you can't understand anything unless you have it "on board," in your memory, your meat mind. You can't just google-know everything and still make sense of the world, really understand what people are talking about, and be a responsible member of a liberal democracy, as described in the previous lesson.
Leaving your thinking to be done by others, whether by Google, Wikipedia, a WhatsApp group, or now an AI, is worth thinking twice about.
In the Autumn semester of 2022, more than one participant writing in the discussion forum for this lesson put new pressure on the catchphrase "Work smart, not hard." This would seem to be the question with all aspects of augmented intelligence, or extended minds: is it actually smart to let others do your thinking for you? "Work smart, not hard" is really about reducing personal effort and increasing expedience - the ideals of much modern life, automation, and economies of scale. There can be moral reservations about laziness and irresponsibility, but "smart" in this context basically means "making the minimal effort for the maximum return." As Christina Donnelly summed up in her post: "in some cases and for some individuals, tech tools have elicited laziness and demotivating variables associated with learning. The saying 'work smarter, not harder' takes on an entirely different meaning when applied in this context and means 'why learn when we can take the industrial capitalistic approach?'!" (Nov 2, 2022) Donnelly wants to criticize this easy complacency of exploiting the work of others when it comes to thinking, yet her concluding rhetorical question strikes me as satirical but also ambivalent; it's a real question now. Is it a question you should answer for yourself? Or are you going to rely on your extended mind to answer it for you? ;-)
By the end of that semester, ChatGPT was blowing up, and many teachers were struggling with how to handle a barrage of student essays assembled using AI. I was initially convinced that the kinds of topics I pose in this class would be too specific and perverse for the AI to create a plausible response to, so I plugged a version of my midterm question on Neil Postman into the chat prompt box, and got this, first try:
In his book "Amusing Ourselves to Death," Neil Postman argues that the rise of television and manufactured media has turned politics into a form of entertainment, with the focus being on creating a show rather than on serious discourse and decision-making. According to Postman, this shift towards the "Society of the Spectacle" has had negative effects on democracy, as it reduces the importance of substance and encourages voters to base their decisions on superficial factors such as appearance and charisma rather than on policy and ideology.
Postman's concerns about the impact of television on democracy are still relevant today, as many people continue to get their information about political issues from social media and other forms of mass media. The constant stream of hyperreal images and information can make it difficult for people to distinguish between what is real and what is not, and this can have negative consequences for democracy. For example, it can lead to the spread of misinformation and the manipulation of public opinion, which can undermine the integrity of the democratic process.
In my own experience, I have seen how mass and social media can shape people's political attitudes and behaviors. For example, social media algorithms often prioritize content that is sensational or controversial, which can distort people's perceptions of reality and lead to the amplification of extreme viewpoints. This can create a hyperpartisan environment in which people are more likely to engage in tribalism and to view political opponents as enemies rather than as fellow citizens with whom they disagree.
In conclusion, I believe that the hyperreality created by mass and social media can be a problem for democracy, as it can distort people's perceptions of reality and undermine the principles of informed and respectful dialogue that are essential for healthy democratic discourse. It is important for people to be aware of this and to be critical consumers of information, so that they can make informed decisions about political issues and participate in a meaningful way in the democratic process.
It's a somewhat basic answer, but it sounds very self-assured, and it is clear and seems right as far as it goes. It is also spelled and punctuated correctly, with refreshing paragraph breaks and flawless grammar. I was astonished to see it use the first person in the third paragraph! It doesn't sound like a machine; it sounds like an educated person, though perhaps one a little lacking in personality.
Sometimes students writing for my classes aim for this kind of production: they have been taught to focus on clean grammar, spelling, and punctuation. And they may also have been rewarded for keeping things a little vague and unprovocative, but still stating opinions confidently, as facts. Human intelligence tends to be messier, and if it is trying to be persuasive and forthright it may acknowledge and engage with opposing views, complications, or concrete examples. Concrete examples are both humanizing and clarifying. So in the generated output above, an example of "its own experience" would have been helpful. If you are using ChatGPT instead of doing the thinking yourself, and you don't want your audience to know you are just channeling an AI's output, I would encourage you to watch out for vague statements that don't "get into the weeds" or give concrete examples; they make you sound like ChatGPT! And if you want to disguise a ChatGPT response, I suggest adding some spelling errors and run-on sentences, too. Lol.
The brief example above makes me think that I will need to revise my topics in the future if I want to be sure of hearing an individual's personal thoughts rather than a bunch of plausible and well-constructed generalities from the Internet. Or, if we still care about evaluating individual student learning and thinking, we might have to go back to doing all assessments live ...
Current artificial intelligence models are trained on vast amounts of text culled from the Internet and seem to use their networks to sift and recombine that material into something that sounds like things people say, often accurately enough (though not always), and with a good communicative voice. Ordinary humans, on the other hand, work with whatever they know and ideally weigh and analyze and evaluate questions in their meat minds. Could it be true that it's wise ("work smart") to let the machines do the learning and the analysis for us in many or most cases? Maybe it depends on what you're "thinking" about or why you're doing the thinking ...
Should you care about doing your own thinking? Lazy or "busy" or unthinking people may well assume that AI can think better than they can, and they may also applaud the fact that something like ChatGPT is leveraging everything written on the Internet (including the opinions of other lazy and unthinking people, not to mention evil people), rather than just the work of experts, recognized authorities, and people who can be held accountable for their statements. But is this really "smart"? Does expedience really equal intelligence?
Personally, I wouldn't trust either a computer program or the hive mind to do my thinking for me in the present state of the technology. The hive mind is more democratic, certainly, but does that make it smarter? Would you trust software to rearrange material sampled from all over the Internet to give you your sense of what is true and real in the world? It's not an entirely rhetorical question. Apart from the algorithmic echo chambers we already get our information through, in February 2023 the site NewsGPT was launched, and many similar platforms now exist. The rationale was this:
"For too long," Alan Levy, NewsGPT's CEO, said in the release, "news channels have been plagued by bias and subjective reporting. With NewsGPT, we are able to provide viewers with the facts and the truth, without any hidden agendas or biases." (qtd in Harrison 2023)
That certainly sounds good, but Maggie Harrison (2023) explains at length what some of the issues are with thinking that human-programmed systems analyzing unspecified data sources make news that is more reliable and unbiased than old-fashioned journalism (despite all the latter's shortcomings).
When it comes to cheating on schoolwork, an important question is whether you are smart enough at least to know when the machine's work has been adequate. Think about the midterm answer I quoted above. Having written these lessons, set the topic, and marked many midterms in the past, I can look at what ChatGPT generated above and evaluate its accuracy and pertinence. But if you haven't actually read and understood my lesson, would you be able to tell whether the response above would be well-received? It seems to me that intelligence now is largely about evaluating information and its relevance and/or persuasiveness to the task or the "audience." This is a form of curation - an art that requires understanding and judgment and the ability to communicate it yourself. But how can you have those without having individual knowledge of your own? Something may "sound fine" but be wrong or an inappropriate response. Often when I get work that is plagiarized or generated by AI, it has those qualities. I can certainly detect at least some of that stuff, because I understand the material (I think! ;-). Profs are now exploring having students submit questions to ChatGPT and then evaluate the answers it generates, to demonstrate their own understanding beyond the chatbot's. Most people who cheat with plagiarism, resources like Course Hero, or now AI don't have the expertise to evaluate how good a stolen, purchased, or fabricated response is; they have to hope that it will do. Again, the question of what "work smart" means is worth reflecting on.
The question comes down to this: is it "smart" not to learn anything and not to do any thinking for oneself, given the busy world of "the industrial capitalist approach" to existence in which we live? Are individual knowledge and meat-mind thinking no longer expedient, no longer valuable, in a world where value is determined by economics and Taylorism? Do you value individual knowledge and thinking? If everyone stopped doing individual thinking, would individual thinking lose its assumed value, and would we live in whatever world AI suggests we should in the future? Would that be good or bad?
In an already cliché gesture, I asked ChatGPT what "its own" "thoughts" on individual knowledge were. The generated response:
Yes, I believe that individual knowledge and personal thinking and analysis are valuable and important. It is important for individuals to develop their own understanding of the world and to be able to think critically and analytically about issues and problems. This enables them to form their own opinions and make informed decisions, rather than simply accepting the views and beliefs of others without question.
In addition, individual knowledge and personal thinking and analysis can help individuals to become more creative and innovative, as they are able to come up with new ideas and solutions that are based on their own unique perspective and understanding of the world. This can be particularly valuable in a variety of settings, such as in the workplace, in community organizations, and in personal relationships.
Overall, valuing individual knowledge and personal thinking and analysis can help individuals to become more self-reliant, confident, and effective in their personal and professional lives, and it can also contribute to the overall health and vitality of society.
I couldn't have said it better myself! And maybe even Larry Sanger would approve!
The ideal of the rational individual taking time to understand and respond with a knowledgeable personal viewpoint was still central to Western culture when I was a kid. Writers like Carr, Lanier, Sanger, and Lynch still believe in the individual, and believe that traditional book-reading, focused study, taking information on board, and reflecting on it in the quiet of our undistracted minds nourish responsible individual habits of mind. They assume that collaborative, fast, "noisy," distracted, online extended minds and factoid fishing discourage those habits of mind. (None of those guys, by the way, is so much concerned about the accuracy or quality of the information online; rather, they are focused on the habits of mind that using these tools will cultivate. First we shape our tools, and then our tools shape us.) Is it okay for the majority of humans to know little and do relatively little thinking for themselves? Is that even anything new? What are the dangers? Is there anything to be said for "working harder so that you can be smarter"?
There is little point in just memorizing the year that Martin Luther King, Jr. wrote "Letter from Birmingham Jail." Taken in isolation, as a factoid to memorize, it means nothing. But it could mean something to you if you already know - personally, in your meat mind - what else was happening around 1963, maybe have some idea what happened in 1945 and 1955 too. If you know why King wrote the letter, if you know what it says, if you have an idea when American slavery was abolished (1865), what happened to African Americans in the next hundred years (segregation), when John F. Kennedy was assassinated, when King was born, when King was assassinated, and any number of other "meaningless dates" (and other "meaningless" information) that could provide a context for the date of King's letter - then the information becomes meaningful, becomes knowledge.
The reason to know things, it seems to me, is so that you can know, and more importantly understand and appreciate, other things. The kind of knowing involved in committing information to meat memory - after having understood it - gives things a context for you; the kind of "knowing" that your instant-dictionary Google Glasses or ChatGPT midterm responses could provide - at least until they can plug directly into our brains for instant simultaneous comparison and contrast with other information - really is of a different kind. I would not call it knowing.
Knowledge of Martin Luther King and "Letter from Birmingham Jail" and some idea of where 1963 fits into history - rather than thinking of it as just somewhere in a giant Historical Singularity of "back then" (the time before you were born, or the time before the Internet, or the time before electronic media became standard) - is different from knowing what time it is right now or someone's phone number. The latter pieces of information have no human significance; the former are the kinds of things that make life, human history, knowledge, and ideas meaningful, and are required for understanding.
What about words? Do you know what "nescient" means? If you have your little smartglasses on you might say "yes" and then check the readout. Is this really extended knowledge? Is it really the same as "knowing" the time or "knowing" someone's phone number? You may "know" what time it is when you look at your phone, but a word is much more complex, ambiguous, and powerful than a clock readout or a dictionary definition. Perhaps you can only ever "know" a word to a certain extent, not totally, and it's questionable whether you know it at all until you have seen it in many contexts, used it yourself, and understood what it stands for and what it is good for - until you know it because it is part of you, part of your flesh and blood, not just popping into view in a corner of your screen. Knowing a word is not really a matter of looking it up (unlike "knowing" the current time).
We'll return to some of these questions in the last lesson, but for now I don't want to leave you without at least a touch of suspicion that our tools might actually be shaping the human race in a dumbing-down way, even if I doubt it myself. That seemed more plausible to me ten years ago than it does with the students I am getting today. But what do I know, after all? People are seriously studying the effects of screen reading, unbridled media consumption, and distracted learning on the brain, and some of the evidence does seem to suggest we should be concerned. So I'm not sure we should too hastily dismiss the possibility that Nicholas Carr was making a valid point when he ended his essay on Google making us stupid with one of his cleverer lines: "as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence" (Carr 2008).