The more benign visions of a world with machines that think better than humans tend to conceive of a paradise where we have created tools that will look after us and our planet: a sort of technological cohort of benevolent caretakers, maybe good parents, keeping us safe while we play and grow forever. Richard Brautigan wrote a little poem in the 1960s, "All Watched Over by Machines of Loving Grace," that is often quoted to promote or parody this childlike vision:
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
A kind of second childhood for humanity, released finally from the struggle to survive. This seems an easy version of Posthumanism: back in touch with our humanity and animality, and cared for by our technological creations. Could increased technology not lead to an increase in our humanity, as the bittersweet last line of the slam performance by Marshall Soulful Jones suggested on the first page of this course?
Frankly, it seems to me that it is lazy and dangerous to look forward to the imminent day when artificial intelligence and other new forms of technology will solve all our human problems for us. This may happen; I'm not saying it won't. I hope it could. But in the meantime human problems confront us right now, and so far Siri can't even turn on my GPS and tell me what street I'm on, let alone solve the climate crisis, the degradation of the ecosystem, racism, ethnic and nationalistic wars, fake news, runaway hyperreality as the new opioid epidemic, massive poverty, hunger, constant human suffering, and all the other things that threaten our species and our planet. The fact that in two or three years ChatGPT may be able to do twice as many logical operations in the same amount of time doesn't mean it will be able to think about, let alone care about and solve, human problems. Our problems remain our problems. And of course we could well make the planet unliveable for most humans and many other forms of life before this Singularity ever happens. Every year at the COP summit on environmental response to the climate crisis, the attendees agree to send more money to the countries likely to be most devastated by climate change. As an indigenous protester at COP30 in Brazil said in 2025, "We don't eat money." But in the world that our technology, industrialization, and capitalism have created for us, people tend to think that what is real is media and money. I start from the premise that neither of those things is really real; I believe that what is real is plants and animals and air and water. In other words, life.
Technology: how powerful it seems to us today! Many people are expecting technology to solve the environmental crisis. We will perfect carbon capture technologies or new geoengineering tools to counteract the greenhouse gases in our atmosphere, so that we can continue to burn fossil fuels and run our industries, economies, and personal lives as we have grown accustomed to. This is one example of the hope (perhaps already passé, but still surging forward in our world) that we can just add on more technology to address the problems our technology has caused, that we can throw money at them, rather than going through the more painful transition away from fossil fuels, overpopulation, and hyperconsumption.
In 2007, physicist Freeman Dyson suggested that biotechnology (another huge field that is starting to "hit" and will change all our lives if we live long enough) would allow us to adapt our life forms (the plants and animals we use, but also perhaps our own biology) to the new environmental circumstances we create with our man-made climate change and pollution (which are expected to make the world uninhabitable for many species in their current, naturally evolved forms). Again, technology causes a dangerous shift in the natural world, but rather than curtail or re-think it, we use more technology to make lifeforms adapt to the new circumstances technology has created.
Others are expecting smarter-than-human AI to solve these problems in ways we can't imagine with our puny human brains. That could happen, but - however brilliant the AI is - there may be problems with just getting the data the AI needs to process. The data is out here in the world, not in math equations and training media. Right now human intelligence and painstaking research can barely begin to understand the complexities of the interdependencies that make life on earth possible. We do not even know all of the species of life on the planet, let alone more than a suggestive fraction of the story of how they interact to create our ecosystem. Unless the intelligence of the Singularity can also figure out brilliant new ways of gathering such data, it's hard to see how it will be able to arrive at any accurate solution.
During the Generative AI hype explosion of 2023, Naomi Klein argued that we don't need AI to solve the climate crisis; we are already smart enough to know how to fix things ourselves, we just don't have the will to do it:
[The proponents of AI salvation seem to imagine that] the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.
The reason this very smart counsel has been ignored is not due to a reading comprehension problem, or because we somehow need machines to do our thinking for us. It’s because doing what the climate crisis demands of us would strand trillions of dollars of fossil fuel assets, while challenging the consumption-based growth model at the heart of our interconnected economies. The climate crisis is not, in fact, a mystery or a riddle we haven’t yet solved due to insufficiently robust data sets. We know what it would take, but it’s not a quick fix – it’s a paradigm shift. Waiting for machines to spit out a more palatable and/or profitable answer is not a cure for this crisis, it’s one more symptom of it. (Klein 2023)
In other words, what people are hoping for from technology is some way to fix the climate crisis without our having to change our ways of being and of seeing the world. And I think Klein puts too much rhetorical emphasis on the rich here. It is not just the rich who need to change. It is all of us. The planet simply can't support 8 billion people trying to live like middle-class American hyperconsumers.
In recent years, partly sparked by the climate crisis and now greatly abetted by the realities of the Coronavirus pandemic, there has been increasing interest among scientists and some philosophers and activists in making ordinary people more mindful of the real non-technological realities of human existence that it is so easy for us to neglect or forget (and technology is part of why). Ziya Tong's The Reality Bubble (2019) is a useful and eye-opening, science-based onslaught on how delusional the average North American's ways of perceiving reality have become. Those of us in the developed world largely live in cities; our food comes to us packaged and processed, our garbage magically disappears once a week, our shit goes down the drain and we never have to think about it again. And of course, we spend a huge amount of our time focused on the incredibly rich hyperreality of media and worrying about the seemingly inescapable economy that our technology has helped us create for ourselves.

But in fact, we are animals - vulnerable, mortal, physical - desperately dependent on the rest of life on earth, on the oceans, on the weather and the oxygen we breathe, on fresh water, on micro-organisms within us and in the environment that we don't even fully understand - and all this is mostly hidden from us by the lives we have chosen to lead (perhaps we have chosen this way of living so we can avoid as much as possible the reality of what we actually are: mortal organisms). The pandemic was a wake-up call for some, a reminder that we are part of nature, not serenely above it and in control. A tiny virus can kill us, force us to slow the economy and change our lives around. Because we are part of life, not supremely above the rest of it just because we have some technology and science, and cool consumer products. We are animals. We are vulnerable. We need each other and the rest of nature to survive. Even so, each of us will die.
Can technology really save us from this reality, as opposed to just distracting us from it, making it easier to ignore? Can more intelligence save the planet in spite of humans, and/or save "human" consciousness as some new kind of non-biological force, continuing to evolve without bodies, growth, decay, and death? Do we want that? Is the human animal something we really want to overcome? Perhaps. It is scary being an animal. But that's what being alive is.
We are making impressive strides in AI. It can beat us at most board games (but not basketball yet! at least not without robots). It can "learn," at least in certain senses. It can generate plausible copy. It can imitate artistic styles. It will likely provide more and more emotional services for us, perhaps as teachers, caregivers, therapists, and so forth. But will it be able to solve our problems as a species of mammal that has gone spiraling out of control?
Siri may "know" what time it is; but so did the wristwatch, and so did the earliest mechanical clock with which this course began. I suspect that the only intelligence that can potentially understand when it is time to wake up, however, at least for the foreseeable future, is still our own human intelligence, such as it is, including our (often stunted) emotional and ethical intelligence. After the atomic bomb was dropped in 1945, Albert Einstein remarked: "It has become appallingly obvious that our technology has exceeded our humanity." As Freeman Dyson later warned, "ethical progress is the only cure for the damage done by scientific progress" (Dyson, 1997). Humans have made remarkable scientific and technological progress in our short period of civilization. Perhaps more progress than we can handle.
I think we are making ethical progress as well; but it has been slow, much slower than our technological progress seems to be. Can we rely on AI to make the rest of that ethical progress for us, or do we have to do it ourselves? It doesn't seem like cheap drama to me to say that we are at a critical moment in the story of humankind. We have the power to destroy ourselves in a few different ways: we have shaped the tools that could bring about our own destruction or widespread suffering, and much suffering besides our own (they already do). We also have more and more tools that can connect us, help us grow, help us understand, and show us directions for change. Are we going to turn out to be the hero or the villain or the tragic victim of this story? Will we do it? Will we grow up? I don't personally think that is a rhetorical question. I think it is a real question for us all. Never mind whether a machine can become human. Can an animal species really become "human," in the best possible sense of the word?