
Get real

Will the future really bring machines or intelligent systems that are more sensible than humans? Will we humans of the future (or you humans of the future, at least) become posthuman beings who think and live in ways no human ever has? Will we make machines that think precisely as we do, and is that a worthwhile thing to attempt? What do we want?

Frankly, it seems to me that it is lazy and dangerous to look forward to the imminent day when artificial intelligence will solve all our human problems for us. This may happen; I'm not saying it won't. But in the meantime human problems confront us right now, and so far Siri can't even turn on my GPS and tell me what street I'm on, let alone solve the climate crisis, degradation of the ecosystem, racism, ethnic and nationalistic wars, fake news, runaway hyperreality, massive poverty and hunger and human suffering, and all the other stuff that threatens our species and our planet. The fact that in two or three years "she" may be able to do twice as many logical operations in the same amount of time doesn't mean she'll be able to think about, let alone care about and solve, human problems. Our problems remain our problems for now. And of course we could well make the planet unliveable for most humans and many other forms of life before this Singularity ever even happens.

Technology: how powerful it seems to us today! Many people expect technology to solve our current environmental crisis, for instance: we will perfect carbon capture or new geoengineering tools to counteract the greenhouse gases in our atmosphere, so that we can continue to burn fossil fuels and run our industries, economies, and personal lives as we have grown accustomed to. This is one example of the hope (perhaps already passé) that we can simply add on more technology to address the problems our technology has caused, rather than going through the more painful transition away from fossil fuels, overpopulation, and hyperconsumption.

In 2007, the physicist Freeman Dyson suggested that biotechnology (another huge field that is starting to "hit" and will change all our lives if we live long enough) would allow us to adapt our life forms (the plants and animals we use, but perhaps also our own biology) to the new environmental circumstances created by man-made climate change and pollution (which are expected to make the world uninhabitable for many species in their current naturally evolved forms). Again, technology causes a dangerous shift in the natural world, but rather than curtail or rethink it, we use more technology to make life forms adapt to the new circumstances technology has created.

Others expect smarter-than-human AI to solve these problems in ways we can't imagine with our puny human brains. That could happen, but, however brilliant the AI is, there may be problems simply in getting the data the AI needs to process. Right now human intelligence and painstaking research can barely begin to understand the complex interdependencies that make life on earth possible. We do not even know all of the species on the planet, let alone more than a suggestive fraction of the story of how they interact to create our ecosystem. Unless the intelligence of the Singularity can also devise brilliant new ways of gathering such data, it is hard to see how it will be able to arrive at any accurate solution.

During the AI hype explosion of 2023, Naomi Klein argued that we don't need AI to solve the climate crisis; we are already smart enough to know how to fix things ourselves, we just don't have the will to do it:

[The proponents of AI salvation seem to imagine that] the failure to “solve” big problems like climate change is due to a deficit of smarts. Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.

The reason this very smart counsel has been ignored is not due to a reading comprehension problem, or because we somehow need machines to do our thinking for us. It’s because doing what the climate crisis demands of us would strand trillions of dollars of fossil fuel assets, while challenging the consumption-based growth model at the heart of our interconnected economies. The climate crisis is not, in fact, a mystery or a riddle we haven’t yet solved due to insufficiently robust data sets. We know what it would take, but it’s not a quick fix – it’s a paradigm shift. Waiting for machines to spit out a more palatable and/or profitable answer is not a cure for this crisis, it’s one more symptom of it. (Klein 2023)

In other words, what people are hoping for from AI is some way to fix the climate crisis without us having to change. And I think Klein puts too much rhetorical emphasis on the billionaires here: it is not just the rich who need to change. It is all of us. The planet can't support 8 billion people trying to live like middle-class American hyperconsumers.

In recent years, partly sparked by the climate crisis and now greatly abetted by the realities of the coronavirus pandemic, there has been increasing interest among scientists, and some philosophers and activists, in making ordinary people more mindful of the non-technological realities of human existence that are so easy for us to neglect or forget (and technology is part of why). Ziya Tong's The Reality Bubble (2019) is a useful, eye-opening, science-based onslaught on how delusional the average North American's ways of perceiving reality have become. Those of us in the developed world largely live in cities; our food comes to us packaged and processed, our garbage magically disappears once a week, our shit goes down the drain and we never have to think about it again. And of course we spend a huge amount of our time focused on the incredibly rich hyperreality of media and the seemingly inescapable economy that our technology has helped us create for ourselves.

But in fact, we are animals - vulnerable, mortal, physical - desperately dependent on the rest of life on earth, on the oceans, on the weather and the oxygen we breathe, on fresh water, on micro-organisms within us and in the environment that we don't really understand - and all this is mostly hidden from us by the lives we have chosen to lead (perhaps we have chosen this way of living so we can avoid as much as possible the reality of what we actually are). The coronavirus was a wake-up call for some, a reminder that we are part of nature, not serenely above it and in control. A tiny virus can kill us, force us to slow the economy and rearrange our lives. We are part of life, not supremely above the rest of it just because we have some science and technology, and cool consumer products. We are animals. We are vulnerable. We need each other and the rest of nature to survive. Even so, each of us will die.

Can technology really save us from this reality, as opposed to just distracting us from it, making it easier to ignore? Can more intelligence save the planet in spite of humans, and/or save "human" consciousness as some new kind of non-biological force, continuing to evolve without bodies, growth, decay, and death? Do we want that? Is the human animal something we really want to overcome? Perhaps.

We are making impressive strides in AI. It can beat us at most board games (but not basketball yet! at least not without robots). It can "learn," at least in certain senses. It can generate plausible copy. It can imitate artistic styles. It will likely begin to provide emotional services for us, perhaps as teachers, caregivers, therapists, and so forth. But will it be able to solve our problems as a species of mammal that has gone spiraling out of control?

Siri may "know" what time it is; but so did the wristwatch, and so did the earliest mechanical clock with which this course began. I suspect that the only intelligence that can potentially understand when it is time to wake up, at least for the foreseeable future, is still our own human intelligence, such as it is, including our (often stunted) emotional and ethical intelligence. After the atomic bomb was dropped in 1945, Albert Einstein reportedly remarked: "It has become appallingly obvious that our technology has exceeded our humanity." As Freeman Dyson later warned, "ethical progress is the only cure for the damage done by scientific progress" (Dyson 1997). Humans have made remarkable scientific and technological progress in our short period of civilization. Perhaps more progress than we can handle.

I think we are making ethical progress as well, but it has been slow, much slower than our technological progress seems to be. Can we rely on AI to make the rest of that ethical progress for us, or do we have to do it ourselves? It doesn't seem like cheap drama to me to say that we are at a critical moment in the story of humankind. We have the power to destroy ourselves in several different ways; we have shaped tools that could bring about our own destruction and widespread suffering, and much suffering besides our own (they already do). We also have more and more tools that can connect us, help us grow, help us understand, and show us directions for change. Are we going to turn out to be the hero, the villain, or the tragic victim of this story? Will we do it? Will we grow up? I don't personally think these are rhetorical questions. They are real questions for us all. Never mind whether a machine can become human. Can an animal species really become "human," in the best possible sense of the word?
