Yuval Harari's new book, Homo Deus, argues that automation is making people unnecessary. In particular, the less powerful people (a.k.a. the masses) may come to be regarded as unnecessary by companies and governments. Japan and the countries of Europe provide free health care to their populations, and even the myopic US has some form of social safety net. Harari's point is that one of the motivations for government programs that take care of people has been to strengthen a country's industry and its army. But if a country's industrial and military needs can be met with robots, governments may have no reason to care about people.
As the recent election made even clearer, voters have no particular interest in electing politicians who care about them, or in policies that promote their well-being. If governments decide that people are disposable, the people themselves will not be inclined to disagree. A majority of us have been taught to blame ourselves for our misfortunes and to discount the benefits of working together cooperatively to ensure that we are all taken care of.
"The reason to build all these mass social service systems was to support strong armies and strong economies. Already the most advanced armies don’t need [as many] people. The same might happen in the civilian economy." (Yuval Harari, "The Post-Human World," Atlantic, 2017 Feb 20)Indeed, we are, in certain ways, already beginning to lose the need for our selves -- or certain parts of ourselves. We are increasingly handing over to algorithms our very own decision-making judgment -- traditionally a central aspect of what makes us who we are.
"Look at GPS applications....Today, everybody is blindly following what Waze is telling them. They’ve lost the basic ability to navigate by themselves. If something happens to the application, they are completely lost. You reach a juncture on the road, and you trust the algorithm. Maybe the junction is your career. Maybe it’s the decision to get married. But you trust the algorithm rather than your own intuition. The most important invention that’s spreading now is biometric sensors. They may become ubiquitous. Humans will consult their biometric data to determine how to live. That is really interesting and scary stuff, because we will no longer be in charge of our identity. We will outsource our executive decisions to biometric readings of our neurochemical signals to decide how to live." (Harari)In the not-too-distant future, perhaps, we won't even decide for ourselves what to have for dinner. Some Artificial Intelligence with access to our biometric readings will tell us what foods will best contribute to our health and happiness. What's left of "me" if I'm not even deciding for myself what to have for dinner?
"What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you....The very idea of an individual that exists, which has been so precious to us, is in danger." (Harari)But "the self disintegrates" isn't quite right. Rather, the illusion of a self disintegrates. When a person comes "to realize there is no true self," they are aware of what has always been the case but they didn't know it.
Longtime meditators grow accustomed to, and familiar with, the arising of thoughts. Thoughts simply arise. Sitting and watching them arise, we become aware that there was no decision for a given thought to appear -- it just appears. Spending time with this awareness, day after day, gradually integrates the understanding that our thoughts are not us -- they are just something that happens to us, like a cold or a traffic jam. In meditation we see for ourselves what the Buddha taught: there is no self.
If Yuval Harari is right, then popular neurobiology and artificial intelligence algorithms are now leading nonmeditators to understand what meditators have understood for centuries: the "very idea of an individual that exists, which has been precious to us" has always been an illusion.
The path of meditation tends to cultivate compassion and lovingkindness along with insight into the emptiness of our concepts of self. Along that path, there is a natural connection between seeing how the ego constructs its illusions and feeling the heart open to care for other beings. The path of artificial intelligence may be different. Algorithms may make better decisions than the wetware inside our skulls -- and growing comfortable with that may, as Harari suggests, disabuse us of our notions of a transcendent "decider" soul. It's just wetware (100 billion neurons firing across 100 trillion synapses) on the one hand and microchips on the other. But this path to the insight of no-self might do nothing to improve our compassion.
It might, indeed, merely facilitate a growing assumption on the part of our governments and our corporate leadership that humans are superfluous -- an assumption we might be increasingly making about ourselves.