Wednesday, November 17, 2021

Back to Square One

 



At the time I am writing this, the global coronavirus pandemic, which everyone had hoped and expected would finally be winding down in 2021, has roared back to life - beginning in mid-summer with the rise of a new variant of the disease.  And now there are news reports of shortages appearing in stores and supermarkets again, similar to those that shocked and alarmed everyone when the pandemic first became serious in early 2020.  Back then, the shortages began with toilet paper, then spread to paper products in general, then to soap and disinfectants, and finally to certain foods, like eggs.  When I observed how quickly these shortages appeared last year, I realized just how silly so many of those science fiction movies are that show the survivors of some sort of apocalypse periodically returning to abandoned grocery stores to restock their supplies of canned goods and other things.  The blunt fact is that if we ever have anything like a real apocalypse (and I am hoping that this present pandemic is not leading to one), store shelves will be cleared of everything very quickly.

 

What I’m about to describe is not for the squeamish, and if you are one of those, then I recommend skipping this paragraph and moving on to the next one.  A real apocalypse, brought on by some kind of massive, debilitating, and irreversible crisis, would probably play out along the following lines.  There would be initial attempts by the government to rein in general panic by putting some kind of civic programs in place to address the crisis in a systematic fashion.  But as people began to fear for their personal safety and that of their families, they would proceed to stock up on things: essentials first, and then just about everything.  The appearance of shortages would spur panic buying, exacerbating the shortages, until long lines of people formed at stores hoping to buy whatever was still available, perhaps at greatly inflated prices.  What would follow would be a complete breakdown in social order.  People would hunker down inside their homes, hoping that their stash of canned goods, water, and other supplies would sustain them until – somehow – this crisis finally passed.  At some point, however, as these stashes began to dwindle, roving bands of armed thugs, or just desperate people, would begin to raid other homes, perhaps systematically, in search of whatever supplies they still had, and would be willing, or driven, even to kill families that resisted them.  In the end, even these raiders would not survive, because there would be fewer and fewer families to raid.  Only those who produced genuine food sources, such as farmers, might survive longer, and only if they had the capacity to defend themselves from a surrounding mob of desperately hungry people.  Of course, if anything like a government managed to remain intact through an extreme crisis such as this, then perhaps some semblance of order could be restored or maintained through military means; but if the supply chain of goods and services had been irreparably destroyed, then even this source of order could not be maintained for long.


What could cause a general breakdown of that magnitude?  A large-scale nuclear war, and the nuclear winter that would follow it, is one obvious possibility, the threat of which has hovered ominously over us for more than half a century.  But widespread devastation could come from natural sources as well, such as a massive asteroid strike.  One or more environmental catastrophes might lead to a general and irreparable breakdown of the global food chain.  A pandemic, more serious than the one that we are presently plagued with, might do it.  And even something as subtle as a widespread, powerful electromagnetic pulse, arising naturally from a solar flare, or intentionally as a form of warfare or terrorism, could suddenly make most of our electronic devices inoperable, including our cellphones and computers, which in turn could produce widespread chaos.  We don’t like to think about any of these apocalyptic scenarios, but each of them is very plausible, and some are becoming increasingly plausible.

 

It is even more unpleasant to consider the long-term consequences of such a catastrophe.  Our civilization is one massive, interconnected network, and the things which we use and consume come from sources that are often far removed from us.  Many of the raw materials come from other countries, and many if not most of the end-products are manufactured somewhere else and imported here.  I don’t even know where the nearest farm to my home is.  I can’t imagine where I could search for food if I couldn’t find it in a store.  And so much of our daily life is contingent on a steady supply of electricity and water that the permanent interruption of these would be enough to constitute a devastating cataclysm.  We might think that we could adapt, and learn to manage without them – returning, for example, to pencils and paper to keep records and conduct rudimentary business.  But who would be able to manufacture pencils once our existing supply became exhausted, or paper, for that matter?

 


It is a very real possibility, then, that such an apocalypse could send the survivors into a state of barbarism.  Even the rudiments of civilization would be lost, and literacy itself might fall into a general – if not complete – decline.  When I consider such a scenario, it reminds me of a story that the philosopher Plato told in the Timaeus about a conversation between the Athenian statesman Solon and an old Egyptian priest.  Solon had brought up the subject of the Great Flood (the ancient Greeks, like the Hebrews, had their own flood legend), and was speculating about when it had actually occurred.  The priest replied, scornfully, “O Solon, Solon, you Hellenes are never anything but children, and there is not an old man among you.”  When Solon asked him to explain what he meant, the priest continued:

 

There have been, and will be again, many destructions of mankind arising out of many causes; the greatest have been brought about by the agencies of fire and water, and other lesser ones by innumerable other causes. . . . Whereas just when you and other nations are beginning to be provided with letters and the other requisites of civilized life, after the usual interval, the stream from heaven, like a pestilence, comes pouring down, and leaves only those of you who are destitute of letters and education; and so you have to begin all over again like children, and know nothing of what happened in ancient times, either among us or among yourselves.

 

Plato’s account has inspired many fanciful imaginations to conjure up elaborate theories about ancient civilizations that existed thousands – perhaps many thousands – of years ago, with technologies like our own, or comparable to our own, possessing the capabilities of flight or levitation, weapons of mass destruction, and lifestyles characterized by luxury and material abundance.  Plato himself, in the Timaeus and another work of his, the Critias, introduced the legend of Atlantis, which has since become the archetypal lost civilization.

 

Atlantis


But history has given us real examples of civilizations that have fallen into barbarism, the most prominent being that of ancient Rome.  After the Roman Empire fell to Germanic invaders, much of the culture and technology that had evolved in that civilization, and in the Greek civilization it had inherited, was lost, and for centuries the most tangible evidence that it had once existed was the network of old Roman roads that survived and spanned much of western Europe, including the British Isles.  Fortunately, the legacy of those civilizations was not completely lost, thanks in large part to Christian monks in western Europe, who retained and preserved the writings from that era along with literacy itself, and to Islamic scholars in the East.  Earlier still, before the rise of the Roman Empire, the ancient Greeks themselves had come out of a Dark Age that had lasted for centuries, during which the entire population had forgotten how to read and write.  This had come about when an earlier civilization, the Mycenaean, met its downfall around 1200 BC, probably due to either invasion or internal societal breakdown, and it was not until some 450 years later that literacy returned and ushered in the classical Greek civilization that began with Homer and Hesiod and culminated with Socrates, Plato, and Aristotle.

 

Could a downfall of that magnitude – a complete loss of civilization as we know it – actually happen in our own time?  In his book Why Information Grows: The Evolution of Order, from Atoms to Economies, physicist César A. Hidalgo argues that the growth and maintenance of our economic system, and of its complex products and services, depends upon an accumulation of knowledge and knowhow that far transcends the capabilities of any individual, and therefore requires the establishment of interlocking networks of people, businesses, and other institutions.  These networks can collectively accumulate and use the knowledge and knowhow that make the creation and application of these products and services possible.  But if the networks don’t exist, or if they are destroyed, then it simply becomes impossible to maintain the infrastructure necessary to support civilization and its continued evolution.  Hidalgo writes:

 

As a thought experiment, consider sending a group of ten teenagers to a desert island equipped with indestructible solar-powered laptops containing full copies of the entire Internet and every book and magazine ever written. Would this “DNA” be enough for this group of teenagers to unpack the information contained in these sources in a matter of five to ten generations? Would they be able to evolve a society that embodies in its networks the knowhow of metallurgy, agriculture, and electronics that we take for granted in our modern society, and which is described in the information that lies dormant in the books and websites they carry with them? Or would they be unable to unpack that information into productive knowhow, failing to re-create a society holding any considerable amount of the knowhow that was contained in the society that sent them on this strange quest? Of course, reproducing this “Lord of the Flies” scenario experimentally is unfeasible, but there are examples in our past that tell us that knowhow is often lost when social groups are isolated, and that the knowhow available in some locations is hard to reproduce, even when the attempts to do so are fantastic.

 


It is remarkable to consider just how dependent we have become in recent years on smartphones, the internet, and personal computers to carry on our day-to-day activities, even those involving recreation and leisure.  Without these, many of us – particularly those who have never lived in a world without them – would be completely lost and disoriented.  But, as Hidalgo argues, even if we could somehow retain the basic functionality of these devices in the wake of a great cataclysm, they would ultimately be insufficient to help us preserve or restore civilization if the social and economic networks that were responsible for bringing them into existence and supporting them had been obliterated.

 

There have certainly been people who believe that a complete societal collapse is possible.  In the early 1980s, when Cold War hostilities between the U.S. and the U.S.S.R. had reignited after the Soviet invasion of Afghanistan and the election of the hawkish Ronald Reagan, and the U.S. economy had been mired in stagnation for more than a decade, many feared that a general collapse was imminent, either due to a complete economic breakdown, or an apocalyptic world war, or both.  A movement called survivalism – which had actually begun in response to Cold War and economic fears in the 1960s – grew steadily in popularity and peaked in the 1980s, with books like Life After Doomsday by Bruce D. Clayton and Live Off the Land in the City and Country by Ragnar Benson.  Survivalists believed that they could weather a general catastrophe by forming intentional communities – often in isolated areas – which learned how to grow their own food, build their own shelters, and practice other basic survival skills that would enable them to preserve their existence indefinitely.  These communities usually also accumulated weapons, and learned how to use them, in order to defend themselves against any stragglers in the chaotic, post-apocalyptic world who might try to plunder what they had.  This movement was satirized in the 1983 movie The Survivors, starring Robin Williams and Walter Matthau, in which Robin Williams’ character joins one of these communities, only to discover that its leader is secretly profiting from the fear and paranoia that he has engendered among his followers.  While the movie’s depiction of survivalists verged on the cartoonish, it did reflect a misgiving on the part of the general public about groups like these.  They seemed to be interested not so much in preserving civilization in general as in preserving people like themselves, whether this similarity was along religious (often cultish), racial (generally white), or political (usually extremely conservative) lines.  In fact, these groups often believed that the very collapse of civilization would be a verification that it had followed a wrong course, and that a future course along the same destructive lines could only be avoided if they could recast civilization in their own image of what a healthy one would look like: an image reflecting themselves and their particular beliefs.



Still, the survivalists might have been onto something.  If we ever have a complete collapse of civilization, its eventual restoration will probably be contingent on the continued existence of certain pockets of survivors who have managed to weather the worst of the catastrophe, or series of catastrophes, and developed the means to sustain themselves and their offspring.  As these pockets grow and begin to form new networks, a foundation might form that could support the renewed growth of economies and technology.

 

In the electricity industry, a new term has become popular in recent years: resiliency.  Traditionally, the quality of electricity service has been measured in terms of reliability, which is the percentage of time that customers have access to electricity.  While the industry generally maintains a very high standard for this measure (well above 99%), there has been a realization – particularly in the wake of extreme weather events such as Superstorm Sandy in 2012, which left large parts of New York City and surrounding areas without power for several days – that there is more to providing reliable electricity service than repairing downed power lines from time to time.  In the face of catastrophic outages, which could cripple power plants and transformers, in addition to downing power lines, a more general strategy needs to be in place for restoring service.  This strategy would involve a phased approach for literally rebuilding the electric system – at least in places – in order to resume service.  It would entail making sure that equipment redundancies, back-up systems, and elaborate restoration strategies – often involving a coordinated and concentrated effort among disparate and even widely dispersed entities – are in place to handle such potential extreme disruptions.
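To make that reliability figure concrete, here is a minimal back-of-the-envelope sketch (in Python, with the 99.97% value chosen purely as an assumed illustration, not a statistic quoted from any particular utility) of how an availability percentage translates into hours without power over a year:

```python
def outage_hours_per_year(availability_pct):
    """Translate a service-availability percentage into expected annual outage time."""
    hours_in_year = 8760  # 365 days * 24 hours
    return hours_in_year * (1 - availability_pct / 100)

# An assumed, seemingly excellent 99.97% availability still allows
# roughly 2.6 hours without power over the course of a year.
print(round(outage_hours_per_year(99.97), 1))   # 2.6
```

The arithmetic illustrates why the percentage alone is an incomplete measure: it describes how rarely the lights go out under ordinary conditions, while resiliency concerns how, and how quickly, service can be rebuilt after an extraordinary event.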

 


Perhaps we should start thinking along the same lines about civilization in general: about preserving its resiliency in the face of a potentially crippling widespread catastrophe, or series of catastrophes.  It might involve something like the following phases:

 

1. Ensuring that at least part of the general populace, if not most of it, or ideally all of it, has knowledge of how to form autonomous, self-supporting survival pockets that will enable them to weather long-run disruptions in food supply, water supply, and electricity service.

2. Storing the vital information of our civilization in a way that ensures both its survival and its general accessibility, even under the worst circumstances, so that literacy is maintained among the survivors.

3. Having a plan in place to guide the general formation of networks among individual pockets of survivors, concomitant with the phased, gradual restoration of practices and technologies that will enable the return of civilization, such as mining and metallurgy, basic industry, the establishment of larger trade networks, and the resumption of utility services, including water and electricity.

 

I suspect that the politically powerful and wealthiest members of our civilization already have some sort of plans in place to protect themselves.  But, as with the survivalists, such plans would be self-serving and ultimately short-sighted if they don’t provide a framework for the restoration of the broad-based infrastructure that would be essential for preserving or bringing back our civilization.  Without this, we could see a future not unlike those of some dystopian science fiction novels, in which bands of relatively primitive human beings have only dim memories of some lost golden age when their ancestors could fly through the air, live in complete comfort with abundant food, and enjoy entertainments that now seem possible only through some sort of magic.  This is certainly a future that we would not want to leave to our descendants – even our distant ones.  I’m sure that all of us would instead like to believe that the distant future of humanity will be a “Golden Age” that will make the present one pale by comparison.  And so the old adage “Hope for the best but prepare for the worst” is a philosophy that all of us – both individually and collectively – should take to heart . . . and put into practice, before it’s too late.




Tuesday, June 29, 2021

Theater of the Mind

 



I’ve been doing a lot of thinking lately . . . about thinking.  I’ve certainly had the time to do it, with so much time on my hands.  In fact, over the past couple of years I’ve felt like I’ve won the “time lottery”, having found much more of it available to me after retiring at the end of 2018, and then even more after being generally housebound due to the coronavirus epidemic.  This abundance of time is something that I looked forward to during my entire adult life: a future where I would have the freedom to do anything I want but, more important, to think about anything I want.  You see, I was never one of those “bucket list” people that had a list of potential experiences that I wanted to engage in someday before I died – like skydiving, or whitewater rafting, or bungee jumping, or even taking a vacation to some exotic location – and doing each of these when I found the opportunity: checking them off of my list, like a scavenger hunt or a bingo card.  Instead, I always looked forward to some future time in my life when I could just indulge in contemplation for its own sake: about the meaning of life, the essence of reality, and why things are the way they are and what they could be.  When I went to college, I would have much preferred studying philosophy, or maybe history, but I studied engineering instead, because, having come from a “blue collar” working class background, I wanted to get into a career where I could someday, at some distant future time, upon retirement, find myself in a place where I had an abundance of personal freedom, and time .  .  . to think.

And now, over these past couple of years, I’ve had a greater abundance of time than I could have ever hoped for, and yet, when I think about how I’ve spent it, I am generally appalled.  Like so many who have been similarly housebound during much of the pandemic, I’ve found myself devoting more time to things like “binge-watching” movies and television programs, or indulging in even more blatantly inane time-filling activities.  But the travesty of it all is even starker when I just reflect on what I think about from moment to moment, and the general banality and triviality of my thoughts.  There are very few “deep thoughts” here.  I suspect that if somebody could review the contents of my moment-to-moment thinking for some recent stretch of time, the experience would not be unlike that scene in the movie Jaws where a recently killed shark’s stomach is cut open in order to inspect what it had eaten during the last days of its life.  It was not a very pretty sight.



A friend of mine proudly told me last year that she was now reading several books a month, because of the pandemic.  I can’t help but suspect, however, that these books are of the escapist fiction variety that is only one level above the similar escapist entertainment on television that most of us are indulging in more heavily now.  Still, her remark left me jealous.  I did, shortly after my retirement, start a “meaning of life” book club with some friends of mine, after having prepared a list of “deep” questions that I had compiled during my life: questions that I said I would someday like to devote myself to studying, and perhaps answering, when time permitted.  While the club has been a great boon to me, I must confess that even with this, I am probably spending hardly more than half an hour a day reading each of our monthly selections, and maybe the same amount of time reflecting on what I read.  It has given me a respite, but not rescued me, from a nearly constant condition of intellectual idleness.

And these reflections cause me to wonder about the many books written in recent years that talk about a possible future of enhanced human beings, where we are able to integrate artificial intelligence into our natural intelligence and dramatically increase the capability of our thinking.  What would an “intellectually-enhanced” human being think about?  And, more to the point, what would such a person want to think about?



I suspect that my own mental experiences are not unlike those of most people, and that if they could review the contents of their own minds – the actual moment-to-moment thoughts that they experience – these would consist of generally banal things: mulling over trivial concerns or tasks of the moment; indulging in or anticipating pleasant experiences; engaging in escapist entertainment through reading or television; enjoying happy memories; stewing over perceived slights, or memories of them; obsessing over fears – real, exaggerated, or imaginary; carrying on conversations with others – probably about similarly banal things like gossip, recent movies watched, or sports; and immersing oneself in fantasies.  One’s mind might be applied, at times, to more genuinely challenging diversions, like working crossword or sudoku puzzles, or games of other sorts.  And then there are those “bucket list” activities, which one can plan for, anticipate, experience, and then enjoy in memory.

And so I return to my earlier question:  If we were all suddenly gifted with “enhanced” intellects, whether it be through interfacing with computers, or genetic alterations, or chemicals, then what would this mean, exactly?  How would – or could – our thinking be “better” than it is now?  Even if we had the capability to, say, calculate Pi (π) to 30 decimal places, would we want to?  Would we even derive any satisfaction from doing so?
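As it happens, even an unenhanced mind armed with a short program can get π to 30 decimal places.  Here is a minimal sketch using Machin’s formula, π = 16·arctan(1/5) − 4·arctan(1/239); the code and its function names are only my own illustration, not anything drawn from the books or films discussed here:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

def arctan_recip(x, digits):
    """arctan(1/x) from its Taylor series, using Decimal arithmetic."""
    eps = Decimal(10) ** -(digits + 5)       # stop once terms are negligible
    power = Decimal(1) / x                   # holds 1 / x^(2k+1), starting at k = 0
    total = power
    k = 0
    while power > eps:
        k += 1
        power /= x * x
        total += power / (2 * k + 1) * (-1 if k % 2 else 1)
    return total

def machin_pi(decimals=30):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = decimals + 10        # working precision with guard digits
    pi = 16 * arctan_recip(5, decimals) - 4 * arctan_recip(239, decimals)
    # keep exactly the first `decimals` digits after the point (truncated, not rounded)
    return pi.quantize(Decimal(10) ** -decimals, rounding=ROUND_DOWN)

print(machin_pi(30))   # 3.141592653589793238462643383279
```

Which only sharpens the question: the digits are easy enough to produce; what is harder to say is what satisfaction producing them would bring.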



In the 1976 film The Man Who Fell to Earth, David Bowie portrayed an alien with advanced intelligence who has come to Earth to bring back water for his drought-ravaged home world.  But while on Earth, he succumbs to the temptation of popular earthly vices, and finds that the most enjoyable way to engage his intellect – in addition to dulling it with alcohol – is by watching several televisions – each tuned to a different channel – simultaneously.  If we developed intelligence rivaling that of David Bowie’s alien character, would we follow a similar course: not just “binge-watching” television, but binge-watching several series at the same time?  Would we develop an even greater temptation to cloud or distort our thinking with alcohol and drugs?




The 1998 Japanese film After Life addresses the question of the ideal mental life in a different way.  It imagines a heaven where each recently deceased arrival is invited to review his or her entire life and come up with the single happiest memory, which will then be experienced for the rest of eternity.  It is an intriguing idea, and it inspired me to ask what single memory I would choose to revisit over and over again.  But as I contemplated this, I realized that it is difficult, maybe even impossible, to find some life event that was truly, completely happy.  I think back to what on the surface was a set of particularly pleasant memories: the summer barbecues that my family held each year.  But I suspect that if I literally relived even the best of these, I would find that it was filled with little annoyances and distractions, like somebody over- or under-cooking the hamburgers and steaks, and various interpersonal melodramas going on among the family members.  I’m not sure how pleasant it would be to actually relive these experiences, as opposed to merely reflecting on them later.  Similarly, some of my greatest and happiest personal successes were preceded by great stress and anxiety, and it was only after their successful culmination that I was able to experience something like euphoria.  Here, too, I wonder how pleasant reliving the entire episode would actually be.  It seems that I would only enjoy these experiences if I could be partially removed from them: witnessing them the same way that I might watch a television drama, as the characters in this movie apparently did.

In the pilot episode of the original Star Trek series, “The Cage” (whose footage was later incorporated into the two-part episode “The Menagerie”), a man is captured on a planet by a race of humanoid beings who have developed immense mental powers.  It is explained to him by a fellow captive that these abilities became a sort of drug to these aliens, as the vivid dreams and fantasies that they were able to experience became more important to them than reality, and they eventually lost the ability to maintain the machinery of their civilization.  It is an intriguing and perhaps not all that unrealistic cautionary tale of what might befall us if we develop enhanced mental capabilities, only to discover that our principal desire is to use them in exactly the same, ultimately crippling, way.




But of course there are natural ways that we increase our capacity to think, and we engage in them all the time: through formal education, instructional videos, tapes, and live lectures, reading, and exposure to new experiences that broaden the mind.  I’ve certainly done more than my share of these, particularly with respect to formal education, and it only heightens the unpleasant awareness that I frequently have of the general shallowness of my thinking.  Often I will find myself perusing my bookcase shelves and surveying the many textbooks on engineering, applied mathematics, and other sciences that I retained from my college days.  I feel a genuine sense of depression over how little of that knowledge I managed to use during my lifetime in some practical application of benefit to others, or even to myself.  I remember, when I was an undergraduate student in electrical engineering, attending a speech given by a successful man who was an alumnus of that program at my university.  He said that it might surprise us students to learn that most of us would probably never use a convolution integral, or a Fourier transform, or complex algebra, or any of the other grueling mathematical and engineering applications that we had been compelled to learn, at any time in our future professional careers.  The more fundamental goal of teaching these, he told us, was to enable us to learn how to solve problems: by identifying the tools needed, locating them, mastering them, and then effectively applying them.  I was both inspired and relieved by his speech, particularly since I had a keen sense that my knowledge of these applications was already fading fast.  It is interesting that I don’t feel that keen sense of relief now, decades later, but rather remorse at knowledge that was never put to practical use.  For much of my adult life, outside of the formal academic environment, I also made a concerted effort to study philosophy, history, and the social sciences, in the hope that these might provide a guide for living, and for conducting myself in society.  Again, I’m not sure if these had any impact at all, at least upon my personal life.  Only the biographies and autobiographies that I read of people in history whom I admired might have had such an impact, and even then much less, I think, than I had hoped for.  Of course, the general truism is that training and educating one’s mind generally enables one to have a more lucrative occupation.  But this leads to the same disconcerting result: a more lucrative occupation provides more time for leisure, particularly after retirement.  And again one faces the specter of the banal theater of the mind.



And, too, in the many discussions about broadening one’s mind, and expanding one’s consciousness, what is often forgotten is that the very process that enables an intelligent engagement with the surrounding environment is one of limitation, rather than expansion.  Our perceptual faculties have been honed by evolution to function only within narrow bandwidths, because this is apparently the most effective and economical way that we can thrive and survive within our environment.  We see, for example, only a finite spectrum of light, and are blind to that which exists in the infrared and ultraviolet regions.  Similarly, as any dog owner will attest, there are sound frequencies that are imperceptible to us, some of which can be heard by other species.  We are limited, too, in the physical range of our perceptual faculties, and can only effectively sense things within a certain distance.  There is also a sort of size limitation that affects what we perceive in the world around us: for example, an entire universe of life exists at the microscopic level that we would be completely unaware of, but for the fact that we are often affected indirectly by it, through infectious diseases, among other things.  And even much of what we can and do perceive is actively screened out of our awareness, so that we can focus that awareness on what we judge to be most relevant at any particular moment.  When I look into a room, while I can actually “see” everything within it – the shelves full of books, the flooring, the windows, and the walls – I let most of these things remain unnoticed and obscure in the periphery of my vision, as I direct my gaze to the particular objects of interest to me.  Similarly, I regularly “tune out” sounds and other sensations that are monotonous, repetitive, or (judged to be) inconsequential.  And, beyond all of this natural limitation and “filtering” of perceptions in the present moment, there is further editing after these enter my memory, and I find that just a fraction of even what did occupy my attention and thoughts remains readily accessible to that memory after only a relatively short amount of time.  Given these facts, one cannot help but wonder: how, exactly, would a drug, or an electronic augmentation, or a form of mental discipline that expanded my field of awareness and/or my memory really improve the quality of my existence, if the quality of that existence is contingent upon how effectively I limit these things to begin with?



When trying to differentiate “higher” versus “lower” intelligence, and identify what it is that constitutes more advanced thinking, we often turn to the animal kingdom, and the distinctions between animal thought and human thought.  But the scientific study of animal intelligence has always been fraught with controversy, as the scientists who practice it face two conflicting poles of criticism.  On the one hand, there is the charge of anthropomorphism: that scientists are often tempted to ascribe too much intelligence to certain animals, simply because these animals behave in ways that superficially resemble human behavior.  And on the other hand, there is the charge that many people, including animal behaviorists, are inclined to exaggerate the gulf in intelligence between humans and other species, because of the need to believe that there is something unique and special about human beings, which starkly sets them apart from the rest of life.  This bias originally stemmed from religious beliefs about the special creation of humans, but even many scientists without a religious bent have been unable to resist the temptation to maintain this dogma, replacing special creation with the idea that the rise of Homo sapiens represented a sort of culmination of evolution on planet Earth.  Clearly the differences between animal and human consciousness are not as stark as many would like to believe.  Animals occupy much of their thinking in ways similar to ours – focused on the basic desires of life (e.g., food, sex, security).  And like our thinking process, that of animals is enabled through a honing and limiting of perceptions and awareness, though in many cases their ranges of perception are different from, or even broader than, our own.  Animals have emotions, they feel pain, they have memories and anticipations, they sleep, and, as any dog or cat owner knows, some also dream; and if they are capable of creating such fictional dramas subconsciously, then is it such a great leap to assume that at least some animals are able to indulge in conscious fantasy as well?  Many of them certainly enjoy play, as we do.  I remember a time many years ago when I was sitting on the patio in my backyard, quietly relaxing, and had been there so long that the usual denizens of my backyard – squirrels, rabbits, and a few species of birds – became completely oblivious to, or at least unconcerned about, my presence, and so they began to engage in what was apparently their normal behavior when I was not around.  This was principally foraging for food, of course, but I was amused and intrigued to see that much of their behavior involved play, and not just with members of their own species, but among all of the other animals who were foraging for food.  The squirrels, rabbits, and birds postured and tussled with one another, but in a way that was clearly not intended to be genuinely hostile or threatening.  They were having a grand old time together.  Scientists also continue to find new evidence, among many species of animals, of the capability to plan, to reason, and to solve complex problems.  But we clearly believe that our thinking is more advanced – is better – than that of probably every other species of living being on the planet.  What is it about our thinking that makes it so?



The critical difference, I think, lies in our greater capability for accessing the knowledge and experience of others.  This began with the creation of spoken language, when we could better share the contents of our thoughts, and continued with the development of writing, and the keeping of physical records.  Eventually, not just the knowledge and experience of our living contemporaries, but also that of those who lived before us, could be accessed.  We had a larger menu from which to select in crafting our own thoughts.  We could experience both the memories and the fantasies of others, and also obtain practical information from a widening pool of acquired knowledge.  And the best thinkers seem to have a gift for interacting with this intellectual storehouse.  Their talent lies not in what they know, but in knowing what to know: a sort of meta-knowledge.  (And, after all, isn’t this just a continuation of what evolution was doing when it honed, refined, and limited our senses: compelling us to focus on what was most important to us?)  It brings back to mind that speech that I heard in college, about how it was not the specific information that we retained from our courses that was important, but the skill that we acquired in learning how to identify, find, and use the requisite knowledge to solve problems.  A person might be a walking encyclopedia, capable of being a champion on the game show Jeopardy!, but might find himself (unless he actually does manage to get onto the show and win prize money) leading a much more modest existence than someone who is merely more effective at marshalling information resources (either by finding them directly, or by enlisting people who can) in the service of some profitable enterprise.  In popular vernacular, this is the distinction often made between “book smarts” and “street smarts”.  And while the ranks of the “street smart” include grifters, they also include entrepreneurs, inventors, and successful managers.  Our entire civilization seems to rest on this ability to develop a widening base of generally accessible knowledge, and simultaneously to cultivate the skill – the “meta-knowledge” – to effectively draw from it in order to raise the quality of personal experience.  We see this skill applied not just in successful businesses, but in our private lives, both in practical pursuits and in the streaming of personal entertainment and gaming – the crafted fantasies of others – and, to the frustration of many teachers, in the vexing talent of our youth for finding an answer to any question they are confronted with, very quickly and at their fingertips, in their smartphones.

At this point I can hear from the reader a protest:  Is this, then, the culmination of human intelligence: the ability to develop an immense external storehouse of accumulated knowledge and experience, only so that we can draw from it to better entertain ourselves, and perhaps succeed in some practical projects as well?  Is this all there is to the accumulation and use of knowledge . . . is this all there is to thinking?  Many of the greatest thinkers in history, like Plato, Aristotle, and St. Augustine, would argue the reverse.  According to them, a life engaged in contemplation – thinking about profound things, in a non-pragmatic way, as an end in itself – was the epitome of a good life, or at least the best ultimate use of leisure time.  To think, in earnest, about the meaning of life, with no practical goal attached to it, is, in their opinion, to be a fully actualized human being.  It is a lofty goal, and maybe really is a worthy one, but in practical terms, how much of our time can we devote to such thinking, even if – as many of us have recently had – all the time in the world?


The School of Athens


There are certainly many tempting, alternative ways of engaging the mind these days that Plato and Aristotle would probably regard as unhealthy uses of leisure.  Five in particular seem to exert a strong draw upon people in our contemporary culture: 1) movies and television series (and while some are regarded as being of higher caliber than others, nearly all of them involve sex, violence, and/or intrigue); 2) escapist written fiction, also involving the same; 3) computer video gaming, usually with violent content; 4) internet pornography; and 5) social networking (e.g., Facebook, Twitter, Instagram, and TikTok).  But I wonder: what would be the difference, exactly, between a mind that had dedicated much if not most of its leisure time to contemplation and higher intellectual pursuits and one that had indulged exclusively in these other entertainments and diversions?  If the brains of each could be examined after death, would there be a conspicuous physical difference?  Would the difference have manifested itself instead in the character of each of the persons, with the first tending to have been more honorable in demeanor than the second?  Would the difference be more practical, the life of the first having been more accomplished and successful than that of the second?  (Or would those who followed the second course of escapist entertainment and diversions simply have a vague sense of regret, in their advanced years, that they had squandered the opportunity to live a more meaningful and accomplished life?)  I genuinely don’t know.  But if those unsuccessful studies of past years – studies that attempted to link children’s exposure to violent cartoons and comedies, and adolescents’ exposure to violent movies and television, with their later behavior as adults – are any guide, then it is very possible that there is little if any connection at all.


John Locke and Isaac Newton


It is easy to come up with some interesting historical counterexamples.  John Locke and Isaac Newton, two of the leading lights in the history of intellectual advancement, whose attainments were the direct consequence of lives dedicated to higher thought, were apparently contemptible cads in their behavior towards others.  I suspect that they are hardly exceptions, and that at the very least no positive correlation will ever be established between great thinkers and visionaries and their personal characters.  Pope Pius XII, who headed the Catholic Church from 1939 to 1958, was by all accounts a man who had dedicated his life to study, contemplation, and prayer, and, even as pope, maintained a self-imposed, regimented, and austere lifestyle in which he could continue these practices.  And yet, according to John Cornwell, the author of Hitler’s Pope, Pius XII was instrumental in derailing any attempts by the Catholic clergy in Germany and elsewhere to organize a concerted resistance against the emerging toxic policies and practices of Nazism in the 1930s, and he later persistently resisted entreaties from others to explicitly denounce Hitler’s Final Solution, thereby enabling, rather than opposing, the evil consequences of Nazism and Fascism in Europe.  Apparently he did so because he thought that these were lesser evils compared to the threat of Communism, as embodied in Stalin’s Russia, but also because of antisemitic beliefs and prejudices that he personally harbored.


Pope Pius XII


Any discussion of elevated thinking has to touch on the subject of spiritual or mystical enlightenment in general, which some contend is the most elevated form of thinking possible.  As an “unenlightened” person I of course can’t do full justice to what this state of consciousness is actually like.  But having studied various forms of meditation during my lifetime, I have discerned some common features among them.  In the standard practice of meditation, a particular technique is used – be it counting the breaths, focusing on an object, chanting a mantra, or simply sitting silently in a meditative pose – to quiet the mind.  The meditator is instructed not to resist active thinking, but merely to observe random thoughts as they arise, while not dwelling upon them, and not succumbing to the temptation to let them lead to others in turn.  Eventually, these thoughts arise less frequently, until finally what is attained is a placid awareness of being aware: a cultivation of “the Witness”, as it is sometimes called, or what might also be called a state of “meta-awareness”.  (A form of Zen in which the practitioner is instructed to meditate intensely on a “koan”, a thought puzzle with no logical solution, such as “What is the sound of one hand clapping?”, is an interesting variant which apparently induces elevated thought by short-circuiting the traditional rational thought processes of the mind.)  I don’t know if enlightenment represents an extreme and/or extended state of this meta-awareness, but to the outside observer it seems to induce a condition of extreme placidity in the enlightened.  Paul Brunton, in his 1934 book A Search in Secret India, describes his encounters with several enlightened men and women in India, many of whom spent hours if not days sitting in what seemed to be an extended, blissed-out, trancelike state.  He found it frustrating that in spite of their supposed condition of enlightenment, few of these sages engaged in any activities that might improve the condition of their fellow Indians, or even seemed to care.  And none could give him any practical advice or wisdom to take back to a Europe that was descending into chaos.  I wonder how one might use artificial intelligence to simulate enlightenment.  It would seem to require programming a computer to have a higher-level awareness of its capacity to receive data and to process information, without actually processing any information.  Even if such a thing were possible, would it truly represent the highest level of artificial intelligence?  It seems that, like those blissed-out mystics, such a computer would not produce anything of positive consequence, if it produced anything at all.




And I leave myself (and the reader) with a final question:  Is the general quality of thinking of the contemporary human better than that of our ancestors?  If so, what constitutes the improvement, or the difference?  Certainly our earliest ancestors were limited in their capability to share their knowledge, their experiences, and their fantasies with others, even after the development of language.  The thoughts of the common person were dominated then by their drudgery, and perhaps enlivened a bit by their personal fantasies.  But as myths were passed on in the campfire stories of elders, and, in later generations, historical sagas like The Iliad were recited in verse by bards, the capacity for shared stories, histories, and tall tales to enrich the mind grew.  The written word, and the keeping of written records, increased this capacity exponentially, as did the eventual invention of the printing press, the telephone, the radio, television, and the internet.  Here again, it seems that the widening pool of knowledge, along with our improved capacity to draw upon it, has elevated the quality of our thinking.  And it is interesting to note that a recurring fear at every stage of this development has been that the democratization of knowledge is being accompanied by, or even in danger of being replaced by, a sort of information-based decadence.  There have been many incarnations of this fear, such as those that accompanied the rise of dime novels, sensationalist “yellow” journalism in the popular press, comic strips, escapist entertainment dominating the radio and later the television “boob tube”, and now the internet, with the ready access it provides to false information, hate groups, propaganda, skillfully directed mass marketing, and pornography, among other things.  We have a far greater capacity now, than at any time in human history, to tap into a massive base of shared knowledge, but also into our shared fears, delusions, prejudices, and vices.  Are both simply two sides of the Janus-face that represents our individual and collective advancement?  Is there a real danger that the dark side of this face will overshadow the bright one, and lead to our collective downfall rather than a culmination of this advancement?  Or is it merely incumbent upon each one of us to keep the dark side in check, while cultivating, as best as we individually can, the bright one?

It has proven to be a daunting task for me, this thinking about thinking.  I want to enjoy thinking, while trying to avoid the risk of having it tainted, at least too much.  It will probably always be a tightrope walk.  In any case, these ruminations have inspired me to engage in some more “heavy” thoughts beyond those inspired by my book club.  I plan to try to get through a one-volume edition of the complete works of Plato over the next year.  I will also return to one of the most profound and challenging philosophy books that I read in my youth and studied in college, Immanuel Kant’s Critique of Pure Reason, and read it again, along with the commentaries that I acquired to try to better understand it back then.  Perhaps I will get something more out of it now, in my maturity, than I managed to do back then.  But I know that in its maturity the mind also loses some of its nimbleness.  A common lament among mathematicians is that they have to try to do their greatest work in their twenties or thirties, because after that the mind gets lazy, less able to pursue a particular line of thought diligently and doggedly, and also seems to have less of a capacity for creativity in the way that it explores and combines novel ideas.  I remember that my own mind seemed much more creative in my twenties, when fresh and interesting thoughts seemed to race by a mile a minute.  But I also remember that my mind was less disciplined back then: more reluctant to do the less-exciting tasks associated with thinking and learning, such as rigorously developing or examining new ideas that I encountered.  And I was more prone to accept and adopt ideologies uncritically, such as libertarianism, in order to find simple answers to serious and complex social problems.  So perhaps my mature mind will benefit from this quest for higher thinking in ways that it couldn’t have decades ago.  But I won’t be devoting too much time to this enterprise.  My sister has recommended that I watch Game of Thrones, and I have begun binge-watching that as well.