Wednesday, November 17, 2021

Back to Square One

 



At the time I am writing this, the global coronavirus pandemic, which everyone had hoped and expected would finally be winding down in 2021, has roared back to life, beginning in mid-summer with the rise of a new variant of the disease.  And now there are news reports of shortages appearing in stores and supermarkets again, similar to those that shocked and alarmed everyone when the pandemic first became serious in early 2020.  Back then, the shortages began with toilet paper, then spread to paper products in general, then to soap and disinfectants, and finally to certain foods, like eggs.  When I observed how quickly these shortages appeared last year, I realized just how silly so many of those science fiction movies are that show the survivors of some sort of apocalypse periodically returning to abandoned grocery stores to restock their supplies of canned goods and other things.  The blunt fact is that if we ever have anything like a real apocalypse (and I am hoping that this present pandemic is not leading to one), store shelves will be cleared of everything very quickly.

 

What I’m about to describe is not for the squeamish, and if you are one of those, then I recommend skipping this paragraph and moving on to the next one.  A real apocalypse, brought on by some kind of massive, debilitating, and irreversible crisis, would probably play out along the following lines.  There would be initial attempts by the government to rein in general panic by putting some kind of civic programs in place to address the crisis in a systematic fashion.  But as people began to fear for their personal safety and that of their families, they would proceed to stock up on things: essentials first, and then just about everything.  The appearance of shortages would spur panic buying, exacerbating the shortages, until long lines of people formed at stores hoping to buy whatever was still available, perhaps at greatly inflated prices.  What would follow would be a complete breakdown in social order.  People would hunker down inside their homes, hoping that their stash of canned goods, water, and other supplies would sustain them until – somehow – this crisis finally passed.  At some point, however, as these stashes began to dwindle, roving bands of armed thugs, or just desperate people, would begin to raid other homes, perhaps systematically, in search of whatever supplies remained, and would be willing, or driven, to kill any families that resisted them.  In the end, even these raiders would not survive, because there would be fewer and fewer families to raid.  Only those who produced genuine food sources, such as farmers, might survive longer, and only if they had the capacity to defend themselves from a surrounding mob of desperately hungry people.  Of course, if anything like a government managed to remain intact through an extreme crisis such as this, then perhaps some semblance of order could be restored or maintained through military means; but if the supply chain of goods and services had been irreparably destroyed, then even this source of order could not be maintained.


What could cause a general breakdown of that magnitude?  A large-scale nuclear war, and the nuclear winter that would follow, is one obvious possibility, the threat of which has hovered ominously over us for more than half a century.  But widespread devastation could come from natural sources as well, such as a massive asteroid strike.  One or more environmental catastrophes might lead to a general and irreparable breakdown of the global food chain.  A pandemic, more serious than the one that we are presently plagued with, might do it.  And even something as subtle as a widespread, powerful electromagnetic pulse, arising naturally from a solar flare, or intentionally as a form of warfare or terrorism, could suddenly make most of our electronic devices inoperable, including our cellphones and computers, which in turn could produce widespread chaos.  We don’t like to think about apocalyptic scenarios, but the occurrence of any of these is very plausible, and in some cases is becoming increasingly plausible.

 

It is even more unpleasant to consider the long-term consequences of such a catastrophe.  Our civilization is one massive, interconnected network, and the things which we use and consume come from sources that are often far removed from us.  Many of the raw materials come from other countries, and many if not most of the end products are manufactured somewhere else and imported here.  I don’t even have any idea where the nearest farm is to where I live.  I can’t imagine where I would search for food if I couldn’t find it in a store.  And so much of our daily lives is contingent on a steady supply of electricity and water that the permanent interruption of these alone would be enough to constitute a devastating cataclysm.  We might think that we could adapt, and learn to manage without them – returning, for example, to pencils and paper to keep records and conduct rudimentary business.  But who would be able to manufacture pencils once our existing supply became exhausted, or paper, for that matter?

 


It is a very real possibility, then, that such an apocalypse could send the survivors into a state of barbarism.  Even the rudiments of civilization would be lost, and literacy itself might fall into a general – if not complete – decline.  When I consider such a scenario, it reminds me of a story that the philosopher Plato told in the Timaeus about a conversation between the Athenian statesman Solon and an old Egyptian priest.  Solon had brought up the subject of the Great Flood (the ancient Greeks, like the Hebrews, had their own flood legend), and was speculating about when it actually occurred.  The priest replied, scornfully, “O Solon, Solon, you Hellenes are never anything but children, and there is not an old man among you.”  When Solon asked him to explain what he meant, the priest continued:

 

There have been, and will be again, many destructions of mankind arising out of many causes; the greatest have been brought about by the agencies of fire and water, and other lesser ones by innumerable other causes. . . . Whereas just when you and other nations are beginning to be provided with letters and the other requisites of civilized life, after the usual interval, the stream from heaven, like a pestilence, comes pouring down, and leaves only those of you who are destitute of letters and education; and so you have to begin all over again like children, and know nothing of what happened in ancient times, either among us or among yourselves.

 

Plato’s account has inspired many fanciful imaginations to conjure up elaborate theories about ancient civilizations that existed thousands – perhaps many thousands – of years ago, with technologies like our own, or comparable to our own, possessing the capabilities of flight or levitation, weapons of mass destruction, and lifestyles characterized by luxury and material abundance.  Plato himself, in the Timaeus and another work of his, the Critias, introduced the legend of Atlantis, which has since become the archetypal lost civilization.

 

Atlantis


But history has given us real examples of civilizations that have fallen into barbarism, the most prominent being that of ancient Rome.  After the Roman Empire fell to Germanic invaders, much of the culture and technology that had evolved in that civilization, and in the Greek civilization that it had inherited, was lost, and for centuries the most tangible evidence that it had once existed was the network of old Roman roads that survived and spanned much of western Europe, including the British Isles.  Fortunately, the legacy of those civilizations was not completely lost, thanks in large part to Christian monks in western Europe, who retained and preserved the writings from that era, along with literacy itself, and to Islamic scholars in the East.  Earlier still, before the rise of the Roman Empire, the ancient Greeks themselves had come out of a Dark Age that had lasted for centuries, during which the entire population had even forgotten how to read and write.  This had come about when an earlier civilization, the Mycenaean, met its downfall around 1200 BC, probably due to either invasion or internal societal breakdown, and it was not until some 450 years later that literacy returned and ushered in the classical Greek civilization that began with Homer and Hesiod and culminated with Socrates, Plato, and Aristotle.

 

Could a downfall of that magnitude – a complete loss of civilization as we know it – actually happen in our own time?  In his book Why Information Grows: The Evolution of Order, from Atoms to Economies, physicist Cesar A. Hidalgo argues that the growth and maintenance of our economic system and its complex products and services depend upon the accumulation of knowledge and knowhow that far transcends the capabilities of any individual, and therefore require the establishment of interlocking networks of people, businesses, and other institutions.  These networks can collectively accumulate and use the necessary knowledge and knowhow that make the creation and application of these products and services possible.  But if the networks don’t exist, or if they are destroyed, then it simply becomes impossible to maintain the necessary infrastructure to support civilization and its continued evolution.  Hidalgo writes:

 

As a thought experiment, consider sending a group of ten teenagers to a desert island equipped with indestructible solar-powered laptops containing full copies of the entire Internet and every book and magazine ever written. Would this “DNA” be enough for this group of teenagers to unpack the information contained in these sources in a matter of five to ten generations? Would they be able to evolve a society that embodies in its networks the knowhow of metallurgy, agriculture, and electronics that we take for granted in our modern society, and which is described in the information that lies dormant in the books and websites they carry with them? Or would they be unable to unpack that information into productive knowhow, failing to re-create a society holding any considerable amount of the knowhow that was contained in the society that sent them on this strange quest? Of course, reproducing this “Lord of the Flies” scenario experimentally is unfeasible, but there are examples in our past that tell us that knowhow is often lost when social groups are isolated, and that the knowhow available in some locations is hard to reproduce, even when the attempts to do so are fantastic.

 


It is remarkable to consider just how dependent we have become in recent years on smartphones, the internet, and personal computers to carry on our day-to-day activities, even those involving recreation and leisure.  Without these, many of us – particularly those who have never lived in a world without them – would be completely lost and disoriented.  But, as Hidalgo argues, even if we could somehow retain the basic functionality of these devices in the wake of a great cataclysm, they would ultimately be insufficient to help us preserve or restore civilization if the social and economic networks that were responsible for bringing them into existence and supporting them had been obliterated.

 

There have certainly been people who believe that a complete societal collapse is possible.  In the early 1980s, when Cold War hostilities between the U.S. and the U.S.S.R. had reignited after the Soviet invasion of Afghanistan and the election of the hawkish Ronald Reagan, and the U.S. economy had been in a state of extreme stagnation for more than a decade, many feared that a general collapse was imminent, either due to a complete economic breakdown, or an apocalyptic world war, or both.  A movement called survivalism – which had actually begun in response to Cold War and economic fears in the 1960s – steadily grew in popularity and peaked in the 1980s, with books like Life After Doomsday by Bruce D. Clayton and Live Off the Land in the City and Country by Ragnar Benson.  Survivalists believed that they could weather a general catastrophe by forming intentional communities – often in isolated areas – which learned how to grow their own food, build their own shelters, and practice other basic survival skills that would enable them to preserve their existence indefinitely.  These communities usually also accumulated weapons, and learned how to use them, in order to defend themselves against any stragglers in the chaotic, post-apocalyptic world who might try to plunder what they had.  This movement was satirized in the 1983 movie The Survivors, starring Robin Williams and Walter Matthau, in which Robin Williams’ character joins one of these communities, only to discover that its leader is secretly profiting from the fear and paranoia that he has engendered among his followers.  While the movie’s depiction of survivalists verged on the cartoonish, it did reflect a misgiving on the part of the general public about groups like these.  They seemed interested not so much in preserving civilization in general as in preserving people like themselves, whether the similarity was along religious (often cultish), racial (generally white), or political (usually extremely conservative) lines.  In fact, these groups often believed that the very collapse of civilization would be a verification that it had followed a wrong course, and that a future course along the same destructive lines could only be avoided if they could recast civilization in their own image of what a healthy one would look like: an image reflecting themselves and their particular beliefs.



Still, the survivalists might have been onto something.  If we ever have a complete collapse of civilization, its eventual restoration will probably be contingent on the continued existence of certain pockets of survivors, who have managed to weather the worst of the catastrophe, or series of catastrophes, and developed the means to sustain themselves and their offspring.  As these pockets grow and begin to form new networks, a basis might form that will support the growth of economy and technology.

 

In the electricity industry, a new term has become popular in recent years: resiliency.  Traditionally, the quality of electricity service has been measured in terms of reliability, which is the percentage of time that customers have access to electricity.  While the industry generally maintains a very high standard for this measure (well above 99%), there has been a realization – particularly in the wake of extreme weather events such as Superstorm Sandy in 2012, which left large parts of New York City and surrounding areas without power for several days – that there is more to providing reliable electricity service than repairing downed power lines from time to time.  In the face of catastrophic outages, which could cripple power plants and transformers, in addition to downing power lines, a more general strategy needs to be in place for restoring service.  This strategy would involve a phased approach for literally rebuilding the electric system – at least in places – in order to resume service.  It would entail making sure that equipment redundancies, back-up systems, and elaborate restoration strategies – often involving a coordinated and concentrated effort among disparate and even widely dispersed entities – are in place to handle such potential extreme disruptions.

 


Perhaps we should start thinking along the same lines about civilization in general: about preserving its resiliency in the face of a potentially crippling widespread catastrophe, or series of catastrophes.  It might involve something like the following phases:

 

1. Ensuring that at least part of the general populace, if not most of it, or ideally all of it, has knowledge of how to form autonomous, self-supporting survival pockets that will enable them to weather long-run disruptions in food supply, water supply, and electricity service.

2. Storing the vital information of our civilization in a way that ensures both its survival and its general accessibility, even under the worst circumstances, so that literacy is maintained among the survivors.

3. Having a plan in place to guide the general formation of networks among individual pockets of survivors, concomitant with the phased, gradual restoration of practices and technologies that will enable the return of civilization, such as mining and metallurgy, basic industry, the establishment of larger trade networks, and the resumption of utility services, including water and electricity.

 

I suspect that the politically powerful and wealthiest members of our civilization already have some sort of plans in place to protect themselves.  But, as with the survivalists, such plans would be self-serving and ultimately short-sighted if they don’t provide a framework for the restoration of a broad-based infrastructure that would be essential for preserving or bringing back our civilization.  Without this, we could see a future not unlike that of some dystopian science fiction novels, in which a band of relatively primitive human beings has only dim memories of some lost golden age when their ancestors could fly through the air, live in complete comfort with abundant food, and enjoy entertainments that now seem possible only through some sort of magic.  This is certainly a future that we would not want to leave to our descendants – even our distant ones.  I’m sure that all of us instead would like to believe that the distant future of humanity will be a “Golden Age” that will make the present pale by comparison.  And so the old adage “Hope for the best but prepare for the worst” is a philosophy that all of us – both individually and collectively – should take to heart . . . and put into practice, before it’s too late.




Tuesday, June 29, 2021

Theater of the Mind

 



I’ve been doing a lot of thinking lately . . . about thinking.  I’ve certainly had the opportunity to do it, with so much time on my hands.  In fact, over the past couple of years I’ve felt like I’ve won the “time lottery”, having found much more of it available to me after retiring at the end of 2018, and then even more after being generally housebound due to the coronavirus pandemic.  This abundance of time is something that I had looked forward to during my entire adult life: a future where I would have the freedom to do anything I want but, more important, to think about anything I want.  You see, I was never one of those “bucket list” people who keep a list of potential experiences that they want to engage in someday before they die – like skydiving, or whitewater rafting, or bungee jumping, or even taking a vacation to some exotic location – checking each one off when the opportunity arises, like items in a scavenger hunt or squares on a bingo card.  Instead, I always looked forward to some future time in my life when I could just indulge in contemplation for its own sake: about the meaning of life, the essence of reality, and why things are the way they are and what they could be.  When I went to college, I would have much preferred studying philosophy, or maybe history, but I studied engineering instead, because, having come from a “blue collar” working class background, I wanted to get into a career where I could someday, at some distant future time, upon retirement, find myself in a place where I had an abundance of personal freedom, and time . . . to think.

And now, over these past couple of years, I’ve had a greater abundance of time than I could have ever hoped for, and yet, when I think about how I’ve spent it, I am generally appalled.  Like so many who have been similarly housebound during much of the pandemic, I’ve found myself devoting more time to things like “binge-watching” movies and television programs, or indulging in even more blatantly inane time-filling activities.  But the travesty of it all is even starker when I just reflect on what I think about from moment to moment, and the general banality and triviality of my thoughts.  There are very few “deep thoughts” here.  I suspect that if somebody could review the contents of my moment-to-moment thinking for some recent stretch of time, the experience would not be unlike that scene in the movie Jaws where a recently killed shark’s stomach is cut open in order to inspect what it had eaten during the last days of its life.  It was not a very pretty sight.



A friend of mine proudly told me last year that she was now reading several books a month, because of the pandemic.  I can’t help but suspect, however, that these books are of the escapist fiction variety, only one level above the similar escapist entertainment on television that most of us are now indulging in more heavily.  Still, her remark left me jealous.  Shortly after my retirement, I did start a “meaning of life” book club with some friends of mine, built around a list of “deep” questions that I had compiled during my life: questions that I said I would like to someday devote myself to studying and perhaps answering when time permitted.  While the club has been a great boon to me, I must confess that even with this, I am probably spending hardly more than half an hour a day reading each of our monthly selections, and maybe the same amount of time reflecting on what I read.  It has given me a respite, but not rescued me, from a nearly constant condition of intellectual idleness.

And these reflections cause me to wonder about the many books written in recent years that talk about a possible future of enhanced human beings, where we are able to integrate artificial intelligence into our natural intelligence and dramatically increase the capability of our thinking.  What would an “intellectually-enhanced” human being think about?  And, more to the point, what would such a person want to think about?



I suspect that my own mental experiences are not unlike those of most people, and that if they could review the contents of their own minds – the actual moment-to-moment thoughts that they experience – they would find these to consist of generally banal things: mulling over trivial concerns or tasks of the moment; indulging in or anticipating pleasant experiences; engaging in escapist entertainment through reading or television; enjoying happy memories; stewing over perceived slights, or memories of them; obsessing over fears – real, exaggerated, or imaginary; carrying on conversations with others – probably about similarly banal things like gossip, recent movies watched, or sports; and immersing oneself in fantasies.  One’s mind might be applied, at times, to more genuinely challenging diversions, like working crossword or sudoku puzzles, or games of other sorts.  And then there are those “bucket list” activities, which one can plan for, anticipate, experience, and then enjoy in memory.

And so I return to my earlier question:  If we were all suddenly gifted with “enhanced” intellects, whether it be through interfacing with computers, or genetic alterations, or chemicals, then what would this mean, exactly?  How would – or could – our thinking be “better” than it is now?  Even if we had the capability to, say, calculate Pi (π) to 30 decimal places, would we want to?  Would we even derive any satisfaction from doing so?



In the 1976 film The Man Who Fell to Earth, David Bowie portrayed an alien with advanced intelligence who has come to Earth to bring back water for his drought-ravaged home world.  But while on Earth, he succumbs to the temptation of popular earthly vices, and finds that the most enjoyable way to engage his intellect - in addition to dulling it with alcohol – is by watching several televisions – each tuned to a different channel – simultaneously.  If we developed intelligence rivaling that of David Bowie’s alien character, would we follow a similar course: not just “binge-watching” television, but binge-watching several series at the same time?  Would we develop an even greater temptation to cloud or distort our thinking with alcohol and drugs?




The 1998 Japanese film After Life addresses the question of the ideal mental life in a different way.  It imagines a heaven where each recently-deceased arrival is invited to review his or her entire life and come up with one single happiest memory, which will then be experienced for the rest of eternity.  It is an intriguing idea, and inspired me to ask what single memory I would choose to revisit over and over again.  But as I contemplated this, I realized that it is difficult, maybe even impossible, to find some life event that was truly, completely happy.  I think back to what on the surface was a set of particularly pleasant memories: the summer barbecues that my family held each year.  But I suspect that if I literally relived even the best of one of these, I would find that it was filled with little annoyances and distractions, like somebody over- or under-cooking the hamburgers and steaks, and various interpersonal melodramas going on among the family members.  I’m not sure how pleasant it would be to actually relive these experiences, as opposed to merely reflecting on them later.  Similarly, some of my greatest and happiest personal successes were preceded by great stress and anxiety, and it was only after their successful culmination that I was able to experience something like euphoria.  Here, too, I wonder how pleasant reliving the entire episode would actually be.  It seems that I would only enjoy them if I could do so by partially being removed from them: witnessing them the same way that I might watch a television drama, as the characters in this movie apparently did.

In the pilot episode for the original Star Trek series, called “The Cage” (whose footage was later incorporated into the two-part episode “The Menagerie”), a man is captured on a planet by a race of humanoid beings who have developed immense mental powers.  It is explained to him by a fellow captive that these abilities became a sort of drug to these aliens, as the vivid dreams and fantasies that they were able to experience became more important to them than reality, and they eventually lost the ability to maintain the machinery of their civilization.  It is an intriguing and perhaps not all that unrealistic cautionary tale of what might befall us if we develop enhanced mental capabilities, only to discover that our principal desire is to use them in exactly the same ultimately crippling way.




But of course there are natural ways that we increase our capacity to think, and we engage in them all the time: through formal education, instructional videos, tapes, and live lectures, reading, and exposure to new experiences that broaden the mind.  I’ve certainly done more than my share of these, particularly with respect to formal education, and it only heightens the unpleasant awareness that I frequently have of the general shallowness of my thinking.  Often I will find myself perusing my bookcase shelves and surveying the many textbooks on engineering, applied mathematics, and other sciences that I retained from my college days.  I feel a genuine sense of depression over how little of that knowledge I managed to use during my lifetime in some practical application of benefit to others, or even to myself.  I remember, when I was an undergraduate student in electrical engineering, attending a speech given by a successful man who was an alumnus of that program at my university.  He said that it might surprise us students to learn that most of us would probably never use a convolution integral, or a Fourier transform, or complex algebra, or any of the other grueling mathematical and engineering applications that we had been compelled to learn, at any time in our future professional careers.  The more fundamental goal of teaching these, he told us, was to enable us to learn how to solve problems: by identifying the tools needed, locating them, mastering them, and then effectively applying them.  I was both inspired and relieved by his speech, particularly since I had a keen sense that my knowledge of these applications was already fading fast.  It is interesting that I don’t feel that keen sense of relief now, decades later, but rather remorse at knowledge that was never put to practical use.  For much of my adult life, outside of the formal academic environment, I also made a concerted effort to study philosophy, history, and the social sciences, with the hope that these might provide a guide for living, and for conducting myself in society.  Again, I’m not sure if these had any impact at all, at least upon my personal life.  Only the biographies and autobiographies that I read of people in history whom I admired might have had such an impact, and even then much less, I think, than I hoped for.  Of course, the general truism is that training and educating one’s mind generally enables one to have a more lucrative occupation.  But this leads to the same disconcerting result: a more lucrative occupation provides more time for leisure, particularly after retirement.  And again one faces the specter of the banal theater of the mind.



And, too, in the many discussions about broadening one’s mind, and expanding one’s consciousness, what is often forgotten is that the very process that enables an intelligent engagement with the surrounding environment is one of limitation, rather than expansion.  Our perceptual faculties have been honed by evolution to function only within narrow bandwidths, because this is apparently the most effective and economical way that we can thrive and survive within our environment.  We see, for example, only a finite spectrum of light, and are blind to that which exists in the infrared and ultraviolet regions.  Similarly, as any dog owner will attest, there are sound frequencies that are imperceptible to us, some of which can be heard by other species.  We are limited, too, in the physical range of our perceptual faculties, and can only effectively sense things within a certain distance.  There is also a sort of size limitation that affects what we perceive in the world around us: for example, an entire universe of life exists at the microscopic level that we would be completely unaware of, but for the fact that we are often affected indirectly by it, through infectious diseases, among other things.  And even much of what we can and do perceive is actively screened out of our awareness, so that we can focus that awareness on what we judge to be most relevant at any particular moment.  When I look into a room, while I can actually “see” everything within it – the shelves full of books, the flooring, the windows, and the walls – I let most of these things remain unnoticed and obscure in the periphery of my vision, as I direct my gaze to the particular objects of interest to me.  Similarly, I regularly “tune out” sounds and other sensations that are monotonous, repetitive, or (judged to be) inconsequential.  And, beyond all of this natural limitation and “filtering” of perceptions in the present moment, there is a further editing after these enter my memory, and I find that just a fraction of even what did occupy my attention and thoughts remains readily accessible to that memory after only a relatively short amount of time.  Given these facts, one cannot help but wonder: how, exactly would a drug, or an electronic augmentation, or a form of mental discipline that expanded my field of awareness and/or my memory really improve the quality of my existence, if the quality of that existence is contingent upon how effectively I limit these things to begin with?



When trying to differentiate “higher” versus “lower” intelligence, and identify what it is that constitutes more advanced thinking, we often turn to the animal kingdom, and the distinctions between animal thought and human thought.  But the scientific study of animal intelligence has always been fraught with controversy, as the scientists who practice it face two conflicting poles of criticism.  On the one hand, there is the charge of anthropomorphism: that scientists are often tempted to ascribe too much intelligence to certain animals, simply because these animals behave in ways that superficially resemble human behavior.  And on the other hand, there is the charge that many people, including animal behaviorists, are inclined to exaggerate the gulf in intelligence between humans and other species, because of the need to believe that there is something unique and special about human beings which starkly sets them apart from the rest of life.  This bias originally stemmed from religious beliefs about the special creation of humans, but even many scientists without a religious bent have been unable to resist the temptation to maintain this dogma, replacing special creation with the idea that the rise of Homo sapiens represented a sort of culmination of evolution on planet Earth.  Clearly the differences between animal and human consciousness are not as stark as many would like to believe.  Animals occupy much of their thinking in ways similar to ours – focused on the basic desires of life (e.g., food, sex, security).  And like our thinking process, that of animals is enabled through a honing and limiting of perceptions and awareness, though in many cases their ranges of perception are different from, or even broader than, our own.  Animals have emotions, they feel pain, they have memories and anticipations, they sleep, and, as any dog or cat owner knows, some also dream; and if they are capable of creating such fictional dramas subconsciously, then is it such a great leap to assume that at least some animals are able to indulge in conscious fantasy as well?  Many of them certainly enjoy play, as we do.  I remember a time many years ago when I was sitting on the patio in my backyard, quietly relaxing, and had been there so long that the usual denizens of my backyard – squirrels, rabbits, and a few species of birds – became completely oblivious to, or at least unconcerned about, my presence, and so they began to engage in what was apparently their normal behavior when I was not around.  This was principally foraging for food, of course, but I was amused and intrigued to see that much of their behavior involved play, and not just with members of their own species, but among all of the other animals who were foraging for food.  The squirrels, rabbits, and birds postured with, and tussled with, one another, but in a way that was clearly not intended to be genuinely hostile or threatening.  They were having a grand old time together.  Scientists also continue to find new evidence among many species of animals of their capability to plan, to reason, and to solve complex problems.  But we clearly believe that our thinking is more advanced – is better – than that of probably every other species of living being on the planet.  What is it about our thinking that makes it so?



The critical difference, I think, lies in our greater capability for accessing the knowledge and experience of others.  This began with the creation of spoken language, when we could better share the contents of our thoughts, and continued with the development of writing, and the keeping of physical records.  Eventually, not just the knowledge and experience of our living contemporaries, but that of those who lived before us could be accessed as well.  We had a larger menu from which to select in crafting our own thoughts.  We could experience both the memories and the fantasies of others, and also obtain practical information from a widening pool of acquired knowledge.  And the best thinkers seem to have a gift for interacting with this intellectual storehouse.  Their talent lies not in what they know, but in knowing what to know: a sort of meta-knowledge.  (And, after all, isn’t this just a continuation of what evolution was doing when it honed, refined, and limited our senses: compelling us to focus on what was most important to us?)  It brings back to mind that speech that I heard in college, about how it was not the specific information that we retained from our courses that was important, but the skill that we acquired in learning how to identify, find, and use the requisite knowledge to solve problems.  A person might be a walking encyclopedia, capable of being a champion on the game show Jeopardy, but might find himself (unless he actually does manage to get onto the show and win prize money) leading a much more modest existence compared to someone who is merely more effective at marshalling information resources (either by finding them directly, or by enlisting people who can) in the service of some profitable enterprise.  In popular vernacular, this is the distinction often made between “book smarts” and “street smarts”.  And while the ranks of the “street smart” include grifters, they also include entrepreneurs, inventors, and successful managers.  Our entire civilization seems to rest on this ability to develop a widening base of generally accessible knowledge, and simultaneously to cultivate the skill – the “metaknowledge” – to effectively draw from it in order to raise the quality of personal experience.  We see this skill applied not just in successful businesses, but in our private lives, both in practical pursuits and in the streaming of personal entertainment and gaming – the crafted fantasies of others – and – to the frustration of many teachers – in the vexing talent of our youth to find an answer to any question they are confronted with, very quickly, at their fingertips, in their smartphones.

At this point I can hear from the reader a protest:  Is this, then, the culmination of human intelligence: the ability to develop an immense external storehouse of accumulated knowledge and experience, only so that we can draw from it to better entertain ourselves, and perhaps succeed in some practical projects as well?  Is this all there is to the accumulation and use of knowledge . . . is this all there is to thinking?  Many of the greatest thinkers in history, like Plato, Aristotle, and St. Augustine, would argue the reverse.  According to them, a life engaged in contemplation – thinking about profound things, in a non-pragmatic way, as an end in itself – was the epitome of a good life, or at least the best ultimate use of leisure time.  To think, in earnest, about the meaning of life, with no practical goal attached to it, is, in their opinion, to be a fully actualized human being.  It is a lofty goal, and maybe really is a worthy one, but in practical terms, how much of our time can we devote to such thinking, even if – as many of us have recently had – all the time in the world?


The School of Athens


There are certainly many tempting, alternative ways of engaging the mind these days which Plato and Aristotle would probably regard as unhealthy applications of the mind in leisure.  Five in particular seem to exert a strong draw upon people in our contemporary culture: 1) movies and television series, and while some are regarded as of higher caliber than others, nearly all of them involve sex, violence, and/or intrigue; 2) escapist written fiction, also involving the same; 3) computer video gaming, usually with violent content; 4) internet pornography; and 5) social networking (e.g., Facebook, Twitter, Instagram, TikTok, etc.).  But I wonder, what would be the difference, exactly, between a mind that had dedicated much if not most of its leisure time to contemplation and higher intellectual pursuits and one that had indulged exclusively in these other entertainments and diversions?  If the brains of each could be examined after death, would there be a conspicuous physical difference?  Would the difference have manifested itself instead in the character of each of the persons, with the first tending to have been more honorable in demeanor than the second?  Would the difference be more practical, as the life of the first was more accomplished and successful than that of the second?  (Or would those who followed the second course of escapist entertainment and diversions simply have a vague sense of regret, in their advanced years, that they had squandered the opportunity to live a more meaningful and accomplished life?)  I genuinely don’t know.  But if those unsuccessful studies of past years – which attempted to link the exposure of children to violent cartoons and comedies, and of adolescents to violent movies and television, with the behavior of those children and adolescents as adults – are any guide, then it is very possible that there is little if any connection at all.


John Locke and Isaac Newton


It is easy to come up with some interesting historical counterexamples.  John Locke and Isaac Newton, two of the leading lights in the history of intellectual advancement, whose attainments were the direct consequence of lives dedicated to higher thought, were apparently contemptible cads in their behavior towards others.  I suspect that these are hardly exceptions, and that at the very least no positive correlation will ever be established between great thinkers and visionaries and their personal characters.  Pope Pius XII, who headed the Catholic Church from 1939 to 1958, was by all accounts a man who had dedicated his life to study, contemplation, and prayer, and, even as pope, maintained a self-imposed, regimented, and austere lifestyle in which he could continue these practices.  And yet, according to John Cornwell, the author of Hitler’s Pope, Pius XII was instrumental in derailing any attempts by the Catholic clergy in Germany and elsewhere to organize a concerted resistance against the emerging toxic policies and practices of Nazism in the 1930s, and he later persistently resisted entreaties from others to explicitly denounce Hitler’s Final Solution, hence enabling rather than opposing the evil consequences of Nazism and Fascism in Europe.  Apparently he did so because he thought that these were lesser evils compared to the threat of Communism, as embodied in Stalin’s Russia, but also because of antisemitic beliefs and prejudices that he personally harbored.


Pope Pius XII


Any discussion of elevated thinking has to touch on the subject of spiritual or mystical enlightenment in general, which some contend is the most elevated form of thinking possible.  As an “unenlightened” person I of course can’t do full justice to what this state of consciousness is actually like.  But having studied various forms of meditation during my lifetime, I have discerned some features common to all of them.  In the standard practice of meditation, a particular technique is used – be it counting the breaths, focusing on an object, chanting a mantra, or simply sitting silently in a meditative pose – to quiet the mind.  The meditator is instructed not to resist active thinking, but merely to observe random thoughts as they arise, while not dwelling upon them, and not succumbing to the temptation to let them lead to others in turn.  Eventually, these thoughts arise less frequently, until finally what is attained is a placid awareness of being aware: a cultivation of “the Witness”, as it is sometimes called, or what might also be called a state of “meta-awareness”.  (A form of Zen in which the practitioner is instructed to meditate intensely on a “koan”, a thought puzzle with no logical solution, such as “What is the sound of one hand clapping?”, is an interesting variant which apparently induces elevated thought by short-circuiting the traditional rational thought processes of the mind.)  I don’t know if enlightenment represents an extreme and/or extended state of this meta-awareness, but to the outside observer it seems to induce a condition of extreme placidity in the enlightened.  Paul Brunton, in his 1934 book A Search in Secret India, describes his encounters with several enlightened men and women in India, many of whom spent hours if not days sitting in what seemed to be an extended, blissed-out, trancelike state.  He found it frustrating that in spite of their supposed condition of enlightenment, few of these sages engaged in any activities that might improve the condition of their fellow Indians, or even seemed to care.  And none could give him any practical advice or wisdom to take back to a Europe that was descending into chaos.  I wonder how one might use artificial intelligence to simulate enlightenment.  It would seem to require programming a computer to have a higher-level awareness of its capacity to receive data and to process information, without actually processing any information.  Even if such a thing were possible, would this truly represent the highest level of artificial intelligence?  It seems that, like those blissed-out mystics, such a computer would not produce anything of positive consequence, if it produced anything at all.




And I leave myself (and the reader) with a final question:  Is the general quality of thinking of the contemporary human better than that of our ancestors?  If so, what constitutes the improvement, or the difference?  Certainly our earliest ancestors were limited in their capability to share their knowledge, their experiences, and their fantasies with others, even after the development of language.  The thoughts of the common person were dominated then by their drudgery, and perhaps livened a bit by their personal fantasies.  But as myths were passed on in the campfire stories of elders, and, in later generations, historical sagas like The Iliad were recited in verse by bards, the capacity for shared stories, histories, and tall tales to enrich the mind grew.  The written word, and the keeping of written records, increased this capacity exponentially, as did the eventual inventions of the printing press, the telephone, the radio, television, and the internet.  Here again, it seems that the widening pool of knowledge, along with our improved capacity to draw upon it, has elevated the quality of our thinking.  And it is interesting to note that a recurring fear at every stage of this development has been that the democratization of knowledge is being accompanied by, or even in danger of being replaced by, a sort of information-based decadence.  There have been many incarnations of this fear, such as those that accompanied the rise of dime novels, sensationalist “yellow” journalism in the popular press, comic strips, escapist entertainment dominating the radio and later the television “boob tube”, and now the internet, with the ready access it provides to false information, hate groups, propaganda, skillfully directed mass marketing, and pornography, among other things.  We have a far greater capacity now, than at any time in human history, to tap into a massive base of shared knowledge, but also into our shared fears, delusions, prejudices, and vices.  Are both simply two sides of the Janus-face that represents our individual and collective advancement?  Is there a real danger that the dark side of this face will overshadow the bright one, and lead to our collective downfall rather than the culmination of this advancement?  Or is it merely incumbent upon each one of us to keep the dark side in check, while cultivating, as best as we individually can, the bright one?

            It has proven to be a daunting task for me, this thinking about thinking.  I want to enjoy thinking, while trying to avoid the risk of having it tainted, at least too much.  It will probably always be a tightrope walk.  In any case, these ruminations have inspired me to engage in some more “heavy” thoughts beyond those inspired by my book club.  I plan to try to get through a one-volume edition of the complete works of Plato over the next year.  I will also return to one of the most profound and challenging philosophy books that I read in my youth and studied in college, Immanuel Kant’s Critique of Pure Reason, and read it again, along with the commentaries that I acquired to try to better understand it back then.  Perhaps I will get something more out of it now, in my maturity, than I managed to do back then.  But I know that in its maturity the mind also loses some of its nimbleness.  A common lament among mathematicians is that they have to try to do their greatest work in their twenties or thirties, because after that the mind gets lazy, less able to pursue a particular line of thought diligently and doggedly, and also seems to have less of a capacity for creativity in the way that it explores and combines novel ideas.  I remember that my own mind seemed much more creative in my twenties, when fresh and interesting thoughts seemed to race by a mile a minute.  But I also remember that my mind was less disciplined back then: more reluctant to do the less-exciting tasks associated with thinking and learning, such as rigorously developing or examining new ideas that I encountered.  And I was more prone to accept and adopt ideologies uncritically, such as libertarianism, in order to find simple answers to serious and complex social problems.  So perhaps my mature mind will benefit from this quest for higher thinking in ways that it couldn’t have decades ago.  But I won’t be devoting too much time to this enterprise.  My sister has recommended that I watch Game of Thrones, and I have begun binge-watching that as well. 

Tuesday, December 22, 2020

A Taste of Victory

            One of the most bitter and contentious presidential elections in U.S. history is finally behind us.  It seemed that American democracy itself was under threat, and the contest left voters on both sides of the political spectrum feeling jaded and demoralized.  There were certain features of this election that made it particularly unsavory – not the least of which was an incumbent president who was unwilling or unable to accept the outcome, along with the cronies and lackeys who supported him in his delusion – and they will probably result in its being remembered as one of America’s darker moments.  For me, it brought to mind the fictional election portrayed in the classic American western, The Man Who Shot Liberty Valance.  In that movie, a young attorney from the east named Ranse Stoddard, played by James Stewart, moves into an unnamed western territory and leads a movement to turn that territory into a state.  He almost immediately runs afoul of a local bully named Liberty Valance, well portrayed by Lee Marvin, who is at the head of a faction of cattle barons opposed to statehood.  When Stoddard is elected as a delegate to the statehood convention, along with the local newspaper editor, who has published a story about some of Valance’s crimes, Valance beats the editor nearly to death, burns down the newspaper building, and challenges the attorney to a gunfight.  When Stoddard reluctantly faces off against him in a showdown, Valance is shot to death, much to the surprise of the local townsfolk, not to mention Stoddard himself.  Stoddard and his allies ultimately succeed in their drive for statehood, and he becomes a U.S. senator representing that state.

 



But aside from the ugly controversies surrounding it, the election did bring some longstanding criticisms about the voting process to light.  The Electoral College, in particular, in which each state is given a number of votes equal to the sum of its senators and representatives in Congress, came under renewed scrutiny.  A perennial complaint about the Electoral College is that it gives disproportionate power to the less populous states.  Based on the 2010 Census, for example, California gets one electoral vote for roughly every 700,000 persons in the state, while Wyoming gets one electoral vote for roughly every 200,000 persons in the state.  There have been four elections in U.S. history in which the candidate who won the Electoral College vote actually lost the popular vote, and two of these have occurred in the last twenty years (Al Gore vs. George W. Bush in 2000, and Hillary Clinton vs. Donald Trump in 2016).  Ironically, however, it can be argued mathematically that, under the Electoral College voting system, a single voter in a more populous state has a higher statistical probability of affecting the outcome of the election than a voter in a less populous state.  But in any case, the fact that this method potentially produces outcomes that differ from those which would occur with a simple count of the popular vote is very galling to many.
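
For readers who want to check the arithmetic, here is a minimal Python sketch.  The populations are rounded approximations of the 2010 Census figures, and the electoral vote counts are those that applied after the 2010 apportionment, so treat the results as rough estimates.

# Approximate 2010 Census populations and post-2010 electoral vote counts (rounded figures).
states = {
    "California": (37_254_000, 55),
    "Wyoming": (564_000, 3),
}

for name, (population, electoral_votes) in states.items():
    print(f"{name}: roughly {population // electoral_votes:,} residents per electoral vote")

# California: roughly 677,345 residents per electoral vote
# Wyoming: roughly 188,000 residents per electoral vote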

 

There are even more serious issues that arise, however, when more than two candidates run in an election – issues that make even the outcome of a simple popular vote open to criticism.  Third-party candidates, if they are sufficiently popular, potentially become “spoilers” in the election, drawing away votes from whichever of the two major-party candidates their views are most closely aligned with.  Many Democrats worried that just such a thing might have happened in the most recent presidential election if Bernie Sanders had decided to run on a third-party ticket.  While there have been many popular third-party candidates in American presidential races over the past century (George Wallace in 1968, John Anderson in 1980, and Ralph Nader in 2000, 2004, and 2008), the one who is generally remembered as possibly having changed the outcome of an election is Ross Perot, who ran as a third-party candidate against incumbent George H.W. Bush and Bill Clinton in 1992.  Perot’s moderately conservative views were generally considered to be more aligned with Bush’s than Clinton’s, and therefore his strong showing (he received 19% of the popular vote) was thought by many to be responsible for President Bush’s failure to be re-elected.  While this conclusion has been debated (in spite of his strong showing in the popular vote, Perot received no votes in the Electoral College), Perot’s performance demonstrated the impact that a strong third-party candidate could have in elections where the winner is determined by a simple plurality of votes.



            Some countries have tried to address the problem of selecting among multiple candidates by adopting more sophisticated voting methods, for example by allowing voters to rank candidates from most preferred to least preferred.  In a three-person contest, for example, a voter’s first choice could be given 2 points, the second choice 1 point, and the third choice no points, and then the points would be totaled among all voters to determine the winner.  But even this method, as logical as it sounds, has been demonstrated to sometimes produce strange outcomes that seem to contradict the general will of the majority.  Consider, as a simple example, an election with three candidates, A, B, and C, and five voters.  Suppose that three of the voters, a majority of them, rank the candidates as follows (in descending order): C-A-B.  The other two voters rank the candidates: A-B-C.  Assigning 2 points to each first-place vote, 1 point to each second-place vote, and no points to third-place votes, and totaling, Candidate A gets 3 points for being the second choice of three voters (3×1) and 4 points for being the first choice of two voters (2×2), for a total of 7 points.  Candidate B gets a total of 2 points (0 points from three voters and 1 point from two voters), and Candidate C gets a total of 6 points (2 points from three voters and 0 points from two voters), making Candidate A the winner, with the highest total of 7 points.  But three of the voters, a majority, had preferred Candidate C to Candidate A, which casts doubt on the reasonableness of selecting A as the winner.  Such paradoxes are not uncommon with this method, and in fact Nobel Prize-winning economist Kenneth Arrow proved that no rank-ordering voting method of this sort can be devised that prevents these strange outcomes from occurring.
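
For anyone who wants to verify the tally, here is a minimal Python sketch of this point-total count, using the same five hypothetical ballots from the example above (the candidate labels and point values are simply the ones in the example):

from collections import Counter

# Five ranked ballots from the example: three voters rank C-A-B, two voters rank A-B-C.
ballots = 3 * [["C", "A", "B"]] + 2 * [["A", "B", "C"]]
points_for_rank = [2, 1, 0]  # 2 points for a first choice, 1 for a second, 0 for a third

totals = Counter()
for ballot in ballots:
    for rank, candidate in enumerate(ballot):
        totals[candidate] += points_for_rank[rank]

for candidate, score in sorted(totals.items(), key=lambda item: -item[1]):
    print(candidate, score)
# A 7, C 6, B 2: A wins on points, even though 3 of the 5 voters prefer C to A.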

 


But two researchers, Michel Balinski and Rida Laraki, in their 2011 book, Majority Judgment, make a compelling case that a different kind of method – based on grading candidates rather than ranking them – actually can produce consistently sensible outcomes.  Rather than using vote totals based on a preference ranking, the authors contend that a better method is to have each voter evaluate the entire slate of candidates, with specific evaluations ranging from positive (approve) to negative (disapprove).  The evaluations for each candidate are then stacked from most favorable to least favorable, and the median (middle) evaluation is assigned as a rating to that candidate.  The candidate with the highest rating wins.  Consider the same example as above, with Candidates A, B, and C, and suppose that a top-choice ranking from a voter is equivalent to an evaluation of “approve”, a bottom-choice ranking is equivalent to an evaluation of “disapprove”, and a second-choice ranking is considered “neutral” (i.e., neither “approve” nor “disapprove”).  Candidate A then receives 3 “neutral” votes and 2 “approves”, giving it a median rating of “neutral”, since if we stacked these votes from most favorable to least favorable, the evaluation in the middle of the stack would be one of the “neutrals”.  Similarly, Candidate B’s 3 votes of “disapprove” and 2 votes of “neutral” give it a median rating of “disapprove”, and Candidate C’s 3 votes of “approve” and 2 votes of “disapprove” give it a median rating of “approve”.  Candidate C, then, has the highest rating among voters, with a median of “approve”, followed by Candidate A with “neutral” and Candidate B with “disapprove”.  The selection of Candidate C seems a more logical outcome in this election, since a majority of voters preferred Candidate C over both Candidate A and Candidate B.
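
Here is a similar minimal Python sketch of the median-rating tally for the same three-candidate, five-voter example (the grade labels are the ones used above; with five voters the median is simply the middle evaluation in the stack):

GRADES = ["disapprove", "neutral", "approve"]  # ordered from least to most favorable

votes = {
    "A": ["neutral"] * 3 + ["approve"] * 2,
    "B": ["disapprove"] * 3 + ["neutral"] * 2,
    "C": ["approve"] * 3 + ["disapprove"] * 2,
}

def median_grade(grades):
    ordered = sorted(grades, key=GRADES.index)  # stack from least to most favorable
    return ordered[len(ordered) // 2]           # the middle of the stack (odd number of voters)

for candidate, grades in votes.items():
    print(candidate, median_grade(grades))
# A neutral, B disapprove, C approve  ->  C wins with the highest median grade.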


A Three-Way Election Outcome Using Majority Judgment Method
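To make the median-grade tally concrete, here is a minimal Python sketch of the idea, using the same three grades and the same five ballots.  It is only an illustration, not the authors’ own procedure.

```python
# Grades ordered from worst to best; a higher index means a more favorable grade.
GRADE_ORDER = ["disapprove", "neutral", "approve"]

# Each candidate's grades, mirroring the example in the text.
grades = {
    "A": ["neutral"] * 3 + ["approve"] * 2,
    "B": ["disapprove"] * 3 + ["neutral"] * 2,
    "C": ["approve"] * 3 + ["disapprove"] * 2,
}

def median_grade(grade_list):
    """Stack the grades from worst to best and return the middle one
    (for an even number of grades, take the lower of the two middle grades)."""
    ordered = sorted(grade_list, key=GRADE_ORDER.index)
    return ordered[(len(ordered) - 1) // 2]

for candidate, g in grades.items():
    print(candidate, median_grade(g))
# A neutral
# B disapprove
# C approve   -> C has the highest median grade and wins.
```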










It is interesting to consider what would have happened if this method had been used in the most recent presidential election.  Suppose that the method proposed by the authors of Majority Judgment had been used, with the following five ratings available to voters (from best to worst): “strongly approve”, “approve”, “neutral”, “disapprove”, and “strongly disapprove”.  Given the extreme divisiveness that characterized this election, with each candidate’s voters generally detesting the other candidate, it is quite plausible that the voters who selected Biden would have given him a “strongly approve” rating while giving Trump a “strongly disapprove” rating, and that Trump voters would have done the reverse: giving Trump a “strongly approve” rating and Biden a “strongly disapprove” rating.  Since Biden was preferred by a majority of the voters, he would have had more “strongly approve” ratings than “strongly disapprove” ratings, and his median rating would therefore be “strongly approve”, while, for the same reason, Trump’s median rating would be “strongly disapprove”.  These results, then, would have mirrored what actually happened in both the Electoral College and the popular vote.

 



But now suppose that Bernie Sanders had decided to run as a third-party candidate.  Under either the Electoral College system or a simple popular vote, Sanders’s entry into the race would almost certainly have siphoned off a significant number of votes from Joe Biden, and could very well have handed the election victory to the incumbent President Trump.  The median-rating approach, however, would produce a distinctly different result.  Suppose that those who supported Biden over Sanders gave Biden a “strongly approve” rating and Sanders an “approve” rating, while those who supported Sanders over Biden did the reverse.  Assume that both groups, however, still gave Trump a “strongly disapprove” rating.  Since Trump’s median rating would then still be “strongly disapprove”, he would again be the clear loser of the election.  The ultimate contest, then, would be between Biden and Sanders.  (Both of these candidates would probably now have a median rating of “approve”, suggesting a tie, but the authors of Majority Judgment provide a simple and elegant method for breaking ties.  In this case, their method would have awarded the win to whichever of Biden and Sanders was the more popular of the two.)
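The book’s exact tie-breaking rule is not reproduced here, but one commonly described version of it works by repeatedly removing one shared median grade from each tied candidate and re-taking the medians until they differ.  The Python sketch below, with purely hypothetical ballots, illustrates that idea; it should be taken as a sketch of the general approach rather than the authors’ precise procedure.

```python
GRADE_ORDER = ["disapprove", "neutral", "approve"]   # worst to best, as before

def median_grade(grade_list):
    ordered = sorted(grade_list, key=GRADE_ORDER.index)
    return ordered[(len(ordered) - 1) // 2]          # lower-middle grade for even counts

def break_tie(grades_first, grades_second):
    """Repeatedly strip one shared median grade from each candidate and
    compare the new medians; stop as soon as they differ."""
    a = sorted(grades_first, key=GRADE_ORDER.index)
    b = sorted(grades_second, key=GRADE_ORDER.index)
    while a and b:
        ga, gb = median_grade(a), median_grade(b)
        if ga != gb:
            return "first" if GRADE_ORDER.index(ga) > GRADE_ORDER.index(gb) else "second"
        a.remove(ga)   # drop one copy of the shared median grade from each side
        b.remove(gb)
    return "tie"

# Hypothetical ballots: both candidates start with a median of "approve",
# but the first candidate's remaining grades are stronger once the shared
# median grades are peeled away.
print(break_tie(["approve", "approve", "approve", "neutral", "disapprove"],
                ["approve", "approve", "approve", "disapprove", "disapprove"]))
# -> "first"
```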

 

The voting method advocated in Majority Judgment is suitable not only for elections but also for competitive activities that involve multiple judges, including sporting events such as the Olympics, as well as wine tastings.  And it is to one of these that I would now like to turn, because, as with our recent U.S. presidential election, the outcome was controversial; unlike the election, however, it is remembered as a great moment in American history.  It was the famous “Judgment of Paris” wine competition of 1976, in which American wines were pitted against French wines in a blind tasting.

 

A little background is necessary in order to highlight the significance of this competition.  Before 1976, French wines were generally regarded as the best in the world.  More than this, they were considered virtually unrivalled in their quality.  While some other European nations could lay claim to particular wines of excellence (Spain had its sherry, Portugal its port, Italy its Chianti, and Germany its sweet white wines, for example), the idea that any country beyond Europe could produce wine of any type that could even compare with France’s was unthinkable, even heretical.  This was especially true of wines produced in America, particularly in California, where many wines were made from the same grape varietals grown in France.  Wine producers in California at the time even seemed to believe this themselves, as they often resorted to naming their white wines “Chablis”, their red wines “Bordeaux”, and their sparkling wines “Champagne”, all of which are names of French wine regions.  The French eventually raised a successful protest against this practice, as it was clearly a case of false advertising: what we might today even call “identity theft”.  The practice really did seem to be a tacit acknowledgement on the part of California winemakers that their products were inferior imitations of the French originals.


The 1976 "Judgment of Paris"


A British wine merchant named Steven Spurrier decided to put this belief about the inferiority of California wines to the test.  (He was himself a believer, as he sold only French wines in his own shop.)  He arranged for a blind tasting in Paris, judged by a panel of acclaimed experts, that would include a red wine competition (four French Bordeaux wines against six California Cabernet Sauvignons) and a white wine competition (four French white Burgundies against six California Chardonnays).  The tasting took place on May 24, 1976.  There were eleven judges in all: eight French, one Swiss, one American, and Spurrier himself.  When the tastings were completed and the outcomes of the competitions were determined, it was revealed, much to the shock (if not horror) of the French judges, that a California wine had won first prize in both the red and white wine categories.  (The event is entertainingly portrayed in the 2008 movie Bottle Shock.)

 


While news of this competition and its outcome was downplayed in Europe, and particularly in France, the impact of the event was momentous.  By winning first prize, California winemakers had demonstrated that they could produce quality wines on a par with the vaunted French wines that were supposedly incomparable in their excellence.  The event virtually opened the floodgates to a vibrant international wine industry, as wineries not just in California but across North and South America, as well as in Australia and New Zealand, not to mention Europe itself, felt emboldened to challenge France’s winemaking dominance, in terms of popularity, or quality, or both.  It seemed that just knowing that such a thing was possible (a non-French wine winning first prize in a blind tasting) had a palpable impact on the industry.  In that first competition, most of the American red wines in the tasting actually did end up at the bottom of the ranking.  But when Wine Spectator magazine hosted a France vs. U.S.A. tasting competition just ten years later, five of the six American red wines entered in the contest occupied the top five positions in the rankings.  It is a testament to the power of belief, and reminiscent of the famous story of Roger Bannister, who in 1954 became the first person in history to run the mile in less than four minutes, something which until then had been thought to be humanly impossible.  Within weeks, John Landy had also run a sub-four-minute mile, and in the years that followed dozens of other runners did the same.  By simply demonstrating that the feat could be accomplished, Bannister made it far more achievable for others.  The 1976 “Judgment of Paris” was truly legendary, and its legacy can be seen today in just about any store that sells wine, where there are aisles devoted to individual regions, with California Cabernet Sauvignons, Sauvignon Blancs, and Chardonnays proudly displayed, along with similarly esteemed Argentinian Malbecs and red blends, German Rieslings, and the now globally popular Australian brand with the kangaroo on the label, offering various varietals at a very affordable price.  French wines are still respected, of course, and still popular, but no winery outside of France, in any region of the world, now feels compelled to pass off its products as “Bordeaux”, or “Chablis”, or “Champagne” in order to gain respectability, or even popularity.

 


But here is where the story takes a bit of a left turn.  Did a California wine really win first prize in the 1976 Judgment of Paris?  I have to return here to the authors of Majority Judgment, who had demonstrated that their method of determining the outcomes of elections, and of competitions involving several judges, was superior to traditional methods and free of the shortcomings attributed to them.  (Even Kenneth Arrow, the economist who had demonstrated that all traditional methods of rank-ordering candidates were flawed, endorsed the authors’ approach.)  In their book, they turn their attention to the Judgment of Paris, and in particular to the red wine competition, and note that the outcome was determined by taking a simple average of the judges’ scores.  Using their method instead, the authors contend that the American wine which supposedly won first prize, the 1973 Stag’s Leap Cabernet Sauvignon, actually should have taken second place in the competition, with the real winner being the 1970 Chateau Mouton Rothschild, which had been given second prize in the official ranking.  (Both the official ranking and the authors’ ranking concur that four of the six American wines entered in the competition occupied the four lowest positions, and that the remaining one came in fifth place.)
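I do not have the judges’ actual score sheets in front of me, but the general point, namely that ranking by average and ranking by median can disagree, is easy to see with made-up numbers.  In the Python sketch below the scores are entirely hypothetical; only the phenomenon matters.

```python
import statistics

# Entirely hypothetical 0-20 scores from five judges for two wines.
# One very enthusiastic judge lifts Wine X's average above Wine Y's,
# even though the judge in the middle of the pack rates Wine Y higher.
scores = {
    "Wine X": [20, 12, 11, 10, 10],   # mean 12.6, median 11
    "Wine Y": [14, 13, 13, 12, 10],   # mean 12.4, median 13
}

for wine, s in scores.items():
    print(wine, "mean:", statistics.mean(s), "median:", statistics.median(s))

# Ranking by average puts Wine X first; ranking by median puts Wine Y first.
```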

 

The "Official" Outcome of the Judgment of Paris Red Wine Competition

This is a jarring conclusion, and it leads to a profoundly different outcome for this 1976 event.  In fact, had this been the outcome officially observed at the time, it might have diminished or even eliminated the event’s historical significance.  After all, in the red wine category, five of the six American contestants ranked poorly, or finished in the middle of the pack at best.  Hence, a second-place showing for Stag’s Leap might have been considered just an anomaly of no particular consequence.  For Americans, at least, this might make the authors’ voting methodology appear much less attractive.  (Some who dislike this outcome, and who are familiar with the book Majority Judgment, might even be tempted to observe that both of its authors were employed at a French university at the time their book was published.)  But are the authors correct, nonetheless?

 

I have always been fascinated by the problem of how to properly rank-order candidates or contestants based on a voting methodology, and I have actively explored various approaches to it.  Several years ago, I came upon an insight which led to the development of my own methodology.  The insight was this: in a competition that involves several judges, there are actually two types of information revealed in the judges’ scores.  The first, of course, is information about the things being judged, but the second is information about the caliber of the judges themselves.  If the scores of an individual judge tend to correlate highly with the average scores of the other judges, then this is probably a good indication that the judge knows what he or she is doing.  Suppose, for example, that three things are being evaluated (call them A, B, and C) by several judges, and Judge #1 determines that C is the best of the three, followed by B, followed by A.  If the collective ratings of the other judges, based on an average of their point scores, also put C at the top, followed by B, followed by A, then this suggests that Judge #1 has made a competent evaluation.  But if the collective ratings of the other judges run the other way, putting A over B over C, then this suggests that Judge #1 either lacks the ability to discriminate effectively between the contestants, or has an aesthetic taste that runs counter to the population as a whole, or both.  It is also possible, of course, that Judge #1 is uniquely and exceptionally qualified for the role and it is the rest of the judges who are incompetent rubes, but the less flattering interpretation is far more likely, particularly when several judges are involved.  Hence, a judge whose individual scores are highly and positively correlated with the average scores of the other judges should get a higher weighting in the competition being evaluated, while a judge whose individual scores are either uncorrelated with that average or, worse, negatively correlated with it should be given a lower weighting, or perhaps should be disqualified entirely.  I have tested my method using Monte Carlo simulation (a technique involving many randomized trials) and have found encouraging evidence that it outperforms both the conventional method of simply averaging judges’ scores and the method proposed by the authors of Majority Judgment.
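My own code is not reproduced here, and the details of the weighting scheme matter (as I note further below), but the following Python sketch conveys the basic idea: correlate each judge’s scores with the average of the other judges’ scores, and use those correlations, floored at zero, as weights.  The judge labels and scores are purely hypothetical.

```python
import statistics

def pearson(x, y):
    """Plain Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_weights(scores_by_judge):
    """Weight each judge by the correlation of their scores with the
    average of the OTHER judges' scores; floor negative values at zero."""
    judges = list(scores_by_judge)
    weights = {}
    for j in judges:
        others = [scores_by_judge[k] for k in judges if k != j]
        others_avg = [statistics.mean(col) for col in zip(*others)]
        weights[j] = max(0.0, pearson(scores_by_judge[j], others_avg))
    return weights

def weighted_averages(scores_by_judge, weights):
    """Weighted average score for each contestant (the columns of the table)."""
    n_contestants = len(next(iter(scores_by_judge.values())))
    total = sum(weights.values())
    return [
        sum(weights[j] * scores_by_judge[j][i] for j in scores_by_judge) / total
        for i in range(n_contestants)
    ]

# Hypothetical 0-20 scores from four judges for three wines (one column per wine).
scores_by_judge = {
    "Judge 1": [16, 12, 9],
    "Judge 2": [15, 13, 10],
    "Judge 3": [17, 11, 8],
    "Judge 4": [9, 12, 17],   # runs against the consensus, so its weight is zero
}

weights = correlation_weights(scores_by_judge)
print(weights)
print(weighted_averages(scores_by_judge, weights))
```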

And when one compares each individual judge’s ratings with those of the others at the Paris competition, some interesting observations emerge.  For example, one of the French judges, Pierre Tari, actually exhibited a negative correlation between his wine ratings and those of the rest of the judges, meaning that he tended to rate highly the wines that the other judges were unimpressed with, and vice versa.  In his case, then, it appears that despite owning a winery himself, as a wine connoisseur he was in the wrong profession.

 

(This may sound rather harsh, but I must confess that my own experience as a wine connoisseur parallels Monsieur Tari’s.  Many years ago I took a wine course, and it was customary to end each class with a blind tasting of the wines that had been featured that day.  I discovered that I invariably rated highly those wines that were disliked by the rest of the class, while panning the wines that were popular with everyone else.  It was very humbling at the time, but there was a consolation for me.  Because the wines that I preferred also tended to be the less expensive ones, I have been able to enjoy my favorite wines in the years since without it ever being a serious drain on my budget.)

 

Michel Dovaz and Pierre Tari


And three of the judges had ratings which, while positively correlated with those of the other judges, were only slightly so, suggesting that their ratings were not much better than randomly assigned scores.  In their case, this would be evidence of a palate that was not very discriminating.  One of these judges was the only American who participated, Patricia Gallagher; another was the British wine merchant who had proposed the contest, Steven Spurrier; and the third was the Swiss wine instructor and author of books on wine, Michel Dovaz.  (Spurrier and Gallagher actually recused themselves from the competition, and their scores were not included in the final tally, not because they felt they were less than competent to judge, but because both had played an instrumental role in organizing the event.)

 


Aubert de Villaine and Jean-Claude Vrinat

But there were four judges, all French, who were standouts in a positive way, in that each of their ratings correlated strongly with the averages of the other judges, suggesting that they had both a discriminating palate and a genuinely cultivated taste for fine wines.  These were Claude Dubois-Millot, a restaurant sales director who was actually substituting for an absent judge; economist and winery owner Aubert de Villaine; restaurant owner Jean-Claude Vrinat; and Pierre Brejoux, Inspector General of the Appellation d’Origine Contrôlée board, which oversees the production of the finest French wines.  Of these four judges, two put the California Stag’s Leap wine in a tie for second place, and two put it in a tie for third place.  Hence, while all four agreed that Stag’s Leap was among the top four wines in the competition, they were also unanimous in judging that it was not the best of the contestants.  (By the way, Patricia Gallagher, the only American judge, rated Stag’s Leap even lower, putting it in a four-way tie for fourth place.)

 

(From left) Patricia Gallagher, Steven Spurrier, and Odette Kahn


The remaining three French judges exhibited correlations between their individual scores and those of the rest of the judges that were positive but not very strong, indicating that while they had a general ability to distinguish good wines from mediocre ones, it would perhaps be too flattering to call them “connoisseurs”.  One of them, Odette Kahn, upon discovering to her horror that she had assigned first place to the American Stag’s Leap wine, demanded, unsuccessfully, to have her ballot returned.

 

When I apply my own method of weighting the judges’ scores, I find that much depends on exactly how the weights are derived from the correlations.  With some weighting schemes, my results support the contention that Stag’s Leap was the genuine winner; with others, they support the second-place showing which the authors of Majority Judgment claim it actually deserved.  In any case, the fact that the four apparently most competent judges agreed that Stag’s Leap was not the best wine in the competition casts serious doubt on the official outcome of the contest.

 

So the actual outcome of the famous “Judgment of Paris” may very well have been different than what is recorded in the history books.  It reminds me, again, of that movie, The Man Who Shot Liberty Valance, and the fateful gunfight in which the greenhorn, idealistic lawyer from the East Coast brought down a much more skilled opponent.  The showdown is remembered as a great, seminal moment in the history of the fledgling state, but during the movie (and, as a film critic would say at this point to those who haven’t seen the movie, “Spoiler Alert!”) it is revealed to a reporter many years later that it was not Ranse Stoddard’s bullet that actually killed Liberty Valance.

 


If you ever have a chance to come out to Washington, D.C., I recommend that you visit the Smithsonian Institution’s National Museum of American History, located on Constitution Avenue, NW, between 12th and 14th Streets.  In a permanent exhibition on the first floor of the East Wing entitled “Food: Transforming the American Table”, you might still have an opportunity to see an actual bottle of the Judgment of Paris “winner”, the 1973 Stag’s Leap Cabernet Sauvignon, proudly on display; it was donated to the museum by the winery’s founder, Warren Winiarski.  I sometimes imagine myself standing there, admiring the exhibit, while a group stands next to me, listening to someone describe the historic David vs. Goliath contest that allowed Stag’s Leap to bring respectability to American wines and usher in a new era for wine throughout the world.  If that ever happens, I will simply smile silently and nod in agreement, while remembering the most famous line in that movie, The Man Who Shot Liberty Valance, uttered by the newspaper editor when he learns the truth about what actually happened during the climactic showdown between Stoddard and Valance:

 

“When the legend becomes fact, print the legend.”

 

I know that in these days of “fake news”, both real and merely alleged, this attitude of mine might be controversial, but I do believe that there will always be a place for myth in history, if the myth heals, unites, and inspires, rather than injures, divides, and enervates.  And so, in these divisive times, I offer a toast to the American ideal of unity in diversity and accomplishment in spite of adversity, and I hope that in this ideal, “legend” will always become “fact”, and vice versa.  Or, as the French would say, so simply and elegantly:

 

Je lève mon verre à la liberté!  (I raise my glass to liberty!)