Wednesday, January 8, 2025

Kamikaze

 


What so many of us had hoped would never happen again has happened:  Donald Trump is once again President of the United States.  Of course, while this was a disappointing – if not alarming – result to me and many others, it was hardly the shock that it was in 2016, when he attained that office for the first time.  His opponent then was Hillary Clinton, President Barack Obama’s Secretary of State and a former U.S. Senator (not to mention wife of former President Bill Clinton): a strong contender whom most pollsters expected to win the election easily.  By contrast, I thought that Kamala Harris was a terrible choice as the Democratic candidate, and after learning of her candidacy, I privately referred to her as “BMF” – “Biden’s middle finger” – because I believed that Biden, in selecting her to run in his place, was essentially saying to those of his fellow Democrats who had so unceremoniously pressured him to drop out of the running, “Okay, you want me out?  See how successful you’re going to be with her.”  She had been a cynical choice as his running mate from the very beginning: she had dropped out of the presidential primaries in late 2019 due to dwindling campaign funds and amid rumors that her campaign had been mismanaged and that she had been an indecisive leader, yet after securing the nomination in 2020, Biden passed over the other strong contenders from the primaries and selected her anyway.  By all appearances, it was a “DEI” (Diversity, Equity, Inclusion) choice: racial and gender window-dressing as a lazy way to appeal to a broader swath of the electorate.  As Vice-President, Harris continued to be dogged by rumors of a toxic work environment among her staff.  Her underwhelming performance as Vice-President genuinely concerned me, because she was, as they say, “only a heartbeat away from the Presidency”.  Nevertheless, I supported the Democratic ticket in this election: not as a vote for Harris, but as a vote against Trump.

While Trump’s victory this time may not have been a shock, it is still disturbing to witness the jubilation of his supporters – how ecstatic they are that a man who is a convicted felon, who has been accused of the rape and sexual assault of multiple women and girls (including, with Jeffrey Epstein, a 13-year-old girl), and who has left a string of failed and fraudulent businesses in his wake, will now be leading the country.  He hasn’t even taken office yet, and his actions already suggest that his next term will be worse than his last: his picks for cabinet members and top advisors, in addition to being unquestioning sycophants, have, at best, dubious resumes for their positions and, at worst, past histories that are more sordid than Trump’s own.  They are a collection of kooks and cranks who remind me of the villains in the old 1960s Batman television show: the only thing they are missing is the outlandish costumes.



Unless Trump’s supporters are merely being willfully ignorant about his behavior, the only explanation I can come up with is that they believe the disease is much worse than the cure: that – in the words of Ronald Reagan over four decades ago – “the ship of state is out of control”.  In this sense, they see Donald Trump fulfilling the role that Japanese kamikaze pilots did in World War II, when they intentionally flew their planes into enemy warships in order to sink them.  Yes, these Trump supporters may believe, he will throw America into chaos, which may result in his own eventual immolation, but it’s something that has to be done, because only a radical cure like this will save America.

But save America from . . . what?  That is the question that the other half of the country is asking.  When I wrote “The Rise and Fall of Donald Trump” after his election in 2016, I predicted that his presidency would be like a B-movie, culminating in a calamitous ending.  I never imagined, back then, how calamitous that ending would be, after he refused to concede his defeat in the next election, based on bogus claims of vote-count fraud, and an angry mob of his followers stormed the Capitol building, with many of them apparently intending to murder both Vice-President Mike Pence (for certifying the election results) and Democratic Speaker of the House Nancy Pelosi.  In keeping with my B-movie analogy, I concluded “The Rise and Fall of Donald Trump” by saying “let us hope that there will not be a sequel”.  But I had preceded this by noting that Trump’s “secret weapon was that mass of the American populace disenchanted with the hollow, recurring promises made by the liberal establishment which seemed unwilling or incapable of connecting with them at a meaningful and fundamental level.  That they actually voted for Donald Trump, in spite of all of his blatant shortcomings, merely shows how extreme their level of disenchantment was.”  I believe that this accounts for the muted reaction to the recent election outcome among those who had not supported Trump.  There was relief, of course, that the election outcome would not be violently resisted by his minions, since it had gone their way.  (Many election commissioners – particularly those in battleground states – had actually feared for their personal safety, and that of their staffs.)  There was also an acknowledgement that Trump had not just won the controversial Electoral College vote, but had won the popular vote as well (something which couldn’t be said in his contest against Hillary Clinton).  But I believe that, beyond this, there is, finally, the realization among Democrats and the Left in general that something they are doing has strongly alienated half of the country.  And they are finally ready to do some serious soul-searching to find out what that is.


In retrospect, it is personally ironic that the very next piece I wrote after “The Rise and Fall of Donald Trump” was “Blue Collar Elegy”, in which I praised the book Hillbilly Elegy and its author, J. D. Vance, because I felt that the experiences he described strongly resonated with many of my own experiences growing up in a working-class neighborhood.  I wrote that I believed the intent of his book was to:

. . . shine a spotlight on an entire segment – and a growing one – of American culture which is increasingly finding itself in trouble.  It is the segment of white blue collar workers who had once been able to adequately provide for their families in the factory towns across America.  They were patriotic, hard-working, religious, and with strong family values.  They generally mistrusted government intrusion, and particularly resented those who seemed to eschew their work ethic and instead depended upon the largess of government spending to sustain themselves.  The paradox, of course, is that as the factory jobs which provided employment for these blue collar workers began to disappear, they [i.e., those in J. D. Vance’s generation, not my own] found themselves increasingly reliant upon government aid to get by.  And, as unemployment and underemployment became more rampant among them, alcoholism and drug use became more widespread, and in its wake, the structure of the nuclear family began to unravel.  Broken or abusive marriages, unwed mothers, and criminality became a pervasive phenomenon.

J. D. Vance had once been a harsh critic of Donald Trump, and I never would have imagined – especially back then, when I wrote that piece – that he would eventually become a staunch ally, and running mate.  But when I was a young man, just passing the threshold of adulthood, those in my generation were contending with a different kind of malaise.  It had been brought on by the resignation of disgraced President Richard Nixon and the loss of the Viet Nam War, followed by years of high inflation and high unemployment caused by the oil price shocks of the 1970s.  America seemed to have lost its standing in the world, and its competitiveness, and we were left with a pervasive sense of low morale.  Jimmy Carter had succeeded in restoring faith in the integrity of the office of the presidency, but he could do little or nothing to restore America’s faith in itself.  In a famous speech he gave in July 1979, Carter himself alluded to this, calling it a “crisis of confidence”, a “growing doubt about the meaning of our own lives”, and declaring that it was “threatening to destroy the social and political fabric of America”.  Later that year, in November, Iranian revolutionaries seized the U.S. embassy in Tehran and took its American staff hostage – 52 of them would be held for more than a year – and the following month, the Soviet Union invaded Afghanistan.  By the end of that year, America seemed completely incapable of addressing fundamental problems in the international as well as domestic spheres.


How had it ended up in such a sorry state?  I believe that the liberalism which had been such a potent force for social change in the 1960s had, after the fall of Richard Nixon and the ignoble loss of the Viet Nam War, metastasized into a philosophy of national self-loathing.  Rather than focusing on what had been accomplished in that decade, liberals instead ruminated in a very critical manner over what still remained to be done.  America had fallen, in this world view, because it was a capitalist, racist, sexist, imperialist power that had lost its way.  Movies in the 1970s, like Fun with Dick and Jane (starring George Segal and Jane Fonda) and Thieves (starring Marlo Thomas and Charles Grodin), espoused a Robin Hood ethos which justified burglary and theft as a sort of noble reaction to the unearned social privileges of those living well.  Even the movie 9 to 5 (starring Jane Fonda, Lily Tomlin, and Dolly Parton), which was ostensibly a feminist satire, portrayed successful businessmen as accomplished con artists, rather than as people engaging in anything genuinely productive.  This sentiment featured prominently on television as well, where the iconic successful businessman was the scheming, manipulative J. R. Ewing of Dallas.  It pervaded the culture, including the blue collar world that I grew up in, and had an enervating impact.  As children in the 1960s, and teens in the 1970s, we absorbed the belief that striving to succeed in a conventional way – even in the classroom – was “selling out”, because we would be aligning ourselves with the “Establishment”.  And since few of us had parents or peers who were college graduates, the incentives for us to pursue a college education of our own – in spite of its affordability back then, unlike now – were greatly diminished.  But in addition to this, having grown up with fathers who had been able to comfortably support their families by working in jobs that did not require a higher education, we really believed that this would be unnecessary for us as well.  As we started entering the workforce in the late 1970s and early 1980s, however, we discovered even back then that those types of jobs were disappearing.

But a potent counter-movement emerged in reaction to the stifling liberalism of the post-Nixon, post-war years.  This was the resurgence of libertarianism.  In academia it found its strongest expression in the philosopher Robert Nozick’s 1974 book Anarchy, State, and Utopia, which was a libertarian rebuttal to philosopher John Rawls’s classic A Theory of Justice.  (Rawls had argued that an ideal just society could be designed if its architects existed under a “veil of ignorance”: not knowing what their individual identities – sex, race, economic status, personal capabilities – were while they were designing it.  Nozick countered that justice is more effectively attained and preserved, and the general welfare maximized, when individuals are free to act in their own self-interest, with certain minimal limitations preventing fraud and coercion.)  Libertarian ideas found expression in popular media as well, in books such as Robert Ringer’s Restoring the American Dream (1979), and the 1980 public television series Free to Choose, which was based on Nobel-Prize-winning economist Milton Friedman’s book of the same name, and which espoused the benefits of the free market and warned of the dangers of excessive government intervention.  Around this time there was also a resurgence of popularity and interest in the books of Ayn Rand, a libertarian philosopher who had popularized the ideology in the mid-twentieth century through works of fiction like The Fountainhead and Atlas Shrugged.  Rand was a very powerful and persuasive writer, and created a compelling worldview in her novels in which there are essentially two types of people:  On the one hand there are the entrepreneurs, architects, industrialists, and artists who, through their own personal initiative, create value in the world, and maintain the engine of civilization.  On the other hand, there are those who try to expropriate wealth without working for it or earning it, by appealing to contorted ideas of social justice, arguing that their disadvantaged position in life (which Rand’s novels always implied was due to their own timidity, laziness, and lack of personal initiative) entitled them to a share of that wealth.  This was a two-dimensional view of the world, certainly, but it presented a potent antidote to the malaise that had crippled America, which itself was partially the product of a two-dimensional, anti-business posture on the Left.  One popular stereotype that particularly rankled the middle and working classes was that of the “welfare queen”: a non-working woman who lived off the largesse of the government, feeling entitled to take the tax money paid by working people to support her and her children.  The stereotypical welfare queen usually had many children, often with different (also non-working) fathers, and the more children that she had, the more justified she felt in taking handouts from the government.  Libertarianism presented a non-racist cure to this generally perceived social problem, by advocating the minimization – if not complete elimination – of the welfare state.


In the U.S. presidential election of 1980, the Libertarian candidate, Ed Clark, won more than one percent of the popular vote – something the party had never achieved before (and would not do again for over thirty years).  The vote probably would have been much higher, if not for the fact that the Republican presidential candidate, Ronald Reagan, had incorporated much of the Libertarian ideology and rhetoric into his own campaign.  (Even Jimmy Carter, whose administration advocated government spending cuts and deregulation, embraced much of the growing libertarian sentiment, much to the chagrin of liberal Democrats.)  Reagan’s plan consisted of reducing government expenditures, but also reducing taxes – with a particular emphasis on bringing down taxes in the higher income brackets.  Reagan had espoused what would come to be known as “trickle-down economics”: the idea that the wealthy in America created new businesses and industries, while supporting the continued growth of existing ones, and that by lightening the tax burden of the wealthy, their ability to do this would be enhanced, which would be reflected in more economic growth, and more jobs.  The extreme Left vision of the world was that the wealthy were plunderers, using various nefarious means to gain and retain a larger share of the economic pie.  But this new, libertarian view espoused by Reagan Republicans saw the wealthy as making the entire pie grow in size, to everyone’s benefit.  It proved to be a successful campaign strategy, leaving liberals genuinely perplexed that it resonated with so many people: I remember a sketch from the comedy program Saturday Night Live at the time which portrayed Reagan supporters as formerly sane persons who had been overcome by some mysterious alien force, just like the one that replaced people with soulless duplicates in the classic horror film, Invasion of the Body Snatchers.  Liberals were left stunned at his electoral victory, and, true to his word, once in office he led an aggressive effort to pare back the size of the federal government, while implementing tax cuts.  During his first two years in office, the country experienced back-to-back recessions, with the second one being particularly severe.  But when the economy recovered, at the end of 1982, it grew annually at an average rate of 4.5% during the remainder of Reagan’s two terms, which was significantly higher than the average growth rate of 3.2% in the 1970s.  And unemployment, which had peaked at 10.8% at the end of 1982, fell to 5.3% by the end of Reagan’s second term – the lowest it had been since 1974.  “Trickle-down economics” seemed to have lived up to its promise.


What “trickle-down” economics failed to do was shrink the national debt, which actually mushroomed during the Reagan administration from 900 billion dollars to 2.3 trillion dollars.  This was due in large part to increased defense spending, stemming from Reagan’s staunch and aggressive opposition to the “evil empire” of the Soviet Union.  But given that his militaristic posturing actually led to groundbreaking treaties between the U.S. and the Soviet Union that reduced their nuclear arsenals, and indirectly led to the eventual fall of the Soviet Union and the liberation of Eastern Europe, it is hard to find fault with this departure from the libertarian philosophy of a limited government with minimal military and police forces.  There was one pernicious, lingering consequence, however, that stemmed from the other successes of “Reaganomics”: the idea that cutting taxes – particularly on the wealthy – was always a winning solution for jumpstarting the economy, even if there were no government spending cuts to offset the tax cuts.  During Reagan’s administration, the economist Arthur Laffer popularized the idea that tax cuts might even raise government revenues, because the greater economic growth would more than offset the reduction in tax rates.  Unfortunately, this has enshrined tax cuts as a trick that Republicans pull out of the hat time and again to curry favor with the voters: a painless cure for any form of economic malaise, but one that drives the U.S. government deeper and deeper into debt.

I recounted the Reagan experience in some detail because this is the closest historical parallel, I think, to what happened in this election.  Democrats back then, and liberals in general, had lost touch with a sizable portion of the American electorate, and then found themselves asking, after Reagan’s victory, “What just happened here?!”  They only regained the White House when Bill Clinton crafted a more pro-business liberalism, with his campaign’s mantra, “It’s the economy, stupid”, and social programs, like “workfare”, which were intended to reduce the scope of nonproductive welfare programs and both encourage and enable the poor to work for a living.  Even now, decades later, I think that the Left fails to appreciate how much of the libertarian ethos is still retained by the middle and working classes.  They don’t want government handouts in the form of tax cuts, or subsidies, or government-funded programs: what they want is a healthy economy that will provide them with an opportunity to work and support themselves and their families.  The Randian worldview of producers vs. takers may have been overly simplistic, but who can deny that many of the transformational industries which have arisen over just the past few decades came about because of visionary entrepreneurs like Steve Jobs, Bill Gates, Jeff Bezos, and Elon Musk?

That does not mean, of course, that liberal criticisms of this worldview are completely unfounded.  Not all of the socially privileged have attained and kept their wealth by creating or sustaining industries.  Many, if they are not merely throwing it around in profligate diversions, are using their money to make more money, without actually producing anything.  As an energy consultant once described it to me, there is a massive pool of such money circulating through the economy, with its owners constantly in search of the Next Big Thing: something that will yield a higher-than-average return on investment.  This is really just a form of high stakes gambling, but a form of gambling in which the House doesn’t always win, at least initially.  Eventually, however, when this pool of money all goes to the same place, it creates a bubble, and when the bubble bursts, it leaves an economic meltdown in its wake, and often it is the working classes that bear the brunt of the repercussions.  Even when new businesses are created, they don’t always fit the heroic mold of those in an Ayn Rand novel; for every Apple, Microsoft, and Amazon, there has been an Enron, a WorldCom, and a Kobe Steel.  (Trump supporters, of course, would put his companies in the first category, while his detractors would be more inclined to include at least some of them – like Trump University – in the second.)  And while, as a consumer, I have greatly benefited from legitimate businesses like Amazon and Walmart, I’m not sure I would like those companies as much if I had to work for them.  Working conditions may not be as brutal as they were in the industries of the 19th century, like Andrew Carnegie’s steel mills, where employees were expected to work long hours, in hazardous conditions, but the jobs created by many of the present-day captains of industry hardly sustain a middle-class household.  Nevertheless, for much if not most of the population, the specter of Big Government continues to be a much more ominous one than the specter of Big Business.

Jimmy Carter, Ronald Reagan, and Donald Trump, with all of their obvious differences, had something very important in common.  They were all perceived as Washington outsiders.  Carter was a peanut farmer and governor of Georgia, Reagan was a Hollywood actor, and Donald Trump was a successful business entrepreneur – or at least he played one on TV.  In fact, nearly every president elected in the last half-century has been a Washington outsider, or perceived as such.  Bill Clinton was governor of Arkansas, and someone whom most people had never heard of a decade before he became president.  George W. Bush, although the son of a president, resided in Texas, and was its governor.  Barack Obama was a freshman senator who had been in Congress for less than four years when he was elected president.  For presidential candidates during this time, being perceived as a Washington insider has actually been a political liability.  One would think, for example, that a man with a resume like that of George H. W. Bush, who had been a congressman, a U.S. ambassador, Director of the C.I.A., and vice-president to Ronald Reagan, would be unbeatable as a presidential candidate.  But after winning one term by riding on the coattails of Reagan’s immense popularity, he was unable to secure a second, losing to outsider Bill Clinton.  And Bill’s wife Hillary had a similarly sterling resume, having served as a Senator and then Secretary of State in the Obama administration.  I, like so many others, was shocked when she lost to Donald Trump, but I was even more shocked when I learned that my mother, who had voted for Barack Obama both times, had voted for Trump in that election.  My mother explained that she disliked and distrusted Hillary because Hillary was too much a part of the political establishment.  And that, I think, is what makes these outsiders so popular as presidential candidates.  There has been a long-standing perception among many if not most voters that America is in need of a fundamental change, and only somebody who is not part of the Washington establishment can make a credible case that they will be an agent for change.  Persons who have lived and worked in D.C. in any important capacity, for a significant length of time, are seen as part of the problem, rather than someone who could be trusted to provide a solution.  Ironically, the only president who has overcome the Insider Curse is Joe Biden – former vice-president to Barack Obama and, before that, a member of Congress for nearly 40 years – after much of the population had become angry and alarmed by Trump’s behavior during his first term of office.

But while the Insider Curse has been a staple of Washington presidential politics for fifty years, there have been fundamental changes in Capitol culture – and the political culture at large – during that time.  Around 1980, when libertarianism was having a resurgence of popularity, a disparaging term arose among its followers to describe the mainstream political parties: “Demopublicans”.  This alluded to the perception – not just held by libertarians – that there was no fundamental difference between the two major political parties, and that this was largely responsible for the national government’s inability to address America’s fundamental problems.  Nobody today would accuse the two parties of being indistinguishable: they seem to be implacable adversaries, with radically divergent worldviews, and generally unable to cooperate in any effective sort of way.  When I used to live and work in the D.C. area, I would sometimes ask old-timers there what brought about this change.  They said that back in the 1980s, members of Congress genuinely considered themselves residents of Washington: they would spend most of their time in the city, working, dining at local restaurants, and drinking at local bars.  And because of this, they tended to socialize with each other more often, blurring party lines and differences, which fostered increased dialogue and cooperation.  But in recent decades, serving in Congress has become more of a commuter occupation, and its members spend more time back in their home districts, with their constituents.  This in itself has muted much of the former dialogue between parties, but there has also been a marked decline in civility between them.  My old-timer friends laid the blame for this on one individual: Republican Congressman Newt Gingrich.

[Image: Newt Gingrich]

In 1994, Republicans regained control of the House of Representatives for the first time in forty years, and when Newt Gingrich became Speaker of the House the following January, he almost immediately set about reshaping the Republican Party.  He was a staunch conservative, but his brand of conservatism went beyond the libertarian conservatism popularized by Reagan, and included a Christian religious slant.  The behavior that he exhibited – and encouraged his fellow party members to exhibit – toward Democrats hardly smacked of brotherly love, however.  It was a combative stance – openly belligerent – which descended into name-calling, with epithets directed against the Democrats like “fascists” and “traitors”.  Gingrich introduced the style of obstructionism and intransigent political brinkmanship that even led to a government shutdown when the parties could not agree on a government spending package: something which is sadly no longer uncommon.

But the lack of civility between the two parties, and between liberals and conservatives in general, had already been fomenting in the years preceding its appearance in Congress.  Its flames had been fanned by talk radio, and by the growing popularity of conservative talk-radio hosts like Rush Limbaugh.  Limbaugh’s success, ironically enough, had its roots in the youth counterculture of the 1960s, when one of the most popular expressions of resistance against the Establishment was to engage in shocking behavior.  Young men let their hair grow long, and both sexes often wore clothing that appeared outlandish to their conservative elders.  Rock and roll music included lyrics that were bawdy and intentionally provocative.  And although rock stars were ostensibly popular mouthpieces for the radical youth, the lyrics of their songs did not necessarily reflect this.  The Rolling Stones song “Brown Sugar”, for example, which I had heard at least one popular deejay call the greatest rock-and-roll song of all time, celebrates, in its lyrics, the sublime pleasure of sexually exploiting a captive black woman.  Now I’m sure that Mick Jagger and Keith Richards never really intended to advocate slavery, racism, or the sexual abuse of women in their songs.  They merely wanted to write lyrics that would shock anyone who was not of their generation.  But when the youth of the Sixties grew up, and began assuming positions of power and influence, the conservative, conformist, button-down Establishment began to fade away, and in its place arose a new social order: one which had a much more left-leaning orientation.  And yet, in spite of the old Conservative establishment being dead, many entertainers still found that the best avenue to success was by being shocking, although it was no longer clear who exactly they were intending to unnerve.  This was particularly evident with the popular “shock jock” deejays, like Howard Stern and Steve Dahl, who were not above resorting to racist and sexist humor to elicit laughs from their audiences.  Entertainers of this stripe cleared the path for broadcasters like Rush Limbaugh, who specifically targeted the new liberal establishment for derision.  Limbaugh cultivated a sense of outrage among his listeners by presenting a caricatured view of liberals, falling back upon a common tactic used by extremists at both ends of the political spectrum, which is to highlight the bizarre statements and behaviors of persons at the opposite extreme.  He convinced his conservative audience that it was under siege by maniacal liberals – not unlike those in Ayn Rand’s novels – who used distorted notions of social justice to take what his audience had rightfully earned, while providing unearned privileges and opportunities to envious, undeserving people unwilling to sustain the American Dream by supporting it or working for it.  Like Gingrich, Limbaugh enjoyed labeling his adversaries with derogatory terms, and even invented some of his own, like “feminazis”.  Conservative extremism found an even broader venue when the Fox News channel was introduced in 1996.

[Image: Rush Limbaugh being given the Presidential Medal of Freedom]

Most of my liberal friends attribute the present Great Divide in politics to that media phenomenon, and some go so far as to say that a large portion of the country has been deluded – even brainwashed – by conservative extremist commentators.  While I think there is some truth to this, I cannot believe that half of the country has been consumed by a demagogue-induced hysteria that is based entirely on paranoid fictions.  One defense that is often made of Fox News is that it actually has provided a counterbalance to a media that is dominated by left-leaning newspapers and television news programs, and this is not unfounded.  In the 1960s, the most popular anchorperson on television was Walter Cronkite of CBS, who was admired and trusted by the general public as a principled and impartial purveyor of the news.  But his successor in 1981, Dan Rather, left no doubt about his liberal political orientation, and it was an orientation that was also adopted by the other major networks.  Similarly, the most popular nighttime talk show entertainer in the 1960s and 1970s was the Tonight Show’s Johnny Carson, who had no scruples about lampooning politicians on the Left or the Right in his nightly monologues, although by the end of his career, he did seem to adopt a more left-leaning orientation.  The plethora of nighttime talk show hosts today universally present monologues that are hardly more than liberal talking points.  But rather than provide a plausible counterpoint to the dominant media perspectives, Fox News, from its very outset, presented skewed reporting and inflammatory commentary that often sank to the level of demagoguery practiced by Limbaugh and his fellow ultraconservative hosts.  And yet, given the extreme popularity of the Fox News Channel – it is currently the most popular cable news network in America – I have to believe that it is touching some large-scale nerve of discontent that was not entirely a product of its own creation.  In what follows, I will explore what I think are some legitimate grievances that have driven so much of the U.S. electorate back to Trump.

I don’t think that I am the only one who is perennially irritated by the need to specify – on professional networking sites, resumes, and other forms that require personal identification – which pronouns I would like used in reference to me, when only 1.1% of the U.S. population identify as transgender.  It strikes me as a clear case of “tyranny of the minority”.  But what is even more disturbing is that the U.S. psychiatric and medical establishment condones sex reassignment surgeries for minors who consider themselves transgender.  Given that unhappiness, depression, and confusion about one’s evolving personal identity occur so often among the young – particularly adolescents – it is recklessly irresponsible to allow such an extreme and irreversible procedure to be performed on them.  Liberals downplay this by contending that the prevalence of the practice is overblown, but that it is allowed to be done at all is a national disgrace.  Female athletes are at a genuine disadvantage when competing against athletes who were born male, and parents of female athletes have an additional concern about their safety.  What I also find intriguing – and disturbing – is that the number of persons who identify as transgender has grown significantly.  According to a Gallup poll done in 2024, the percentage of persons in Gen Z who identify as such is 1.5%.  Among Millennials, the percentage is only 0.35%, and in all of the older generations the reported percentage is never higher than 0.2%.  It would seem that a pressing and relevant question is why transgender identification has mushroomed in the younger generations, but in today’s academic environment, I suspect that a scholar would fear that the social repercussions of doing such research might be permanently damaging to his or her career and reputation.

On May 25, 2020, a white Minneapolis, Minnesota police officer, Derek Chauvin, killed a black suspect, George Floyd, by asphyxiation, kneeling on his neck for over nine minutes while three fellow officers helped restrain him or held back bystanders.  This clear case of police brutality – with its racist overtones – sparked national outrage.  While the crime was an unpardonable example of race-based police brutality, George Floyd hardly fit the role of innocent martyr.  He had served several jail terms, including a stint of several years for a home invasion that he had committed with five other men, during which he held a pistol to the sole female occupant while his accomplices searched for things to steal.  But his funeral was televised, apparently in order for it to serve as a sort of national mourning.  The service was presided over by the Reverend Al Sharpton, a racial demagogue who had risen to prominence in 1987 after a young woman, Tawana Brawley, claimed to have been gang-raped by a group of white men.  Sharpton involved himself in the case at that time, fanning the flames of hatred and outrage by claiming that Brawley was also the victim of a government coverup that was trying to protect the white defendants, and even hinted at a broader conspiracy that included the Ku Klux Klan and the Mafia.  Brawley’s claims were ultimately proved to have been a complete fabrication, but Sharpton’s antics had elevated him permanently to the status of a national celebrity.  Sadly, Floyd’s murder was just one of many that had been perpetrated by police officers upon black victims, and the earlier murders had led to the creation of the Black Lives Matter movement, which organized mass demonstrations and marches to protest police brutality.  But the George Floyd incident provoked particular outrage, leading to riots that left burned-out businesses in their wake throughout the country.  Before these riots, most whites were probably sympathetic to the Black Lives Matter movement, but the riots provoked a reaction of extreme anger among many of them.  I remember seeing a picture on Facebook of a business that was destroyed in spite of the Black Lives Matter sign posted on its front.  The destruction from these riots was widespread and indiscriminate, leaving many hard-working entrepreneurs without their livelihoods, and much of the population perplexed over how this looting and devastation advanced the cause of social justice.  The riots have left a bitter, lingering memory among many.


While on the subject of racial tensions, I have to mention an additional irritant which I think has contributed to the loss of support for Democrats, and this is the phenomenon of South-shaming, which emerged in earnest during the Obama administration.  The Civil War, a documentary by Ken Burns that first aired in 1990, was a magnificent achievement.  While not shying away from tracing the pro-slavery sentiments that led to Southern separatism, the documentary nevertheless presented a retelling of the conflict that was sympathetic to both sides, illustrating the heroism exhibited by both Union and Confederate combatants.  Ultimately, it was a celebration of how the United States, after healing from the wounds of that devastating conflict, emerged as a powerful, more unified nation.  In 1978, President Jimmy Carter had symbolically finalized the reconciliation of North and South by signing a Congressional posthumous restoration of full citizenship rights to Confederate president Jefferson Davis.  The following year, a television series titled The Dukes of Hazzard began to air, about two young Georgia cousins bucking local authority while riding around in a fast car that they named the General Lee, which had a Confederate battle flag painted on its hood.  I personally thought that the series was rather silly, and never watched it, but still regarded it – along with the rest of the country at the time, apparently – as harmless fun.  We all understood the Confederate flag to be an emblem of Southern pride, which did not connote racist overtones.  There is a heritage that Southerners are very proud of, rooted in a stoic character, religious piety, individualism, and a forthrightness in manner, devoid of duplicity – a tendency to say what one means and mean what one says.  Even liberal Southerners take pride in this heritage, such as the late, great country and western singer Johnny Cash.  When Canadian rock star Neil Young indiscriminately tarred that culture with his song “Southern Man” in 1970, the band Lynyrd Skynyrd shot back in 1974 with “Sweet Home Alabama”: a jubilant, rollicking song that said to Young and other Northern liberals that they could take their jaded, myopic view of the modern South and go stuff themselves.  But by the time of the Obama administration, Woke sensibilities had developed an intolerance for symbols that even indirectly glorified persons or institutions from the less enlightened past.  This even extended to historic personages generally regarded as having done great things in their lifetimes, like Abraham Lincoln and Winston Churchill, because of their antiquated views on race.  (A bust of Winston Churchill was removed from the Oval Office during both the Obama and Biden administrations.)  Statues of Confederate generals were taken down in various places, and the Confederate flag was vilified – condemned by liberals as a hateful symbol comparable to the Nazi swastika.  (And while it is true that hate groups, when holding rallies in the South, often display the swastika and Confederate flag together, for most Southerners the flag is simply a symbol of cultural pride, with no connotations of race hatred.)  Even the word “Dixie” was condemned.  I had actually admired Texas band The Dixie Chicks when one of their members spoke out against the American-led invasion of Iraq orchestrated by George W. Bush, but when the band decided to drop “Dixie” from its name in 2020, I regarded this as just another silly example of Woke-extremism.  
One can only wonder when that band will also realize that the word “Chicks” might offend Woke feminist sensibilities, and drop it as well.  That will create a real conundrum, however, because there is already a band that calls itself the The.  Perhaps the “band-formerly-known-as-The-Dixie-Chicks” will have to settle for being referred to by some non-verbal, physical expression, like a look of pained, sanctimonious consternation.  In any case, it was clear in this most recent election that the Democratic liberals have succeeded in almost completely alienating the Dixie states, since all of them, with the exception of Maryland and Virginia, voted for Trump in the Electoral College.  It is a stark reversal of Jimmy Carter’s achievement in 1976, when he won all of the Dixie states except Virginia.


And now we come to that other flashpoint issue that drove so many voters into the Trump camp: immigration.  I should start by putting my cards on the table: I believe that one of America’s greatest strengths lies in the fact that it is a nation of immigrants.  I suspect that I am far from being alone in this belief.  In fact, I used to joke that the best policy on immigration would be to let anyone willing to migrate to this country do so, while anyone belonging to a family that’s been here for more than three generations should have to pass a test in order to stay.  I was exaggerating of course, but I’ve come to see what I call the “entitlement curse” among Americans who are not first- or second-generation, which causes them to feel that the country owes them something simply for living here, and saps their drive to put in the effort that success requires.  I’ve seen the entitlement curse play out in friends and relatives, and I was a victim of it myself, in my youth.  I have two Master’s degrees and a PhD, so people are often surprised when I tell them that I flunked out of college – twice – during my initial attempts, right after high school, and this simply because I didn’t bother to attend most of the classes.  At that time, I really didn’t put much stock in higher education, because I believed that America would provide me with a good job, no matter what I did.  (And, as I said earlier, I and most of my working-class friends believed that submitting ourselves to education meant that we were allowing ourselves to be “brainwashed” by the Establishment.)  Fortunately, two happy accidents put me onto a better path.  The first one occurred after I spent the initial years after high school working menial, low-paying jobs, and eventually found myself working in a factory laboratory.  My boss had a PhD in engineering, and I soon observed what it had brought him: an interesting circle of friends, a beautiful and intelligent wife, and a job that he loved to do, and which paid him very well to do it.  This epiphany inspired me to return to college in earnest, taking classes part-time at first at a local community college, but eventually taking a full-time class load and getting straight A’s, while still working at that factory.  The second happy accident occurred when I finally left that job and transferred to a four-year university.  Around this time there was a flood of refugees entering the country from Viet Nam.  I remember a woman from the old neighborhood bitterly complaining that the first thing these refugees asked upon entering the country was where to find the Welfare office.  And I was less than pleased when I then discovered that my roommate in college would be one of these Vietnamese refugees.  I had been paired with him because we were both years older than the typical students in our dormitory.  But I eventually realized that it was the best thing that could have possibly happened to me.  Like me, he was an electrical engineering student, and I had quickly discovered that in spite of my mathematical aptitude, the subject of electrical engineering was not one that I was perfectly suited for, particularly at this very competitive university.  I was profoundly inspired as I observed how my roommate diligently applied himself to his classes, having to study each textbook line by line, because English was his second language.
That, and his personal guidance – he had a much greater natural talent for the subject than I did – were just what I needed to get through the final hurdle to my first degree.  I suspect that many of my fellow American students in that dormitory hall never even made it to their senior year, because the college experience for them was more of a party than an opportunity to improve themselves.

I remember having a spirited debate with my mother about immigration.  She railed against Hispanics in particular, complaining that they seemed to be overrunning the country.  I was living and working in the D.C. area at the time of our debate, and I said – only half-joking – that Hispanics seemed to be the only ones who were doing any genuine work out there.  But in retrospect, I can understand the cause of her consternation.  The city that I grew up in – Waukegan, Illinois – had been a typical Midwestern town.  We were an ethnically and racially diverse community, but shared a common culture.  We all spoke the same language, attended the same schools, and, as teenagers, we worked at the same fast food restaurants together and socialized outside of work.  Waukegan prided itself on being the hometown of comedian Jack Benny, and the birthplace of science fiction writer Ray Bradbury.  But by the time that I had this debate with my mother, Waukegan was no longer the same city.  The grocery stores had been replaced by “supermercados”, and the signs in all stores were now posted in both English and Spanish.  The city’s historical traditions and common culture seemed to be disintegrating, and giving way to something entirely alien . . . “foreign”.  And while this didn’t lead to an explosion of crime and gang activity (let alone the abduction and eating of family pets), as opponents of immigration often warn will happen, it did leave the long-term inhabitants feeling disoriented – even like strangers in their own neighborhoods.  This of course is not a universal phenomenon, and I’m not sure why it happened to a city nestled in the far north end of a northern Midwestern state, but I can understand why it produced so much resentment among the non-immigrant community.  I still believe that immigration is the “secret sauce” that has made – and continues to make – America a great country, but unrestricted immigration of the sort that occurred during the Biden administration left much of the population legitimately feeling under siege.


Shortly after Trump won the presidential election – in both the Electoral College and the popular vote – a friend remarked to me, “I never understood how a majority of the German people could fall under the sway of Nazism . . . but now I do.”  And I suppose if somebody back then had attempted to do something similar to what I’ve done in the preceding paragraphs – get into the heads of people to understand why they made the choices they did – he or she might be accused of trying to justify rather than merely explain their actions.  I completely agree that there is much in pro-Trumpism that is toxic and irrational.  Many of his supporters – like the goof who wore the shaman helmet during the January 6 insurrection – are simply “shock jocks” in miniature: people who could never be taken seriously based on anything they had ever done or accomplished in their lives, and so they resort to, or support, behaviors that anger and/or frighten others in order to get respect and attention (like those of our current “Shock-Jock-in-Chief”, D. J. Trump).  Many are of the gullible, shallow-brained variety who easily fall under the sway of outlandish conspiracy theories like “QAnon” or other lies and half-truths that have been circulating on the internet and social media in particular.  Others are just racists or neo-Nazis.  But I also personally know many decent, intelligent, rational, hard-working people – relatives, neighbors, friends, and coworkers – who supported Trump in this past election.  They know who and what he is, and what he is capable of doing, and yet there is something about the liberal establishment that is so abhorrent to them that they see a return of Trump to power as the only solution, however extreme, to remedy things.  I think it is important for the rest of us, and for the future of our country, to try to understand what has driven them to this conclusion.  I personally admit that the slogan “Make America Great Again” is still a mystery to me:  Given that America is one of the most prosperous nations in the world, the most powerful militarily, and – at least until now – one of the shrinking minority of nations that can still be called a legitimate democracy, how can it be considered anything less than great, and what greatness needs to be restored?  During the Biden administration, a house down the block from where I live had been displaying the American flag upside down.  It is now right side up again, along with another flag exhorting the new victors to “Take back America”.  And again I wonder: take back America . . . from whom?  From me?

I certainly agree that America – and the world – is facing a series of crises that threaten our very civilization, and that the only way to address these crises is in bold and unconventional ways.  In a previous entry (https://johnsemeraldtablet.blogspot.com/2017/05/house-of-cards.html), I described two of the greatest of these: an out-of-control, ballooning national debt, and the degradation of our global ecosystem.  Like a house of cards, these looming catastrophes can be – and have been – ignored until a breaking point is reached, but by then it will be too late to remedy them economically, effectively, and painlessly.  Even worse, if we suffer an economic collapse stemming from an ecological collapse, it will be like no economic downturn that we have ever experienced before, because the typical policy approaches to get us out of it – raising or lowering interest rates, cutting taxes, increasing government spending – simply won’t work.  Here in America and in the rest of the world there are growing social problems as well, particularly among the young, which some psychologists, like Jonathan Haidt, have attributed to helicopter parenting and the growing attachment to social media.  There is a genuine “boy crisis”, which has both sociological and economic causes, as young men – particularly in the working classes – find it more difficult to find meaningful roles in society, including gainful employment.  Many feminists have downplayed discussion of this as merely a form of male backlash against the gains that women have made in both education and employment, but the problem is real.  It certainly cannot be solved, however, as J. D. Vance seems to believe, by shaming, encouraging, or compelling women to stop being “childless cat ladies”, make babies, and return to more traditional roles.  What is required is to acknowledge that men and women are different, and that these differences should be recognized at the earliest ages when educating, training, and preparing them for the meaningful roles that both will choose to assume in society.  The rise of the surveillance state poses threats to both men and women all over the globe, as it facilitates totalitarianism.  The writer Yuval Noah Harari, in his latest book Nexus, describes how Iran is now using universal surveillance and soft-power oppression to enforce dress codes on women.  Much as speed cameras in the U.S. issue tickets automatically, if a camera in Iran photographs a woman not wearing the mandatory veil, she is given a citation, and further offenses result in the suspension of her driver’s license and, ultimately, the confiscation of her car.  It is a very subtle and effective way to impose behavioral norms while avoiding the kind of general outrage and protests that occurred in 2022 after a woman was killed while in custody for violating Iran’s dress code.  China and North Korea are also effectively using the surveillance state to control their own populations, and it is distressing to realize how easily it could be adopted by other non-democratic regimes.  (According to the Economist Democracy Index, less than 8% of the world’s population now lives under full democracies, and that percentage has been declining.)  As Harari also points out, the rise of artificial intelligence could be either a blessing or a curse to humanity and human freedoms, depending upon how the political and business powers of the world guide and regulate its development.
And finally, as an economist, I believe that one of the most pressing challenges our civilization is going to face is coming up with a new economic paradigm that is no longer premised on unlimited growth.  I do not believe, as Elon Musk apparently does, that our most intractable economic problems can be solved by growing our way out of them: by essentially creating a new Baby Boom.  Setting aside again that reactionary scenario of incentivizing or pressuring women to make more babies (visions of The Handmaid’s Tale come to mind), our global ecosystem is already tottering and crumbling under the weight of more than 8 billion people.  As fertility rates fall, and as populations actually begin to decline in many countries, rather than treating that as the problem, it is imperative to come up with an economics that does not define economic well-being in terms of GDP growth.  We need a steady-state economics, and a complete rethinking of what makes – or should make – a healthy economy . . . and society.

All of these challenges and crises – economic, environmental, societal, totalitarian, and technological – are presently setting our country and our entire world on a collision course with disaster, unless they are addressed in a timely and intelligent manner.  It is a very distressing thought that, in spite of my relatively advanced age, I might actually live long enough to see the end of civilization as we know it.  We now have a kamikaze leader, leading a kamikaze country, in a kamikaze world.  But the Japanese word “kamikaze” actually translates to “divine wind” in English.  And that is now my hope: that the disaster of a new Trump presidency – if we survive it – will act like a “divine wind”, clearing away the idiocy in both the Left and the Right, the Democratic and Republican parties, and that from the wreckage, real, rational leaders will emerge who are willing to tackle the problems that really matter – the ones that are pushing our civilization and the world to the brink of collapse.

 

 

Monday, July 22, 2024

Wild in the Streets


 

What do Vladimir Putin, Ayatollah Ali Khamenei, Xi Jinping, Benjamin Netanyahu, Donald Trump, and Joe Biden have in common?  Each of them (with the exception of Donald Trump, who did, and may again) leads a very powerful country.  And the leadership of each has been tarnished by controversy, generally characterized by a reluctance to cede power and, in the more extreme cases, by engaging in and/or supporting brutal acts of terrorism and oppression: against neighboring countries, political opponents, and even the general populations of their own countries.  But these men all have something else in common:  They are all over 70 years old.  Four of them are “Baby Boomers”, meaning that they were children or teenagers in the 1960s, and Biden, while technically too old to be counted as a Baby Boomer, also entered the 1960s as a teenager.  (Ali Khamenei just misses this benchmark, as he was born in 1939.)


(A note:  When I started writing this article, Joe Biden was still tenaciously resisting growing calls from his fellow Democrats to end his candidacy for reelection to America’s presidency.  He has finally done so, while lending his endorsement to Vice President Kamala Harris, who, though much younger than he is, is also a Baby Boomer.)

The irony here is that the youth of the 1960s – at least in America – became famous for opposing their elders, and a popular slogan among them at the time was “Don’t trust anybody over 30.”  They flouted standards and conventions, organized marches and protests, and genuinely scared those who held the reins of power.  Songs by popular musical artists of the time – such as “Street Fighting Man” by the Rolling Stones and “Something in the Air” by Thunderclap Newman – openly espoused revolution, even violent revolution.  Other musical groups, however, expressed cynicism about violent revolution, most notably the Beatles, whose song “Revolution” included the lyrics “But when you want money for people with minds that hate, all I can tell you is, brother, you have to wait” and “But if you go carrying pictures of Chairman Mao, you ain’t going to make it with anyone anyhow”.  The Who, as well, expressed cynicism in their own revolution anthem, “Won’t Get Fooled Again”, with the (prescient) lyric, “Meet the new boss . . . same as the old boss”.

The older generations' growing terror of the restive "hippie" youth was finally given full expression in 1968 in a now long-forgotten movie, Wild in the Streets.  At the center of this film is a rock star, Max Frost, who fronts a band named the Troopers.  An ambitious senator (played by Hal Holbrook), hoping to capitalize on Frost's popularity among younger voters, invites Max and his band to perform at one of his rallies.  But Max Frost turns the tables and uses the opportunity to promote a revolutionary movement that begins with a call to lower the voting age.  The senator, still believing that he can use Max to advance his own political ambitions, supports the movement, and the voting age is lowered from 21 to 15.  The new teen voting bloc eventually succeeds in getting Max elected president, and he then abandons those among the older generation in Congress (referred to as the "Old Guard" by Max) who were still hoping to work with him in a way that would serve their ends as well.  The mandatory retirement age is set at 30, and anyone over 35 is arrested and sent to "re-education camps" where they are permanently dosed on LSD.  This successful youth revolution in the United States inspires identical revolutions in the other major countries of the world.  The movie ends with the 24-year-old Max Frost facing an uncertain future, however, as an even younger generation clamors for more power of their own.


The fictional band Max Frost and the Troopers actually had a bona fide hit record with their rousing rock-and-roll anthem "Shape of Things to Come".  (If I recall correctly, my sister owned it, because I remember hearing it played on her record player.)  The lyrics, rather than calling for revolution, simply asserted that it was inevitable:

 

There's a new sun
Risin' up angry in the sky
And there's a new voice
Sayin' "we're not afraid to die"

Let the old world make believe
It's blind and deaf and dumb
But nothing can change the shape of things to come

There are changes
Lyin' ahead in every road
And there are new thoughts
Ready and waiting to explode

When tomorrow is today
The bells may toll for some
But nothing can change the shape of things to come

The future's comin' in, now
Sweet and strong
Ain't no-one gonna hold it back for long

There are new dreams
Crowdin' out old realities
There's revolution
Sweepin' in like a fresh new breeze

Let the old world make believe
It's blind and deaf and dumb

(But) nothing can change the shape of things
To come

 

Looking back, one can see that the fears of a youth-led revolution were overblown, but the youth of that generation eventually did take over the reins of power – often after they had become older than their former adversaries – and those of that generation at the pinnacle of power now are holding onto it with a vice-like grip.  In some ways, they seem like ugly caricatures of the “Old Guard” that they were railing against in the 1960s.

And I can't help but wonder if what this country – and the world – really needs today is a true-life version of Max Frost, to lead a revolution of the young vs. the old.  (Or, since some of the world's leaders seem to be particularly hard on women in their policies and practices, maybe a Maxine Frost, leading a feminist-youth revolt.)  We have certainly seen noble attempts at this outside of the United States, such as the periodic anti-Putin uprisings in Russia that began with the Dissenters' March in 2006, the Hong Kong protests of 2019-2020, which arose in response to increasingly autocratic behavior by the government of mainland China, and the widespread protests in Iran that began in September 2022 after an Iranian woman, arrested by the "morality police" for not covering her hair, died while in police custody.  And yet, here in America, the younger generation has been conspicuously quiescent, at least when it comes to criticizing or protesting this country's older generation of leaders.  Granted, the "Old Guard" leadership in America is not nearly as draconian as that of Putin's Russia, Xi's China, or Khamenei's Iran.  But the danger seems ever more present that, unless things change, American democracy could be at risk in the very near future.


I think that many would blame this on the very character of the current crop of college-aged youth, colloquially known as "Generation Z".  Mollycoddled by helicopter parents as children, and then psychologically crippled by a too-early exposure to smartphones and social media, these "snowflake" youth are often perceived by their elders as simply unfit to hold any positions of responsibility, or to take on any significant challenge.  I wonder, though, if we're giving up on them too soon.  There was another "lost generation" of youth, raised as children in the "Jazz Age" of the 1920s: an era when Prohibition incited criminal behavior and the glamorization of gangsters, moral codes were openly flouted in speakeasies and jazz clubs, and religion seemed to be withering in the face of scientific assaults, as in the famous Scopes Monkey Trial of 1925, where attorney Clarence Darrow championed the teaching of evolution in the classroom by holding the opposing attorney's religious beliefs up to ridicule.  One could imagine that it would be difficult for any children brought up in that decade to have a moral compass, or any good character at all.  But those kids, after being hardened in the Great Depression of the 1930s, and called into public service in the 1940s to join the war effort against the Axis Powers in World War II, are remembered today as the Greatest Generation: a generation of heroes of the highest order.  So I think it is very premature to write off Generation Z as a "lost generation".  They might surprise us, and in a very good way.  And if the younger generations here in America take up political activism on a scale comparable to that of the protesters of the 1960s, then perhaps, as happened in the Wild in the Streets movie, this will embolden youth throughout the world to push their own protests to successful outcomes.


I just hope that if our youth do finally assert themselves, they don’t go to the extremes that Max Frost and his followers did to drive their elders out of power.  And I especially hope that it won’t take a crisis of the order of the Great Depression, or World War II, to rouse them out of their apathetic slumber and stir them into action.

Tuesday, July 16, 2024

Creative Evolution

 




Recently a member of my book club sent around, to me and the other members, a passage by Henry David Thoreau which he found particularly moving.  It read:

 

I witness a beauty in the form or coloring of the clouds which addresses itself to my imagination... You tell me it is a mass of vapor which absorbs all other rays and reflects the red, but that is nothing, for this red vision excites me, stirs my blood, makes my thoughts flow... If there is not something mystical in your explanation, something unexplainable to the understanding, some elements of mystery, it is quite insufficient.

 

Among the expressions of approval and admiration from the other members, one included an intriguing question:

 

After reading the above, would you be able to say unequivocally, that a human wrote it? If not, how would an AI [artificial intelligence] be able to generate that text? And if it generates the text, does it possess the sentiment? How do you know?

 

And this prompted another member to write:

 

Thoreau's words resonate with a profound appreciation for the beauty and mystery inherent in nature. His ability to see beyond the scientific explanation of the clouds' formation and delve into the emotional and mystical impact of their appearance reveals a deep connection between the external world and the inner realm of imagination.  In a world often dominated by scientific explanations, he reminds us of the importance of embracing the mystical and unexplainable elements that contribute to the richness of our experiences with nature.

 

But he quickly added that these remarks had actually been generated by an artificial intelligence application (ChatGPT) that he had "asked" to offer its opinion on that passage by Thoreau.  In reply, the member who had posed the question said that this computer-generated response was something she could easily have predicted.  I must admit that I was apparently much more impressed by the appraisal than she was, and could easily have been fooled by it, had the other member not immediately confessed to his deception.

 


This is just one example of how AI is, and will increasingly be, encroaching upon domains once considered exclusive to human beings.  Earlier this year, an AI facsimile of the late comedian George Carlin was featured in a podcast special titled "George Carlin: I'm Glad I'm Dead".  It presented AI-generated jokes in the style of Carlin's comedy, delivered in a voice that sounded like his.  The special was eventually removed after the Carlin estate filed a lawsuit on the grounds that it had violated the estate's copyright protections.

 

Comedy, of course, is an art form, and requires a nuanced understanding of what people find humorous, and what makes them laugh.  On top of this, it is a very fickle art form, in that what is funny to some may be completely unfunny to others – even offensive and unpleasant.  Just as Louis Armstrong once said about music, "If it sounds good, it is good," so too the test of genuinely good comedy is whether people find it funny.  Is it really possible, then, for a computer to create comedy . . . to be funny?  At first blush it sounds absurd, but perhaps it actually is possible to use an analytic approach to distill the elements that make certain styles of comedy funny to certain audiences.  George Carlin's style might even have been particularly amenable to this approach, since his comedy relied heavily upon wordplay.  And if it can be done for comedy, it doesn't seem that great a leap to imagine computer-generated music, or visual art.  In fact, it is already being done.  Last March, an application called Suno, nicknamed "ChatGPT for music", was launched; it can produce original music in response to textual prompts.  It was followed a few weeks later by a similar application named "Udio".  And the Turkish-born artist Refik Anadol presented a successful exhibition in London this year consisting of AI-generated images of natural landscapes.

 


In a blog entry that I posted more than ten years ago, “The New World Order?” (https://johnsemeraldtablet.blogspot.com/2013/05/the-new-world-order-in-my-occupation-as.html), I noted that as machines take on an increasing role in producing all of the necessaries of life, including food, it will become more and more challenging for human beings to find ways in which they can exchange things of value for what they need and desire.  I wrote:

 

In some ways, this actually does seem to be the trajectory that we are on.  In America, the proportion of manufacturing jobs has been steadily declining, while that of service jobs has increased, and a third type of employment, in which persons are compensated for thinking (e.g., as executives, consultants, accountants, engineers, and other professionals), has rapidly grown.  We seem to be moving toward a two-tiered society, in which “cognitive” employment and more skilled service jobs are well compensated (although, as satirized in movies such as Office Space, even these jobs can devolve into degrading, poorly paid lackey positions), while lower-level service and unskilled manufacturing jobs receive very meager wages.  And of course, our most successful entertainers – including professional athletes – are extremely well paid.  Another form of “entertainer”, the drug dealer, has become a prominent figure in the underclass and the underground economy (and those who control the production of these drugs are often at the heads of powerful private empires in foreign countries), while the less successful members of this group make up the huge prison population that is now a part of the American social system.  Other less successful and less powerful “entertainers”, such as strippers and prostitutes, lead lives that are only at one or two removes from those of prisoners and the destitute.

 

At that time, concerns about AI had not yet entered the general public conversation, and since then, intelligent machines have conspicuously moved beyond the manufacturing sector and have made strong inroads into the service sector as well.  But even beyond this, recent advances in AI have made it very plausible that jobs in the "cognitive" employment area could be taken over by machines as well.  And if this includes art and entertainment, then it really is becoming increasingly likely that there will be little left of value that individual human beings can offer in return for the compensation that will allow them to procure the things that they want and need.  Even if some kind of technocratic socialism evolves where human beings do not have to work to obtain the things that are produced by machines, this could still result in a very dystopian utopia, not unlike that featured in the 2008 animated feature film Wall-E, where overweight people spend their entire days sitting in comfortable recliner chairs, drinking soda pop from large cups while being entertained by video screens conveniently placed directly in front of them.

 


But setting this dismal scenario aside, there is a more depressing, fundamental question:  If computers will eventually be able to do just about everything that we can do, including producing works of art and entertainment – even if they can only produce effective simulations or imitations of these – then what, exactly, is it of fundamental value that human beings have brought to the universe as a result of their existence?  Will the final culmination of our civilization be the creation of lifeless machines that make our own existence superfluous, meaningless, . . . even unnecessary?  If our physical evolution and the consequent evolution of our civilization represent a sort of culmination of life on earth, then what did we evolve into?  What, then, does the word “evolution” even mean?

 

The theory of evolution, although considered a foundational element of biological science, has not been without controversy – even setting aside those perennial attacks raised by proponents of Creationism and “Intelligent Design” – as both the mechanisms and drivers have been debated and critiqued.  One fundamental criticism – raised, for example, in Norman Macbeth’s 1971 book Darwin Retried – is that the phrase often associated with Darwinism, “survival of the fittest” (which was actually coined by the philosopher Herbert Spencer, but with Darwin’s approval), is a meaningless tautology since, after all, the “fittest” by definition are those that survive.  I find this particular criticism to be rather lame, because one could argue, for example, that the phrase “those who are successful in their profession get promoted” is hollow on the grounds that the “professionally successful” are by definition those who get promoted.  Just as there are specific, identifiable, talents and behaviors that contribute to “professional success”, so too in biological ecosystems those organisms within a species that possess certain characteristics which better enable them to live long enough to reproduce are “more fit” for survival in a very meaningful sense of the word.  Nevertheless, while successful adaptation of species to their environments seems to be a critical component of the evolutionary process, I don’t think that this fully embodies the concept of “evolution” as it is generally understood, by both the scientific and the general community.  There is more to evolution than just developing the ability to thrive in one’s particular surroundings.

 

The evolutionary process involves an increase in complexity: a counter movement to the increase in entropy or disorder that comes about as a result of the Second Law of Thermodynamics.  “Complexity”, however, is a term that defies simple definition, and has inspired a whole science of its own.  What does it mean for something to become more complex?  An evolving ecosystem seems to become more diverse, and yet more interconnected.  In a seminal article published in 1962, “The Architecture of Complexity” (Proceedings of the American Philosophical Society, Vol. 106, No. 6, December, 1962), Herbert Simon contended that “complexity frequently takes the form of hierarchy”, with systems that can be decomposed into interdependent subsystems.  I think that the 20th-Century psychologist Kurt Lewin best applied a model like this to living entities in his theory of personality.  Personhood, he believed, entailed a two-step process which begins with a separation between self and non-self, a property that he referred to as “differentiation”.  The second step of this process is inclusion of the self into a larger environment or totality, in what he referred to as a “part-whole” relationship.  When looked at from a broader ecological perspective, this two-step process of differentiation and integration is evident at all levels of life and in every phase of evolution.  Single-celled organisms exist because of self-constructed barriers between themselves and their environment, and yet to survive they have to interact with that environment, taking in necessary elements for sustenance, competing with other organisms, and perhaps avoiding predatory organisms as well.  Increased complexity is evident when cells are no longer autonomous entities, but instead are components of larger, multicellular organisms – hierarchies like those described by Herbert Simon.  And from the simplest unicellular organisms to the largest multicellular ones, webs of interconnected relationships continue to emerge and expand: cooperative, competitive, combative, parasitical, and predatory.


Kurt Lewin's Field Theory of Personality
 

I have remarked elsewhere that an interesting characteristic of all living beings with organs of perception, and certainly those that possess some form of consciousness, is that they tend to exclude at least as much as they take in.  This is in part due to simple physical limitations of course: we can only see so far, and can only discern sounds to a certain level of softness.  But other exclusions and limitations seem to have been intentionally “hard-wired” into living beings, such as the limited range of light and sound frequencies that they can perceive.  These limitations make the surrounding environment more “manageable” to the various creatures that live within it, providing them with just enough information to sustain themselves, to avoid predation, and to propagate, without being overwhelmed by sensory input.  Even the function of memory exhibits this tendency to selectivity, so that the present awareness is not cripplingly clouded by a torrent of images, sounds, and feelings from past experiences.

 

And yet, even among the simplest unicellular organisms, and certainly among the higher ones, there is a countervailing impulse to expand this limited cone of awareness, beginning with the act of movement – self-directed, or brought about through some other means.  The expansion becomes more effective through communication, beginning with the simple recognition that there are other living entities identical to or at least similar to oneself.  The successful actions of these others may at first only be observed and mimicked, but at some phase of evolution, information is actively shared, resulting in coordination of activities to serve a common end, and access to a wider pool of knowledge that one can use to specifically serve one's own purposes.  The means of sharing information become more sophisticated among higher species, culminating in the verbal exchanges of human beings and, beyond this, their writing.  As a result of scientific advancement, we human beings have also found other ways to expand our cones of awareness, through microscopes, telescopes, and even reading glasses and hearing aids.  Among the most social organisms, Simon's hierarchies are very much in evidence, whether they are simple dominance hierarchies, or highly coordinated activities, as seen among ants and bees, for example.  Hierarchies are the hallmark of human civilization, manifesting in our political organizations, our large private corporations, and even in the way that we structure our general knowledge and information.  Hierarchy, in fact, is a hallmark of life, present across the entire spectrum of species.  Cells that make up larger organisms do not have the free-ranging existence of one-celled creatures, but in return for this loss of freedom they are sustained and protected by the organism of which they are a part.  At the other end of the spectrum, we human beings voluntarily and regularly give up parts of our autonomy to our political institutions, our workplaces, and our religious organizations because we believe that doing so enriches our individual existences.  We may not be cells rigidly connected to a larger body, but we are all parts, in various ways and to various degrees, of larger wholes.

 

Is this, then, the culmination of the evolutionary process: perfecting the art of sharing, collecting, and coordinating information so that we can maximize, in a pragmatic sort of way, our individual and collective cones of awareness?  If so, then it is very easy to imagine that intelligent machines could represent the ultimate culmination of this process: organizing, coordinating, and disseminating information in the most efficient way possible, and perhaps even guiding and managing our activities in a way that we believe best serves our ends.  Constitutions and legal codes might even be replaced by systems of social regulation guided by computer programs.  Intelligent machines may logically end up, then, at the apex of the various organizing hierarchies that will bring this model to its greatest fruition.  In the most utopian and benign form of this vision (and science fiction has provided us with an abundance of scenarios where it could go terribly wrong), it is – at least in some ways – an exhilarating one.  And yet even if this best case represents our future and the culmination of civilization – of evolution itself – there is something about it that seems ultimately hollow, empty.

 

A Wildflower Meadow: Paradise or Ecological Battleground?


Is it our emotions, then, that breathe life into evolution?  This is certainly the fundamental thing that sets us – and all animal life – apart from machines, and probably always will.  Have emotions reached a peak of refinement and intensity in human beings, after evolving from the simple anger/fear fight-or-flight emotions of the lower animals?  Even in Darwin's day, there was a realization that there is a strange paradox underlying evolution, in that behind the image of a beautiful garden meadow, for example, there is a life-and-death struggle involving competition, conflict, and predation among its denizens.  Fear, anger, and selfish desire are just as much fundamental elements of the life process as love and compassion.  Have these latter, more benign, emotions become more prominent with the progression of higher life forms?  They are not unique to human beings of course.  Some species of animals, in addition to being capable of harboring feelings of compassion and affection beyond those required for parental care, even seem to possess their own forms of a sense of humor, and many species have shown an appreciation for music.  But if the bloody history of human civilization is a representative guide to the life process at its most elevated level, then we must admit that the dark and sordid emotional elements must and do continue to exist right alongside the lighter, more benevolent ones.  If, then, it is not in emotional refinement that we can hope to find the "soul" of evolution – like that which stirred Thoreau's appreciation of that mystical element of beauty in natural things, and which he said must always defy logical description (even that made possible by artificial intelligence) – then where else can we turn?

 

Perhaps the soul of evolution lies in the act of creation, of shaping and organizing the environment as the result of an intentional act of will: marking territory, building nests, making tools, cultivating agriculture, constructing buildings, making music, making art, inventing machines.  Creation is at the very core of consciousness, shaping both our external and internal environments.  Even that limited, filtered stream of perceptions that organisms allow into their consciousness comprises a cacophony of impressions that must be actively ordered into an internal recreation of at least a facsimile of the surrounding environment, and, as the philosopher Immanuel Kant suggested, what we create internally might not entirely correspond with what's actually out there, around us.  We might already be introducing a little fiction – not just into our present awareness, but also into the growing flood of memories that we are similarly screening and ordering, so that they link our pasts and presents in ways that form coherent life histories.

 

The philosopher Henri Bergson saw evolution as a progressive, creative act, and his book, Creative Evolution, inspired the title of this present article.  But in that book, he warned that creation – true creation – is incompatible with some of our common conceptions of how the universe works.  For example, if we truly live in a universe where cause-and-effect rules, then in it, "creation" becomes a meaningless concept.  If every invention, every writing, every work of art has come about as the result of an inevitable chain of causes, then no matter how novel or innovative these various productions may appear, they were not the result of creative acts.  In theory, at least, they could have all been foreseen by tracing out, from any time before they appeared, the chain of events that would inevitably lead to them.  This would not be evolutionary complexity brought about by living, genuine creativity, but instead the complexity of a very elaborate wind-up toy.

 

Henri Bergson


Similarly, Bergson argued that if there is some specific “end state” that represents the culmination of evolution, be it a particular form of intelligence, or type of organization, or capacities for perception, communication, or action, then this, too, undercuts the idea of evolution as a creative process.  He likens that end state to a jigsaw puzzle:  As species – and then civilizations – advance, they move farther along in completing that ultimate puzzle, and reaching the perfect end state.  Some may collapse before succeeding, and some may only succeed in piecing together parts of the puzzle.  At least in principle, however, some species, some civilization, may someday reach that ultimate solution: the one that corresponds to the pinnacle of evolution.  But in spite of the diversity of the various approaches, and relative degrees of success, in reaching this end state, perhaps played out by multiple species and multiple civilizations on many worlds over millions of years, the blunt fact is that the single end state always existed, at least as a potential for realization.  And if there is a unique and specific end to evolution, then ultimately the evolutionary process, while perhaps characterized by a multitude of radically different paths, is not a creative one.  Like the inventions inevitably “created” as the result of causal chains, and so not really “inventions” at all, the jigsaw puzzle that defined the culmination of evolution was always there, waiting to be completed, even if it could never be completely seen or comprehended by those who were on the path to completing it.  A genuine work of creation, then, is a novelty: unpredictable right up to the point where it finally comes into existence, and therefore creative evolution is a process that defies causality, and has no pre-determined end.  In Bergson’s own words: “Life in its entirety, regarded as a creative evolution . . . transcends finality, if we understand by finality the realization of an idea conceived or conceivable in advance.”

 

What exactly are we doing when we engage in a genuinely creative act?  At the very least, we are effecting an intentional break in the causal chain of our existence.  The idea that the universe is not limited to unbroken chains of causality has actually become popular among scientists, particularly quantum physicists, who assert that there is a randomness to the most fundamental processes of our universe that cannot be removed by better methods of observation and detection.  Unfortunately, this idea has been pounced upon as a justification for belief in free will.  "Loose play" (a phrase originally coined by the 19th-Century philosopher William James) in sequences of thoughts and events, it is contended, might provide just that space needed for true freedom of action, unconstrained by prior conditions and conditioning.  This is an empty argument for free will, however.  Imagine a driver (or, for that matter, a driverless car, following a computer program) that has been directed to follow a particular route.  While the initial set of left or right turns has been explicitly specified in the instructions, at some point the driver is instructed to make a turn based upon some method of random selection, like a coin toss (e.g., left turn for "heads", right turn for "tails").  Has this introduced an element of "free will" into the trip?  Hardly, since in spite of the randomness introduced into the outcome – the final destination – the driver is still following directions, and is not exercising any form of personal choice in the navigational decisions.  If we are to believe in free will, we won't be able to find justification for it in the randomness postulated by modern physics.  If God is "playing dice" (as Einstein put it) with the universe, then God may be playing dice with our minds as well, but this won't make us self-directed, autonomous agents: we would simply be living beings that are ultimately unpredictable, even to ourselves.
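To make the driver analogy concrete, here is a minimal illustrative sketch in Python (my own hypothetical construction, not anything from Bergson or James): a "driver" that follows a fixed list of turn instructions, one of which is decided by a coin toss.  The randomness changes the outcome, but the driver never stops merely following its instructions.

import random

def drive(route):
    # Follow a list of turn instructions; "coin-toss" means flip a coin.
    turns_taken = []
    for step in route:
        if step == "coin-toss":
            # Randomness decides this turn, but the driver is still just
            # obeying the instruction "flip a coin"; no choice is exercised.
            turns_taken.append(random.choice(["left", "right"]))
        else:
            turns_taken.append(step)
    return turns_taken

# A hypothetical route: three fixed turns and one left to chance.
print(drive(["left", "right", "coin-toss", "left"]))

Run it twice and the printed route may differ, yet nothing in the program has chosen anything, which is the whole point of the analogy.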

 

"Blue Poles" (aka "Number 11, 1952") by Jackson Pollack

Assuming, then, that we are truly free, how do we create, and how do these acts of creation lead to evolution, in the open-ended way that Bergson envisioned it?  At its most basic, we create by imposing an order in our universe that didn’t exist before: an order that is within our level of comprehension, but is somehow different than the order that has been given to us, in our surroundings.  I do this if I paint, for example, a bowl of fruit.  It may be a perfect likeness of an actual bowl of fruit in front of me, or I might innovate by changing particular elements of the picture, like the shades of color, or the number of pieces of each type of fruit within the bowl.  Beyond this, I may produce a scene or images that draw from multiple experiences of mine, including various elements of each, thereby making it completely unique, and original.  The art of painting has evolved over time, with advanced techniques of representing perspective allowing painters to more accurately reproduce scenes of depth.  But with the growth of technology, and particularly the advent of the camera, artists were spurred to go beyond producing, through painting, reproductions of actual scenes, or even original but realistic portrayals of fictional scenes, and do something more: something that would require the viewer to be more creative in his or her interpretation of the work.  We find an extreme example of this in the work of Jackson Pollock, who randomly splattered colors of paint onto a canvas.  In the chaotic array of colors that resulted, the viewer is invited – or challenged – to find in that chaos an underlying order that is pleasing to the eye.  This introduction of intentional randomness, chaos, or cacophony, is actually the hallmark of all great works of art.  Great musical compositions move beyond the simple repetition of pleasant-sounding melodies and introduce novel sequences of notes that at first seem to disrupt or even destroy the comfortable pattern, but then resolve into a greater, more complex melody that is ultimately more pleasing to the ear.  In his book, The Rigor of Angels, William Egginton describes how in the creative process of writing a mystery novel, this combination of order and chaos, familiarity and unpredictability, in a suitably balanced way, is absolutely essential: 

 

You make it through three-hundred-plus pages, and the culprit turns out to be a character you’ve never met before with no connection to the story so far. Surprising? Certainly, but no one in their right mind would judge this to be an excellent way to end the story. Now let’s say that in the mystery you’re reading, each clue led so inevitably to the final revelation that you could see it coming a hundred pages before the end. Clearly that would be unsatisfying as well. In the first case you would be faced with a random occurrence, an unmotivated insertion into the structure of the novel that destroyed any sense of its coherence, its purposiveness. The novel is certainly surprising, but there is nothing inevitable about it—the occurrences seem random, without purpose. In the second case, however, you’ve seen the author’s handiwork the whole way through; it’s all inevitability with no surprise, all artifice with no naturalness.

 

And it is not just in the arts, but in scientific invention and discovery, where a process like this plays out: random discoveries or unanticipated outcomes of controlled experiments are combined with existing systems of knowledge (sometimes over the resistance of those who are intellectually wedded to the existing systems) to produce something new, or better.  A popular, largely apocryphal story about Thomas Edison's invention of the incandescent light bulb is that he tried running an electric current through an immense number of alternative filament materials – each of which burned out rather quickly, if it produced any light at all – until he finally hit upon a carbonized bamboo filament that worked.  Whether it is the artist playing with different combinations of colors or sequences of words or musical notes and then integrating them into a structured work, or the scientist experimenting with different materials and techniques, the process is the same: it is an intentional foray into randomness in order to find a new, satisfying order.

 

But in defining creativity as an act of imposing a new order on our environment, we have to be careful not to imply that by doing so we are always creating order where none existed, or creating a greater order than existed before.  When a new building is raised, regardless of the sophistication of its design, or the novelty of its architecture, it may not be imposing a new or greater order in the area that it is occupying.  If the land where it is constructed first had to be cleared, then an entire ecosystem, with complex interconnecting webs of diverse animal and plant species, had to be swept away . . . destroyed.  As so eloquently expressed by songstress Joni Mitchell in her classic song Big Yellow Taxi, "They paved paradise, put up a parking lot".  "Yellow Taxiing" creation – the tearing down of one form of order to create another – reminds me of that old joke about a dim-witted man who fell off a cruise ship and found himself washed up on a deserted island.  After a thorough search, all that he could find was a small, fully intact sailboat floating in the harbor.  And after giving the matter some thought, the man realized that this was his way of escaping the island: all he had to do was break down the sailboat and use the wood to build a raft.  But "Yellow Taxiing" is actually a fundamental element of creation, not just a nasty byproduct of civilization.  It is ubiquitous in nature: even the simple process of food digestion is a breaking down of one form of order to preserve or enhance another.

 

Picasso Untitled Sculpture (Chicago)

Of course, to the architect and the architect's contractors, what was standing in the way of their building (and its parking lot) was a chaotic mess of weeds, wildflowers, and pests, and here we stumble upon another insight: that what is order to one entity may be chaos to another, with chaos often representing a higher order that is beyond the comprehension of many entities.  The spider that builds a web in a corner of my living room has no appreciation for the furniture in that room: the expert craftsmanship of the individual pieces and the way that the furniture is carefully placed and positioned.  I, on the other hand, upon encountering its web, might see it as an ugly, gooey, incoherent mess, completely unable to appreciate how the individual strands of webbing were carefully constructed and conform to an intentional plan.  Pigeons have little or no appreciation for the public statues that they perch on: who or what these statues represent, and why they were erected.  Then again, pigeons and humans alike might have difficulty making sense of the Picasso sculpture that occupies Chicago's Daley Plaza.  I personally believe that the Chicago architectural skyline looks magnificent when viewed from Lake Michigan, but I have no idea what ecological diversity was destroyed to make it possible.  Creativity, then, often involves creative destruction, and, rather than the creation or advancement of order, often results in the replacement of one form of order with an alternative (perhaps even less complex) order imposed by its creator.

 




These creative processes – "Pollocking" and "Yellow Taxiing" – while explaining how intentional novelty comes about, still fall short of explaining how novelty leads to increasing complexity.  After all, if every living creature has a limit to how much order it is capable of seeing and comprehending, then it seems that no amount of effort is going to enable it to produce something of a greater order, but simply new and different things that fall within its limits of comprehending order.  And the Darwinian drive to effective adaptation would seem to lead only to a living world of perfectly fitting parts, rather than an evolving one, as popularly understood.

 

It seems that it is the process of interaction among living entities that is responsible for this – for creating a whole greater than the sum of its parts.  This is particularly evident in human civilization, where no individual is capable of understanding all of the science and technology which supports and sustains that civilization.  I remember watching a documentary about the great 19th-Century German mathematical genius Carl Friedrich Gauss.  The narrator said that he was the last human being who was capable of understanding everything that was known at the time about mathematics.  After Gauss, the field had become too broad, diversified, and complex for any single person – no matter how gifted – to take it all in.  I actually had firsthand experience of this when I encountered a dirty little secret in the university system.  It happened when I was in the graduate program in economics.  To earn a degree, a student in that program had to take two comprehensive exams – one in macroeconomics, and one in microeconomics – and, as the name implies, the exams were intended for the student to demonstrate that he or she had a sufficiently broad understanding spanning the complete breadth and depth of these fields.  To my shock and horror, I flunked the macroeconomics exam the first time I attempted it: shocked because I had gotten all A's in the macroeconomics courses I had taken.  When I went into the economics department office, completely distraught, the secretary offered her sincere sympathy, but also asked me if I had studied the practice exams.  And then it hit me.  "Of course," I thought to myself, while probably slapping the palm of my hand onto my forehead, "the practice exams!"  I had forgotten my experiences as a graduate student in the mathematics program at another university.  That program, too, required the passing of comprehensive exams as prerequisites for getting a degree, but everyone knew that to succeed at these, one had to look at the practice exams.  The field of mathematics was simply too broad (as the Gauss documentarian had said) for even the most gifted student to have a comprehensive knowledge of it.  The practice exams provided clues to the student on exactly what kinds of questions would be asked: sometimes the only difference between these and the actual exams was minor changes in wording or numbers.  With this resource, I had passed those two mathematics comprehensive exams, and now, after reviewing the practice exams available in the economics department office, I went on to pass the macroeconomics exam on my second attempt (fortunately we were allowed a second attempt), and the microeconomics exam on the first.

Carl Friedrich Gauss
 

It is probably not too broad a generalization to say that all of the sciences have grown so much in scope and complexity that no single individual could have complete comprehension and mastery of any one of them, or even of any of their subdisciplines.  And yet they continue to grow, as the underlying base of knowledge that supports them expands through the contributions and interactions of their practitioners.  Even our economy works that way, as so eloquently described by Adam Smith in The Wealth of Nations.  In his famous pin factory example, he describes how several individuals, each acquainted with only a simple distinct task associated with making pins, can, when their work is coordinated, produce many thousands of pins a day.  And through the “invisible hand” of the free market, individuals acting in their own limited self-interest can promote broader economic ends that had not been part of their intention, and probably beyond their comprehension.

 

In civilization, then, our sciences, technologies, markets, and social institutions are larger in scope than any individual can completely comprehend or manage, but it is through our collective activity that they are not just sustained, but continue to grow in complexity.  And if we look at the life process in general, from the simplest single-celled organisms to the multicellular vertebrates, we can see that it is a collection of individual living entities, each a separate “I-ness”, with a boundary separating “self” from “non-self”, and a limited, often selective ability to perceive its environment, which paradoxically exhibits an innate drive to expand its particular cone of awareness, and of activity, in order to survive, if nothing else.  And this leads to interaction with other “I-nesses”, which in turn leads to those expanding webs of interconnection that produce the growing complexity of the ecosystem.  But this is a living process, not a mechanistic one, beginning with that mysterious “I-Thou” relationship (as Martin Buber would call it) in which a living entity senses that another entity it encounters is somehow, in some way, like itself.

Female Praying Mantis Eating Its Mate
 

I know that biologists have a counterargument to this view of individual entities acting in their self-interest and, by doing so, in a manner similar to Adam Smith's economic "invisible hand", creating and sustaining complex systems greater than what they could ever individually manage or even comprehend.  This is the "selfish gene" theory: a more mechanistic view of evolution in which genes, rather than individual self-interest, are the real drivers of adaptation.  And there are many examples to support their view, in which individual members of certain species engage in acts of self-sacrifice that, while self-destructive, actually further the chances for their particular species' survival.  Many examples of this stem from the sexual impulse which, like the emotions, is a characteristic of living beings that will never be duplicated by machines.  (I have no doubt that someday robotic sex toys will be able to convincingly simulate a human being in the throes of sexual passion, but I firmly believe that no artificially intelligent machine will ever be able to genuinely feel sexual desire or passion.)  The most extreme cases involve those unfortunate male insects and arachnids for which surrendering to the sexual impulse is their final living act.  Even in human beings, a surrender to the sexual impulse is often a self-destructive act, resulting in the destruction of public reputations, careers, marriages, and – in cases involving the abuse of power – the loss of freedom.  But from the perspective of the selfish gene theory, this occasional self-destructive tendency of sexual desire – at least in those cases involving insects and spiders – actually serves to further the survival advantages of the species, particularly when the dying male becomes the next meal for the mother of his future children.

 

But there are counterexamples, where an individual of one species sacrifices its life to save an individual of another.  I'm thinking of those cases where a dog has died fighting off some wild beast, like a bear, to protect its master.  Now even here a biologist might argue that "selfish gene" behavior is actually at play, because the noble behavior of such dogs will encourage humans to perpetuate their lineage.  But this can't explain the reverse situation.  Humans will go to great lengths to protect their animals, even at risk to their own lives.  I can think of one particularly tragic case, involving a young man who was visiting a hot springs national park.  His dog dived into one of the hot springs, and in spite of the shouts of others standing nearby to let the dog be, the young man dived in as well, to try to save it.  Both of them died.  Less dramatic but significantly more common examples of human benevolence to other species include the placement of bird feeders and bird baths by homeowners in their yards, and taxpayer-supported wild animal sanctuaries and national parks.  But even beyond this, we have seen, over the past century or so, increasing numbers of human beings taking up the cause of environmentalism, and expressing an active concern for the welfare of other species.  This actually seems to be a byproduct of the growing cones of awareness and action made accessible to each of us by civilization: a growing cone of empathy as well.  To be sure, many if not most human beings still direct most of their empathy to relatives or close friends and, to a lesser extent, to those who share their religious views, or political views, or ethnic, racial, and national identity.  Sadly, one only has to check the daily news to see rampant and often tragic examples of political, ethnic, and religious tribalism.  And yet, that growing cone of empathy is there as well, and it continues to grow.  It provides a hopeful sign that maybe evolution – including the evolution of human civilization, with its computers and other machinery – actually does make us better, somehow, or at least increases the potential for each of us to be better versions of ourselves.  Artificial intelligence, if properly harnessed, will only enhance that potential, perhaps significantly.  In this light, I can understand the ebullient (but still cautious) optimism of futurists like Ray Kurzweil, who believes that technology, including that of AI, is accelerating our evolution.  The point, however, is that at the heart of evolution there will always be a real, living heart . . . and mind (". . . and," some might add, "spirit").  AI will augment our potential to engage in creative projects of a higher order.  Yes, it will also present a potent lure to indolence.  I suspect that in our future, many if not most of us will spend at least part of our days – and some will spend all of their days – behaving like those overstuffed couch potatoes in Wall-E, and perhaps there is a real risk that an excess of "Yellow Taxiing" creation will leave our world as ecologically desolate as that in the movie.  But at least some of us, hopefully many of us, and ideally most or all of us, will have and use a greatly enhanced ability to put our individual creative powers to greater ends than we can currently imagine.

 


And the ultimate arbiters of how valuable those creations are will be living human beings, who are capable of feeling the awe expressed by Henry David Thoreau while gazing up at the clouds.  Computers will be able to write essays on these creations, and even imitate works of art "in the style of" certain artists, or in conformity with the tastes of a particular audience.  The greatest works of art, however, are those that create new styles and move beyond conforming to existing tastes.  Sometimes they actually compel their audience's tastes to "evolve" before the genius of an artistic innovator can be truly appreciated.  That's real evolution – creative evolution – and it is the evolution that is at the heart of life.

 

I couldn't resist ending this long essay by simply putting to ChatGPT the fundamental question that I have been attempting to address: "What is the ultimate purpose of evolution?"  Here is the reply:

 

The "ultimate purpose" of evolution is not a question that has a straightforward scientific answer. Evolution, as understood through biological science, is a natural and ongoing process without inherent purpose. However, the question of purpose can be explored through various philosophical, religious, and personal lenses:

• In science: Evolution is a process without inherent purpose or end goal.
• In philosophy: Purpose is a human concept; evolution itself lacks an intrinsic meaning.
• In religion: Some views see evolution as part of a divine plan.
• In practical terms: Evolution helps us understand life and informs ethical frameworks.

Ultimately, the purpose of evolution can be seen differently depending on one’s perspective and the questions one is seeking to answer.

 

It's a good, comprehensive, and noncommittal answer, and ultimately a comforting one, in that I know that, at least at the present time, AI has not developed any concrete plans for my future, or for that of the rest of the planet.