Saturday, February 28, 2015

Fear of Music

One of the greatest loves of my life is music.  I think that this love was first inculcated in me by my father, who was himself an ardent fan of music.  His particular brand of choice was American country western music.  He was a native of Arkansas, and among my earliest memories are those of him spending a relaxing weekend evening at home, drinking his favorite beer, listening to his country records.  In fact, I think that I probably learned to read at a tender age because he would enlist me to pick out the particular albums that he wanted to listen to during his evenings of repose.  I still remember his favorite musical artists to this day: Johnny Cash, Waylon Jennings, Buck Owens, Conway Twitty, Floyd Cramer, and Marty Robbins.  Whenever my father would take us on our annual pilgrimage from the suburbs of Chicago to his ancestral homeland in Arkansas, to visit his parents, he would always point out the birthplace of Johnny Cash, with a reverence that seemed fitting for a war hero or a great statesman.

I learned to love this music, too, but as I grew older, my tastes expanded into other musical genres.  At the age of thirteen (a fitting age, in retrospect), a childhood friend introduced me to rock and roll, with all of the conspiratorial air of offering someone his first cigarette.  It was a rapturous introduction, and afterward I eagerly explored the popular songs then in vogue.  I remember when I bought my first rock and roll album, entitled “Made in Japan” by the group Deep Purple.  When my mother discovered that I had spent a large proportion of my meager earnings as a stock boy at a local department store on this purchase, she anxiously exhorted me to hide it from my father.  I am not sure, in retrospect, if it was actually because of the extravagance of this purchase, or rather because I had ventured into this subversive realm of rock and roll music, that my mother felt it necessary for me to hide the fact of it from my father.

But my record collection continued to grow – mainly in the form of 45 RPM “singles”, but also an occasional album, when I could afford it.  This was the early 1970s, and it was not until many years later, with the benefit of hindsight, that I realized I had discovered rock music at the height of its Golden Age.  Listening to the popular rock AM radio stations at the time was a joy, an ecstasy, and I devoted many hours a week to the pastime.  I was hooked.  I became such an adept “connoisseur” of rock and roll that by my late teens I could impress my friends with the ability to identify any song playing on our favorite radio stations within a second or two of its opening bars.

Of course, there is a downside to discovering a genre of art or music at the peak of its era, and that is that one has to experience its decline.  When I had first started listening to AM radio, nearly every song was good, and there was just the occasional disappointment.  But this ratio changed, very quickly, and AM radio soon turned into a commercial musical wasteland.  The weekly list of “Top 40” hits, which had originally truly ranked the best music of the current week, evolved into a ranking instead of commercially successful but bland and forgettable music.  This is when FM came to the rescue – a discovery that was for me just as exciting as the original discovery of rock music itself.  On FM one could hear something more interesting than mere “pop” music.  There were songs from albums that never made it into popular airplay on AM, some of which far exceeded the requisite 3-4 minute length of “Top 40” hits.  In fact, there were songs that took up the entire side of an album – some 20 minutes or more in length – such as the long version of Rare Earth’s “Get Ready”, and Iron Butterfly’s “In-A-Gadda-Da-Vida”.  FM stations played these songs in their entirety, and often played whole albums as well.  Quality rock-and-roll music had gone “underground”, and FM was the voice of this underground.

But even FM could not stave off the decline of rock-and-roll: it merely slowed its demise.  By the latter half of the 1970s, it was clear that the caliber of both the bands and the music was on the wane.  FM stations, in fact, began to sound more like AM stations, as the focus shifted to commercially successful songs, rather than ones that were deemed to be of high quality.  And the music that was out of the mainstream also changed.  In the past, such marginalization was a badge of honor: a place for music that was too creative and interesting to be included among the vapid commercially successful “hits”.  But now, a growing proportion of the marginalized music just sounded eccentric, and even unpleasant.

When “disco” music emerged as a popular alternative for teenagers and young adults in the 1970s, it was a wake-up call for rock-and-roll.  It is fascinating, in retrospect, to recall the rabid reaction to disco music that eventually developed among rock-and-roll fans.  The harsh criticism that disco music took – as mindless, crowd-pleasing pap – was, I think, largely undeserved, and even had racist overtones.  Its sudden popularity merely highlighted the fact that rock-and-roll had let a large portion of its core audience down: that group of fans – mainly women, I think – who preferred music that one could dance to, or at least feel festive about.  If one visited a rock bar in the late 1970s, what one would often see is young men with glazed, alcohol- or drug-addled expressions, sitting listlessly at the bar or shooting pool, while the women in their company, if they were not in the same condition, looked palpably bored.  Disco music, and the establishments that played it, provided a venue where one could dress up, and dance, and flirt, and possibly do more than flirt.  It brought a certain type of romantic excitement back to music.  And much of it was genuinely good, and compared favorably with rock music even in its better days.  Artists and bands such as Evelyn “Champagne” King, Taana Gardner, Parliament Funkadelic, Chic, Slave, and Prince made songs that livened the mood, and quickened the spirit.  White male fans of rock and roll at first grudgingly accepted this new musical genre.  It provided, after all, a much more pleasant environment (discotheques) for looking at and meeting women (and one that was probably much less threatening for women than many of the rock-and-roll “dives” that abounded at the time).  I think the charm faded for these men, however, as they realized that the new venue put them more often in competition with non-whites for the favorable attention of the women, hence the racist element of the rabid backlash that ensued.
Admittedly, there were other causes that were grounded in genuine criticism.  Pop musicians (including former rock-and-rollers, such as Rod Stewart) quickly capitalized on the disco craze, and the result was some dismally bad hits that still produce a cringe when one encounters them on the radio today.

In the early 1980s, rock had something of a renaissance.  The challenge that disco had presented, before it was successfully repressed, probably was an initial driver for this, but the rise of MTV and the new medium of music videos played at least as great a role in revivifying the genre.  Even rock bands whose better days were seemingly long behind them, such as ZZ Top, started producing interesting music again, and other bands, such as the Pretenders, INXS, and Duran Duran, came into prominence, helped along by the exposure that their music videos received.  Sadly, the renaissance did not last long, but it was a welcome reminder of what rock had been in its better days.

By the end of the 1980s, this renaissance had played itself out.  I went on to graduate school at this time, and it was during this period of my life that I began to explore, with great pleasure, an entirely different musical genre.  This was classical music.  With an enthusiasm almost equal to that which accompanied my introduction to rock and roll, I explored the works of Rimsky-Korsakov, Tchaikovsky, Wagner, Strauss, Mozart, Sibelius, and Beethoven.  I found that their music could send me into rapturous moods almost as powerful as the ones that I had experienced while listening to my favorite rock songs.  But I discovered something else as well.  I realized that classical music, too, had gone through a rise and fall very similar to the one that had happened with rock music.

In both genres there had been an early phase, where the music had been simpler, more rudimentary, but powerful and deeply inspired, nonetheless.  With rock and roll, this had been characterized by the driving three-chord compositions associated with artists such as Chuck Berry, Little Richard, and Bill Haley and the Comets, along with the “rockabilly” artists such as Carl Perkins and Elvis Presley, in the 1950s and early 1960s.  With classical music, the comparable period occurred in the late 17th and early 18th centuries, and was exemplified in the works of Johann Sebastian Bach, and George Frideric Handel.

This early phase was followed, in both cases, by an evolution in the musical genre brought about by a development of technique.  The musical compositions became more complex, with a more intricate interweaving of melody and harmony, and the usage of an expanded ensemble of instruments (as well as, in the case of rock music, electronic effects) to perform them.  This was exemplified, in classical music, by the works of Mozart, Beethoven, Strauss, Tchaikovsky, and Wagner.  With rock and roll, it came about during the so-called “British invasion” of the 1960s, when artists there who had taken up the mantle of this genre – which until then had been primarily an American enterprise – led the charge in taking it to a higher level of sophistication.  British musical groups such as the Beatles, the Rolling Stones, the Who, Yes, and Led Zeppelin ushered in rock music’s Golden Age, along with North American artists and groups such as the Doors, Jimi Hendrix, Janis Joplin, and Crosby, Stills, Nash, & Young.  What made this music so great, in the Golden Ages of both the classical and rock genres, was that its sophistication was matched by inspiration: both spirit and technique infused the compositions.  Beethoven’s “Violin Concerto” and Jimi Hendrix’s “Voodoo Child (Slight Return)”, for example, have this in common: that each exhibits a masterful virtuosity in the performance of the primary instrument, with a musical score that seems to skirt dangerously at times on the borders of chaotic dissonance, and yet each is ultimately a deeply satisfying musical experience.  These examples, admittedly, probably represent cases where the artistry and sophistication did not appeal to everybody.  But consider, as another example, Strauss’s “Blue Danube Waltz” and the Beatles’ “Lady Madonna”.  These crowd-pleasers are no less representative of the marriage of soul and compositional finesse.

The third phase that is seemingly common to all music genres – whether classical, or rock-and-roll, or even jazz and country western – is one of decline.  The Golden Age in which the equal marriage of technique and artistry produces complex compositions that are immensely pleasing to the ear gives way to something of a markedly inferior caliber.  And the decline seems to happen in two distinct ways.  First, there is a growing dominance of technique over inspiration.  The music characterized by this flaw is no longer being composed to please even the discriminating listener, but rather the critics: who by this time have become an effete intelligentsia who are so enamored with the “how” of music – the technical prowess in composition and performance – that they have managed to make themselves incapable of appreciating the “why” – the production of something that is pleasing to the ear.  There is much truth in that famous saying of Louis Armstrong’s about music, that “if it sounds good and feels good, then it is good.”  This is a basic, fundamental truth about music that critics and their pseudo-sophisticated followers in a genre’s age of decline seem to forget.  But such an intellectually-driven divorce from the reality of what lies at the base of music’s greatness would not be enough to bring about this decline, unless the composers and performers themselves had fallen under the sway of these critics, and sadly, this is just what happens to a great many of them.  The result is a “new wave” of music that is technically sophisticated, but lifeless and even grating to the general listener.

I saw this happen with much of the so-called “New Wave” movement of rock-and-roll in the late 1970s and early 1980s.  Granted, some of this music represented a sincere attempt, on the part of the bands that were part of it, to try to get back to basics – back to that earlier, more primal phase of the genre – and thereby re-inject raw emotion into their compositions.  But most of it sounded rather contrived: a sort of synthesized cacophony that was constructed to answer the call of the music critics of that time for a next generation of music.  And later, after I had become acquainted with classical music, I discovered the same signs of a decline brought about by soul being smothered by technique.  The decline there happened in the 20th century.  And the evidence of this decline can still regularly be seen at just about any contemporary classical music concert, as I was later to discover.  A tradition has evolved, at these concerts, of playing three or four musical compositions by a variety of artists.  Most of these compositions will consist of well-loved symphonic pieces composed by the likes of Haydn, Mozart, Beethoven, Tchaikovsky, Rimsky-Korsakov, and Mendelssohn.  But invariably one of these compositions will be by a twentieth-century “modern” composer, characterized by discordant – even chaotic – melodies.  Listening to these is an unpleasant labor, one that the concert organizers seem to force upon the audience as a sort of “duty” necessary for a well-rounded musical experience, like parents who compel their children to remain at the dinner table until they have eaten everything on their plates: not just the things that they enjoy, but also the things that they despise, such as broccoli, or spinach, or kale.  The concert organizers really seem to be imposing these modern pieces on us for our own good, regardless of how hideous they sound.
(As I have often joked to friends, this “modern” classical music has always sounded to me like the musical score from a low-budget horror movie.)

But there is a second form of musical decline which generally occurs alongside the first.  This is the development of sterile, bland musical pieces that, while superficially similar to the musical genre that they are aping, and harmlessly pleasing to the ear, are unmemorable.  Collectively, they comprise the “Muzak” that is often played in shopping malls, elevators, and dentists’ offices.  In a way, this avenue of decline, too, is a consequence of the suffocation of inspiration by technique, though in this case there is no pretense of technical wizardry compensating for a lack of artistry.  Instead, there is merely the crass commercialism of selecting melodic sequences that sound good, patching them together, and dressing them up in the garb of the genre that they are made to resemble (violin strings and woodwinds for classical music, guitar and drums for rock and roll).  Classical music was generally spared the indignity of actually having artistic pretenders prepare these pieces and pass them off as genuine musical compositions.  (Although it could be argued that many of the popular “crooner” love ballads of the early and mid-twentieth century constituted just such an attempt.)  But sadly, there were all too many rock and roll “artists” – some of whom composed genuine works of merit earlier in their careers – who trotted out inane, vapid ditties for a ready source of income.  (Phil Collins of Genesis and Peter Cetera of Chicago come to mind.)  Their music, too, can now often be heard serenading shoppers in grocery stores and patients in medical offices.  Mercifully, much of this empty music is quickly forgotten, fading from memory within years of its release, if not sooner.

If technique, through its eclipse of inspiration, is responsible for the ultimate decline of a musical genre, technology often serves to prolong or even revivify the genre.  The development of new instruments, or methods of augmenting existing instruments, or even new mediums, often results in a spurt of new creative development.  The opera, for example, though its origins nearly coincide with those of classical music itself, provided a visual medium to accompany the musical one, and as the stage settings and dramas became more elaborate, the operatic music did as well.  A rough parallel can be seen with rock-and-roll music, when, as described earlier, music videos became popular in the early 1980s, and the popularization of this new format seemed to coincide with a spurt of interesting new songs.

Classical music, it seems, met its demise (though some of its followers might vehemently deny that a decline ever occurred) sometime in the early 20th century.  It is hard to determine exactly when rock went into its final decline.  Like most contemporary listeners of music, I have become a collector of MP3s, amassing a collection of all of the favorite songs that I can remember, going back to my youth.  And, as a typical economist, I have taken advantage of this collection to do an analysis of the underlying data.  Noting the year that each song in my collection was released, and then plotting a histogram (a bar chart, with a vertical bar for each year, and the relative height of the bar corresponding to how many songs in my collection were released in that year), I can get a good visual record of the rise and fall of rock and roll.  Of course, my personal tastes are not necessarily representative of all lovers of rock and roll, but I suspect that the pattern I found is fairly representative.  The highest bar in my chart is in the year 1970 – a year I remember well, because even the songs being played on AM radio in that year sounded terrific.  The bars gradually decline after this year, reaching a low in the very late 1970s.  But then the bars begin to rise again, corresponding to the rock and roll “renaissance” of the early 1980s, and a second, smaller peak is reached in 1983.  This is followed by another decline, and another smaller “renaissance” which peaks around 1990, and then a final, even smaller one, peaking in 1999.
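(For any reader inclined to repeat the exercise, a tally like this takes only a few lines of Python.  The sketch below uses made-up release years purely for illustration; in a real run, the years would be read from each MP3 file's metadata tags.)

```python
from collections import Counter

# Hypothetical release years, standing in for a real collection's
# metadata; the actual years would come from each file's ID3 tags.
release_years = [1969, 1970, 1970, 1970, 1971, 1972, 1975,
                 1979, 1982, 1983, 1983, 1990, 1990, 1999]

# Count how many songs were released in each year.
counts = Counter(release_years)

# Draw a simple text histogram: one row per year, one '#' per song.
for year in range(min(counts), max(counts) + 1):
    print(f"{year}  {'#' * counts[year]}")

# The tallest bar marks the peak year of the collection.
peak_year = max(counts, key=counts.get)
print(f"Peak year: {peak_year}")
```

With these sample values, the tallest bar falls at 1970, echoing the pattern described above.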

The year 2001 is the last one for which I have an MP3.  Apparently it is after this year that music “died” to me, since I no longer encountered even the occasional interesting song that I might like to purchase, or at least hear again.  I remember feeling uncomfortable, even in the 1990s, about the changes that popular music seemed to be undergoing.  “Rap” music was growing in popularity, and while I did like a song or two from this genre, by and large it just sounded to me like angry people yapping at real or imagined slights perpetrated upon them.  There were also slow songs, apparently intended as love ballads, but the singers of these all had a nasal, plaintive quality about them which really didn’t inspire much sentimentality.  I remember thinking to myself, back then, that if the music of my era was called “rock and roll”, then an appropriate moniker for this new music would be “bark and whine”.

I have often wondered if my reaction to the music of today is simply typical of somebody of my age, and is no different than the reactions of those in my parents’ generation to the music that I had come to love when I was a teenager.  But there are some critical differences.  As the title of this particular blog entry, “Fear of Music” (which was taken from the name of an album by a group that became popular during the first rock renaissance in the late 1970s and 1980s, the Talking Heads), suggests, much of the antipathy to rock and roll was, I believe, based in fear.  There was the racial element, as there had been with the reaction to jazz and blues when those genres first crossed over into a general audience: the unease that many white elders had over the fact that the youth of their generation were becoming enthusiastic fans of non-white performers.  And as rock and roll became increasingly associated with the turbulent protest movements of the 1960s, this became an additional cause of unease among the older generation, or at least that part of the older generation who were generally unsympathetic to these movements. 

There is no fear underlying my revulsion toward today’s music – it is simple loathing.  Whenever I happen to turn on the radio, and come across a contemporary “hit”, it produces an immediate, visceral reaction of disgust in me.  The best analogy that comes to mind is if one were to walk into a strange room, and the very first thing that one notices is a very foul odor in that room.  The immediate reaction is to want to get out of that room as quickly as possible.  When I signed up for an MP3 service a few years ago, I was “treated” to a large number of free tunes that were all produced by artists within the past few years.  After just a cursory listening to these, I was compelled to remove nearly all of them from my hard drive.  And this leads to another critical difference between my resentment of contemporary popular music and that of my elders when I was a teenager.  I have had the opportunity, on several occasions, to engage people in their teens or twenties in a conversation about music, and have asked them outright if they are familiar with the music of my generation, and, if so, how they feel it compares to their own.  Without exception, they have actually said that they liked the music of my generation better.  There have been times, I must admit, when a friend or colleague has said to me that if I liked such-and-such a song from the 1970s or 1980s, then I would probably like such-and-such a song by Beyonce, or some other contemporary artist.  I will have to take their word for it.  It couldn’t have been a song that I heard on the radio, because if it was, then I definitely didn’t like it.

I think that truly great music brings people – even people of different generations – together, rather than divides them.  As evidence of this, I note the many parents who have taken their teen children to rock concerts, to enjoy bands that they had probably enjoyed when they were teens.  And I can remember a particularly poignant personal example of this. 

As I had mentioned earlier, when I had my own personal encounter with rock and roll as a young teenager, my mother had urged me to keep this a secret from my father, and in particular to hide from him the fact that I was spending my meager earnings as a department store stock boy on the purchase of rock and roll records.  At first I abided by her wishes, but as my collection grew, and I encountered a variety of interesting styles, I found it increasingly difficult to believe that my father, who loved music as much as I did, would disapprove of all of them.  I finally mustered up the courage to put my theory to the test.  It happened on a weekend evening, when he was enjoying his own record collection.  The rest of our family was away attending a funeral, and so it was just the two of us at home.  He was in one of those mellow moods – helped along by indulgence in his favorite beer – in which he was at peace with the world.  I was fulfilling the role that I had played since I was a little boy, picking out the particular albums that he wanted to listen to, and putting them on the turntable.  He requested a certain Buck Owens album.  I knew it well, but decided that the time was ripe for me to take my gamble.  Earlier that week, I had purchased an album by Creedence Clearwater Revival entitled Bayou Country, and, after listening to it, I suspected that this was just the type of music that my father might enjoy as well.  I put this album on instead, and waited for his reaction.

As the opening guitar notes of the first song, “Born on the Bayou”, began to sound, my father slowly looked over to me with a quizzical expression and said, “That’s not Buck Owens.”  I noticed that he was smiling.  I quickly explained to him that this was an album that I had recently purchased, by a band that I thought he might like.  He continued to listen, still smiling.  Before the rest of my family returned home that evening, we listened to this album together, in its entirety, twice.  Creedence Clearwater Revival had won a new fan.

From that moment on, whenever I happened to buy another album by Creedence Clearwater Revival, rather than hide this fact from my father, I would eagerly share it with him at the first opportunity, and he would listen to it with at least as much enthusiasm as I had.  There was one particular song by this band that he came to love, called “Bad Moon Rising”.  Whenever I would put it on, he would let out a shout of glee, and his face would erupt into a broad smile.  I am convinced that this was not only his favorite song by Creedence Clearwater Revival, but his favorite song of all time.

Sadly, in the years that followed, as I entered my late teens and young adulthood, the relationship between my father and me became increasingly contentious.  I suppose that such things are not uncommon between fathers and sons, particularly at that time, when there was a pronounced ideological divide between the younger and older generations.  But it seemed that the conflicts between my father and me were particularly bitter ones, and even after they had subsided, as I moved into my mid-twenties and embarked upon a career that he approved of, I think that there was still a lot of residual bitterness on both sides – memories of hurtful remarks that each had said to the other.

I was shocked, while still in my twenties, when I learned that my father – who was only in his fifties – had contracted an illness that would ultimately prove to be terminal.  I lived a couple of hundred miles away at the time, and when I returned home to visit him in his hospital room, it was an awkward meeting.  One would like to believe that such a situation would provide an ideal setting for a deep, meaningful conversation in which two persons who had had a contentious relationship could finally talk through any unresolved issues between them.  But usually what happens instead, as happened between my father and me during that visit, is that the conversation settles upon comfortable, uncontroversial topics.  For us, one of these was music, and we spent much of that visit talking about the songs that we both loved.

When I attended my father’s wake and funeral, not long after this visit, I noted the maudlin music (or Muzak) that was playing in the background: the kind that is played with the specific intent to try to produce a melancholy mood and tug at the heartstrings.  But it did not have the intended effect upon me, as I found myself standing there, dry-eyed and unmoved all the time that I was in attendance.  I wondered what was wrong with me.  Had the earlier period of enmity with my father really left me so hard-hearted and remorseless at his passing?  Was I so unforgiving?

But weeks later, when I happened to be out one evening sitting at a bar, the familiar strains of the song “Bad Moon Rising” began playing in the background.  Suddenly, tears started to stream down my face, and I found myself sobbing, as happy memories of my father flooded my mind.  In the midst of my tears, I actually smiled, and lifted up my bottle of beer in a silent toast, as I said, silently, “I miss you, Dad.”  Here was a moving example of the enduring power of music.

Sunday, February 1, 2015

The Decades

I have been enjoying, in recent months, a number of television series on American history that have been structured around specific decades.  If I recall, the first one was a program on the 1980s, and this was followed (in what order, I don’t remember) by a series on the 1990s, and one on the 1960s.  (There have probably been others, such as a series on the 1970s, but if so I have apparently missed them.)  It is always fun to watch descriptions of events that are only dimly remembered, if remembered at all, and to see them put into a larger historical perspective.

America has always had a tendency to measure its history in decades.  There were the “Gay Nineties” (the 1890s, back when the word “gay” exclusively meant a sort of frivolity, and did not have the connotations regarding sexual orientation that it acquired in the late 20th century), the “Roaring Twenties”, and the Depression era (the 1930s).  I have always assumed that we do this in America because of the relative youth of the country, and that in other places, such as Europe and China, eras are probably more conveniently marked off in centuries.

But it is fascinating how each decade in this country does seem to bear its unique stamp, with a particular set of cultural fads and fixations, and problems that seemed to engage the national consciousness only in that particular time period.  As I have looked back over the decades – not just the ones that I remember, or vaguely remember, in my lifetime, but the ones immediately preceding it – I have always had a pronounced sense that there has been a real trajectory underlying these, which has traced out a sort of social evolution in this country.  Here is a summary of that evolutionary path as I have perceived it.


A convenient place to start is the “Roaring Twenties”.  This was an era in our history when Americans seemed to throw away the trappings of convention, and contemplated a life of unbounded possibilities – possibilities not hemmed in by traditional morality or even traditional work ethics.  This was the “Jazz Age”, when a misguided social experiment to ban alcohol consumption known as “Prohibition” was roundly flouted by the rank and file in society.  Gangsters who delivered the prized contraband to “speakeasies” (secret clubs for drinking and dancing) and concealed neighborhood taverns flourished, and their more flamboyant members, such as Al Capone, achieved a sort of celebrity status.  The youth of society rebelled against the restrictive social mores of their parents: dressing in provocative clothing (“flappers”), talking openly about sex, and enjoying new, frenetic dances, such as the “Charleston”, inspired by the jazz music that they were listening to.  Women, having been granted the voting franchise in 1920, and entering the work force in unprecedented numbers after the end of World War I, eagerly embraced the idea of sexual equality and the universe of new possibilities open to them.  Many women scandalized society by actually smoking in public.  The booming stock market extended the lure of investing beyond the business class, to those in all walks of life, inducing many to actually borrow money to buy stocks, with the confident hope that rising stock prices would bring them certain wealth in just a matter of years, if not months.  And as a nation, America was relishing its newfound status as a world power, with an army and a navy that rivaled the traditional dominant powers of Europe.



The defiant, chaotic euphoria of the 1920s came to a sudden end with the Wall Street stock market crash in October, 1929.  What followed came to be known as the Great Depression, and it was the most severe economic downturn in U.S. history.  During this downturn, the stock market lost 90% of its value, and the unemployment rate reached 25%.  And its length was as unparalleled as its severity: the Depression was technically two recessions that were nearly back to back, with the first beginning in August 1929 and ending in March 1933, and the second beginning in May 1937 and ending in June 1938.  (The first, more severe recession lasted for nearly four years.  Technically, there was one other recession in U.S. history that was longer than this one, which began in October 1873 and lasted for over five years.  But when the Depression is considered as a single event, as it generally is, then its length of nearly an entire decade counts it as the longest period of American economic malaise.)  The collapse of the American economy had global repercussions, producing comparable recessions in many of the other major economies of the world, including Great Britain, France, and Germany.  Compounding the upheaval of the Depression was a severe, prolonged drought that, together with a series of extreme dust storms, ravaged the North American Great Plains in the early 1930s, forcing farmers to abandon their livelihoods and ultimately leaving more than half a million people homeless.  During the decade, between three and four million people from this area undertook a mass migration to other regions of the country, such as California, in search of work.  The Depression era was an age of dashed hopes, characterized by a general loss of confidence in the political and economic institutions that had always been relied upon to maintain at least a decent standard of living. 
A different breed of criminal – the bank robber – captured the imagination – and in some cases the admiration – of the general public as a contemporary incarnation of “Robin Hood”: for example, the band of gangsters headed by Bonnie and Clyde.  The perceived failure of government and business had far-reaching consequences in other countries, such as Germany and Italy, where the disaffected populace became receptive to fascism.  Many in the U.S. also began to embrace various radical and subversive ideologies, but none of these movements developed sufficient momentum to overturn the established political and economic order.  Much of the reason for this can probably be attributed to the charisma of President Franklin Delano Roosevelt, who with his litany of government programs instituted to relieve the suffering of the masses, his “fireside chats” (evening radio addresses to the American public), and his skillful use of propaganda, managed to maintain at least a modicum of general confidence in his program of economic resuscitation, regardless of its actual effectiveness.



The beginning of the next decade was also heralded by catastrophe for America, with the attack on Pearl Harbor in December 1941, but this trauma had an aftermath entirely different from that of the previous one.  In the calamitous years following the stock market crash of 1929, Germany and Italy had succumbed to fascism, which in turn plunged Europe into war, and while most Americans had been adamantly opposed to any involvement in this war, the events at Pearl Harbor changed the national mood.  America became a principal combatant in the conflict, joining Britain and the Soviet Union in the struggle against the Axis powers of Germany, Italy, and Japan.  It is probably no exaggeration to say that World War II was the most momentous conflict in the history of human civilization.  Global in scope (as its name implies), and fought with weapons and technologies that were undreamed of just decades earlier, it was also epic in the sense that it had a moral dimension, with the Axis powers representing the “dark side” of the struggle.  (The lines of demarcation between good and evil are never, of course, indelibly clear, particularly in a war, and there were certainly depredations committed by all combatants.  But the Axis powers, both in their domestic and foreign policies, represented something that was unambiguously immoral.  Still, one can’t ignore the irony that as soon as World War II ended, the Soviet Union, which had been fighting on the side of the “good”, was immediately branded by its former allies as a dangerous, malignant power that needed to be contained.)  At the end of this apocalyptic showdown, the forces of light had triumphed over the forces of darkness.  And America not only shared in this victory, but it had also, unlike Europe, Japan, and China, emerged from the conflict relatively unscathed.
There had been a tragic loss of many lives, to be sure, but America itself had never become an arena for the conflict, and its infrastructure remained intact.  By the end of the 1940s, the U.S. was not only the most militarily powerful nation in the world, but also the wealthiest, with the largest per capita GDP (a measure of goods and services produced) of any country.  And the men and women who had served in America’s armed forces in World War II would later be labeled by their descendants as “the greatest generation”: a generation of heroes.  (They were indeed a remarkable generation of individuals, most of whom had been children in the chaotic 1920s, passed through adolescence in a decade without hope, and reached adulthood at a time when the country – and the world – was facing one of the greatest crises in human history.)  The years immediately after World War II constituted a glorious moment in American history, characterized by economic well-being, national security, and a culture that seemed to be guided by an unambiguous moral compass.



In 1949, the Soviet Union successfully detonated a nuclear weapon, which signaled that the world in the coming decade would be a bipolar one, and that the conflict between the Free World and Communism would include at least the threat of devastating destruction on a massive, unprecedented scale.  What ensued, instead, was a “cold war” which involved a network of alliances between the major powers and third world countries, and “proxy” battles that pitted minor powers against one another in limited, but nevertheless devastating, engagements.  In its effort to enlist allies against Communism in every region of the globe, the United States often supported regimes that were of an unsavory character, and which did not serve the best interests of their peoples.  The moral compass that had been so unambiguous in World War II began to blur, not just in America’s relations with the rest of the world, but within its own borders as well.  “Witch hunts” orchestrated by the U.S. Congress which targeted suspected Communists had already begun in the late 1940s with the Hollywood “black list” that purported to identify Communist sympathizers and propagandists in the entertainment industry, and gained momentum in the early 1950s, with investigations directed against the U.S. government and military, under the guidance of Senator Joseph McCarthy.  Underneath the veneer of a prosperous, wholesome society, enjoying a “baby boom” among families consisting of hardworking husbands and industrious housewives, there were dissonant cracks in the American dream: evidence that this modern utopia was not all that it seemed to be.  Blacks and whites drank from separate drinking fountains, used separate bathrooms, and sent their children to different schools; in some areas of the American South, blacks faced particularly militant harassment and intimidation.
Country clubs and other establishments throughout the country barred their doors not only to blacks, but to other ethnic groups, such as Jews, as well.  And, as Betty Friedan later documented so well in her book The Feminine Mystique, women in America had actually lost many of the gains won for them by feminists in earlier generations, and had been pressured by mass culture into believing that their capabilities and opportunities were limited, and that the role of the housewife was the most satisfying and rewarding one to aspire to.  By the latter part of the decade, a cultural counter-current was emerging: the “Beat generation”, challenging the materialist values espoused by society at large, with writers such as Allen Ginsberg, William S. Burroughs, and Jack Kerouac giving it voice.  In his book, The Dharma Bums, for example, Kerouac painted a cynical picture of American life at that time as “. . . rows of well-to-do houses with lawns and television sets in each living room with everybody watching the same thing at the same time. . . .”



In 1961, Soviet cosmonaut Yuri Gagarin became the first human being to journey into outer space.  The successes of the Soviet space program, which began in the late 1950s, shattered any lingering conceptions that Americans might have retained about the U.S.S.R. being a relatively backward country in comparison to the United States.  Hence, the decade began with a challenge, and the challenge was eagerly taken on by America’s charismatic young president, John F. Kennedy, who declared in a speech in 1962 that America would launch a successful (manned) mission to the moon by the end of that decade.  One year later, the popular president was killed by an assassin’s bullet, leaving Americans shocked, horrified, and demoralized.  Historians have debated to what extent the untimely and violent death of this idealistic young leader fueled the social events that followed in the 1960s.  Cracks in the façade of the American dream had already begun to emerge by the end of the 1950s, and the social climate had become ripe for a reaction, which might have been a more orderly one had it been shepherded by a popular political leader who aligned himself with the direction of the reforms.  But nobody could have foreseen, in the aftermath of Kennedy’s assassination, the extent or the intensity of the social upheaval that would emerge in the ensuing years.  The series of widespread protests and protest movements – against the war in Vietnam, racial discrimination, male chauvinism, and the forces of social conformism in general – together with the countervailing assassinations of other revered social reformers and the beating and even gunning down of unarmed protesters, created a climate of chaos in the United States, along with the general perception that a sweeping, societal revolution was under way.  
Of course, most of the citizenry were uncomfortable with this drama, and a large proportion of the population was generally unsympathetic to those taking part in the dissent, branding them “hippies”, “radicals”, “long-hairs”, and, of course, “Communists”.  Many of the protesters, on the other hand, saw the general social conflict as one being fought between generations, and a popular expression among liberal college students at the time was “Don’t trust anybody over 30”.  Older Americans, too, began to see this as a “youth revolt”, and fears of a violent takeover by the young were given expression in a (now long forgotten) movie called Wild in the Streets, released in 1968, which was a fictional drama about the overthrow of the United States government by teenagers.  It is perhaps ironic, then, that in spite of the tumult of the intervening years, the Kennedy dream of sending an American manned space mission to the moon by the end of the decade was actually realized with the successful landing of Apollo 11 on July 20, 1969.



In the early 1970s, two signal events occurred which seemed to suggest that the work of the 1960s had reached fruition.  Richard Nixon, a U.S. president who had come to be associated in the popular imagination with the forces of reactionary conservatism, resigned his office in disgrace in the wake of discoveries involving political malfeasance on the part of his direct subordinates.  And the unpopular war in Vietnam came to an end, ironically as the result of peace negotiations conducted under the Nixon administration – an end that eventually led to the takeover of Vietnam by Communist forces in early 1975.  Although real gains had been achieved as the result of the agitations in the 1960s, in civil rights and general social reform, the result of the scandalous demise of the Nixon administration and the loss of the Vietnam War was a demoralization of the American populace.  A popular legend among the American people was that the United States had never lost a war during its entire history, and the Vietnam War summarily destroyed that legend.  The Vietnam War, along with the shortcomings in foreign and domestic policy brought to light during the 1960s, had a pernicious, lingering impact upon the national consciousness.  Gone was the idea that America was always on the side of the virtuous and the just.  A widespread cultural malaise set in, characterized by a general mood of defeatism and cynicism.  Belief in America’s invincibility was further undermined by the inroads made by former enemies Germany and Japan in the production of superior automobiles and other manufactured goods, and the consequent loss of America’s dominance as a world producer of these goods.  Among the youth of the nation, there was a sense of frustration.  I myself had entered my teens in this decade, and I remember that mood among my peers very well.
There was a sense that those who had been teens and young adults in the 1960s had achieved monumental social gains, and that it was now up to us to carry on that movement into its next phase.  Concurrent with the social activism of the 1960s was the blossoming of a musical movement, known as rock and roll, or simply rock, which had flourished by the end of that decade, and many of the singers and bands of that era had given voice to the social unrest and ideals of their fans.  In the 1970s, there was an almost messianic hope and expectation that a next wave of music would arise to similarly give voice to the struggles and goals of those who, in that decade, were carrying on the legacy of their elder peers.  But this next wave never emerged, either in music (which descended into mediocrity) or in the social activism of the youth of that decade.  Instead, there was a sort of turning inward, in the form of a belief that the social revolution of the 1960s would now be followed by a personal revolution: a transformation of the self through the discovery and realization of one’s human potential.  (There was admittedly more than a little self-absorption behind this “movement”.  Its widespread popularity led to the branding of those who were swept up in it as the “Me Generation”.)  In the 1960s, there had been a flirtation with mind-altering drugs and eastern religions, as a means of counteracting and overcoming the stifling, conformist mindset of American society.  In the 1970s, these avenues for self-transcendence were retained and combined with pop psychology and “New Age” mysticism in various ways to create programs of self-improvement, popularized by books and motivational speakers.  But this “human potential movement” could not shake the national mood of despondence and disillusionment.  America no longer believed in itself.
And although Jimmy Carter, a president who promised to restore faith in America’s leadership and its ideals, was elected in the latter part of the decade, in the end he only managed to highlight the national malaise that seemed to be crippling the country.



As I mentioned at the beginning of this piece, my ruminations about America’s historical epochs had been inspired by a television documentary that I watched about the 1980s.  I had forgotten just what a tumultuous beginning that decade had.  As seems to have occurred with so many of these decades, it was ushered in with a crisis, or rather a series of crises.  In 1979, the Iranian people overthrew a despotic regime that had been supported by the United States, and in November of that year student revolutionaries seized the U.S. embassy in Tehran, taking hostage fifty-two Americans who would be held captive for more than a year.  One month later, Soviet troops invaded Afghanistan, reigniting the Cold War.  Rising oil prices stemming from the instability in the Middle East sent the United States economy into a recession in the beginning of 1980, and double-digit general price inflation followed.  When Jimmy Carter left office in 1981, none of these crises had been resolved (the recession had ended, but the double-digit inflation remained), and his successor in office was a former Hollywood actor named Ronald Reagan.  The despondent national mood of the 1970s gave way to one of outright despair.  There was a feeling that America was on the brink of catastrophe.  Newsletters at the time fanned the flames of fear that a general economic collapse was imminent.  These fears intensified when another, extremely severe, recession began in the summer of 1981.  A popular movement, called “Survivalism”, emerged in which groups of individuals and families made collective preparations to isolate themselves in remote areas of the country with weapons and self-sustaining agriculture techniques, so that they could weather the coming crisis.  In 1983, the Cold War intensified when the Soviet Union shot down a commercial Korean airliner that had accidentally drifted into its airspace.
Later that year, a television movie called The Day After aired in America, about a nuclear war between the Soviet Union and the U.S., and the following year the movie Red Dawn, about a Soviet land invasion of the United States, was released.  It is remarkable, in retrospect, how completely these crises and corresponding panics had worked themselves out by the end of the decade: the U.S. economy was back on sound footing, with inflation reined in, by as early as 1983, and the Soviet Union, under Mikhail Gorbachev, defused hostilities with the West, and lifted the Iron Curtain, culminating in the fall of the Berlin Wall in 1989.  And Ronald Reagan, the actor-president, had succeeded, during his administration, in restoring a general mood of optimism and self-confidence in the country.  (According to the popular American historical account, the U.S. “won” the Cold War because of President Reagan’s stalwart anti-Communism, while Gorbachev’s attempt to liberalize the Soviet Union from within was only of secondary consequence.)  Americans stopped loathing themselves.  But the insecurity of the early 1980s did seem to have a lingering, pernicious effect on the American population.  “Shop till you drop” was a popular phrase that emerged at the time, and it was motivated by that general sense that one should enjoy the good things in life now – even if one couldn’t afford them – since the future was very precarious.  Even after the outlook for the future turned rosy, this mindset of “spend now and pay later” remained ingrained in the consciousness of many American consumers.  And the 1980s are remembered for something else.  This was the age of the “yuppie”: the young, upwardly-mobile professional.  As the American economy began to boom again, the goal of making a lot of money suddenly became a very, very popular one.
In a way, this was a logical next step from the self-actualization movement of the 1970s: after all, if one wanted to maximize one’s potential, then it seemed reasonable to assume that the best way to do this was to significantly raise one’s station in life, monetarily.  American television fanned the flames of this new passion for getting rich, with evening soap operas about wealthy family dynasties, living the good life, but also scheming and squabbling among themselves.  These television programs rarely showed their characters actually earning their fortunes, producing or creating anything of value, but seemed to be suggesting that the rich made their money merely by the skillful manipulation of money – and people.  I went to college in the 1980s, and I remember well that the business degree was one of the most popular on campus.

After the 1980s, the trajectory of the American psyche – at least as far as I have experienced it and remembered it – becomes a bit murkier.  The various younger generations that followed my own were given clever appellations, such as “Generation X”, “Generation Y”, and, more recently, “Millennials”, and social pundits were always on hand to assign to each of these generations certain signature psychological traits and distinctive worldviews that set them apart.  But the events that have unfolded over the past quarter century can probably be better interpreted in terms of technological and economic trends, rather than psychological and political ones.  The decade of the 1990s began with a recession, accompanied by a savings and loan crisis brought on by reckless lending practices and fraudulent accounting: an ominous precursor to the causes of later scandals and economic calamities.  This recession was the first that was followed by what has come to be known as a “jobless recovery”, in that employment growth was non-existent in the recovery’s early phases.  And more disturbing, longer-term, trends were beginning to emerge: many of the types of jobs that had supported America’s middle class were disappearing.  America’s manufacturing sector had taken a beating in the 1970s and 1980s, in the wake of increasing competition from overseas, particularly from Japan, where a national focus on innovation and quality control, together with a workforce that accepted relatively lower wages, allowed that country to make phenomenal inroads into many major industries once dominated by the U.S., such as steel and automobiles.  These competitive pressures from abroad, along with the steady decline in the power of America’s unions, led to lower wage growth in the manufacturing jobs that the U.S. did manage to retain.  Decent paying jobs for life, which had traditionally buttressed the economic security of America’s working class, were becoming a thing of the past.
The severity and speed of this collapse of the middle class were probably tempered by the mass movement of women into the labor force which had begun in the mid-1970s, as household incomes could be supported by two wage-earners instead of just one.  By the 1990s, however, even two-wage households found it difficult to maintain, let alone improve, the level of their aggregate income, after adjusting for increases in the cost of living.

There have been two recessions since the one that opened the 1990s: the “dot-com” recession of 2001, and the very severe recession which began at the end of 2007.  Both were followed by “jobless recoveries”, and both exacerbated long-term trends of a widening gap between the rich and the poor, and a narrowing of the middle class.  And the underlying causes of both were similar: a society living beyond its means by saving little and borrowing much, and the hope of a secure and better future attained by funneling money into the stock market and real estate.  The stock market rose precipitously in the 1990s, setting new records with each passing year, until it collapsed with the bursting bubble of over-valued internet ventures.  The most recent recession began after both a stock market collapse and a crash in real estate prices which had been propped up by reckless mortgage lending practices.  For several decades now, America has been a society that has consumed more than it has produced, and the periodic economic downturns of increasing severity have only served to highlight that fact.  The U.S. government has exacerbated this phenomenon by also consistently spending more than it has taken in through taxes, but it is not alone in this practice, and certainly not the worst offender: the current crisis in the Eurozone was brought on by the governments of many countries bankrolling their citizenry through deficit spending.



Of course, since the 1980s, we have seen a technological revolution which has transformed our society, beginning with the rise of the personal computer, and followed by the internet and wireless phones.  But this, too, while raising the quality of life in many significant ways, has not come without problems.  As I wrote in my blog entry entitled “The New World Order” (May 2013), the trajectory of our economy seems to be toward a two-tiered society with capitalists, thinkers (such as computer programmers and consultants), and successful entertainers in the upper tier and menial workers and those on the dole in the lower tier.  Some philosophers, as I described in my blog entry “Apocalypse Then” (April 2013), have gone even further, and contended that the machinery of technology has come to play such a pervasive role in our lives, that modernism – or “post-modernism” as it is sometimes called – has actually resulted in the extinction of authentic selves.

Beginning in the 1990s, there was an occasional wistful hope expressed by some that we might see another decade like the 1960s: a social revolution that would counteract the materialist trends that seemed to resurrect themselves in the 1980s and persisted beyond that decade.  In last month’s blog entry “Man and Superman”, I spoke of “Dionysian” movements, which were revolts – like those that characterized the 1960s – against rigid, conformist codes and standards and against a suffocating, dogmatic, authoritarian society.  Ironically, the social movements that we have seen in recent decades – not just in the U.S. but in the rest of the world – are distinctly anti-Dionysian.  In the face of an increasingly complex society, where traditional world views and their associated standards of conduct have lost their sway, movements have arisen that are characterized by fundamentalism, dogmatism, and authoritarianism, not unlike the fascist movements that arose in the wake of the economic chaos of the 1920s.  Such monotonic thinking is even present among groups at the margins of both U.S. political parties, and a general attitude of rigid adherence to principles, rather than one embracing a creative, cooperative approach to solving the nation’s urgent problems, has crippled the democratic process (as I described in my blog entry “The Great Divide”, September 2013).


Perhaps, at some future time, television documentaries will be able to summarize the events of the first decades of this millennium in a neat, concise, narrative form, and those who watch them and who lived through these decades will be able to look back nostalgically and remember them with amusement.  I hope that this will be the case, if for no other reason than that it might indicate that the great, pressing problems which we are now facing will have been satisfactorily solved, and that the next generation will be able to look forward to a better world than the one in which we are currently living.  Throughout the past century of America’s history, human ingenuity has known no bounds in addressing seemingly insurmountable problems.  It is this fact which underlies my hope that the daunting problems that face us in the present age will be surmounted – if not by our generation, then by the ones that succeed us.