Tuesday, September 30, 2014

The Prime Directive

What if a machine were able to think?  This, of course, is a perennially popular theme in science fiction, with stories about computers or androids becoming so sophisticated that they achieve a state of self-awareness.  They become, in essence, living beings, albeit ones that are composed of circuits and wires rather than blood and veins.  And usually, in these stories, dire consequences follow for the human race.




But creating a genuinely thinking, self-aware machine is a very tall order, and, after an initial enthusiastic rush of optimism among proponents of “artificial intelligence” in the early years of the computer revolution, the outlook for this achievement has dimmed considerably.  It has been discovered that human (and animal) intelligence involves much more than a process of simply engaging in sequential trains of logic, and, even with the development of “neural nets”, which simulate certain non-linear patterns of thinking, such as pattern recognition, the gulf between this and genuine thinking still seems to be immense.  All of these operations, after all, no matter how sophisticated, simply involve the carrying out of instructions that have been defined by the programmer.  There is no genuine “understanding” on the computer’s part of what it is processing.  Philosopher John Searle has compared the operations of a computer to a man sitting in a room who is being sent messages in Chinese, a language that he doesn’t understand.  He is able to “communicate” with the senders because he has an instruction book, written in his own language, which directs him to send back particular sequences of Chinese symbols whenever he encounters certain sequences of Chinese symbols in the messages that he is receiving.  He has absolutely no comprehension of what any of these symbols mean, and if there is indeed some sort of “conversation” going on between those who are exchanging messages with him in Chinese, he is completely unaware of it.  Searle contends that this is exactly what is happening with computers, and that it represents a nearly insurmountable problem in creating a genuine “thinking” machine.
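Searle’s rulebook is, at bottom, nothing more than a lookup table, and that is what makes the thought experiment so unsettling.  A toy sketch in Python (the two “rules” below are invented for illustration, standing in for the instruction book) shows just how empty the procedure is:

    # A toy "Chinese Room": the operator follows a rulebook that maps
    # incoming symbol sequences to outgoing ones. The entries below are
    # invented for illustration; the operator never needs to know that
    # the first exchange means "How are you?" / "I am fine."
    RULEBOOK = {
        "你好吗": "我很好",
        "你是谁": "我是王",
    }

    def operator(message: str) -> str:
        # Look up the incoming symbols and return the prescribed reply;
        # if the rulebook has no entry, fall back to a stock sequence.
        return RULEBOOK.get(message, "请再说一遍")

    print(operator("你好吗"))  # prints "我很好" with zero comprehension

However large the rulebook grows, the operator’s procedure never changes, which is precisely Searle’s point.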

Philosopher Ned Block, however, took Searle’s thought experiment one step further.  What, he asked, if there were not just one processor of these symbols, but millions, or billions, of people, each of whom had an instruction book of their own, with directions in their own language on what to do whenever they received a set of symbols?  Each of them might be directed to pass another set of symbols on to a particular person nearby, who then passes on a different set to somebody else, in accordance with his instructions, and so on.  None of the individuals in this massive group would understand the symbols that each of them was processing, just as the single man in the room in Searle’s thought experiment couldn’t.  But would the group somehow comprehend the messages being exchanged with the external communicant, even if none of the individuals that made it up did?  The obvious answer seems to be that there would be no difference between this scenario and the one described by Searle.  In both cases we have people carrying out instructions on how to send words in a language that they don’t understand.  Whether it is just one person or a billion would seem to make no difference.
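Block’s variation is easy to mimic in the same toy terms: shard the rulebook across many “people”, each holding only a fragment of it, and let each message be routed from person to person.  A minimal sketch, again with invented rules:

    # Block's variation: the same rulebook, sharded across many "people",
    # each of whom consults only a local instruction book. Routing the
    # message is itself just another mechanical instruction.
    SHARDS = [
        {"你好吗": "我很好"},   # person 0's instruction book
        {"你是谁": "我是王"},   # person 1's instruction book
    ]

    def group(message: str) -> str:
        for person in SHARDS:          # each person checks their own book
            if message in person:
                return person[message]
        return "请再说一遍"

    print(group("你是谁"))  # identical behavior to the single operator

The group’s outward behavior is indistinguishable from the single operator’s, which is exactly why distributing the work seems to add no understanding.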

But wait.  Aren’t the billions of neurons in a human (or animal) brain doing exactly this?  Isn’t each neuron receiving electro-chemical signals from such sources as the optic nerves of the eye, and then passing on electro-chemical signals to other neurons, with each neuron simply automatically carrying out instructions on how to respond to certain stimuli in a manner that nature has “hard-wired” it to do?  And yet, somehow, from this process I am able to create an image in my mind, think about it, have feelings about it, and remember it.  My neurons can’t have a conversation with the external world, but I can, although I couldn’t do it without them.  But has consciousness really emerged from these processes alone, as a sort of “epiphenomenon”, or is something else responsible for it – something which has yet to be discovered by modern science?
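The parallel is easy to make concrete.  A single neuron can be caricatured as a unit that sums its weighted inputs and fires when a threshold is crossed – the classic McCulloch-Pitts model.  A sketch (the weights below are invented, purely for illustration):

    # A single neuron as a fixed rule: take a weighted sum of the inputs,
    # then fire (1) if a threshold is crossed, else stay silent (0).
    # Each unit is as mechanical as Searle's symbol-shuffler.
    def neuron(inputs, weights, threshold=1.0):
        activation = sum(i * w for i, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # Fires, since 1*0.6 + 0*0.9 + 1*0.5 = 1.1 >= 1.0.
    print(neuron([1, 0, 1], [0.6, 0.9, 0.5]))

Wire billions of these together and you have, at least schematically, a brain – yet nothing in the rule itself hints at where an image, a feeling, or a memory comes from.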

In spite of this apparent counter-example from living organisms, Searle’s arguments still seem pretty compelling: it is hard to believe that the mere processing of a series of simple instructions, regardless of how many millions of these processes are now being carried out in a modern computer, could ever amount to anything like genuine thinking, or even experiencing, let alone self-awareness.  Hence, many computer architects have lowered the bar, and set as the ultimate goal for artificial intelligence the mere simulation of thinking.  While conceding that a computer might never be able to genuinely think, they aspire to develop a machine that can behave in ways that make us think that it’s thinking.



In 1950, Alan M. Turing, a British mathematician, logician, and visionary of computer science, suggested a criterion for determining when such a threshold has been reached: the “Turing Test”.  Imagine communicating with some other entity via a keyboard and a monitor.  You don’t know if this other entity, which is located somewhere else, is a human being or a machine.  You must try to determine this by typing questions with the keyboard and viewing the entity’s responses on the monitor.  Can a machine ever be built that will be able to convince any such person that it is a living, thinking being, regardless of the questions that are asked?  Turing was optimistic.  He predicted, in the year that he suggested this test, “. . . that in about fifty years’ time, it will be possible to programme computers . . . to . . . play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning.”

It turns out that Turing was overoptimistic.  It is now more than 60 years since he made that prediction, and to date no machine has been able to fulfill it.  (And this in spite of the fact that a $100,000 prize has been offered annually in a competition to anyone who can present a machine which passes a version of the Turing Test.  This competition, known as the Loebner Prize, has been in existence since 1990.  Smaller sums have been awarded to machines in the contest that have demonstrated simpler feats of conversational mimicry.)

How exactly should a computer be programmed to mimic living beings?  Is there a set of fundamental instructions that would be most effective in creating a “life-like” computer or automaton?  Some computer architects, rather than focusing on sophisticated programs that would simulate high-level life processes such as communication and problem-solving, have instead turned their attention to the opposite end of the biological ladder, and concentrated on the creation of self-replicating automata.  Self-replication, after all, seems to be at the basis of all life, down to the cellular level.  The science fiction writer Isaac Asimov, in his 1942 short story “Runaround”, put equal programming emphasis on what robots could not do, with his “Three Laws of Robotics”, though even in these the goal of self-protection was explicitly acknowledged.
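In software, the minimal ancestor of a self-replicating automaton is the “quine”: a program whose entire output is its own source code.  A standard two-line Python example:

    # A quine: running this program prints its own source code exactly --
    # the software analogue of self-replication at the cellular level.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

John von Neumann showed in the 1940s, in his theory of self-reproducing automata, that this kind of self-copying could be carried out by purely mechanical means; the open question is whether anything more than copying ever emerges from it.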

I believe that there actually is one basic directive or instruction, which, if it could somehow be programmed into a machine, would most effectively produce a machine that emulated a living being.  The directive would be to maximize its locus of meaningful activity.  Now on the surface I know that this doesn’t seem to offer much in the way of specifics.  What, I suspect one would ask, is “meaningful” supposed to mean?  The best way to get at this is to explain those conditions under which a machine (or even an actual living being) would completely fail at following this directive.

At one extreme, imagine a being that can perceive its environment (whether this is through the standard human senses of sight, hearing, touch, etc., or something different, and even mechanical, such as a machine receiving inputs, is immaterial), but is incapable of acting upon its environment in any way.  For a person, this might be like being strapped to a table for his entire life, confined in an empty room, and forced to spend his entire time watching a television monitor, and hearing sounds from a speaker.  He wouldn’t even be able to “change the channels”: everything that he saw, heard, and felt would be guided by influences over which he had absolutely no control.  This, I argue, would be a life devoid of meaning.  If I have absolutely no influence over my environment – if I can’t even affect it in the most trivial of ways – then it is ultimately of no consequence to me.  One might argue that I might still develop sympathies for the other beings that I am forced to watch, and so be meaningfully engaged in that way.  But my level of engagement would be no deeper than the engagement I might have, in the real world, with a fictional character in a television program or movie.  My degree of substantial connection would be the same in both cases – which is to say, I would have no real connection at all.

At the opposite extreme, imagine a being who could control everything in its environment, including other living beings.  These others would be like puppets, in that every action that they performed would be in accordance with the will of this omnipotent being.  In such an environment, for this being, there would be no novelty, no spontaneity, nothing new to learn or discover.  An existence such as this would be a sterile one, because of its complete predictability.  Complete power, complete control, would ultimately be meaningless, because with it would come an existence that offered no possibilities for growth through encounter with the unknown.  For one living such an existence, it would be like spending life confined in a room filled with games in which one moved all of the pieces.

Meaning, then, for me, represents a happy medium between these two extremes of complete control and complete lack of control.  It represents life in a world where I have some power over circumstances that affect me, so that I am more than a passive observer, but not a power so extensive, so complete, that I eliminate any possibility of novelty, of surprise, of having a genuinely spontaneous experience.  And to maximize meaning – where absence of meaning would be represented by either of these two extremes – would entail expanding one’s perceptual capabilities, while also attempting to exert some control over the widening field of experience.  It would also entail risk, of course, since my efforts to widen my personal horizons will also potentially put me into contact with somebody or something that could diminish, or even destroy, my ability to exercise any kind of control over my life.
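This middle ground can even be caricatured as an objective function.  Let c stand for the fraction of one’s surroundings that one controls: meaning vanishes at c = 0 (the man strapped to the table) and at c = 1 (the omnipotent puppet-master), and peaks somewhere in between.  A toy sketch – the quadratic form here is my own invention, chosen purely for illustration:

    # Toy "meaning" score: zero at no control (c = 0) and at total
    # control (c = 1), maximal in between. The quadratic c * (1 - c)
    # is an arbitrary illustrative choice, not a claim about real minds.
    def meaning(c: float) -> float:
        assert 0.0 <= c <= 1.0
        return c * (1.0 - c)

    for c in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"control={c:.2f}  meaning={meaning(c):.3f}")  # peak at c = 0.5

A machine following the prime directive would, in these terms, be one that acts so as to climb toward the peak of this curve while also widening the territory over which the curve is defined.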

Would a machine ever be capable of doing this?  Would mobility be a prerequisite for success?  Would a machine have to, at the very least, be able to procure the necessary resources for its continued sustenance?  Perhaps not.  If the machine had the capability to communicate with its surrounding environment, it might be able to “persuade” others to attend to its needs, and even to assist it in its project of both sustaining itself and increasing its loci of perception and control.  Such an idea is not as far-fetched as it sounds.  While computers may not have yet reached the degree of sophistication in communication to pass the Turing Test, they have for some time had the capability to engage in at least rudimentary forms of conversation with human subjects.  In the mid-1960s, Joseph Weizenbaum developed a computer program named ELIZA, which could mimic a psychotherapist by asking the sort of leading questions that are often used by practitioners in this field.  (These questions were constructed based upon prior answers of the subject that the computer was communicating with.)  In the 2013 science fiction film Her, a man falls in love with an intelligent computer operating system that has a female voice.  The film was reportedly inspired in part by an actual internet web application called “Cleverbot”, which mimics human conversation well enough to have scored respectably in Turing Test competitions.  Given the capacity for human beings to develop deep emotional ties to animals of much lower intelligence – and even in many cases inanimate objects – it does not seem unlikely at all that some computer could someday enlist a human being who has fallen under its spell to support its objectives.  (And isn’t it the case that many people already practice a form of this art?)
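ELIZA’s trick was pattern-matching: find a template in the subject’s statement and reflect it back as a leading question.  A drastically simplified sketch of the mechanism (these two patterns are my own inventions, not Weizenbaum’s actual script):

    import re

    # A bare-bones ELIZA-style responder: match a pattern in the
    # subject's statement and reflect it back as a leading question.
    RULES = [
        (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"my (\w+) (.+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(statement: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(*match.groups())
        return "Please go on."    # stock reply when nothing matches

    print(respond("I feel lost lately"))             # Why do you feel lost lately?
    print(respond("My mother never listens to me"))  # Tell me more about your mother.

That so thin a mechanism was able to draw people into earnest conversation – Weizenbaum was reportedly dismayed by how readily users confided in it – says as much about us as about the machine.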




This idea of the prime directive came to me many years ago: not as the result of thinking about computers and artificial intelligence, but actually as a consequence of an ongoing debate that I had with a college friend of mine.  We were starkly different in temperament: I tended to live a very ordered existence, planning the activities of each day very carefully and thoroughly, while she was much more of a spontaneous sort, and relished the novelty of unplanned experience.  She was disdainful of my own behavior, saying that by controlling the activities of my day so completely, I was squelching any chance of doing something genuinely interesting, let alone exciting.  I, on the other hand, had developed a contempt for persons who exercised little or no control over their own lives, because in the working class environment of my youth, I had seen many people who seemed to be little more than helpless pawns or victims of life: they “got pregnant”, “got married”, “got into trouble”, “got hired”, “lost a job” – life for them seemed to be a series of passive verbs.  I made a resolution early in my life that to escape this environment, I would exert a greater level of control over my life than they had over their own.

But still, I could see that my college friend had a point.  After all, in spite of her love of spontaneity, and contempt for a regimented life, it was plain to me that she was not a victim like so many of the kids in my neighborhood had been.  She was in fact doing very well in college, and later went on to a very successful professional career.  As I contemplated our respective behaviors, as well as those of my childhood peers, I finally concluded that there were two equally perilous extremes that one could succumb to in the management of one’s life.  At the one extreme there was the over-controlled life, which I likened to a ship that visited the same ports at rigorously scheduled intervals.  For the captain and crew of such a ship, existence was a very circumscribed thing, with a very predictable and repetitious sequence of experiences.  The captain was in complete control of his ship, his crew, and his destiny, but this complete control effectively eliminated any opportunities for learning, growth, or change.  At the opposite extreme were those who were lost at sea on a makeshift raft, without a sail or oar, completely at the mercy of the elements, tossed to and fro by the random waves that hit the raft.  Such an existence was also a pathetic one, in its own sort of way, and an extremely dangerous one as well.  Between these two extremes there was the explorer, a captain who, while in control of his ship, regularly steered it into uncharted waters, in search of new lands to discover and explore.  There were risks, to be sure, from the elements, as well as from peoples and animals that might be encountered on foreign shores, but the rewards generally outweighed the dangers.  And even when the ship was sailing into unfamiliar territory, its captain was doing so with a purpose, and with the benefit of years of accumulated knowledge acquired from many such voyages in the past.  I would like to think that my college friend and I both – at least in our better moments – were like the explorer, though she occasionally ran the risk of ending up on that life raft, while I regularly risked succumbing to the temptation of limiting my voyages to familiar ports.

If a machine, then, could be programmed to emulate the behavior of an explorer, endeavoring to broaden its boundaries of perception by branching out into new and different environments, but also endeavoring to maintain some degree of control over these encounters, then I think it would most effectively mimic the behavior of a living being.  But I do think that this would merely be a successful act of mimicry, or simulation, rather than a genuine replication of the life process.  After all, in this case, the machine would still simply be following a programmed set of instructions, and the procedure would be just as open to Searle’s critique.  It represents, then, a necessary, but not a sufficient, condition for the attainment of something like living consciousness.

What would give genuine meaning to this process would be two things.  The first is a conception of self, a feeling of identity.  To be alive, there must be a sense of an I: these things are happening to me; I want to do this or that.  Would a self-concept arise naturally out of a sufficiently complex process of information processing, as some believe happens with our own vast network of billions of neurons in our brains?  Would the programmed machine explorer eventually attain self-consciousness after expanding its sphere of perceptions and accumulated information beyond some threshold?  (This is certainly a popular motif in science fiction, though exactly how such a thing could happen is never, of course, satisfactorily explained.)  Or is something else required – something that we have yet to discover?  In any case, without this sense of identity, this self-directed frame of reference, there could not possibly be something that is manifesting an authentic existence.

And even the existence of an I would not be sufficient for genuinely meaningful experience.  Imagine yourself in a universe where you were completely alone, living in a room that contained (aside from whatever was required to sustain you physically, such as food and water) a set of simple puzzles to solve.  As soon as you have completed these, a door opens onto a wider space, with more elaborate furnishings, and where there are more puzzles and other diversions, of a higher level of complexity.  You set about working on these, and when you successfully master them, yet another door opens, into an even wider space, with a set of more diverse and interesting surroundings, and even more challenging puzzles to contend with.  You eventually surmise that this is – and will be – the sum total of your existence.  You are within what may very well be an infinite set of concentric enclosed spaces, with each wider space moving outward containing more elaborate and interesting scenery and a more sophisticated set of challenges to grapple with.  As you successfully master the puzzles in each wider space, both your locus of control and your locus of perceptual awareness increase.  At the end of such a life, regardless of how “successful” you were (as measured by the number of widening spaces you were able to attain during your lifetime), will you feel that your life was a meaningful one?  Will you feel that it was a satisfying one?  I suspect that you wouldn’t, nor would any reasonably intelligent being with a sense of self.

What’s missing?  An “other”: one or more beings, also existing in your universe, that are able to engage with you and your projects: competing with you, collaborating with you, evaluating you, and seeking evaluation from you.  The philosopher Martin Buber talked of two dimensions of relationship that one has with one’s external environment: “I-It”, and “I-Thou”.  The first relationship is a strictly instrumental one, in which what is encountered is treated as a thing, to be manipulated, shaped, destroyed, or put aside.  The second relationship is a personal one, in which a sort of resonance occurs: a recognition that the other you are encountering is, in some sort of way, like you: more than a thing, for the same (perhaps only vaguely understood) reason that you realize that you, yourself, are more than merely a thing.

As I engage in following the “prime directive” in an “I-It” universe, in which there is only me and what I perceive to be a universe consisting only of things (even if many of these “things” are actually alive, such as trees and grass), I run the risk of falling into one or the other of the two extremes of meaninglessness.  On the one hand, I may find myself in a sterile, unchallenging environment: a gray, relatively empty world that leaves me little to contemplate or contend with.  On the other hand, I may find myself completely overwhelmed by the environment around me – crippled by it somehow, perhaps literally, if, for example, I stumble and plummet into a ravine and find myself unable to move.  But even if I am successful in widening my sphere of experience, there will still be a certain emptiness or sterility to my existence.

In an “I-Thou” universe, as I set about trying to increase my range of experience and control, I face a much more interesting, and potentially fulfilling, set of challenges.  Any encounter with a “Thou” (another being) – even a friendly one – is inherently a contest.  Each of us vies for control during the encounter, while at the same time yearning to benefit from the experience – to widen our horizons as a result of it.  When I converse with others, I relish the opportunity to draw from the experiences conveyed in their words, while at the same time I subtly compete to insert my own statements, and to thereby maintain a sufficient level of influence over the dialogue.  I want to be heard, at least as much as I want to listen.

Every encounter with an other – a “Thou” – is fraught with both opportunity and danger.  In my daily commute on the subway, there is the regular temptation to settle into that over-controlled life of the captain who never strays from the familiar ports, in this case by sitting silently next to another passenger during the entire ride.  The bold move into spontaneity – starting a conversation with that fellow traveler – opens a universe of new possibilities for experience: an enlightening discussion, a new friend, a business opportunity, or an opportunity for romance.  On the other hand, risking an encounter with the “Thou” brings dangers as well: an unpleasant exchange, an outright rejection or rebuff, or worse, violence or predation.  The struggle to maximize one’s locus of meaning becomes a much more highly charged one in a universe populated with “Thous” and not just “Its”.

And the line of demarcation between an “It” and a “Thou” is a very fuzzy one.  In many cases, we treat our fellow human beings solely as means, rather than ends (as in the case of dictators who sacrificed millions of human lives in the service of an abstract idea or policy), while even animals, such as family pets, can assume a central place in our lives.  Each of us draws that line in unique ways, and we conduct our behavior in accordance with it.





I am reminded of an episode of the classic American television series The Twilight Zone (“Time Enough at Last”), about a man who shunned the company of other people, and who found meaning in his life only through the reading of books.  He was a lowly employee of a bank, and one day, after emerging from the sheltered seclusion of that bank’s vault, he discovered that the entire human race had been instantly annihilated as the result of a nuclear war.  His reaction was one of elation, as he hurried to the local library, and exulted in the prospect of now being able to spend the rest of his entire life alone, leisurely poring through all of the volumes now at his disposal, without interference from others.  But just as he was about to enjoy his first book in what he perceived to be a paradise of seclusion, he dropped his glasses, accidentally stepped on them, and smashed them to pieces, leaving him virtually blind.  In that moment, his heaven became a hell, as there was nobody to whom he could turn to replace the broken spectacles.  His one source of real pleasure was now permanently beyond his reach, and he had only a life of misery, utter helplessness, and desolation to look forward to.  He learned only too late that even for him, a misanthrope, the road to meaning entailed an ongoing engagement with the “Thou”, with the other people that could make his pursuit of the prime directive possible.


I don’t know if we will ever actually be able to program a machine to follow that prime directive: to pursue the goal of maximizing its locus of meaningful experience by constantly seeking out new means to expand its avenues of experience and its capacity for exerting a reasonable modicum of control over its widening environment.  I think that it would be very interesting, in such a case, to see how such a machine would actually behave.  Would it live out the worst nightmares embodied in science fiction novels: of computers or androids who seek to conquer or even eradicate the human race?  Or would it provide a rational model for how living, intelligent beings should actually conduct themselves?  Would such a machine ever attain genuine consciousness, with the capacity to know itself as an “I”, and to engage in real relationships with others as “Thous”?  That is the far more interesting question.  If the answer proves to be an affirmative one, then it will certainly have a profound impact on how we view life, consciousness, and meaning itself.

Sunday, August 31, 2014

Something in the Air

[In a blog earlier this year, I reprinted a speech that I had given on challenges which the electricity industry is facing due to technological change and evolving consumer preferences.  In this blog, I discuss business transformation in general, and in particular some of the ideas that I have personally found most helpful and enlightening in providing guidance on how to deal with it.  Much of the following content appeared in an article that I authored for Electricity Perspectives magazine entitled “The New and You”.]

Incumbent providers in many industries may have to reinvent themselves in the years ahead, or risk getting sidelined by innovative new competitors, and possibly even driven into oblivion.  The lyrics from a popular song in 1969, “Something in the Air”, come to mind:  “We’ve got to get together sooner or later, because the revolution’s here . . . We have got to get it together, we have got to get it together now.”  Change is definitely in the air, and it is bringing both challenges and opportunities for providers of products and services.  In order to successfully ride these waves of change, rather than be overrun by them, providers will have to have a superior strategic focus.  The risks of not doing so will be particularly pronounced for incumbents, who may be overly invested in business models that will become outmoded and irrelevant.  Unless their leaders “get it together”, change could lead to very unhappy consequences: established business models could be upended, traditional revenue streams threatened, and entire customer segments put at risk.

How does one plan for change?  The very idea seems almost oxymoronic, like being prepared to be surprised.  If the innovations that may overturn a particular industry are still unknown, then how exactly can business models be developed to address them, and incorporate them?  Many business strategists have contended with this question, and have come up with some interesting insights. 

Jesse Berst, who has been observing and writing about business transformation in the electricity industry (he shares his views on his website, www.smartgridnews.com), has attempted to draw lessons from previous industrial transformations.  He has noted that several industries underwent fundamental change as new entrants challenged existing providers, including transportation (e.g., the railroads), telecommunications, and retail sales.  The new market entrants – who in some cases virtually supplanted the previous incumbents in their respective industries – generally succeeded, according to Berst, by capitalizing on “disintermediation”, a process whereby the new entrant provided the product more efficiently (e.g., more quickly, more conveniently, at lower cost) than the existing supplier(s).  This was generally done by capitalizing on some recent technological innovation: one that was recognized by the new entrant (but not the incumbent – at least not at first) as a means of introducing new efficiencies into the market.  The classic example is Amazon.com, which introduced a more convenient and efficient way for persons to buy books through the use of the internet.  Amazon did not invent the internet; but Jeff Bezos, Amazon’s founder, recognized how the internet could be used to introduce disintermediation into the book-buying value chain (and he then later expanded this market model to a whole range of diverse products and services).  Similarly, Netflix used the internet as a tool for disintermediation in the video rental business, and in doing so upended the traditional “brick and mortar” stores that had been offering this service.

The Amazon and Netflix strategies have earlier historical precedents.  At the turn of the last century, Sears Roebuck – the “original Amazon”, says Berst – became the first company to create a virtual superstore, using a new infrastructure (the expanding rail system) for ordering and fulfillment, and undercutting the generally more expensive mom-and-pop stores.  The company’s famous catalog was initially mailed to people in towns with railroad stops.

The railroads themselves soon faced competition from the new interstate highway system begun under President Eisenhower in the 1950s.  By the 1960s and 1970s, passenger rail had almost disappeared (the number of passenger trains falling from roughly 2,500 in the mid-1950s to fewer than 500 by the late 1960s), and trucking ate into freight profits.  The highly traditional, highly regulated railroads failed to adapt.

But there is another process that has often played a role in industrial transformation, and that is “disruptive innovation”, a term introduced by Clayton M. Christensen in his book, The Innovator’s Solution (coauthored with Michael Raynor).  In this and his earlier work, The Innovator’s Dilemma, Christensen provides a number of examples where technological discoveries or advances completely overturned existing markets.  Whereas disintermediation is the application of an existing technology to improvements in the delivery chain (the process), disruptive innovation capitalizes on improvements in the underlying product.  And, based on his research, Christensen came to some very counterintuitive conclusions about how existing businesses succumb to the threat of disruptive innovation.  He found, for example, that the technology adopted by the new competitor resulted in a product that was usually inferior, at least from the perspective of the incumbent providers’ existing business model.  Where the new product was genuinely prized by some particular class of customers, those customers generally comprised a small or low-margin segment of the business, whose loss was not perceived to be a significant threat to the incumbents.  In fact, the incumbent business model would suggest that an investment in this new product would be unprofitable, particularly if doing so would result in sales losses from the established product line.  Some examples illustrate disruptive innovation at work:

·       The 5.25-inch disk drive was vastly inferior to the 8-inch drive, which was the standard for minicomputers manufactured in 1981.  But makers of this smaller drive marketed it to manufacturers of the new desktop computer, where the size alone presented a distinct advantage for the more compact machines.  By the time the desktop market supplanted that of minicomputers, many of the sellers of the larger 8-inch drive failed to adapt quickly enough by creating their own versions of the smaller drive, and fell by the wayside.
·       Minicomputers themselves were originally marketed as an inexpensive alternative to mainframes, and were not perceived as a threat to mainframe manufacturers, until their sales exceeded those of mainframes – a fate which in turn befell the minicomputer market as personal computers grew in popularity in the 1980s.
·       When the steamship was invented, its more expensive design (relative to traditional sailing ships) seemed to justify its use only on inland water routes, where its greater maneuverability made it a profitable alternative.  But after conquering the inland water route market, steamships eventually supplanted sailing vessels for long distance hauling as well.
·       Western Union, the dominant provider of long-distance communication services in the 19th century, rejected an offer to buy Alexander Graham Bell’s patent for the telephone (for $100,000).  While it was clear that telephones could serve the (at the time) relatively small market for short-distance electronic messaging, for communication at longer distances, it was not considered to be an economic alternative to Western Union’s existing nationwide network of telegraphs.
·       Manufacturers of music CDs abandoned the market for musical singles, concluding that it had become “extinct”, or at least had become a market that no longer presented a profitable way of being served.  The market was not extinct, however, and after the demand for singles was met through a new medium – downloadable MP3s – providers in this new medium went on to capture a sizable chunk of the market share that had been held by sellers of CDs.

The important point made by Mr. Christensen in his writings is that the decisions made by incumbent providers in the face of these innovations were generally – from their perspective, and based upon sound business principles and analysis – the logical and correct ones to make.  Their business models hadn’t failed them: they were merely caught in a paradigm that prevented them from seeing the transformational opportunities that were only visible from the perspective of an outside entrepreneur.  The actions of the incumbents were in fact usually guided by what they perceived to be the interests and desires of their existing customers – in other words, they were following what is generally touted as the hallmark of a successful business: they were customer-centric and customer-focused.  And, when they did begin to see some of their customer segments falling away (often those that were considered less vital or profitable to serve), they concentrated even more intently on their high-margin customer segments, and on specific strategies for retaining them, but often with less than successful – or even disastrous – consequences.

When presented with these cautionary tales, incumbent providers respond with the logical question: “What can I do to prevent a loss of sales to new market entrants, or, even better, how can I make sure that my company is on the right side of change and innovation?”  Unfortunately, it is easier to characterize the underlying threats than it is to map out distinct solutions, and in many cases consultants fall back on two clichéd recommendations: “Customers want choice, find ways to provide more of it to them,” and “Learn to be more competitive, because you will have to be in this approaching competitive environment.”  I would now like to kick these two particular sacred cows – at least a little bit – as I describe a third term that has been linked with very successful businesses: “creative monopoly”.

In a New York Times article (“The Creative Monopoly”, April 23, 2012), columnist David Brooks described the life and philosophy of Peter Thiel, a co-founder of PayPal.  Thiel began his career on a very successful trajectory in the intensely competitive field of law: getting into Stanford University, then into Stanford Law School, and becoming a clerk for a federal judge.  But his promising career suffered a setback when he failed in the particularly intense competition to obtain a Supreme Court clerkship.  He then changed course in his life, becoming an entrepreneur, investing in many technology startups that went on to success, and eventually starting his own hugely successful company.

Thiel’s experiences led him to some interesting conclusions, principal of which is that the benefits of competition are overrated, particularly in the business arena.  Rather than being a good competitor, he contends, it is often better to be a good monopolist.  Brooks sums up the Thiel philosophy this way:

Competition has trumped value-creation . . . the competitive arena undermines innovation.

You know somebody has been sucked into the competitive myopia when they start using sports or war metaphors. Sports and war are competitive enterprises. If somebody hits three home runs against you in the top of the inning, your job is to go hit four home runs in the bottom of the inning.

But business, politics, intellectual life and most other realms are not like that. In most realms, if somebody hits three home runs against you in one inning, you have the option of picking up your equipment and inventing a different game. You don’t have to compete; you can invent.

Thiel’s observations may sound like they fly in the face of basic economics, and even a little heretical.  After all, isn’t competition at the base of the free enterprise system – isn’t it the engine of capitalism, motivating individual businesses to provide superior products and services at the lowest price possible?

Yes . . . and no.  In truly competitive markets, where there are many suppliers selling an identical product, profit margins are virtually nonexistent, and there are few or no interesting differences in the level of product quality or service between the competing suppliers.  Think of the market for agricultural commodities, or certain basic raw materials, such as cotton.  There is little profit to be made here.

By contrast, think of some of the largest and most successful companies that have appeared on the landscape in recent years: Apple, Amazon, Netflix, Facebook, and Google.  These companies were successful, not because they rose to the top of a heap of companies selling identical products or services, but because each carved out a niche for itself – or in some cases created a market niche where none had even existed before.  Their success – and corresponding profitability – lay in their ability to become creative monopolists.

In strict economic parlance, a monopoly is a company with a dominant market share, generally obtained through exercising market power over potential rivals, but for Thiel, the term means something a little different, with a more positive connotation.  A monopoly is a company with a customer base loyal to its product.  The monopoly “owns” its market, not through nefarious means, but through some combination of branding, scale cost advantages, network effects, or proprietary technology.  Apple is an example of a company that has all four.

While it is true that customers love variety, and the ability to express themselves and their personal tastes through the selection of alternative product offerings, it does not follow that providing customers with more choices – including choices of alternative suppliers – is a panacea for customer satisfaction.  The creative monopolist succeeds, in fact, by doing the opposite:  by establishing a relationship with its customers in which choice becomes unnecessary, or even undesirable.

When I walk into the grocery store and buy cola, for example, I don’t have to – and don’t want to – think about choices.  I don’t have to compare the prices of all the competing brands.  I look for Coca Cola, and I buy it.  For customers like me, Coca Cola has succeeded in becoming a monopolist.  The other brands in the cola universe are invisible to me.  And to the extent that any cola brand (Pepsi, for example) can do the same thing, they also enjoy, among their sphere of committed customers, the benefits of a monopolist.  They can charge more for their product, compared to generic colas, and they can enjoy a fairly predictable earnings stream based upon customer loyalty.  It truly is a “relationship”, based upon the customer’s faith that they are receiving a quality product or service at a reasonable price.  It is only when the customer’s faith is shaken that this relationship becomes threatened.

This can happen in many ways.  For Coke, in fact, it nearly happened in the 1980s, when it attempted to replace its long established formula with a new one, which had been developed based upon blind taste tests.  Many of its loyal customers were ready to abandon Coke, and would have, had Coca Cola not quickly corrected its misstep, by reintroducing the original brand as “Coke Classic”.  America Online – once a promising internet provider – suffered from a stampede of exiting customers when it was revealed that AOL was making it difficult for customers to leave its service.  This policy demonstrated a lack of faith in its own product and a lack of regard for its customer relationships.  Many cable or satellite TV providers, in their zeal to attract new customers with special rate offerings and product giveaways, have alienated some of their existing customers, who realize that they are paying much more and getting less for the same service.  When a creative monopolist fails in maintaining the faith and loyalty of its customers, it always learns too late that it is far less expensive to retain a satisfied customer than it is to bring on a new one – particularly a former customer who has been driven away.

Peter Thiel now teaches a course at Stanford, where he shares his ideas and theories on what makes a successful creative monopolist, such as the following:

The best kind of business is thus one where you can tell a compelling story about the future.  The stories will all be different, but they take the same form: find a small target market, become the best in the world at serving it, take over immediately adjacent markets, widen the aperture of what you’re doing, and capture more and more.  Once the operation is quite large, some combination of network effects, technology, scale advantages, and even brand should make it very hard for others to follow.  This is the recipe for building valuable businesses.

. . . Of course, putting together a completely accurate narrative of your company’s future requires nothing less than figuring out the entire future of the world, which isn’t likely to happen.  But not being able to get the future exactly right doesn’t mean you don’t have to think about it.  And the more you think about it, the better your narrative and better your chances of building a valuable company.

Peter Thiel is cognizant of the fact that most of the successful creative monopolies that have come into existence have done so on the heels of innovation, disruptive or otherwise.  But he warns that it is dangerous to come into an industry that is currently undergoing technological change: come in too early and your particular innovations will be quickly superseded by others (as happened in the floppy disk market); come in too late, and there will be nothing new left to offer.  However, “if nothing has happened in an industry for a long time, and you come along and dramatically improve something important, chances are that no one will come and do that again, to you.”

Many of Peter Thiel’s insights echo those of an earlier business strategist, Michael Porter.  In a seminal article published in the Harvard Business Review (Nov.-Dec. 1996) entitled “What Is Strategy?”, Porter identified two distinct means by which businesses attempt to gain and sustain a competitive edge.  The first, which he called “operational effectiveness”, comprises all of those activities that a business undertakes to outperform its rivals.  These are the classic “competitive” strategies that all companies rely upon in some form or another: benchmarking, total quality management, “six-sigma”, outsourcing, efficiency improvements, cost reductions, etc.  While Porter acknowledges that such activities are necessary, he contends that they are not sufficient to ensure a lasting competitive edge.  At worst, they consign a company to constantly running to stay in place with its competitors.  The key to a more enduring competitive edge, said Porter, is to engage in “strategic positioning”.  In essence, this means finding a means to establish a lasting, significant difference between oneself and one’s competitors, a way of doing things that is difficult to imitate.  Southwest is an example of an airline that has done exactly that.  The theme of establishing a difference, rather than simply engaging in competitive behavior, has also found expression in a book published in 2005 by W. Chan Kim and Renée Mauborgne entitled Blue Ocean Strategy, where a “blue ocean” is a metaphor for an uncontested market space (as opposed to a “red ocean”, where competitors are engaged in an ongoing struggle with each other to gain an edge through superior performance).  All of these theories share a common theme: that to find true, long-lasting success in any market, one must establish a niche: in the “brand” of the product itself and/or in the process by which the product or service is provided.

But what if a company has a “niche” – a unique product or service that has provided a virtually uncontested market share – and finds that its market is no longer a growing one?  A common model illustrating this phenomenon is the “S-curve”.  A newly-introduced product or service might exhibit slow sales growth at first, as it is purchased only by that segment of the population that likes to risk trying new things (“early adopters”).  However, as more and more of these risk-takers buy it and express satisfaction with it, its popularity expands into the general population, and sales growth increases dramatically – in fact, exponentially.  But there comes a day when this sales growth tapers off, just as dramatically.  This might simply be due to product saturation: everybody who could have bought it already has.  On the other hand, sales could evaporate because something else has come along to replace the existing product or service – something that is seen as better, or trendier, in the eyes of consumers.  How does one contend with this dreaded “S-curve”?  Pamela Morgan, a consultant and former electric utility executive, has proposed one possible strategic solution.  In an article published in Electricity Policy magazine entitled “From VHS to DVD: Need for a New Business Model for the Electricity Industry in the 21st Century” (September 21, 2010), Morgan suggests that what is needed is a broadening of strategic vision.  Companies, she argues, are often too wedded to the particular products or services that they are offering, and neglect to ask what broader needs of their customers these products and services are answering.  She cites VHS player/recorders as a classic example.  These devices definitely exhibited an “S-curve”, with slow sales growth that eventually became phenomenal sales growth, but which was then followed by rapidly declining sales, as the VHS was superseded by the rise of the DVD.  The DVD, in turn, is seeing its own rapid growth undercut by the subsequent rise of video streaming services.  Morgan suggests that none of these transformations needed to be catastrophic for the business that had a sufficiently broad vision of what service it was providing: on-demand home video entertainment.  Such a business would have kept abreast of technological development and thereby found the means to envision what the next potential “S-curve” could be.  With this type of strategic focus, technological change presents an opportunity for continual rebirth and evolution, rather than a threat of catastrophic demise.
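The S-curve, incidentally, has a standard mathematical form: the logistic function.  A sketch, where N(t) is cumulative sales at time t, K is the saturation level, r the growth rate, and t_0 the inflection point (the symbols here are mine, chosen for illustration):

    % Logistic model of the S-curve: slow early growth, near-exponential
    % growth at midlife, and saturation at the ceiling K.
    N(t) = \frac{K}{1 + e^{-r\,(t - t_0)}}

Well below t_0 the curve grows almost exponentially; past the inflection point, growth decelerates and N(t) flattens toward K – the product-saturation plateau described above.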


These, then, are some of the most salient strategies, I think, for facing technological change and industry upheaval.  The greatest successes, both recently and in decades past, have found ways to use innovation to create niches for themselves – in the design of their product or service, or the channels that they have used to deliver it, or both – and in so doing have ensured their long-term survival and profitability.  Finding that niche requires strategic vision, of course.  At the very least, it requires the ability to spot means by which new innovations will improve existing products or services or enhance the methods of their delivery.  A broader vision will benefit from Pamela Morgan’s insight that there might be a next generation of products and services that will better serve the underlying needs of customers who are buying the current offerings.  But what of those innovators who seem to anticipate future needs and desires of consumers – those who offer things that consumers didn’t realize that they even wanted?  As I indicated in a previous blog (“Thoughts on the Future of the Electricity Industry”, May 2014), I believe that the greatest innovations are those that have improved the quality of personal time, by eliminating drudgery, or by finding ways to inject pleasure or happiness into existing time.  It is the entrepreneur who is able to do this, and is able to do so in a way that is difficult to replicate or even emulate, who will enjoy the greatest success in the economy of tomorrow.

Thursday, July 31, 2014

On Submission and Compliance

I ended my previous blog entry (“Who is Number 1”), which was about the exercise and abuse of power, with a statement that the psychology of enforcing compliance – of getting people to do what you want them to do – is a science that has been evolving in potentially menacing ways.  The successful exercise of power – by any person or institution – ultimately rests on the ability to ensure submission and compliance from the targets of that power.

How do you make people do what you want them to do?  A simple and direct method practiced by tyrants and conquerors throughout human history has been to threaten potentially recalcitrant subjects with violent harm and/or the loss of possessions (or persons) of personal value to them.  Machiavelli’s book The Prince was in large part a manual on how to effectively exercise this particular collection of tactics.  (And for this reason, it was read and studied by Lenin, Stalin, Mussolini, and Hitler.)  A more indirect form of keeping the masses in check is to distract their attention from abuses of power or other improprieties being practiced by those in leadership.  Roman emperors, for example, practiced distraction through entertainments, such as gladiatorial games and other public spectacles.  Another form of distraction is to redirect potential domestic resentment against outside, foreign enemies, either real or imaginary.  In Orwell’s dystopian novel Nineteen Eighty-Four, the oppressive superstate Oceania is engaged in a state of perpetual warfare against foreign enemies, and its citizens are provided with regular newsfeeds about recent victories or defeats.  A third tactic is to bribe those who are being kept in check by giving them a token share of the spoils of oppression.  Roman emperors often provided subsidized or free grain or bread to the populace to appease them (and this, together with the public entertainments described earlier, gave rise to the phrase “bread and circuses”).  In modern democracies such as the United States, senators and representatives often funnel government expenditures into their own legislative districts for “pork barrel” projects: in essence bribing local constituents with their own money.

There are many disincentives for “rocking the boat”.  A direct challenge to political authority could get one arrested.  And even engaging in forms of civil disobedience or non-conformist behavior that are not “against the law” might result in the loss of one’s job.  In both of these cases – having a criminal record, or an employment record that includes incidents of one’s being terminated – a person might find it difficult or even impossible to find future employment.  Someone who is considering a course of resistance, but then engages in a stark calculation of what the future quality of one’s life will be (i.e., without a job, and possibly fined or incarcerated), might come to the conclusion that the current state of things is not really so bad after all.

This all-too-human aversion to “rocking the boat” is a powerful impediment to freedom and the active defense of civil liberties.  We are quick to console ourselves with the belief that the most egregious abuses of those in power are directed at others: members of less-favored ethnic groups, those lower on the socioeconomic ladder, the fanatically radical, or the criminally-inclined.  Skillful governments capitalize on this belief, and subtly support it.  The Chinese artist-activist Ai Weiwei, in an essay entitled “On Self-Censorship”, described how this is currently being done in China in subtly effective ways, to suppress freedom of thought and expression:

Censorship and self-censorship act together in this society to ensure that independent thinking and creativity cannot exist without bowing to authority.  More often than not, self-censoring and the so-called threats related to it, are based on a memory or a vague sense of danger, and not necessarily a direct instruction of high officials.  The Chinese saying sha ji jing hou puts it succinctly: killing the chicken to scare the monkey.  Punishing an individual as an example to others again incites this policy of intimidation that can resound for lifetimes and even generations.

A horrifying example of the human tendency to fear excessive involvement played itself out in New York City, on March 13, 1964, when a young woman named Kitty Genovese returned home from work late that night to her apartment.  As she approached her home, an assailant mugged her and stabbed her, repeatedly.  Thirty-eight witnesses – neighbors – reportedly saw the crime, but none contacted the police until after the killer had returned to her a third time, raped her, and mortally wounded her.  For them, getting involved, even in a trivial way, was a price that was too high, too threatening to their personal security.

The Kitty Genovese incident raised disturbing questions about bystander apathy; a few years earlier, the trial of Nazi war criminal Adolf Eichmann had motivated a psychologist, Stanley Milgram, to clinically test the extent to which persons would suspend compassion for other human beings, particularly when they were told to do so by a person in authority.  He constructed a “learning experiment” in which a test subject was instructed by an “expert” to administer a shock to another test subject every time that the test-taker answered a multiple-choice question incorrectly.  With each incorrect answer, the first subject was instructed to increase the voltage.  What this subject didn’t know was that the test-taker was an actor, merely mimicking the phenomenon of being shocked.  At higher voltages, the actor would writhe about violently, and even cry out in agony, and at the highest levels, would pretend to lose consciousness.  In this experiment, 26 out of the 40 participants continued to administer increasing shocks up to the highest level, which was believed by them to be 450 volts.  In evaluating the results of the experiment, Milgram could find no determinants of compassionate behavior (or lack thereof) based upon differences in gender, ethnicity, or socioeconomic status.  What he did discover was that the testing environment itself played a very significant role in the outcome:  Subjects who had come into an environment which was coldly formal and impersonal tended to be the ones who administered the strongest shocks, while those who had been greeted warmly and treated kindly by the test administrators at the outset of the experiment were more prone to resist the directive to continue raising the voltage.

Lest one be tempted to believe that such an unquestioning willingness to follow the dictates of one in authority was a final relic of earlier, less enlightened times, which passed away with the end of the 20th Century, one need only read of more recent examples, which are equally disturbing.  A case occurred ten years ago in the U.S., when a prankster, calling a fast food restaurant and claiming to be a police officer, convinced a manager there that one of her employees was under suspicion for a crime, and induced the manager to confine this young woman in a room and subject her to various forms of interrogation and intimidation.  The eighteen-year-old woman was forced to remove all of her clothes, and was then compelled to endure a variety of humiliations for hours, culminating in sexual abuse perpetrated by the manager’s fiancé (in accordance with the caller’s instructions), who had been enlisted by the manager to assist in the interrogation while the manager returned to her duties elsewhere in the restaurant.  It was only when another employee refused to join in on the interrogation that the manager herself finally began to question the legitimacy of the caller’s directives.  Never, during the preceding several hours that she subjected her employee to this ordeal, did the manager consider the fact that what she was doing was well outside the bounds of what any rational person would consider to be ordinary police procedure, let alone basic codes of civilized human conduct.  A simple voice on a telephone, claiming to belong to a person in an official capacity, was enough to get her to suspend any such considerations.  It is frightening to imagine the lengths to which supposedly ordinary people might go if urged to do so by someone with more tangible credentials of authority, such as a uniform.

An even more effective tool for enforcing compliance is to make one’s subjects actually believe in and support the oppressive institution or regime, and the starkest means of doing so is to resort to what has traditionally been called “brainwashing”.  This was the fate of Winston Smith, the central character and would-be rebel of Orwell’s Nineteen Eighty-Four, after he was mentally broken through psychological torture at the end of the novel.  In a book entitled Brainwashing: The Story of Men Who Defied It, author Edward Hunter described the experiences of prisoners of the Korean War and citizens of Mao Tse-tung’s China who had been subjected to brainwashing techniques.  The process, he found, was essentially a two-fold one, involving the “softening up” or breaking down of the subject’s mental resistance, followed by the indoctrination of whatever ideas were to be implanted into that subject.  The most effective “softening up” techniques – used alone or in various combinations – were: 1) hunger or malnutrition, 2) fatigue and sleep deprivation, 3) tenseness and anxiety stemming from not knowing what was going to happen next, 4) threats, 5) physical violence, 6) drugs, and 7) hypnotism.  Indoctrination involved controlled exposure to lies and propaganda, and the subtle dissemination and acceptance of select ideas through study and discussion groups.  A particularly subtle but effective technique practiced upon prisoners was to direct them to write “confessions” of their crimes, without explicitly telling them what to confess.  Instead, each time a draft of the confession was submitted, the interrogator would express dissatisfaction with its content, while never saying exactly what it was that he had wanted to see, or not see, in the confession.  The prisoner would continue to amend and revise the document, and in doing so would voluntarily – but unconsciously – admit to more and more imaginary offenses, and come to support ideas and untruths that comported with the propaganda disseminated by his detainers.  By taking an active role in creating these revisions, the prisoner felt a greater degree of psychological attachment to, and personal ownership of, what was being said in them.  It was an insidiously effective tactic, and one that has been applied ever since in other environments: by leaders in both business and government who want their underlings to engage in unethical or questionable practices.  While avoiding explicit directives to carry out morally ambiguous tasks, these leaders, through a system of subtle rewards and vague expressions of disapproval, can guide their subordinates into performing the desired deeds, and, should the actions of these subordinates be held up to the light of critical scrutiny and condemnation, the leaders can effectively make the case that they never ordered or even endorsed the actions taken.  These actions, they can plausibly argue, were carried out on the “personal initiative” of the guilty parties.

Edward Hunter, in his interviews with persons subjected to brainwashing techniques, found that some were more resistant to them than others.  Persons with strong religious faith and/or moral convictions were hard to crack.  Conversely, those who tended to be “relativists” in their thinking, willing to see a little truth in all points of view, were more susceptible to indoctrination.  I remember a man I once worked with, a recovering heroin addict, who was just such a moral relativist, and who in fact took great pleasure in demonstrating the underlying absurdity of any system of belief that his coworkers supported.  For him, believing in anything too strongly was a symptom of mental weakness.  I always wondered if there might have been a connection between his moral relativism and his addiction problem.  For many, indeed, the cure for alcoholism and other addictions begins with faith in a higher power, and perhaps this provides an equally potent antidote against systematic techniques to break the mind.  Similarly, Hunter found that those who had a strong sense of mission or purpose in their lives were resistant to mind control.  A belief that their suffering had meaning gave them the strength to endure more hardship than others who were subjected to similar conditions.  Viktor Frankl, the concentration camp survivor who authored Man’s Search for Meaning, came to a similar conclusion.  He wrote:

... (A)ny attempt to restore a man’s inner strength in the camp had first to succeed in showing him some future goal.  Nietzsche’s words, “He who has a why to live for can bear with almost any how,” could be the guiding motto for all psychotherapeutic and psychohygienic efforts regarding prisoners.  Whenever there was an opportunity for it, one had to give them a why – an aim – for their lives in order to strengthen them to bear the how of their existence.  Woe to him who saw no more sense in his life, no aim, no purpose, and therefore no point in carrying on.  He was soon lost.

Hunter identified a number of other tactics and characteristics that were effective aids in defending oneself against brainwashing: keeping one’s mind occupied, maintaining confidence and adaptability, forming strong group ties with others in the same circumstances, being true to oneself, and finding ways to expose the shortcomings of the oppressors – “cutting them down to size”, so to speak.  But it is that sense of meaning or purpose that is the foundational defense.

There are, of course, less onerous, but more insidious, means of getting people to voluntarily do what you want them to do, and these fall under the relatively benign-sounding rubric of “influence”.  In Western cultures, such methods are most often associated with the advertising practices of the private sector, but the skillful art of influence has become just as pervasive in political campaigns.  It is a discipline that has truly evolved into a complete science, and the strategies and tactics that comprise it are perhaps even more finely honed – and therefore more effective – than those used in brainwashing.  The psychologist Robert Cialdini, in his book Influence: The Psychology of Persuasion, identified and described the major strategies that are employed.  Two of them have already been discussed above: 1) the use of authority figures to get people to do things – even objectionable acts, and 2) the ability to obtain compliance and support from people by inducing them to commit, orally or in writing, to an idea or a goal.  Other strategies rely on social proof, the tendency of people to do things that others are already doing, and on the fact that people can be easily persuaded by others whom they like or find physically attractive.  Reciprocity, the impulse to “return a favor”, and perceived scarcity are also powerful motivators for targeted behaviors.

The use of influence, since it generally does not rely upon overtly coercive tactics, seems more benign, but its effects can be just as pernicious as those of the more unsavory methods of inducing compliance.  A government or regime that has succeeded in winning the unquestioning loyalty of most of its populace has an army of domestic allies to rely upon for quelling any potential dissent.  A classic case of this occurred during the second war between the U.S. and Iraq, which began with the U.S.-led invasion of that country in 2003.  When the war began, it was immensely popular with the American citizenry, because they had been presented with information (later proved to be false) which induced them to believe that the Iraqi government posed a legitimate, powerful threat to the U.S. and its allies.  When a popular music group, the Dixie Chicks, publicly spoke out against the war, its members were ostracized, and some of them received death threats.  The skillful use of propaganda can sometimes be far more effective in keeping dissenters at bay than the direct application of brutal, police state tactics.

Is there a natural limit to oppression – a point beyond which a person will finally take a defiant stand?  The French existentialist philosopher Albert Camus, in his book The Rebel, argued that there is such a natural limit: it is the point at which conditions have become so intolerable that a person would rather be dead than alive.  Beyond this limit, a person will theoretically be willing to risk or sacrifice anything to rebel against the forces which have created these unbearable conditions.  But even this limit can be overcome through effective techniques of socialization, such as those used to train military personnel, who are then willing to sacrifice their lives – often in conflicts that seem to have no direct connection with protecting their homeland or their loved ones.

Conversely, there are – and always have been – persons willing to rebel and dissent even when conditions have not become nearly as intolerable as those embodied in Camus’ natural limit.  What is it that finally motivates a sufficient number of people to say “Enough!”, so that an effective counter-tide forms against the established order?  Is it necessary that several of those who are more rebelliously inclined happen to be in the same place at the same time?

Perhaps not.  A diversity of tolerance levels toward oppression may be enough.  Consider this example:  Suppose that several people are in a crowded store, and each of them has a different propensity to panic.  One will run out of the store at the slightest hint that something might be awry.  Another will only panic if he sees somebody else panic first.  And yet another will not panic unless he sees two other people fleeing the store, and so on, with the most unflappable person refusing to panic unless he sees everybody else fleeing the store.  It is easy to imagine a scenario where the first person is startled by something and runs out of the store, prompting the second person to follow him; with these two seen fleeing the premises, the third person will then join them, and so the chain will continue until even the bravest person in the store, now seeing everyone else scrambling in panic, heads for the exit as well.  Perhaps revolutions proceed in a similar fashion, with dissent being initiated by one or more general malcontents, who are joined by those who are usually reluctant to engage in such activities unless they see somebody else doing them first, and then finally by others who only get involved when they see a sufficiently large group involved.  Chance probably plays a role as well.  An abuse that might have been tolerated by the public for a long time might become suddenly intolerable when some seemingly insignificant additional provocation is added to it.  Or perhaps it becomes intolerable to one particular person, one day, because that person’s mood has been darkened by something else.  An offhand remark, a fleeting insult, or even a simple accident might become the catalyst that starts the blaze of insurrection.
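This intuition can be made concrete with a toy simulation, in the spirit of sociologist Mark Granovetter’s “threshold model” of collective behavior.  (The sketch below, in Python, is purely illustrative – the population and the thresholds are invented for the example.)  Each person flees once the number of people already fleeing reaches his or her personal threshold:

def cascade(thresholds):
    # Repeatedly count how many people have a threshold at or below
    # the current number of people fleeing, until the count stops growing.
    fleeing = 0
    while True:
        now_fleeing = sum(1 for t in thresholds if t <= fleeing)
        if now_fleeing == fleeing:
            return fleeing
        fleeing = now_fleeing

# Ten shoppers whose tolerances form an unbroken chain (0, 1, ..., 9):
print(cascade(list(range(10))))           # 10 -- everyone ends up fleeing

# The same crowd minus the person with threshold 1: the chain is broken,
# and the initial startle never spreads beyond the first person.
print(cascade([0] + list(range(2, 10))))  # 1

The sobering feature of the model is that the outcome depends not on the average tolerance of the crowd, but on whether the distribution of tolerances forms an unbroken chain.  Two populations that look identical on the surface can respond to the same provocation in radically different ways, which may be one reason that revolutions are so notoriously difficult to predict.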


In recent years, it seems that revolution is breaking out everywhere, and the sheer frequency of these events might tempt one to believe that – regardless of the evolving technology of psychological compliance – the innate fickleness of the human spirit guarantees an inexhaustible reservoir of resistance to oppression.  Somewhere, somehow, it seems that there is always somebody who is ready to throw down the gauntlet for human liberty, and that once this gauntlet is thrown down, there is a multitude of revolutionaries ready to rally to the cause of freedom.  But one should not be so ready to succumb to complacency.  One need only look to present-day examples of completely effective, long-lived totalitarian states such as North Korea to come to the realization that freedom comes at a great price, and that, once lost, it is not always easily repurchased.  Even in contemporary democracies, governments, political parties, and businesses continue to hone and develop the tools and techniques of control and influence (along with those of surveillance, which frequently accompanies these other two).  The price of freedom is eternal vigilance, and in these modern times, vigilance requires an increasing degree of sophistication.  The year 1984 has come and gone, but the terrors that Orwell forever linked with it are still with us – literally at our doorstep – and will only become increasingly dangerous as civilization continues to evolve.

Monday, June 30, 2014

Who Is "Number 1"?

There was an item in the news recently that the government of Thailand, currently under martial law, banned a screening of the film version of Nineteen Eighty-Four, George Orwell’s classic dystopian novel about totalitarianism.  The book has become very popular in Thailand since the military seized power from the country’s democratically elected government last month.

Democracy has actually become unfashionable in recent years, its shortcomings seemingly highlighted by the failure of the “Arab Spring” revolutions, by the apparent incapacity of the United States Congress to work together effectively in addressing any of the most serious problems that the country is facing, and, in contrast, by China’s continued success in raising the standard of living of its citizens and continuing on a path toward becoming the world’s largest economy, in spite of the fact that it is a very undemocratic country.

Is democracy passé?  Is it a quaint, archaic concept that has not stood the test of time in facing the challenges and rigors of modern civilization?  And, if so, then what should take the place of democracy?  Was Plato correct, when he argued in The Republic that an ideal society would be governed by those who were best endowed to exert authority over others, rather than by the arbitrary whims of a voting populace?

I, for one, am very much in agreement with Winston Churchill, who declared, “It has been said that democracy is the worst form of government except all those other forms that have been tried.”  Of course, Churchill was also quoted as saying, “The best argument against democracy is a five-minute conversation with the average voter.”  There is wisdom in both of these remarks, and, with respect to the second, it is frustrating to see how easily the voting public can be distracted by crafty politicians from addressing themselves to the most important issues at hand, if, indeed, they are willing to spend the time to familiarize themselves with these issues at all.  But even if Plato had been right – that that government is governed best which is governed by the best – how does one ensure that the best actually gain the reins of power?  And, having somehow succeeded at this, how can we the governed be assured that this ruling elite will not become intoxicated by the power it enjoys, and direct the machinery of the economy and the state exclusively toward maintaining itself in power and maximizing its enjoyment of that power, regardless of the consequences to the rest of us?  These, I think, are ultimately intractable problems, and they are the reasons why democracy, for all of its limitations, is still the system that comes closest to adequately providing for the welfare of all of the persons who comprise a society.

Democracy is a very fragile institution, however, and depends upon both the capacity and the willingness of most or all of its citizens to defend the institutions and customs that support it.  It is a tribute to the institution that it can survive and endure despite widespread voter apathy, but the ultimate test of its survival is the extent to which its citizens will put themselves at risk to defend it when it is under attack, whether from subversion within its borders or from invasion without.

When the institutions of democracy begin to fail, it rests upon the citizenry to arrest the damage, and to repair it.  But in order to do so, it must 1) realize that the damage has occurred and/or is occurring, and 2) have the willpower to address it.  The first is a problem of perception, the second a problem of will.  But there is an important third element, which is just as essential to preserving democracy:  in the face of injustice, where resistance is required, it must be known against whom or what the force of resistance must be applied.

In totalitarian states, and other repressive regimes, particularly ones with a relatively short history, the face of oppression is generally a very prominent one, because the locus of power is very transparent.  Topple the dictator, and there is a very good chance (but far from a certainty) that the institutions of oppression will fall away with him.  On the other hand, long established institutions of power are harder to contend with, because they often are more nebulous.  Who, or what, is it that must be toppled, or pushed back, or constrained?

I remember a poster that a boss of mine had in his office, back in the early 1980s.  It was a picture of Leonardo da Vinci, and underneath it was the caption, “We are in control.”  I always wondered what that caption meant.  Who was in control?  Did it mean that it was the inventors and artists of our society who truly ran things?  That seemed rather unlikely to me.  Perhaps, I speculated, it was a reference to the individuals who manned the engines of our civilization: the capitalist entrepreneurs and industry barons who were lionized in the novels of Ayn Rand.  The question, “Who is in control?” has often taken on a special urgency when the “controllers” have been perceived as being responsible for the oppression of various segments of humanity.  The idea of a collegial elite, meeting in secrecy to make critical decisions about the fate of civilization, has been a perennially popular one, and various groups and institutions have fallen under suspicion.  In the 1960s, in America, the power elite were referred to as the “Establishment”, although it was unclear who the denizens of this particular group were.  (The injunction popular at that time – “Don’t trust anybody over the age of thirty” – fell out of fashion when the revolutionaries and social reformers of that era moved well past that age.)  In the 1970s and 1980s, other associations were held under suspicion, including the Trilateral Commission and the Bilderberg Group.  Wilder conspiracy theories have focused on historical fraternities, such as the Freemasons, Rosicrucians, or Illuminati.  The image of a shadowy conspiracy of elderly and middle-aged Caucasian men, sitting around a table in a darkened, paneled room, making decisions on all matters of importance, including the outcomes of certain sporting events, even made its way into popular television, in such programs as The X-Files.

In the 1960s, a British television program, The Prisoner, starring Patrick McGoohan, painted a particularly compelling portrait of the evils of institutionalized oppression.  The title character, played by McGoohan, is an ex-operative of his government who wakes up one day to find himself living in an Alice-in-Wonderland society – called simply “The Village” – peopled by citizens who have numbers rather than names, and who content themselves by whiling away their time in completely inane activities.  Privacy is non-existent in this little “village”, because surveillance cameras are literally everywhere.  Ostensibly, the title character is being held there simply as part of an elaborate form of interrogation, his keepers wanting to know what prompted him to resign from his occupation as a government secret agent.  But as the series progressed, a much more insidious goal became apparent – both to him and to us the viewers: that of inducing him to accept and even embrace his new role as a citizen of this dystopian community.

In each episode, the Prisoner is confronted by a new nominal head of the Village, known simply as “Number 2”, and each of these brings a fresh technique for trying to get the Prisoner to crack.  “Who is Number 1?” the Prisoner logically asks, suspecting that there is a single, unchanging locus of power hiding behind the scenes, pulling the strings of this parade of puppet rulers.  The question is asked at the beginning of every episode, and no answer is given.

In one of my favorite episodes of the series, the Prisoner makes a keen observation about the inhabitants of the Village – one that he believes will allow him to upend the entire structure of power there.  He realizes that there are two types of inhabitants in this society: jailers and inmates.  Although everyone has a number, and everyone dresses alike, there are clear differences in the behaviors of these two classes of people.  The inmates are fearful, submissive, and eager to avoid creating disturbances that might bring negative attention upon themselves.  The jailers, on the other hand, are haughty and overbearing, and take it upon themselves to ensure that the rules and customs of the Village are being observed by everyone around them.  The Prisoner discovers that by mimicking the behavior of the jailers, he is treated as a peer by the genuine jailers, and with deference by the inmates.  He reasons that he can use his newly won social prestige to orchestrate a general social revolution, counting on the loyalty of the obsequious inmates.  His plan backfires, however, as he discovers that the inmates do not trust him – or his intentions – because they have come to genuinely believe that he is one of the jailers, trying to deceive them into revealing themselves as potentially disloyal.

In the penultimate episode of the series, the reigning Number 2 resorts to one final, seemingly foolproof tactic to break the Prisoner’s will: he subjects him to a sort of psychoanalysis, in order to “cure” him of his anti-social, non-conformist behaviors.  The intense analysis turns into a personal battle of wills between the two men, and when it is actually Number 2 who breaks down, the Prisoner is allowed to ask for anything that he wants.  He requests to be taken to Number 1: to find out who or what the real power is that has been manipulating this macabre society.

If the ending of the series was ultimately an unsatisfying one, it is because there really is no nice, neat answer to the question “Who is Number 1?”  Who is it that actually holds the reins of power?  Who is making the really important decisions that affect all of our lives, and our collective destiny?

In my experiences working in the corporate world, and for other organizations, I have found that those who are in control – who are making the important decisions – are not necessarily those at the top of the organizational chart.  In one particularly extreme case, several years ago, I worked for a company that brought on a president who was completely oblivious to the machinations of power all around him, and remained so until his relatively brief tenure there ended.  I remember sitting in a meeting, ostensibly being run by this man, where the plans for a very important project were being crafted.  The meeting itself consisted of a series of rather inane slide presentations, discussing the project in very general terms.  And all during the meeting, two rival factions within the company, consisting of men and women at various levels in the organization, were hashing out the real features of the project as they huddled together in small groups outside of the meeting room, particularly during breaks.  The president had no idea that a herculean power struggle – which actually became extremely contentious – was going on just outside the perimeter of his meeting, and when the final slide presentation had ended, he lauded the attendees, saying how proud he was to be in charge of such a talented, harmonious team.  When the project eventually did take final form, he had nothing to do with its actual development, or even with deciding which of the various rival features were adopted.

In every organization that I have ever been a part of, I’ve noticed that there is a genuine architecture of power, and that one must look beyond job titles and reporting responsibilities to find it.  Like the “Village” with its jailers and inmates, there are persons who are actively making decisions, orchestrating changes, and behaving as if they have a personal stake in the outcome, while others are seemingly content to just show up for work, avoid bringing unpleasant attention upon themselves, and dutifully carry out any assignments that are given to them.  What complicates things, of course, is that the architecture of power is never a rigid one – it is fluid, ever-changing.  I have seen many “heirs apparent” – executives who were seemingly next in line for the top leadership position in the organization – suddenly ousted from the organization entirely.  Even political tyrants with absolute or near-absolute power – as history has shown over and over again – can find themselves unexpectedly divested of all of their power, and in many cases of their lives as well.

And, to complicate things even further, power is often wielded by persons who are not even part of the official hierarchy.  In the Persian and Byzantine Empires, eunuchs could exercise a great deal of influence in the royal court.  In modern corporations, consultants often play a critical role in carrying out important projects, or even in determining the future course of the organization, on either a temporary or an ongoing basis; in fact, there is often a “revolving door” relationship between the two, with successful consultants becoming executives at corporations they were once hired to serve, and retiring executives joining consulting firms.  A similar relationship often exists between lobbyists and political organizations.

Perhaps, at each of the apexes of the power hierarchies of our civilization, there isn’t a person, or a caste, or a cabal.  At times, it seems that we are all just pawns, being swept along by the tide of socio-historical forces, and that the most powerful among us have merely deluded themselves into believing that they are shaping destiny rather than merely acting as its most prominent agents.  Some post-modernist philosophers, as I described in my blog entry “Apocalypse Then” (April 2013), believe that we are – or are moving toward – a civilization in which no person will genuinely be in control of anything, because the desires, goals, and beliefs of all humanity will be completely shaped and conditioned by the impersonal machinery of civilization itself.

And yet, both the triumphs and tragedies of history – ancient as well as modern – provide ample evidence that human beings can and do exercise power in ways that go against the tide.  Despotic regimes are toppled within days, while other societies that seemed to have been following, for generations, a trajectory toward greater freedom and tolerance suddenly descend into nightmares of oppression and chaos.  Within every corporate organization, within every political and religious movement, and within every government, there are real people, exercising real power, for a variety of different ends, in ways that are not completely transparent or comprehensible.


The exercise of power is older than civilization itself, but the science of power – the understanding of its architecture and its sociology – is still a relatively young one, hardly past the phase of observation and rudimentary explanation.  There should be a renewed sense of urgency in advancing this science, because the technology of power has been growing at an alarming rate over this past century.  In the United States, we recently discovered just how little privacy we really do have.  The all-seeing eyes of Big Brother in Orwell’s 1984, and of the Village in The Prisoner, are no longer elements of science fiction.  But surveillance is just a part of the exercise of power.  There is also the imposition of control.  And the psychology of compliance is one feature of the technology of power that has been advancing at a particularly alarming rate.

Monday, May 26, 2014

Thoughts on the Future of the Electricity Industry

(The following is a slightly modified and abridged version of a dinner speech that I gave at the Rutgers Center for Research in Regulated Industries annual Eastern Conference in Pennsylvania earlier this month.)

What I am going to do is to sketch out, in broad strokes, an outline of the core issues that the American electricity industry is facing, and of the methods that I feel it will have to adopt to meet those issues effectively. And I’m going to do this in a very personal sort of way, by highlighting some of the life lessons and experiences that have come to mind as I’ve pondered the challenges confronting the industry.

Let me begin with an experience that I had as a very young man, when I was working my way through college as a lab technician for a metallurgical company. Now the manager of this laboratory prided himself on having state-of-the-art equipment, and apparently equipment manufacturers had picked up on this, because it seemed that we had a steady stream of salesmen passing through that laboratory, each trying to convince him that they had the next big thing in laboratory devices. One of the more laborious tasks that we technicians regularly had to perform was to polish little test pieces of metal so that we could examine them under a metallograph. This was done by pressing them – one piece at a time – down onto a rotating polishing wheel, and it would sometimes take several minutes to get the necessary flawless finish that would enable testing of the piece. And then, one day, a salesman came into our lab with something that he promised would change our lives. It was an automatic polisher! Several little metal pieces could be attached to mechanical arms, which would then polish them, all at the same time, and no lab technician would ever have to sit hunched over a polishing wheel again. It sounded wonderful. It sounded fantastic. It sounded too good to be true. We lab technicians were given a demonstration of the machine. About a half dozen pieces were polished at once. One of the lab technicians was given one of these pieces after the machine had finished, and he was asked to render his opinion. “Terrific!” he said. “Let’s buy it!” I picked up another of the pieces and looked at it. It was terrible. The piece had not been polished properly at all, and was far from suitable for inspection under a metallograph. I brought this to the manager’s attention. The salesman protested that the machine had probably just not had its settings properly calibrated. I challenged him to a contest: his machine versus me. He could have as many tries as he liked, while calibrating his machine, but he had to eventually demonstrate that his machine could produce suitably polished test pieces as rapidly as I could. After several trials, with the machine still not able to properly polish a single piece, the salesman eventually gave up, saying that the machine just wasn’t suited for the particular kind of lab work we did. He was sent packing. Meanwhile, that other lab technician – the one who said we should buy the thing – pulled me aside and apologized for his early endorsement. “I was just trying to be a good company man,” he explained. “A good company man,” I thought to myself, “What did he mean by that? How was giving his rubber stamp approval to a machine that would have cost us thousands of dollars, and would have been completely worthless, doing a favor to the company?” I remembered this incident when I heard the president of an electric utility give a speech recently, talking about many of the dubious “innovations” that others have been trying to foist on his industry, and particularly when he quoted Stanford economist Thomas Sowell, who said, “Much of the social history of the Western world, over the past three decades, has been a history of replacing what worked with what sounded good.”

Three decades sounds about right, because it was approximately three decades ago that Coca Cola nearly made one of the most devastating product “innovations” in its entire history. Having been convinced by a third party – its competitor, Pepsi – that its formula was no longer a winning one, in spite of its continued dominance of the soft drink market, Coca Cola redesigned its signature brand, labeling it “New Coke”. Public reaction was swift and negative, and had Coca Cola not quickly realized its error and reintroduced its signature brand as “Coke Classic”, this misstep might have resulted in complete disaster for the company. Here, then, was a case of replacing something that worked with something that sounded good. Coca Cola’s product designers assured upper management that, based upon taste tests, the new formula would be preferred over Coke’s “classic” one as well as over the one used by its principal rival, Pepsi. What management failed to realize was that the long-running success of Coca Cola was due to a product brand that had been thoroughly embraced by a loyal customer base, which expected a corresponding loyalty from its provider. The switch to New Coke constituted a betrayal of the worst sort, and one which customers could only grudgingly forgive when the original brand was restored to them.

Now in the midst of all of this clamoring for change in the electricity industry, its leaders must be careful to not lose sight of what their own special “brand” is, and thereby risk losing it – and with it, everything that contributed to the levels of customer satisfaction that the industry may have enjoyed in the past. And I do think that it has a brand: its own version of a “secret sauce” or formula that worked for its customers. That brand, quite simply, was the ability of a customer to flip on the switch to any electrical appliance in their home, and know with almost perfect confidence that the appliance would operate. It was a combination of simplicity and reliability. There were no complex procedures involved in bringing electricity into the home – no market transactions, or negotiations, or elaborate sequences of necessary steps to make it happen. Electricity, quite frankly, has always been something that we have never had to think about. We don’t have to care about where it comes from, or how it gets into our home, or whether we’ll have enough of it from minute to minute, or even hour to hour. We flip a switch, and the light comes on. That’s all there is to it. End of story. It is the same magic that underlies all of our most precious services – the utilities: natural gas, water, the telephone, and electric service.

This is electricity’s brand, and if electricity providers depart from it, they do so at their own great risk. I had a firsthand experience of this when I worked for a natural gas utility several years ago. We introduced a customer choice program: not because our customers demanded it, but because we became convinced by third parties – as Coke did in the mid-1980s – that it was a change that would be for the better. Customers were given the option to choose a different natural gas supplier, while still receiving delivery service from us. I’ll never forget the experience that I had one afternoon, when I was invited to speak to a group of senior citizens about my company and some of the new services that it was offering. I waxed eloquent about our new customer choice program, saying that it was a bold and wonderful step into the future, and how it would improve the quality of the lives of all of our customers. After I gave my talk, and invited questions from the audience, the last woman who stood up with a question said this: “I don’t see what’s so great about your ‘choice’ program. Since you’ve introduced it, I’ve gotten a barrage of calls from gas telemarketers, confusing me with offers that I don’t understand. And all of this time, my gas bill has actually gone up rather than down. Your program has been nothing but a source of grief to me.” It was a real wake-up call: here was something that most customers didn’t want, and that at least some of them genuinely resented. It was a change that sounded good, but one that worked far less well than it sounded.

Now at this point I know that it sounds as if I am arguing against change – that change would not be a good thing when it comes to electricity. But nothing could be further from the truth. I believe that change is going to have to occur. Let me explain why: There was a historical event in North America, called the Great Blackout of 1965. It was a massive power outage that affected parts of Canada and the northeastern United States. It was not just the geographical scope, but the duration of the outage that made it so memorable. Over thirty million people were left without power for nearly thirteen hours. Thirteen hours! Now the duration of that outage doesn’t produce the same reaction of shock and horror from those reading or hearing about this event as it did, say, ten or twenty years ago. At least it doesn’t from me. In most of the past few years, if the worst outage that I had during the entire year was only thirteen hours in length, rather than a few days in length, I would have counted that as a good year. Sadly, we have seen a marked drop in electricity reliability, in an era when continuous electricity service is more important than it has ever been. Thirty or forty years ago, if we found ourselves without electricity, we might spend the time sitting on the front porch with a glass of lemonade, talking with our neighbors. But now, not just our business life, but our social and leisure life as well, is contingent upon being continuously connected electronically to a network. Why has electricity reliability declined? We have an aging infrastructure in this industry, just as we do in the rest of the country. The American Society of Civil Engineers gives the nation a grade of “D” for the quality of its infrastructure. It gives the electricity industry a “D+”. I guess that means the industry is “above average”, but that sounds like a dubious honor. But we’ve also seen an increasing frequency of disastrous weather events in this country, which have caused widespread outages. My vocabulary for these calamities has expanded in just the past few years, with words like “derecho” and “polar vortex”.

And this leads me to the second reason that change has to occur: the environment. Producing electricity is a dirty business. That has always been true, and electricity producers have already made great strides in cleaning up their power plants. But more needs to be done, and the growing consensus that greenhouse gas emissions are moving our climate along a trajectory to disaster only adds to the urgency of this task. One-fourth of all non-natural greenhouse gas emissions that have been produced since the beginning of the industrial revolution have come from the United States alone. And currently one-fourth of all non-natural greenhouse gas emissions produced in the United States come from electricity power plants. I know that there is a lingering debate about what the real impact of these emissions is, upon temperatures, and upon climate in general, but among the scientific community there really is an overwhelming consensus that climate change is real, and that it is dangerous. I know that I’m a believer, and so are many if not most of the CEOs of electric utilities in the U.S. As Jim Rogers, former President and CEO of Duke Energy, once said, climate change is a serious problem, electricity power producers are a significant part of the problem, and electricity providers have to be a part of the solution.

This, I think, is the essence of what is driving change in our industry, from the customers’ perspective. But all sorts of other drivers have entered the national conversation on this issue. Managers of investor-owned utilities are concerned about flat or declining sales, and about how they will be able to maintain earnings growth. National policymakers, think tanks, and other third parties have become intoxicated with the idea of a decentralized grid, with electricity supply and delivery being managed by just about everybody, using solar-paneled roofs, microturbines, windmills, electricity storage, and price-responsive devices. But ultimately, it is what the customer wants that will drive the really important and substantial changes to the electricity system. And what I believe the customer wants is a more reliable, and a cleaner, electricity service. That’s it. In spite of all of this talk of “smart” this or “smart” that, “cyber” this or “cyber” that, “prices to devices”, et cetera, et cetera, when you get to the real base of it, that’s what our customers – and our citizens – are looking for. And of course a customer is always sensitive to price. It really comes down to a very simple formula: achieve the desired levels of reliability and clean energy at as low a cost to the customer as possible. And every public policy initiative, regulatory action, and business decision made by electricity providers should be made in the context of this formula.
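To put that formula a bit more formally (the notation here is my own, purely for illustration), it amounts to a constrained cost minimization:

\[
\min_{x}\; C(x) \quad \text{subject to} \quad R(x) \ge R^{*}, \quad E(x) \le E^{*}
\]

where $x$ ranges over the available policy and investment choices, $C(x)$ is the total cost borne by customers, $R(x)$ and $E(x)$ are the resulting reliability and emissions, and $R^{*}$ and $E^{*}$ are the reliability target and the emissions cap. Any proposed initiative can then be judged by a single test: does it lower the cost to customers without violating either constraint?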

Now I know that there is this other conversation going on, about how there is a new breed of customers who are more “tech savvy”, and who want to play a greater role in managing their electricity service, as they do in other areas of their lives. And I don’t dispute that there probably are some people out there who would love to have a “smart app” with which they can turn on or shut off their water heater at any time of the day, in response to hourly electricity prices, or have a “smart toaster”, or a “smart thermostat”. I suppose that I myself could eventually warm up to the idea of being able to run the electric appliances in my home remotely, so that potential burglars, seeing certain lights being turned on throughout the day, would not realize that I am away, and it would be nice to be able to run my heating and air conditioning units remotely so that the house will be at an optimal temperature by the time that I get home from work. The rise of smart phones and smart phone “apps” is often pointed to as a prominent example of how people want to use modern information technology in ways that were unimaginable just a few years ago. And this is true. Up to this point, I have been stressing the fact that a significant part of the value proposition for electricity is that customers don’t have to think – or do – too much about it. They flip a switch and it’s there. There was a time when the same thing could have been said about telephone service. So what is it that motivates somebody to invest more time (and money) in a service that they are receiving, rather than less? Why are all these smart phone “apps” so popular, and is there a similar potential hiding somewhere in electricity service?

I have been a student of business transformation, and it was questions like these that eventually led me to a fundamental insight. The greatest product innovations in our economy have one thing in common: they have moved customers from a condition of “bad time” to one of “good time”.

What do I mean by this? I believe that the single most important feature of each of our lives is how we spend our time, and that there is a continuum of ways to spend it, stretching from the extremely unpleasant to the extremely pleasant. Let me give you a few examples. Bad time includes the performance of drudgery: scrubbing floors, mowing the lawn, or doing some mundane task over and over and over again. It includes unpleasant interactions with other human beings, like a rude clerk in a store, an annoying coworker, or an insensitive customer service representative. It includes long waits, whether in line or on one of those annoying calls to a customer service number, where a recorded message comes on every thirty seconds or so, saying “Your call is very important to us”: a lie which makes the long wait even more unbearable. Another example of bad time is when we have to drive all over town to try to find something that we want or need. Good time, on the other hand, corresponds to those experiences that we like to savor, and preserve in memory. The greatest of these, of course, involve happy times with the significant persons in our lives, such as family members, spouses, or close friends. Entertainments – music, television, and movies – are also important examples of good time. But good time also includes new experiences, such as novel encounters, interesting new information, or other discoveries that are of interest or practical value to us. Even a pleasant interaction with a customer service representative, or with the website of a company, might be counted as an example of good time.

All of the greatest product or service innovations have moved us from bad time to good time. Think of the vacuum cleaner, or the washing machine, or the dishwasher. Or think of the radio, and television, and the internet. Even bank ATMs count as a significant example of this. I remember some people actually saying, when ATMs first came out, “Oh, they’re so terrible – they’ve replaced the experience of interacting with real human beings with that of interacting with machines!” Well, I can tell you firsthand, I would much rather deal with an ATM than with a rude or indifferent bank teller, and I would definitely prefer using an ATM to waiting in a line at a bank. And as we move to more recent times, and the great successes which have emerged in the past couple of decades, the same phenomenon can be observed. It is certainly true that Borders Books had made the process of book-buying more pleasurable: they added chairs in their stores where people could read books at their leisure, a cafeteria where people could buy coffee and snacks, and they even allowed their customers to do other things, like play chess, in their stores. But Amazon.com did something even better. They made it unnecessary for book buyers to even leave their homes! Borders improved the quality of book-buyers’ time by giving them a more pleasant environment in which to shop, but Amazon improved it even more by making it unnecessary to make a shopping trip at all. And what about those “apps” on smart phones? What is the value in those? I think that the BlackBerry was the first machine that demonstrated to customers how they could improve the quality of their time just about anywhere. We have all been in meetings, or other events, where we feel that our time would be better served by doing something else. The BlackBerry made that possible, by allowing us to check our e-mails, or even go onto the internet to catch up on the news. It gave us a means by which we could move from bad time to good time – or rather, inject good time right into the midst of bad time. And contemporary smart phones have only expanded on that service, allowing us to communicate with our peers via text messaging, play games, or even listen to music.

Every major product innovation – and every major industry overhaul – came about as the result of somebody figuring out a way to move customers from bad time to good time, or from good time to better time. I remember a personal experience of being put into bad time. Blockbuster (remember them?) called me one evening to tell me that I had never returned a rented movie, and that I owed a large fine for holding it past its due date. I argued with the caller for twenty minutes, explaining that I had returned the movie weeks ago, and finally, after putting me on hold for several minutes while she checked her records, the Blockbuster employee came back on the phone and told me that I was right. And then she hung up. No apology – she just hung up. That experience, of course, made me furious. But when Blockbuster subjected somebody else to a similar experience, he did more than just get angry. I’m talking about a gentleman named Reed Hastings, who was charged $40 in overdue fines by Blockbuster for holding a movie too long. You might recognize the name: he was one of the cofounders of Netflix.

Have utilities ever put their customers into a “bad time” experience? I’ve already explained how one of my former company’s customers felt about our choice program. She definitely had a “bad time” experience. And, closer to home, I remember when my neighborhood was out of power for several days a couple of summers ago, after a severe storm. As we watched all of the surrounding neighborhoods get their lights back on, while we remained in the dark, our feelings of despair and frustration grew with each passing hour. Finally, when I was walking to the subway station one morning, I noticed that somebody had posted a sign facing a major street which bordered our neighborhood. It was addressed to our electric utility and said, “Please don’t forget about us. We’re your paying customers, too!” That desperate sign was evidence that a lot of people had been experiencing really, really bad time.

And so, as the movers and shapers of the electricity industry look to their future, they have to ask themselves what needs to be done to move their customers from bad time to good time, or from good time to better time. It’s really as simple as that. Whatever the eventual winning strategy for future success turns out to be, I am convinced that it will answer that basic call better than any other strategy that has been tried or proposed. Like Coca Cola, the electric industry should never forget what its longstanding secret recipe for success has been, and should continue to be: give customers access to all of the electricity that they will ever need, any time, and in a way that they don’t have to think too much about. And yes, there may be customers who want to take a more active role in managing their electricity supply. There may even be customers who want to produce their own electricity. And we all want an electric system that will not do irreparable damage to our environment, either locally or globally. The successful utility will be there, for all of us, finding ways to give us what we want, and in so doing, move us into a happier state. We have to be careful, though – all of us, including regulators and other policymakers – to avoid being lured into believing that something that sounds good should replace something that works. There are already many versions in the electricity industry of the “automatic polisher” that I described earlier – a device that sounded good, but that would ultimately have been more expensive, more time-consuming, and more unpleasant than the system already in place. The successful entrepreneur, the successful innovator, and the successful incumbent provider have always succeeded by focusing on what is truly important and valuable from the perspective of their customers, and then finding the optimal way to improve the quality of their customers’ time and their lives – and to keep them in that happy place.