Managing Oneself

An outline of Peter Drucker’s “Managing Oneself”, a short but essential book for navigating your professional life.

Peter Drucker’s Managing Oneself is a short book (originally published as an article in 1999) that provides the best framework I’ve found for understanding what your professional strengths are and how best to use them.

Given the short length of the book, I decided to make a complete outline of its major points. It should be useful as both a reference for those who have already read the book and a quick summary for those who haven’t.

I’ve largely quoted Drucker verbatim within the outline, but have made some minor edits to several sections for the sake of clarity.

  1. What are my strengths?

    1. Most people think they know what they are good at. They are usually wrong. More often, people know what they are not good at—and even then more people are wrong than right. And yet, a person can perform only from strength.
    2. The only way to discover your strengths is through feedback analysis. Whenever you make a key decision or take a key action, write down what you expect will happen. Nine or twelve months later, compare the actual results with your expectations.
    3. Several implications for action follow from feedback analysis.
      1. First and foremost, concentrate on your strengths.
      2. Second, work on improving your strengths.
      3. Third, discover where your intellectual arrogance is causing disabling ignorance and overcome it.
    4. It is equally essential to remedy your bad habits—the things you do or fail to do that inhibit your effectiveness and performance.
    5. One should waste as little effort as possible on improving areas of low competence. It takes far more energy and work to improve from incompetence to mediocrity than it takes to improve from first-rate performance to excellence.
  2. How do I perform?

    1. Just as people achieve results by doing what they are good at, they also achieve results by working in ways that they best perform. A few common personality traits usually determine how a person performs.
      1. Am I a reader or a listener?
      2. How do I learn?
        1. Some people learn by writing.
        2. Some people learn by doing.
        3. Others learn by hearing themselves talk.
      3. In what relationship to other people do I work best?
        1. Some people work best as subordinates.
        2. Some people work best as team members.
        3. Others work best alone.
        4. Some are exceptionally talented as coaches and mentors.
      4. Do I produce results as a decision maker or as an adviser?
      5. Do I perform well under stress, or do I need a highly structured and predictable environment?
      6. Do I work best in a big organization or a small one?
    2. Do not try to change yourself—you are unlikely to succeed. But work hard to improve the way you perform. And try not to take on work you cannot perform or will only perform poorly.
  3. What are my values?

    1. Values are not just a question of ethics. Ethics requires that you ask yourself, What kind of person do I want to see in the mirror in the morning? What is ethical behavior in one kind of organization or situation is ethical behavior in another. But ethics is only part of a value system.
    2. Different values bespeak different views of the relationship between organizations and people; different views of the responsibility of an organization to its people and their development; and different views of a person’s most important contribution to an enterprise.
    3. For instance, whether a business should be run for short-term results or with a focus on the long term is a question of values.
    4. To work in an organization whose value system is unacceptable or incompatible with one’s own condemns a person both to frustration and to nonperformance.
    5. There is sometimes a conflict between a person’s values and his or her strengths. What one does well—even very well and successfully—may not fit with one’s value system. In that case, the work may not appear to be worth devoting one’s life to (or even a substantial portion thereof).
  4. Where do I belong?

    1. A small number of people know very early where they belong. But most people, especially highly gifted people, do not really know where they belong until they are well past their mid-twenties.
    2. By that time, however, they should know the answers to the three questions: What are my strengths? How do I perform? and, What are my values? And then they can and should decide where they belong. Or rather, they should be able to decide where they do not belong.
    3. Equally important, knowing the answer to these questions enables a person to say to an opportunity, an offer, or an assignment, “Yes, I will do that. But this is the way I should be doing it. This is the way it should be structured. This is the way the relationships should be. These are the kind of results you should expect from me, and in this time frame, because this is who I am.”
  5. What should I contribute?

    1. Throughout history, the great majority of people never had to ask the question, What should I contribute? They were told what to contribute, and their tasks were dictated either by the work itself—as it was for the peasant or artisan—or by a master or a mistress—as it was for domestic servants. And until very recently, it was taken for granted that most people were subordinates who did as they were told.
    2. There is no return to the old answer of doing what you are told or assigned to do. Knowledge workers in particular have to learn to ask a question that has not been asked before: What should my contribution be? To answer it, they must address three distinct elements.
      1. What does the situation require?
      2. Given my strengths, my way of performing, and my values, how can I make the greatest contribution to what needs to be done?
      3. Where and how can I achieve results that will make a difference within the next year and a half?
        1. The results should be hard to achieve; they should require “stretching.”
        2. The results should be meaningful.
        3. The results should be visible, and if at all possible, measurable.
  6. Managing yourself requires taking responsibility for relationships.

    1. Accept the fact that other people are as much individuals as you yourself are. They perversely insist on behaving like human beings. This means that they too have their strengths; they too have their ways of getting things done; they too have their values. To be effective, therefore, you have to know the strengths, the performance modes, and the values of your coworkers.
    2. The second part of relationship responsibility is taking responsibility for communication. Whenever I, or any other consultant, start to work with an organization, the first thing I hear about are all the personality conflicts. Most of these arise from the fact that people do not know what other people are doing and how they do their work, or what contribution the other people are concentrating on and what results they expect. And the reason they do not know is that they have not asked and therefore have not been told.
    3. Organizations are no longer built on force but on trust. The existence of trust between people does not necessarily mean that they like one another. It means that they understand one another. Taking responsibility for relationships is therefore an absolute necessity. It is a duty.
  7. The second half of your life

    1. When work for most people meant manual labor, there was no need to worry about the second half of your life. You simply kept on doing what you had always done.
    2. Today, however, most work is knowledge work, and knowledge workers are not “finished” after 40 years on the job; they are merely bored. That is why managing oneself increasingly leads one to begin a second career.
    3. There are three ways to develop a second career.
      1. The first is actually to start one. Often this takes nothing more than moving from one kind of organization to another: the divisional controller in a large corporation, for instance, becomes the controller of a medium-sized hospital. But there are also growing numbers of people who move into different lines of work altogether: the business executive or government official who enters the ministry at 45, for instance; or the midlevel manager who leaves corporate life after 20 years to attend law school and become a small-town attorney.
      2. The second way to prepare for the second half of your life is to develop a parallel career. Many people who are very successful in their first careers stay in the work they have been doing, either on a full-time or part-time or consulting basis. But in addition, they create a parallel job, usually in a nonprofit organization, that takes another ten hours of work a week.
      3. Finally, there are the social entrepreneurs. These are usually people who have been very successful in their first careers. They love their work, but it no longer challenges them. In many cases they keep on doing what they have been doing all along but spend less and less of their time on it. They also start another activity, usually a nonprofit.
  8. Conclusion

    1. Managing oneself demands that each knowledge worker think and behave like a chief executive officer. Further, the shift from manual workers who do as they are told to knowledge workers who have to manage themselves profoundly challenges social structure. Every existing society, even the most individualistic one, takes two things for granted, if only subconsciously: that organizations outlive workers, and that most people stay put. But today the opposite is true. Knowledge workers outlive organizations, and they are mobile. The need to manage oneself is therefore creating a revolution in human affairs.

Copenhagen versus the universe

The trouble with the Copenhagen interpretation of quantum physics.

Adam Becker’s What Is Real? is an impressive book. And infuriating. Becker traces in painstaking detail the development of the Copenhagen interpretation of quantum physics, the role that its main architect Niels Bohr (and others) had in suppressing competing theories, and the consequences not only within the field of physics, but the general intellectual atmosphere of the past hundred years.

Regarding the Copenhagen interpretation, Einstein wrote to a friend that it “reminds me a little of the system of delusions of an exceedingly intelligent paranoiac….” Philosopher Imre Lakatos went further:

…Bohr and his associates introduced a new and unprecedented lowering of critical standards for scientific theories. This led to a defeat of reason within modern physics and to an anarchist cult of incomprehensible chaos.

Despite this, the theory has served as the standard interpretation of quantum physics since the 1920s.

“There is no quantum world”

Bohr, in both speech and writing, was notoriously difficult to pin down. That is part of the reason why it is so difficult to state exactly what the Copenhagen interpretation claims. In some ways, it’s easier to explain what it doesn’t say:

Rather than telling us a story about the quantum world that atoms and subatomic particles inhabit, the Copenhagen interpretation states that quantum physics is merely a tool for calculating the probabilities of various outcomes of experiments. According to Bohr, there isn’t a story about the quantum world because “there is no quantum world. There is only an abstract quantum physical description.” That description doesn’t allow us to do more than predict probabilities for quantum events, because quantum objects don’t exist in the same way as the objects of the everyday world around us.

According to the Copenhagen interpretation, at the level of quantum reality particles have no definite position until someone or something “measures” them. They are both everywhere and nowhere until they suddenly snap into a definite position when someone comes looking for them. This aspect of the theory follows from the fact that it isn’t concerned about the underlying physical reality. It’s a theory for calculating the outcomes of experiments and isn’t bothered by what the particles are up to in between measurements.

As Tim Maudlin explains in his excellent review of Becker’s book, Einstein’s opposition to the theory was due in part to his commitment to a belief in “a real, objective, mind-independent physical world”, where the idea of things being nowhere in particular goes against common sense. He believed “the goal of physics is to describe that world. Mere prediction, no matter how precise, is not enough: explanation is the goal.”

But still, the math worked and physics moved forward. The outbreak of World War II, in which applied physics would play a leading role, followed by a post-war boom in applied-physics jobs, brought debates about the foundations of quantum physics to a premature halt. Physicists were now expected to “shut up and calculate”.

Which brings me to the part of Becker’s book that is so infuriating. There have been a number of alternative theories that have not been given a proper hearing. More than that, their authors and advocates were sometimes driven out of academia for refusing to disavow their ideas and accept the orthodoxy.

One of these “renegades” was Hugh Everett:

Also rejecting Copenhagen, Hugh Everett took Schrödinger’s evolving wavefunction and removed the collapse. He argued that rather than an incomprehensible smear resulting, as Schrödinger’s neither-alive-nor-dead cat suggested, a multiplication of worlds results. Schrödinger’s cat ends up both dead and alive, as two cats in two equally real physical worlds. Today this approach is called the many-worlds interpretation.

Everett’s thesis advisor, John Wheeler, had great enthusiasm for Everett’s innovation. But he insisted that Everett get the nod of approval from Bohr. Bohr refused, and Wheeler required Everett to bowdlerize his thesis. Everett left academia and did not look back. His work lay in obscurity.

Another was David Bohm:

In Bohm’s interpretation of quantum physics, much of the mystery of the quantum world simply falls away. Objects have definite positions at all times, whether or not anyone is looking at them. Particles have a wave nature, but there’s nothing “complementary” about it—particles are just particles, and their motions are guided by pilot waves. Particles surf along these waves, guided by the waves’ motion (hence the name).

Robert Oppenheimer suggested to a roomful of physicists that “if we cannot disprove Bohm, then we must agree to ignore him.”

The third was John Stewart Bell. Maudlin writes:

Spurred by Bohm’s papers, Bell queried whether Einstein’s dreaded spooky action at a distance could be avoided. Copenhagen and the pilot wave theory had both failed this test. Bell proved that the nonlocality is unavoidable. No local theory—the type Einstein had sought—could recover the predictions of quantum mechanics. The predictions of all possible local theories must satisfy the condition called Bell’s inequality. Quantum theory predicts that Bell’s inequality can be violated. All that was left was to ask nature herself. In a series of sophisticated experiments, the answer has been established: Bell’s inequality is violated. The world is not local. No future innovation in physics can make it local again. The spookiness that Einstein spent decades deriding is here to stay.

How did the physics community react to this epochal discovery? With a shrug of incomprehension. For decades, discussion of the foundations of quantum theory had been suppressed. Physicists were unaware of the problems and unaware of the solutions.
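The nonlocality result Maudlin describes is usually stated in the CHSH form of Bell’s inequality. As a compact sketch (the specific CHSH formulation is my addition; Becker and Maudlin don’t spell out the formula here):

```latex
% Correlations E(a,b) between measurement settings a, a' and b, b'
% on the two sides of an entangled pair are combined into
S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
% Every local theory must satisfy Bell's (CHSH) inequality:
\lvert S \rvert \le 2
% Quantum mechanics predicts, and experiment confirms, violations
% up to Tsirelson's bound:
\lvert S \rvert \le 2\sqrt{2} \approx 2.83
```

The experiments Maudlin mentions measure S directly and find values above 2, which is what rules out every local theory at once.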

The worst of the lot

The “anarchist cult of incomprehensible chaos” has proved to be an alluring force within both intellectual and pop culture. The paradoxical, counterintuitive ideas of the Copenhagen interpretation have served the purposes of countless New Age thinkers and sloppy philosophers.

The TV show Futurama skewered this fairly accurately, showing a physics professor in the year 3008 claiming that “as Deepak Chopra taught us, quantum physics means anything can happen at any time for no reason.” Chopra does in fact claim that consciousness arises from quantum entanglement and that “quantum healing” allows the mind to heal the body through sheer willpower.

Becker points to the role of logical positivism in propping up the Copenhagen interpretation. The positivist believes that two different theories which make the same predictions are for all purposes equivalent. The details of the underlying reality can be discarded. As Maudlin points out, “[l]ogical positivism has been killed many times over by philosophers. But no matter how many stakes are driven through its heart, it arises unbidden in the minds of scientists.”

In 1951, Einstein expressed his belief that the Copenhagen interpretation would continue to hold sway “for many more years, mainly because physicists have no understanding of logical and philosophical arguments.”

In terms of the current state of quantum physics and the on-going work to find better explanations, Becker summarizes the situation like this:

So what is real? Pilot waves? Many worlds? Spontaneous collapse? Which interpretation of quantum physics is the right one? I don’t know. Every interpretation has its critics (though the proponents of basically every non-Copenhagen interpretation are usually agreed that Copenhagen is the worst of the lot).

Reductionism can explain neither carrots nor consciousness

Trying to understand a whole as simply a sum of its parts hasn’t worked out well in nutrition or many other fields of science.

I have recently taken up the habit of going to the gym. More new habits have followed, like counting the calories I eat and being hyper-aware of how much protein is in everything.

But since I’m generally skeptical of things, I started to wonder whether all these changes to my diet are actually for the best. After skimming a handful of books on nutrition to get some clarity on the question (noting how many contradictions I was finding even on first glance), I settled on reading Michael Pollan’s 2008 book, In Defense of Food.

Pollan shares my skepticism about whether the various foods that cater to “healthy, active” people are truly a leap forward in nutritional science or just more fads. He approaches the subject by discussing how nutrition science, since the beginning, has been obsessed with viewing food as primarily a collection of nutrients, vitamins, and other too-small-to-be-seen substances.

Food = nutrients?

In 1827, William Prout proposed three “staminal principles” that make up all food. We know these today as protein, fat and carbohydrates (a.k.a. macronutrients). Others soon built on his work, feeling confident the secrets of nutrition had been unlocked. Evidence of how much was left to know quickly appeared. According to Pollan, babies unlucky enough to be fed an early baby formula “failed to thrive” because it lacked numerous nutrients found in breast milk.

As the field progressed, each additional discovery (vitamins! amino acids!) would give nutrition scientists hope they had finally completed the picture and were able to understand what really makes food “healthy”.

So if you’re a nutrition scientist you do the only thing you can do, given the tools at your disposal: Break the thing down into its component parts and study those one by one, even if that means ignoring subtle interactions and contexts and the fact that the whole may well be more than, or maybe just different from, the sum of its parts. This is what we mean by reductionist science.

And so the story goes to the present day, with the creations of food scientists still failing to equal or surpass the health benefits of “real” food. Pollan puts it simply: “We know how to break down a kernel of corn or grain of wheat into its chemical parts, but we have no idea how to put it back together again.”

From a common sense point of view, the approach taken by these scientists can’t be faulted. How do we learn about things? We take them apart, see what’s inside and try to put them back together again. (And maybe make some improvements along the way.)

But reading Pollan’s critique of reductionist science made me curious to look deeper at this approach within other disciplines. It seems that the failure of reductionism to understand nutrition is just one failure among many.

Nothing but a pack of neurons

Physicist David Deutsch has defined reductionism as “the misconception that science must or should always explain things by analysing them into components (and hence that higher-level explanations cannot be fundamental).”

This desire to break down complex phenomena into the smallest possible components and claim the complex phenomena to be nothing but these components can shoulder much of the blame for what philosopher Galen Strawson has called “the silliest claim that has ever been made”. What’s the claim? That consciousness doesn’t exist. Francis Crick gives a rough guide to the sort of reasoning behind this belief (though he doesn’t personally deny the existence of consciousness):

The Astonishing Hypothesis is that “You”, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll’s Alice might have phrased it: “You’re nothing but a pack of neurons”.

We know that “a pack of neurons” can’t linger on embarrassing moments from childhood or feel bored during a meeting, and since we’re nothing but that, then consciousness must not exist.

Yeah, that sounds pretty silly to me, too. Later in the same essay Strawson touches on the limitations of science to explain phenomena like consciousness:

Physics may tell us a great deal about the structure of physical reality insofar as it can be logico-mathematically represented, but it doesn’t and can’t tell us anything about the intrinsic nature of reality insofar as its intrinsic nature is more than its structure…

Philosopher Bryan Magee has also written about this same limitation of science:

Physics, for example, reduces the phenomena with which it deals to constant equations concerning energy, light, mass, velocity, temperature, gravity, and the rest. But that is where it leaves us. If we then raise fundamental questions about that ground-floor level of explanation itself, the scientist is at a loss to answer. This is not because of any inadequacy on his part, or on science’s. He and it have done what they can. If one says to the physicist: “Now please tell me what exactly is energy? And what are the foundations of this mathematics you’re using all the time?” it is no discredit to him that he cannot answer. These questions are not his province.

Perhaps it’s just these limitations of scientific explanation that drive the reductionist to claim the non-existence of things that cannot be caught within science’s grasp.

The Universal Flux

With all that said, the spirit behind reductionism has paid handsome dividends. As Magee wrote right before the passage quoted above, “What [science] does—and this is one of the supreme cultural as well as intellectual achievements of mankind—is reduce everything it can deal with to a certain ground-floor level of explanation.”

But as we attempt to peer ever deeper into the heart of things, it’s unlikely that the ultimate nature of reality can be found by just looking for smaller and smaller components.

David Bohm was a physicist who explored the strange territory of ultimate foundations. In his 1980 book, Wholeness and the Implicate Order, he set forth the concept of implicate and explicate order, a mind-bending account of ultimate reality. Rejecting reductionism, he had this to say:

So one will not be led to suppose that all properties of collections of objects, events, etc., will have to be explainable in terms of some knowable set of ultimate substances. At any stage, further properties of such collections may arise, whose ultimate ground is to be regarded as the unknown totality of the universal flux.

At this stage of our understanding as a species, we’re still ignorant of the full complexities of things, whether it be consciousness or a carrot. Looking carefully at the parts is one way of approaching difficult problems, but better explanations will likely be found by considering them as whole, complex systems.