Monday, August 26, 2019

Robert Nozick: Utility Monsters and Experience Machines

One of the classic problems of economics involves how to make comparisons between the welfare of different people. As a common example, imagine taxing a high-income person and redistributing the money to a low-income person. In the utilitarian framework beloved of economists, a high-income person would receive less "utility" or happiness from that additional income than a low-income person would gain from receiving a transfer. Thus, it is argued, redistribution from high-income to low-income will increase the overall happiness or utility of society.
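To make the diminishing-marginal-utility argument concrete, here is a minimal numerical sketch, assuming (purely for illustration) logarithmic utility; the incomes and transfer size are made up:

```python
import math

# Hypothetical illustration: with concave (here, log) utility, an extra
# dollar is worth less to a high-income person than to a low-income one,
# so a transfer from rich to poor raises the utilitarian sum.
def utility(income):
    return math.log(income)

rich, poor, transfer = 100_000, 20_000, 10_000

before = utility(rich) + utility(poor)
after = utility(rich - transfer) + utility(poor + transfer)

print(f"total utility before transfer: {before:.4f}")
print(f"total utility after transfer:  {after:.4f}")
assert after > before  # the poor person's gain outweighs the rich person's loss
```

The particular numbers do not matter: with any strictly concave utility function, a transfer from the higher-income to the lower-income person raises the sum, which is the arithmetic behind the redistribution claim.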

At this point, economists often plunge into questions of incentives: how taxes on the rich or transfers to the poor might affect incentives to work, acquire skills, innovate, and so on. But some philosophers take a different tack, focusing instead on the assumption that utility can be so quickly linked to income, or even that utility itself is the appropriate goal for human well-being. Economists mostly don't root around in these questions very deeply. The philosopher Robert Nozick, however, was not a utilitarian. Here are some thoughts from his 1974 classic of philosophy, Anarchy, State, and Utopia, that run through some of these issues. I'll intersperse some thoughts of my own.

What if people vary substantially in how much happiness they get from income, or from consumption? Maybe some people are "utility monsters," meaning that they get so much happiness from consumption that we should all be transferring our income to them, because the sum-total of utilitarian social happiness rises when they consume more. Nozick writes:
Utilitarian theory is embarrassed by the possibility of utility monsters who get enormously greater gains in utility from any sacrifice of others than these others lose. For, unacceptably, the theory seems to require that we all be sacrificed in the monster’s maw, in order to increase total utility. ...
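The monster can be illustrated with the same kind of arithmetic. In this sketch the monster's utility function is deliberately extreme (linear with a huge slope) against an ordinary person's concave utility; every number here is invented for illustration:

```python
import math

# Hypothetical sketch of a "utility monster": the monster gets 100 units
# of utility per unit of income, while an ordinary person gets log utility.
def monster_utility(income):
    return 100 * income

def ordinary_utility(income):
    return math.log(income) if income > 0 else float("-inf")

# Shifting income from the ordinary person to the monster keeps raising the sum:
for kept in (20_000, 10_000, 1_000, 1):
    total = ordinary_utility(kept) + monster_utility(20_000 - kept)
    print(f"ordinary person keeps {kept:>6}: total utility = {total:,.1f}")
```

Each total is larger than the one before, so a sum-maximizing planner keeps stripping income from the ordinary person — Nozick's "sacrificed in the monster's maw."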
If utilitarianism is based on subjective feelings, then perhaps the best possible social investment would be in brain implants or drugs that give people an extremely high degree of perceived subjective happiness. Nozick questions whether subjective happiness is all that matters by asking whether people should thus be encouraged to hook up to an "experience machine" that would let them experience whatever they wanted. Nozick writes:
Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life’s experiences? If you are worried about missing out on desirable experiences, we can suppose that business enterprises have researched thoroughly the lives of many others. You can pick and choose from their large library or smorgasbord of such experiences, selecting your life’s experiences for, say, the next two years. After two years have passed, you will have ten minutes or ten hours out of the tank, to select the experiences of your next two years. Of course, while in the tank you won’t know that you’re there; you’ll think it’s all actually happening. Others can also plug in to have the experiences they want, so there’s no need to stay unplugged to serve them. (Ignore problems such as who will service the machines if everyone plugs in.) Would you plug in? What else can matter to us, other than how our lives feel from the inside? Nor should you refrain because of the few moments of distress between the moment you’ve decided and the moment you’re plugged. What’s a few moments of distress compared to a lifetime of bliss (if that’s what you choose), and why feel any distress at all if your decision is the best one?
What does matter to us in addition to our experiences? First, we want to do certain things, and not just have the experience of doing them. In the case of certain experiences, it is only because first we want to do the actions that we want the experiences of doing them or thinking we’ve done them. (But why do we want to do the activities rather than merely to experience them?)
A second reason for not plugging in is that we want to be a certain way, to be a certain sort of person. Someone floating in a tank is an indeterminate blob. There is no answer to the question of what a person is like who has long been in the tank. Is he courageous, kind, intelligent, witty, loving? It’s not merely that it’s difficult to tell; there’s no way he is. Plugging into the machine is a kind of suicide. It will seem to some, trapped by a picture, that nothing about what we are like can matter except as it gets reflected in our experiences. But should it be surprising that what we are is important to us? Why should we be concerned only with how our time is filled, but not with what we are?
Thirdly, plugging into an experience machine limits us to a man-made reality, to a world no deeper or more important than that which people can construct. There is no actual contact with any deeper reality, though the experience of it can be simulated. Many persons desire to leave themselves open to such contact and to a plumbing of deeper significance. This clarifies the intensity of the conflict over psychoactive drugs, which some view as mere local experience machines, and others view as avenues to a deeper reality; what some view as equivalent to surrender to the experience machine, others view as following one of the reasons not to surrender!
We learn that something matters to us in addition to experience by imagining an experience machine and then realizing that we would not use it.
I am not as confident as Nozick seems to be that people would avoid the "experience machine." He goes on to consider other machines, like a "transformation machine which transforms us into whatever sort of person we'd like to be (compatible with our staying us)" or a "result machine, which produces in the world any result you would produce and injects your vector input into any joint activity ..." Nozick writes:
We shall not pursue here the fascinating details of these or other machines. What is most disturbing about them is their living of our lives for us. Is it misguided to search for particular additional functions beyond the competence of machines to do for us? Perhaps what we desire is to live (an active verb) ourselves, in contact with reality. (And this, machines cannot do for us.) Without elaborating on the implications of this, which I believe connect surprisingly with issues about free will and causal accounts of knowledge, we need merely note the intricacy of the question of what matters for people other than their experiences.
Yet another challenge to utilitarianism: if the goal of society is the greatest sum of happiness of its members, how does that society address issues related to the total number of people in the society? For example, is a society with a much larger number of people who are at an average level of happiness a "better" society than one with a smaller number of people who are extremely happy? Can one place a social value on policies that result in a smaller population, based on the loss of utility from those who are never actually born? Worse still, what about someone who gets extreme happiness from killing someone who is unhappy, and in this way increases the sum-total of social happiness? Nozick writes:
Utilitarianism is notoriously inept with decisions where the number of persons is at issue. (In this area, it must be conceded, eptness is hard to come by.) Maximizing the total happiness requires continuing to add persons so long as their net utility is positive and is sufficient to counterbalance the loss in utility their presence in the world causes others. Maximizing the average utility allows a person to kill everyone else if that would make him ecstatic, and so happier than average. (Don’t say he shouldn’t because after his death the average would drop lower than if he didn’t kill all the others.) Is it all right to kill someone provided you immediately substitute another (by having a child or, in science-fiction fashion, by creating a full-grown person) who will be as happy as the rest of the life of the person you killed? After all, there would be no net diminution in total utility, or even any change in its profile of distribution. Do we forbid murder only to prevent feelings of worry on the part of potential victims? (And how does a utilitarian explain what it is they’re worried about, and would he really base a policy on what he must hold to be an irrational fear?) Clearly, a utilitarian needs to supplement his view to handle such issues; perhaps he will find that the supplementary theory becomes the main one ... 
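The gap between maximizing total and maximizing average happiness that Nozick points to shows up in trivial arithmetic; the population sizes and happiness levels below are hypothetical:

```python
# Hypothetical arithmetic: a large population of moderately happy people
# versus a small population of very happy people.
large_pop = [5] * 1_000   # 1,000 people, happiness 5 each
small_pop = [9] * 100     # 100 people, happiness 9 each

total_large, total_small = sum(large_pop), sum(small_pop)
avg_large = total_large / len(large_pop)
avg_small = total_small / len(small_pop)

print(f"total:   large={total_large}, small={total_small}")  # total favors the large population
print(f"average: large={avg_large}, small={avg_small}")      # average favors the small one

assert total_large > total_small and avg_small > avg_large
```

A total utilitarian must prefer the first society and an average utilitarian the second, which is why the choice of criterion does so much work whenever population size itself is at issue.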
Nozick died back in 2002. I met him once, briefly, two decades before that, when I was teaching a summer course in political philosophy to high school students at the Phillips Andover summer session. My memory of the encounter is that I stuttered through an attempt to tell him how much I enjoyed Anarchy, State, and Utopia, and he was very kind.