Tuesday, August 16, 2016

What are Motivated Beliefs?

"Motivated beliefs" is a relatively recent development in economics that offers a position between traditional assumptions of rational and purposeful behavior and the conventional approaches of behavioral economics. It is introduced and explored in a symposium in the Summer 2016 Journal of Economic Perspectives. Nicholas Epley and Thomas Gilovich contribute an introductory essay in "The Mechanics of Motivated Reasoning." Roland Bénabou and Jean Tirole have written "Mindful Economics: The Production, Consumption, and Value of Beliefs." Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri look at one aspect of motivated beliefs in "The Preference for Belief Consonance." Francesca Gino, Michael I. Norton, and Roberto A. Weber focus on another aspect in "Motivated Bayesians: Feeling Moral While Acting Egoistically."

Of course, I encourage you to read the actual papers. I've worked as the Managing Editor of JEP for 30 years, so I always want everyone to read the papers! But here's an overview and a taste of the arguments.

In traditional working assumptions of microeconomics, people act in purposeful and directed ways to accomplish their goals. Contrary to the complaints I sometimes hear, this approach doesn't require that people have perfect and complete information or that they are perfectly rational decision-makers. It's fairly straightforward to incorporate imperfect information and bounded rationality into these models. But even so, this approach is built on the assumption that people act purposefully to achieve their goals and do not repeatedly make the same mistakes without altering their behavior.

Behavioral economics, as it has usually been practiced, is sometimes called the "heuristics and biases" approach. It points to certain patterns of behavior that have been well-demonstrated in the psychology literature: for example, people often act in a short-sighted or myopic way that puts little weight on long-term consequences; people have a hard time evaluating how to react to low-probability events; people are "loss averse" and treat a loss of a certain amount as a negative outcome that is bigger in absolute value than a gain of the same amount; people exhibit a "confirmation bias," interpreting new evidence so that it tends to support previously held beliefs; and others. In this view, people can make decisions and regret them, over and over. Short-sighted people may fail to save, or fail to exercise, and regret it. People who are loss-averse and have a hard time evaluating low-probability events may be sucked into buying a series of service plans and warranties that don't necessarily offer them a good value. When decision-making includes heuristics and biases, people can make the same mistakes repeatedly.

The theory of motivated beliefs falls in between these possibilities. In these arguments, people are not strictly rational or purposeful decision-makers, but nor does their decision-making involve built-in flaws. Instead, people have a number of goals, which include feeling moral, competent, and attractive, fitting in with their existing social group, and achieving higher social status. As Epley and Gilovich explain in their introductory essay,
"This idea is captured in the common saying, “People believe what they want to believe.” But people don’t simply believe what they want to believe. The psychological mechanisms that produce motivated beliefs are much more complicated than that. ... People generally reason their way to conclusions they favor, with their preferences influencing the way evidence is gathered, arguments are processed, and memories of past experience are recalled. Each of these processes can be affected in subtle ways by people’s motivations, leading to biased beliefs that feel objective ...
One of the complexities in understanding motivated reasoning is that people have many goals, ranging from the fundamental imperatives of survival and reproduction to the more proximate goals that help us survive and reproduce, such as achieving social status, maintaining cooperative social relationships, holding accurate beliefs and expectations, and having consistent beliefs that enable effective action. Sometimes reasoning directed at one goal undermines another. A person trying to persuade others about a particular point is likely to focus on reasons why his arguments are valid and decisive—an attentional focus that could make the person more compelling in the eyes of others but also undermine the accuracy of his assessments. A person who recognizes that a set of beliefs is strongly held by a group of peers is likely to seek out and welcome information supporting those beliefs, while maintaining a much higher level of skepticism about contradictory information (as Golman, Loewenstein, Moene, and Zarri discuss in this symposium). A company manager narrowly focused on the bottom line may find ways to rationalize or disregard the ethical implications of actions that advance short-term profitability (as Gino, Norton, and Weber discuss in this symposium). 
The crucial point is that the process of gathering and processing information can systematically depart from accepted rational standards because one goal— desire to persuade, agreement with a peer group, self-image, self-preservation—can commandeer attention and guide reasoning at the expense of accuracy. Economists are well aware of crowding-out effects in markets. For psychologists, motivated reasoning represents an example of crowding-out in attention. In any given instance, it can be a challenge to figure out which goals are guiding reasoning ... 
In one classic study, mentioned in the overview and several of the papers, participants were given a description of a trial and asked to evaluate whether they thought the accused was guilty or innocent. Some of the participants were assigned to play the role of prosecutors or defense attorneys before reading the information; others were not assigned a role until after evaluating the information. Those who were assigned to be prosecutors before reading the evidence were more likely to evaluate the evidence as showing the defendant was guilty, while those assigned to be defense attorneys before reading the evidence were more likely to evaluate the evidence as showing the defendant to be not guilty. The role you play will often influence your reading of evidence.

Bénabou and Tirole offer a conceptual framework for thinking about motivated beliefs, and then apply the framework in a number of contexts. They argue that motivated beliefs arise for two reasons, which they label "self-efficacy" and "affective." In the self-efficacy situation, people use their beliefs to give their immediate actions a boost. Can I do a good job in the big presentation at work? Can I save money? Can I persevere with a diet? In such situations, people are motivated to distort their interpretation of information and their own actions in a way that helps support their ability to persevere with a certain task. In the "affective" situation, people get immediate and visceral pleasure from seeing themselves as smart, attractive, or moral, and they can also get "anticipatory utility" from contemplating pleasant future outcomes.

However, if your motivated beliefs do not reflect reality, then in some cases reality will deliver some hard knocks in response. Bénabou and Tirole analyze certain situations in which these hard knocks, again through a process of motivated beliefs, make you cling to those beliefs harder than ever. Moreover, if you are somewhat self-aware and know that you are prone to motivated beliefs, then you may be less likely to trust your own interpretations of evidence, which complicates the analysis further. Bénabou and Tirole apply these arguments in a wide array of contexts: political beliefs (a subject of particular interest in 2016), social and organizational beliefs, financial bubbles, and personal identity. Here's one example of a study concerning political beliefs (most citations omitted).

The World Values Survey reveals considerable differences in beliefs about the role of effort versus luck in life. In the United States, 60 percent of people believe that effort is key; in Western Europe, only 30 percent do on average, with major variations across countries. Moreover, these nationally dominant beliefs bear no relationship to the actual facts about social mobility or how much the poor are actually working, and yet they are strongly correlated with the share of social spending in GDP. At the individual level, similarly, voters’ perceptions of the extent to which people control their own fate and ultimately get their just desserts are first-order determinants of attitudes toward inequality and redistribution, swamping the effects of own income and education. 
In Bénabou and Tirole (2006), we describe how such diverse politico-ideological equilibria can emerge due to a natural complementarity between (self-)motivation concerns and marginal tax rates. When the safety net and redistribution are minimal, agents have strong incentives to maintain for themselves, and pass on to their children, beliefs that effort is more important than luck, as these will lead to working hard and persevering in the face of adversity. With high taxes and generous transfers, such beliefs are much less adaptive, so fewer people will maintain them. Thus, there can coexist: i) an “American Dream” equilibrium, with just-world beliefs about social mobility, and little redistribution; and ii) a “Euro-pessimistic” equilibrium, with more cynical beliefs and a large welfare state. In the latter, the poor are less (unjustly) stigmatized as lazy, while total effort (annual hours worked) and income are lower, than in the former. More generally, across all steady-states there is a negative correlation between just-world beliefs and the size and the welfare state, just as observed across countries.
Golman, Loewenstein, Moene, and Zarri consider one aspect of motivated beliefs, the "preference for belief consonance," which is the desire to be in agreement with others in one's immediate social group. They endeared themselves to me by starting with a quotation from Adam Smith's first great work, The Theory of Moral Sentiments (Part VII, Section IV): "The great pleasure of conversation, and indeed of society, arises from a certain correspondence of sentiments and opinions, from a certain harmony of minds, which like so many musical instruments coincide and keep time with one another." They write:

Why are people who hold one set of beliefs so affronted by alternative sets of beliefs—and by the people who hold them? Why don’t people take a live-and-let-live attitude toward beliefs that are, after all, invisibly encoded in other people’s minds? In this paper, we present evidence that people care fundamentally about what other people believe, and we discuss explanations for why people are made so uncomfortable by the awareness that the beliefs of others differ from their own. This preference for belief consonance (or equivalently, distaste for belief dissonance) has far-ranging implications for economic behavior. It affects who people choose to interact with, what they choose to exchange information about, what media they expose themselves to, and where they choose to live and work. Moreover, when people are aware that their beliefs conflict with those of others, they often try to change other people’s beliefs (proselytizing). If unsuccessful in doing so, they sometimes modify their own beliefs to bring them into conformity with those around them. A preference for belief consonance even plays an important role in interpersonal and intergroup conflict, including the deadliest varieties: Much of the conflict in the world is over beliefs—especially of the religious variety—rather than property ... 
A substantial group of studies shows that if you ask people about their opinions on certain issues, and if you ask people about their opinions while telling them that certain other specific groups hold certain opinions, the patterns of answers can be quite different. Personally, I'm always disconcerted that for every opinion I hold, some of the others who hold that same opinion are people I don't like very much.

Gino, Norton, and Weber take on another dimension of motivated beliefs in their essay on "feeling moral while acting egoistically." They explain that when given some wiggle room to manage their actions or their information, people often choose to act in a way that allows them to feel moral while acting selfishly. Gino, Norton, and Weber write:
 In particular, while people are often willing to take a moral act that imposes personal material costs when confronted with a clear-cut choice between “right” and “wrong,” such decisions often seem to be dramatically influenced by the specific contexts in which they occur. In particular, when the context provides sufficient flexibility to allow plausible justification that one can both act egoistically while remaining moral, people seize on such opportunities to prioritize self-interest at the expense of morality. In other words, people who appear to exhibit a preference for being moral may in fact be placing a value on feeling moral, often accomplishing this goal by manipulating the manner in which they process information to justify taking egoistic actions while maintaining this feeling of morality.
They cite many studies of this phenomenon. Here's an overview of one: 
[P]articipants in a laboratory experiment distribute two tasks between themselves and another participant: a positive task (where correct responses to a task earn tickets to a raffle) and a negative task (not incentivized and described as “rather dull and boring”). Participants were informed: “Most participants feel that giving both people an equal chance— by, for example, flipping a coin—is the fairest way to assign themselves and the other participant to the tasks (we have provided a coin for you to flip if you wish). But the decision is entirely up to you.” Half of participants simply assigned the tasks without flipping the coin; among these participants, 90 percent assigned themselves to the positive task. However, the more interesting finding is that among the half of participants who chose to flip the coin, 90 percent “somehow” ended up with the positive task—despite the distribution of probabilities that one would expect from a two-sided coin. Moreover, participants who flipped the coin rated their actions as more moral than those who did not—even though they had ultimately acted just as egoistically as those who did not flip in assigning themselves the positive task. These results suggest that people can view their actions as moral by providing evidence to themselves that they are fair (through the deployment of a theoretically unbiased coin flip), even when they then ignore the outcome of that coin flip to benefit themselves.
The theory of motivated beliefs still views people as motivated by self-interest. However, the dimensions of self-interest expand beyond the standard concerns like consumption and leisure, and encompass how we feel about ourselves and the social groups we inhabit. In this way, the analysis opens up insights into behavior that is otherwise puzzling in the context of economic analysis, as well as building intellectual connections to other social sciences like psychology and sociology.