Roland Bénabou has published a new paper on his site.
I often refer to him in discussions of groupthink theory, since he proposes a model of Mutual Assured Delusion.
His new paper is "The Economics of Motivated Beliefs":
http://www.princeton.edu/~rbenabou/papers/REP_4_BW_nolinks_corrected 1.pdf
Quote:
I present key ideas and results from recent work incorporating “motivated” belief distortions into Economics, both at the individual level (overconfidence, wishful thinking, willful blindness) and at the social one (groupthink, team morale, market exuberance and crises). To do so I develop a flexible model that unifies much of this line of research, then relate its main assumptions and testable predictions to the relevant experimental and observational evidence.
...
The model presented below will incorporate both motives for departures from objective cognition: affective (feeling better) and instrumental (performing better). Depending on the context and tasks at hand, either one may be most relevant, and certain beliefs can also serve both functions. An important example of the latter is religion, which (to some) simultaneously provides self-discipline and reassurance, or consolation. Turning now to the supply side, how are desired beliefs achieved and maintained, sometimes against strong evidence? The paths to self-deception are countless, but three main categories can be distinguished: willful blindness, reality denial, and self-signaling.
The first one consists in avoiding information sources that may hold bad news. For Huntington’s disease or HIV, for instance, this means not getting the test even though it is cheap or free, accurate, and can be done anonymously. Critical decisions need to be made, yet the person’s words and deeds reveal a negative ex-ante value for information. In the second scenario the news is already accumulating, though not yet completely final: symptoms are worsening, the objective probability of disease is rising to 70%, 80%, etc., yet the patient finds ways of not internalizing the data, rationalizing it away and convincing himself that his risk is still only (say) 15%, and behaving accordingly in most respects.
The third strategy is one where it is the agent himself who manufactures “diagnostic” signals of the desired type, which he then interprets as impartial (Quattrone and Tversky 1984, Bodner and Prelec 2003, Bénabou and Tirole 2004, 2011). Keeping with the health example, this corresponds to a person who “pushes” himself to overcome his symptoms, carrying out difficult or even dangerous activities not only for their own sake but also as “proof” that things are fine.
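To fix ideas on the two motives mentioned above, here is a rough reader's sketch of how anticipatory-utility models of this family are often written; the form and the symbols (s, p-hat, c_t) are my shorthand, not necessarily the paper's notation:

```latex
% Stylized anticipatory-utility setup (my shorthand, not the paper's notation):
% \hat{p} is the (possibly distorted) subjective belief, and s > 0 is the
% weight placed on anticipatory feelings about the future payoff u(c_{t+1}).
U_t \;=\; u(c_t) \;+\; s\,\mathbb{E}_{\hat{p}}\!\left[\,u(c_{t+1})\,\right]
```

Under a setup like this, a rosier subjective belief raises the anticipatory term directly (the affective motive) while also changing the effort and investment choices made under that belief (the instrumental motive), which is how a single belief can serve both functions.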
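To see what the second strategy, reality denial, looks like quantitatively, here is a minimal illustrative simulation, not the paper's actual model: a Bayesian observer and a motivated agent see the same worsening symptoms, but the motivated agent only partially internalizes unwelcome updates. The likelihoods and the discount parameter are invented for the example.

```python
# Minimal sketch of "reality denial" as asymmetric belief updating.
# All numbers here (likelihoods, discount) are illustrative assumptions.

def bayes_update(prior, lik_sick, lik_healthy):
    """Posterior probability of being sick after one symptom signal."""
    numer = prior * lik_sick
    return numer / (numer + (1 - prior) * lik_healthy)

def motivated_update(prior, lik_sick, lik_healthy, discount=0.2):
    """Unwelcome news (posterior above prior) is only partially
    internalized; welcome news is absorbed in full."""
    posterior = bayes_update(prior, lik_sick, lik_healthy)
    if posterior > prior:                 # bad news for the agent
        return prior + discount * (posterior - prior)
    return posterior                      # good news

# Six worsening symptoms, each three times as likely if sick (0.6 vs 0.2).
p_objective = p_subjective = 0.10
for _ in range(6):
    p_objective = bayes_update(p_objective, 0.6, 0.2)
    p_subjective = motivated_update(p_subjective, 0.6, 0.2)

print(round(p_objective, 2), round(p_subjective, 2))  # ~0.99 vs ~0.30
```

After a few signals the objective risk is near certainty while the motivated belief lags far behind, echoing the 70%-80% versus 15% gap described above.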
...
Motives vs. heuristics.
It is worth pointing out three fundamental differences between such motivated beliefs or cognitive tendencies and the more purely mechanical mistakes in inference associated with the “heuristics and biases” view (e.g., Tversky and Kahneman (1974)) and typically found in most models of bounded rationality:
- The latter types of “errors” are automatic and undirected (an “intuitive” System 1 is often invoked), the former valenced (pleasant or aversive) and goal-oriented, though in general not consciously so. A clear example of the difference is that of confirmation bias versus self-enhancement, for someone who is already not very confident in their skill, attractiveness, health or other key characteristic. In the first case the person tends to interpret any ambiguous signals received as confirming and hardening their negative self-view. In the second they see the same evidence positively, as showing that things are actually pretty good, or not so bad. In practice, the great majority of people show the latter type of response, and only depressive ones the former.
- A second major difference is that people who are more analytically sophisticated, educated or numerate can actually be more prone to making distorted inferences (rationalizing away evidence and compartmentalizing knowledge to protect valued beliefs) than those with lower cognitive abilities. Moreover, such reversals of the standard bounded-rationality logic occur only when the issue at hand is value-laden (e.g., gun control, climate change; see Kahan 2013 and Kahan et al. 2014), and not when it is neutral.
- Unlike computational and statistical mistakes, motivated cognition is emotionally charged. This feature is revealed almost instantly by a “fighting response” (agitation, anger, outrage, hostility) whenever a cherished belief pertaining to a person’s identity, morality, religion, politics, etc., is directly challenged by evidence. This view of belief formation is also consistent with the renewal of interest in emotions and their influence on decision-making currently under way in psychology and neuroscience (e.g. Sharot et al. (2012)).
CONCLUSION: BAD INCENTIVES OR/AND BAD BELIEFS?
In firms and organizations, the standard moral-hazard explanation for misbehavior is also often insufficient. A large literature in organizational psychology emphasizes the key roles of moral self-deception and overoptimistic hubris in many cases of corporate misconduct and financial fraud. Most individuals engaging in dishonest behavior find ways to convince themselves that they are not doing anything wrong and are still good people. Transgressions most often start small and even unplanned, then gradually escalate through a series of self-serving rationalizations increasingly at odds with objective judgment and reality. Group dynamics, both of the “common fate” type analyzed in the groupthink model and those linked to social norms (judging oneself relative to peers, excluding dissenters), also powerfully amplify these tendencies.
The above distinction is important and deserves more attention than it has so far received. In practice, of course, my take is that most cases involve bad incentives and bad beliefs, acting together as complements. This is still a topic for ongoing and future work: a coming together of agency theory and behavioral economics which, I like to think, Jean-Jacques Laffont would have looked upon favorably.