List of biases in judgment
and decision making
Contents

Articles

List of biases in judgment and decision making
Ambiguity effect
Anchoring
Attentional bias
Availability heuristic
Availability cascade
Confirmation bias
Bandwagon effect
Base rate fallacy
Belief bias
Bias blind spot
Choice-supportive bias
Clustering illusion
Congruence bias
Conjunction fallacy
Conservatism (belief revision)
Contrast effect
Curse of knowledge
Decoy effect
Denomination effect
Distinction bias
Duration neglect
Empathy gap
Endowment effect
Essentialism
Experimenter's bias
False-consensus effect
Functional fixedness
Forer effect
Framing effect (psychology)
Gambler's fallacy
Hindsight bias
Hostile media effect
Hyperbolic discounting
Illusion of control
Illusion of validity
Illusory correlation
Information bias (psychology)
Insensitivity to sample size
Just-world hypothesis
Less-is-better effect
Loss aversion
Ludic fallacy
Mere-exposure effect
Money illusion
Moral credential
Negativity bias
Neglect of probability
Normalcy bias
Observer-expectancy effect
Omission bias
Optimism bias
Ostrich effect
Outcome bias
Overconfidence effect
Pareidolia
Pessimism bias
Planning fallacy
Post-purchase rationalization
Pro-innovation bias
Pseudocertainty effect
Reactance (psychology)
Reactive devaluation
Serial position effect
Recency illusion
Restraint bias
Rhyme-as-reason effect
Risk compensation
Selective perception
Semmelweis reflex
Selection bias
Social comparison bias
Social desirability bias
Status quo bias
Stereotype
Subadditivity effect
Subjective validation
Survivorship bias
Texas sharpshooter fallacy
Time-saving bias
Well travelled road effect
Zero-risk bias
Actor–observer asymmetry
Defensive attribution hypothesis
Dunning–Kruger effect
Egocentric bias
Extrinsic incentives bias
Halo effect
Illusion of asymmetric insight
Illusion of external agency
Illusion of transparency
Illusory superiority
In-group favoritism
Naïve cynicism
Worse-than-average effect
Google effect

References

Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses

License
List of biases in judgment and decision making
Many biases in judgment and decision making have been demonstrated by research in psychology and behavioral economics. These are systematic deviations from a standard of rationality or good judgment. Although the reality of these biases is confirmed by replicable research, there are often controversies about how to classify these biases or how to explain them.[1] Some are effects of information-processing rules, called heuristics, that the brain uses to produce decisions or judgments. These are called cognitive biases.[2][3] Biases in judgment or decision-making can also result from motivation, such as when beliefs are distorted by wishful thinking. Some biases have a variety of cognitive ("cold") or motivational ("hot") explanations. Both effects can be present at the same time.[4][5]

There are also controversies about whether some of these biases count as truly irrational or whether they result in useful attitudes or behavior. An example is that, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. This kind of confirmation bias has been argued to be an example of social skill: a way to establish a connection with the other person.[6]

The research on these biases overwhelmingly involves humans. However, some of the findings have appeared in animals as well. For example, hyperbolic discounting has also been observed in rats, pigeons and monkeys.[7]
Decision-making, belief and behavioral biases
Many of these biases affect belief formation, business and economic decisions, and human behavior in general. They arise as replicable results under specific conditions: when confronted with a particular situation, people deviate from what is normatively expected in ways that can be characterized as follows:
Ambiguity effect – the tendency to avoid options for which missing information makes the probability seem "unknown."[8]
Anchoring or focalism – the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions.
Attentional bias – the tendency to pay attention to emotionally dominant stimuli in one's environment and to neglect relevant data when making judgments of a correlation or association.
Availability heuristic – the tendency to overestimate the likelihood of events with greater "availability" in memory, which can be influenced by how recent the memories are, or how unusual or emotionally charged they may be.
Availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
Backfire effect – when people react to disconfirming evidence by strengthening their beliefs.[9]
Bandwagon effect – the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.
Base rate fallacy or base rate neglect – the tendency to base judgments on specifics, ignoring general statistical information.[10]
Belief bias – an effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion.[11]
Bias blind spot – the tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself.[12]
Choice-supportive bias – the tendency to remember one's choices as better than they actually were.[13]
Clustering illusion – the tendency to over-expect small runs, streaks or clusters in large samples of random data.
Confirmation bias – the tendency to search for or interpret information or memories in a way that confirms one's preconceptions.[14]
Congruence bias – the tendency to test hypotheses exclusively through direct testing, instead of testing possible alternative hypotheses.
Conjunction fallacy – the tendency to assume that specific conditions are more probable than general ones.[15]
Conservatism or regressive bias – the tendency to underestimate high values and high likelihoods/probabilities/frequencies and overestimate low ones; based on the observed evidence, estimates are not extreme enough.[16][17][18]
Conservatism (Bayesian) – the tendency to revise belief insufficiently when presented with new evidence (estimates of conditional probabilities are conservative).[16][19][20]
Contrast effect – the enhancement or diminishing of a weight or other measurement when compared with a recently observed contrasting object.[21]
Curse of knowledge – when knowledge of a topic diminishes one's ability to think about it from a less-informed perspective.
Decoy effect – preferences change when there is a third option that is asymmetrically dominated.
Denomination effect – the tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills).[22]
Distinction bias – the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.[23]
Duration neglect – the neglect of the duration of an episode in determining its value.
Empathy gap – the tendency to underestimate the influence or strength of feelings, in either oneself or others.
Endowment effect – the fact that people often demand much more to give up an object than they would be willing to pay to acquire it.[24]
Essentialism – categorizing people and things according to their essential nature, in spite of variations.[25]
Exaggerated expectation – based on the estimates, real-world evidence turns out to be less extreme than our expectations (conditionally inverse of the conservatism bias).[16][26]
Experimenter's or expectation bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.[27]
False-consensus effect – the tendency of a person to overestimate how much other people agree with him or her.
Functional fixedness – limits a person to using an object only in the way it is traditionally used.
Focusing effect – the tendency to place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.[28]
Forer effect or Barnum effect – the observation that individuals will give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people. This effect can provide a partial explanation for the widespread acceptance of some beliefs and practices, such as astrology, fortune telling, graphology, and some types of personality tests.
Framing effect – drawing different conclusions from the same information, depending on how or by whom that information is presented.
Frequency illusion – the illusion in which a word, a name or other thing that has recently come to one's attention suddenly appears "everywhere" with improbable frequency (see also recency illusion).[29]
Gambler's fallacy – the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. Results from an erroneous conceptualization of the law of large numbers. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads." (A simulation sketch follows this list.)
Hard-easy effect – based on a specific level of task difficulty, the confidence in judgments is too conservative and not extreme enough.[16][30][31][32]
Hindsight bias – sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable[33] at the time those events happened. Colloquially referred to as "hindsight is 20/20."
Hostile media effect – the tendency to see a media report as being biased, owing to one's own strong partisan views.
Hyperbolic discounting – the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are.[34]
Illusion of control – the tendency to overestimate one's degree of influence over other external events.[35]
Illusion of validity – when consistent but predictively weak data leads to confident predictions.
Illusory correlation – inaccurately perceiving a relationship between two unrelated events.[36][37]
Impact bias – the tendency to overestimate the length or the intensity of the impact of future feeling states.[38]
Information bias – the tendency to seek information even when it cannot affect action.[39]
Insensitivity to sample size – the tendency to under-expect variation in small samples.
Irrational escalation – the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.
Just-world hypothesis – the tendency for people to want to believe that the world is fundamentally just, causing them to rationalize an otherwise inexplicable injustice as deserved by the victim(s).
Less-is-better effect – a preference reversal where a dominated smaller set is preferred to a larger set.
Loss aversion – "the disutility of giving up an object is greater than the utility associated with acquiring it"[40] (see also sunk cost effects and endowment effect).
Ludic fallacy – the misuse of games to model real-life situations.
Mere exposure effect – the tendency to express undue liking for things merely because of familiarity with them.[41]
Money illusion – the tendency to concentrate on the nominal (face value) of money rather than its value in terms of purchasing power.[42]
Moral credential effect – the tendency of a track record of non-prejudice to increase subsequent prejudice.
Negativity bias – the tendency to pay more attention and give more weight to negative than positive experiences or other kinds of information.
Neglect of probability – the tendency to completely disregard probability when making a decision under uncertainty.[43]
Normalcy bias – the refusal to plan for, or react to, a disaster which has never happened before.
Observer-expectancy effect – when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
Omission bias – the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).[44]
Optimism bias – the tendency to be over-optimistic, overestimating favorable and pleasing outcomes (see also wishful thinking, valence effect, positive outcome bias).[45][46]
Ostrich effect – ignoring an obvious (negative) situation.
Outcome bias – the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
Overconfidence effect – excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time.[16][47][48][49]
Pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing non-existent hidden messages on records played in reverse.
Pessimism bias – the tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them.
Planning fallacy – the tendency to underestimate task-completion times.[38]
Post-purchase rationalization – the tendency to persuade oneself through rational argument that a purchase was a good value.
Pro-innovation bias – the tendency to reflect a personal bias towards an invention/innovation, while often failing to identify limitations and weaknesses or address the possibility of failure.
Pseudocertainty effect – the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.[50]
Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice (see also reverse psychology).
Reactive devaluation – devaluing proposals that are no longer hypothetical or purportedly originated with an adversary.
Recency bias – a cognitive bias that results from disproportionate salience attributed to recent stimuli or observations; the tendency to weigh recent events more heavily than earlier events (see also peak-end rule, recency effect).
Recency illusion – the illusion that a phenomenon, typically a word or language usage, that one has just begun to notice is a recent innovation (see also frequency illusion).
Restraint bias – the tendency to overestimate one's ability to show restraint in the face of temptation.
Rhyme-as-reason effect – rhyming statements are perceived as more truthful. A famous example is its use in the O.J. Simpson trial, where the defense employed the phrase "If the gloves don't fit, then you must acquit."
Risk compensation / Peltzman effect – the tendency to take greater risks when perceived safety increases.
Selective perception – the tendency for expectations to affect perception.
Semmelweis reflex – the tendency to reject new evidence that contradicts a paradigm.[51]
Selection bias – the distortion of a statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, then certain conclusions drawn may be wrong.
Social comparison bias – the tendency, when making hiring decisions, to favour potential candidates who don't compete with one's own particular strengths.[52]
Social desirability bias – the tendency to over-report socially desirable characteristics or behaviours and under-report socially undesirable characteristics or behaviours.[53]
Status quo bias – the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification).[54][55]
Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.
Subadditivity effect – the tendency to estimate that the likelihood of an event is less than the sum of its (more than two) mutually exclusive components.[56]
Subjective validation – perception that something is true if a subject's belief demands it to be true. Also assigns perceived connections between coincidences.
Survivorship bias – concentrating on the people or things that "survived" some process and inadvertently overlooking those that didn't because of their lack of visibility.
Texas sharpshooter fallacy – pieces of information that have no relationship to one another are called out for their similarities, and that similarity is used for claiming the existence of a pattern.
Time-saving bias – underestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively low speed and overestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively high speed.
Unit bias – the tendency to want to finish a given unit of a task or an item. Strong effects on the consumption of food in particular.[57]
Well travelled road effect – underestimation of the duration taken to traverse oft-traveled routes and overestimation of the duration taken to traverse less familiar routes.
Zero-risk bias – preference for reducing a small risk to zero over a greater reduction in a larger risk.
Zero-sum heuristic – intuitively judging a situation to be zero-sum (i.e., that gains and losses are correlated). Derives from the zero-sum game in game theory, where wins and losses sum to zero.[58][59] The frequency with which this bias occurs may be related to the social dominance orientation personality factor.
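
To make the gambler's fallacy entry above concrete, here is a minimal simulation sketch in Python; the streak length and flip count are illustrative choices, not taken from the source. The estimated probability of heads immediately after a long run of heads stays at about one half.

    import random

    def heads_rate_after_streak(streak=5, flips=1_000_000):
        """Estimate P(heads) for flips that immediately follow at least
        `streak` consecutive heads, using a simulated fair coin."""
        run = 0          # current run of consecutive heads
        hits = total = 0
        for _ in range(flips):
            heads = random.random() < 0.5
            if run >= streak:   # this flip follows a long streak of heads
                total += 1
                hits += heads
            run = run + 1 if heads else 0
        return hits / total

    print(heads_rate_after_streak())  # ~0.5: the streak does not change the odds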
Social biases
Most of these biases are labeled as attributional biases.
Actor-observer bias – the tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also fundamental attribution error), and for explanations of one's own behaviors to do the opposite (that is, to overemphasize the influence of our situation and underemphasize the influence of our own personality).
Defensive attribution hypothesis – defensive attributions are made when individuals witness or learn of a mishap happening to another person. In these situations, attributions of responsibility to the victim or harm-doer for the mishap will depend upon the severity of the outcomes of the mishap and the level of personal and situational similarity between the individual and victim. More responsibility will be attributed to the harm-doer as the outcome becomes more severe, and as personal or situational similarity decreases.
Dunning–Kruger effect – an effect in which incompetent people fail to realise they are incompetent because they lack the skill to distinguish between competence and incompetence.[60]
Egocentric bias – occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would credit them with.
Extrinsic incentives bias – an exception to the fundamental attribution error, in which people view others as having (situational) extrinsic motivations while viewing themselves as having (dispositional) intrinsic motivations.
False consensus effect – the tendency for people to overestimate the degree to which others agree with them.[61]
Forer effect (aka Barnum effect) – the tendency to give high accuracy ratings to descriptions of one's personality that supposedly are tailored specifically for oneself, but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.
Fundamental attribution error – the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect).[62]
Halo effect – the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also physical attractiveness stereotype).[63]
Illusion of asymmetric insight – people perceive their knowledge of their peers to surpass their peers' knowledge of them.[64]
Illusion of external agency – when people view self-generated preferences as instead being caused by insightful, effective and benevolent agents.
Illusion of transparency – people overestimate others' ability to know them, and they also overestimate their ability to know others.
Illusory superiority – overestimating one's desirable qualities, and underestimating undesirable qualities, relative to other people. (Also known as "Lake Wobegon effect," "better-than-average effect," or "superiority bias.")[65]
Ingroup bias – the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
Just-world phenomenon – the tendency for people to believe that the world is just and therefore people "get what they deserve."
Moral luck – the tendency for people to ascribe greater or lesser moral standing based on the outcome of an event rather than the intention.
Naive cynicism – expecting more egocentric bias in others than in oneself.
Outgroup homogeneity bias – individuals see members of their own group as being relatively more varied than members of other groups.[66]
Projection bias – the tendency to unconsciously assume that others (or one's future selves) share one's current emotional states, thoughts and values.[67]
Self-serving bias – the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).[68]
System justification – the tendency to defend and bolster the status quo. Existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged, sometimes even at the expense of individual and collective self-interest. (See also status quo bias.)
Trait ascription bias – the tendency for people to view themselves as relatively variable in terms of personality, behavior, and mood while viewing others as much more predictable.
Ultimate attribution error – similar to the fundamental attribution error, in this error a person is likely to make an internal attribution to an entire group instead of the individuals within the group.
Worse-than-average effect – a tendency to believe ourselves to be worse than others at tasks which are difficult.[69]
Memory errors and biases
In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of
a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be
recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, including:
Bizarreness effect: bizarre or uncommon material is better remembered than common material.
Choice-supportive bias: remembering chosen options as having been better than rejected options.[70]
Change bias: after an investment of effort in producing change, remembering one's past performance as more difficult than it actually was.[71]
Childhood amnesia: the retention of few memories from before the age of four.
Conservatism or regressive bias: the tendency to remember high values and high likelihoods/probabilities/frequencies as lower than they actually were and low ones as higher than they actually were. Based on the evidence, memories are not extreme enough.[72][73]
Consistency bias: incorrectly remembering one's past attitudes and behaviour as resembling present attitudes and behaviour.[74]
Context effect: that cognition and memory are dependent on context, such that out-of-context memories are more difficult to retrieve than in-context memories (e.g., recall time and accuracy for a work-related memory will be lower at home, and vice versa).
Cross-race effect: the tendency for people of one race to have difficulty identifying members of a race other than their own.
Cryptomnesia: a form of misattribution where a memory is mistaken for imagination, because there is no subjective experience of it being a memory.[71]
Egocentric bias: recalling the past in a self-serving manner, e.g., remembering one's exam grades as being better than they were, or remembering a caught fish as bigger than it really was.
Fading affect bias: a bias in which the emotion associated with unpleasant memories fades more quickly than the emotion associated with positive events.[75]
False memory: a form of misattribution where imagination is mistaken for a memory.
Generation effect (self-generation effect): that self-generated information is remembered best. For instance, people are better able to recall memories of statements that they have generated than similar statements generated by others.
Google effect: the tendency to forget information that can be easily found online.
Hindsight bias: the inclination to see past events as being predictable; also called the "I-knew-it-all-along" effect.
Humor effect: that humorous items are more easily remembered than non-humorous ones, which might be explained by the distinctiveness of humor, the increased cognitive processing time to understand the humor, or the emotional arousal caused by the humor.
Illusion-of-truth effect: that people are more likely to identify as true statements those they have previously heard (even if they cannot consciously remember having heard them), regardless of the actual validity of the statement. In other words, a person is more likely to believe a familiar statement than an unfamiliar one.
Illusory correlation: inaccurately remembering a relationship between two events.[16][76]
Lag effect: see spacing effect.
Leveling and sharpening: memory distortions introduced by the loss of details in a recollection over time, often concurrent with sharpening or selective recollection of certain details that take on exaggerated significance in relation to the details or aspects of the experience lost through leveling. Both biases may be reinforced over time, and by repeated recollection or re-telling of a memory.[77]
Levels-of-processing effect: that different methods of encoding information into memory have different levels of effectiveness.[78]
List-length effect: a smaller percentage of items are remembered in a longer list, but as the length of the list increases, the absolute number of items remembered increases as well.[79]
Misinformation effect: that misinformation affects people's reports of their own memory.
Misattribution: when information is retained in memory but the source of the memory is forgotten. One of Schacter's (1999) Seven Sins of Memory, misattribution is divided into source confusion, cryptomnesia and false recall/false recognition.[71]
Modality effect: that memory recall is higher for the last items of a list when the list items were received via speech than when they were received via writing.
Mood-congruent memory bias: the improved recall of information congruent with one's current mood.
Next-in-line effect: that a person in a group has diminished recall for the words of others who spoke immediately before or after this person.
Osborn effect: that being intoxicated with a mind-altering substance makes it harder to retrieve motor patterns from the basal ganglion.[80]
Part-list cueing effect: that being shown some items from a list makes it harder to retrieve the other items.[81]
Peak-end rule: that people seem to perceive not the sum of an experience but the average of how it was at its peak (e.g. pleasant or unpleasant) and how it ended. (A worked example follows this list.)
Persistence: the unwanted recurrence of memories of a traumatic event.
Picture superiority effect: that concepts are much more likely to be remembered experientially if they are presented in picture form than if they are presented in word form.[82]
Placement bias: the tendency of people to remember themselves as better than others at tasks at which they rate themselves above average (see also illusory superiority or better-than-average effect)[83] and to remember themselves as worse than others at tasks at which they rate themselves below average (see also worse-than-average effect).[16][69]
Positivity effect: that older adults favor positive over negative information in their memories.
Primacy effect, recency effect & serial position effect: that items near the end of a list are the easiest to recall, followed by the items at the beginning of a list; items in the middle are the least likely to be remembered.[84]
Processing difficulty effect
Reminiscence bump: the recalling of more personal events from adolescence and early adulthood than personal events from other lifetime periods.[85]
Rosy retrospection: the remembering of the past as having been better than it really was.
Self-relevance effect: that memories relating to the self are better recalled than similar information relating to others.
Self-serving bias: perceiving oneself as responsible for desirable outcomes but not responsible for undesirable ones.
Source confusion: misattributing the source of a memory, e.g. misremembering that one saw an event personally when actually it was seen on television.
Spacing effect: that information is better recalled if exposure to it is repeated over a longer span of time.
Stereotypical bias: memory distorted towards stereotypes (e.g. racial or gender), e.g. "black-sounding" names being misremembered as names of criminals.[71]
Suffix effect: the weakening of the recency effect in the case that an item is appended to the list that the subject is not required to recall.[86]
Suggestibility: a form of misattribution where ideas suggested by a questioner are mistaken for memory.
Subadditivity effect: the tendency to estimate that the likelihood of a remembered event is less than the sum of its (more than two) mutually exclusive components.[16][87]
Telescoping effect: the tendency to displace recent events backward in time and remote events forward in time, so that recent events appear more remote, and remote events more recent.
Testing effect: that frequent testing of material that has been committed to memory improves memory recall.
Tip of the tongue phenomenon: when a subject is able to recall parts of an item, or related information, but is frustratingly unable to recall the whole item. This is thought to be an instance of "blocking" where multiple similar memories are being recalled and interfere with each other.[71]
Verbatim effect: that the "gist" of what someone has said is better remembered than the verbatim wording.[88]
Von Restorff effect: that an item that sticks out is more likely to be remembered than other items.[89]
Zeigarnik effect: that uncompleted or interrupted tasks are remembered better than completed ones.
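
As a worked illustration of the peak-end rule listed above, the sketch below (Python; the moment-by-moment ratings are hypothetical, not from the source) contrasts a peak-end summary with the total discomfort of two episodes.

    def peak_end_score(ratings):
        """Peak-end rule: remembered value tracks the average of the most
        intense moment and the final moment, not the sum of all moments."""
        return (max(ratings, key=abs) + ratings[-1]) / 2

    # Hypothetical moment-by-moment discomfort ratings (0 = neutral, -10 = worst):
    short_episode  = [-8, -8]          # brief, but ends at its worst
    longer_episode = [-8, -8, -4, -2]  # more total discomfort, milder ending

    print(sum(short_episode), sum(longer_episode))  # -16 -22 (totals)
    print(peak_end_score(short_episode))            # -8.0
    print(peak_end_score(longer_episode))           # -5.0
    # Peak-end predicts the longer episode is remembered as less unpleasant,
    # even though it contains strictly more discomfort.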
Common theoretical causes of some cognitive biases
Bounded rationality – limits on optimization and rationality
Prospect theory
Mental accounting
Adaptive bias – basing decisions on limited information and biasing them based on the costs of being wrong.
Attribute substitution – making a complex, difficult judgment by unconsciously substituting it with an easier judgment.[90]
Attribution theory
Salience
Naïve realism
Cognitive dissonance, and related:
Impression management
Self-perception theory
Heuristics, including:
Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.[36]
Representativeness heuristic – judging probabilities on the basis of resemblance.[36]
Affect heuristic – basing a decision on an emotional reaction rather than a calculation of risks and benefits.[91]
Some theories of emotion, such as:
Two-factor theory of emotion
Somatic markers hypothesis
Introspection illusion
Misinterpretations or misuse of statistics; innumeracy.

A 2012 Psychological Bulletin article suggested that at least eight seemingly unrelated biases can be produced by the same information-theoretic generative mechanism that assumes noisy information processing during storage and retrieval of information in human memory.[16]
Methods for dealing with cognitive biases
Reference class forecasting was developed by Daniel Kahneman, Amos Tversky, and Bent Flyvbjerg to eliminate or
reduce the impact of cognitive biases on decision making.
[92]
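
As a rough sketch of the idea, not Flyvbjerg's published procedure, reference class forecasting replaces an inside-view estimate with a value read off the distribution of outcomes from comparable past cases; the data and the 80th-percentile choice below are hypothetical.

    import statistics

    def reference_class_forecast(past_outcomes, percentile=0.8):
        """Outside view: forecast from the distribution of outcomes in a
        reference class of similar past projects, not from the plan itself."""
        ranked = sorted(past_outcomes)
        index = min(len(ranked) - 1, int(percentile * len(ranked)))
        return ranked[index]

    # Hypothetical cost overruns (%) observed on ten comparable past projects:
    overruns = [5, 10, 12, 20, 25, 30, 45, 60, 80, 120]

    print(statistics.median(overruns))              # 27.5 -> a typical overrun
    print(reference_class_forecast(overruns, 0.8))  # 80   -> reserve covering 80% of cases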
Notes
[1] Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review, 106(1), 180–209.
[2] Kahneman, D.; Tversky, A. (1972), "Subjective probability: A judgment of representativeness", Cognitive Psychology 3: 430–454, doi:10.1016/0010-0285(72)90016-3.
[3] Baron, J. (2007). Thinking and deciding (4th ed.). New York, NY: Cambridge University Press.
[4] Maccoun, Robert J. (1998), "Biases in the interpretation and use of research results" (http://socrates.berkeley.edu/~maccoun/MacCoun_AnnualReview98.pdf), Annual Review of Psychology 49: 259–87, doi:10.1146/annurev.psych.49.1.259, PMID 15012470.
[5] Nickerson, Raymond S. (1998), "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises", Review of General Psychology (Educational Publishing Foundation) 2 (2): 175–220, doi:10.1037/1089-2680.2.2.175, ISSN 1089-2680.
[6] Dardenne, Benoit; Leyens, Jacques-Philippe (1995), "Confirmation Bias as a Social Skill", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 21 (11): 1229–1239, doi:10.1177/01461672952111011, ISSN 1552-7433.
[7] Alexander, William H.; Brown, Joshua W. (1 June 2010). "Hyperbolically Discounted Temporal Difference Learning". Neural Computation 22 (6): 1511–1527. doi:10.1162/neco.2010.08-09-1080.
[8] Baron 1994, p. 372.
[9] Sanna, Lawrence J.; Schwarz, Norbert; Stocker, Shevaun L. (2002). "When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight" (http://www.nifc.gov/PUBLICATIONS/acc_invest_march2010/speakers/4DebiasBackfires.pdf). Journal of Experimental Psychology: Learning, Memory, and Cognition 28 (3): 497–502. doi:10.1037//0278-7393.28.3.497. ISSN 0278-7393.
[10] Baron 1994, pp. 224–228.
[11] Klauer, K. C.; Musch, J.; Naumer, B. (2000), "On belief bias in syllogistic reasoning", Psychological Review 107 (4): 852–884, doi:10.1037/0033-295X.107.4.852, PMID 11089409.
[12] Pronin, Emily; Kugler, Matthew B. (July 2007), "Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot", Journal of Experimental Social Psychology (Elsevier) 43 (4): 565–578, doi:10.1016/j.jesp.2006.05.011, ISSN 0022-1031.
[13] Mather, M.; Shafir, E.; Johnson, M. K. (2000), "Misremembrance of options past: Source monitoring and choice" (http://www.usc.edu/projects/matherlab/pdfs/Matheretal2000.pdf), Psychological Science 11: 132–138, doi:10.1111/1467-9280.00228.
[14] Oswald, Margit E.; Grosjean, Stefan (2004), "Confirmation Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 79–96, ISBN 978-1-84169-351-4, OCLC 55124398.
[15] Fisk, John E. (2004), "Conjunction fallacy", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 23–42, ISBN 978-1-84169-351-4, OCLC 55124398.
[16] Martin Hilbert (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin, 138(2), 211–237; also at http://www.martinhilbert.net/HilbertPsychBull.pdf.
[17] Attneave, F. (1953). Psychological probability as a function of experienced frequency. Journal of Experimental Psychology, 46(2), 81–86.
[18] Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 552–564. doi:10.1037/0096-1523.3.4.552
[19] DuCharme, W. M. (1970). Response bias explanation of conservative human inference. Journal of Experimental Psychology, 85(1), 66–74.
[20] Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representation of human judgment (pp. 17–52). New York: Wiley.
[21] Plous 1993, pp. 38–41.
[22] "Why We Spend Coins Faster Than Bills" (http://www.npr.org/templates/story/story.php?storyId=104063298) by Chana Joffe-Walt. All Things Considered, 12 May 2009.
[23] Hsee, Christopher K.; Zhang, Jiao (2004), "Distinction bias: Misprediction and mischoice due to joint evaluation", Journal of Personality and Social Psychology 86 (5): 680–695, doi:10.1037/0022-3514.86.5.680, PMID 15161394.
[24] (Kahneman, Knetsch & Thaler 1991, p. 193) Richard Thaler coined the term "endowment effect."
[25] http://www.human-nature.com/nibbs/04/gelman.html
[26] Wagenaar, W. A., & Keren, G. B. (1985). Calibration of probability assessments by professional blackjack dealers, statistical experts, and lay people. Organizational Behavior and Human Decision Processes, 36(3), 406–416.
[27] Jeng, M. (2006). "A selected history of expectation bias in physics". American Journal of Physics 74 (7): 578–583. doi:10.1119/1.2186333.
[28] Kahneman, Daniel; Alan B. Krueger, David Schkade, Norbert Schwarz, Arthur A. Stone (2006-06-30), "Would you be happier if you were richer? A focusing illusion" (http://www.morgenkommichspaeterrein.de/ressources/download/125krueger.pdf), Science 312 (5782): 1908–10, doi:10.1126/science.1129688, PMID 16809528.
[29] Zwicky, Arnold (2005-08-07). "Just Between Dr. Language and I" (http://itre.cis.upenn.edu/~myl/languagelog/archives/002386.html). Language Log.
[30] Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and Human Performance, 20(2), 159–183. doi:10.1016/0030-5073(77)90001-0
[31] Merkle, E. C. (2009). The disutility of the hard-easy effect in choice confidence. Psychonomic Bulletin & Review, 16(1), 204–213. doi:10.3758/PBR.16.1.204
[32] Juslin, P., Winman, A., & Olsson, H. (2000). Naive empiricism and dogmatism in confidence research: a critical examination of the hard-easy effect. Psychological Review, 107(2), 384–396.
[33] Pohl, Rüdiger F. (2004), "Hindsight Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 363–378, ISBN 978-1-84169-351-4, OCLC 55124398.
[34] Hardman 2009, p. 110.
[35] Thompson, Suzanne C. (1999), "Illusions of Control: How We Overestimate Our Personal Influence", Current Directions in Psychological Science (Association for Psychological Science) 8 (6): 187–190, ISSN 0963-7214, JSTOR 20182602.
[36] Tversky, Amos; Daniel Kahneman (September 27, 1974), "Judgment under Uncertainty: Heuristics and Biases", Science (American Association for the Advancement of Science) 185 (4157): 1124–1131, doi:10.1126/science.185.4157.1124, PMID 17835457.
[37] Fiedler, K. (1991). The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations. Journal of Personality and Social Psychology, 60(1), 24–36.
[38] Sanna, Lawrence J.; Schwarz, Norbert (2004), "Integrating Temporal Biases: The Interplay of Focal Thoughts and Accessibility Experiences", Psychological Science (American Psychological Society) 15 (7): 474–481, doi:10.1111/j.0956-7976.2004.00704.x, PMID 15200632.
[39] Baron 1994, pp. 258–259.
[40] (Kahneman, Knetsch & Thaler 1991, p. 193) Daniel Kahneman, together with Amos Tversky, coined the term "loss aversion."
[41] Bornstein, Robert F.; Craver-Lemley, Catherine (2004), "Mere exposure effect", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 215–234, ISBN 978-1-84169-351-4, OCLC 55124398.
[42] Shafir, Eldar; Diamond, Peter; Tversky, Amos (2000), "Money Illusion", Choices, values, and frames, Cambridge University Press, pp. 335–355, ISBN 978-0-521-62749-8.
[43] Baron 1994, p. 353.
[44] Baron 1994, p. 386.
[45] Baron 1994, p. 44.
[46] Hardman 2009, p. 104.
[47] Adams, P. A., & Adams, J. K. (1960). Confidence in the recognition and reproduction of words difficult to spell. The American Journal of Psychology, 73(4), 544–552.
[48] Hoffrage, Ulrich (2004), "Overconfidence", in Pohl, Rüdiger, Cognitive Illusions: a handbook on fallacies and biases in thinking, judgement and memory, Psychology Press, ISBN 978-1-84169-351-4.
[49] Sutherland 2007, pp. 172–178.
[50] Hardman 2009, p. 137.
[51] Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmutz (Ed.), Formal Representation of Human Judgment (pp. 17–52). New York: John Wiley and Sons.
[52] Stephen M. Garcia, Hyunjin Song and Abraham Tesser (November 2010), "Tainted recommendations: The social comparison bias", Organizational Behavior and Human Decision Processes 113 (2): 97–101, doi:10.1016/j.obhdp.2010.06.002, ISSN 0749-5978. Lay summary: BPS Research Digest, 2010-10-30 (http://bps-research-digest.blogspot.com/2010/10/social-comparison-bias-or-why-we.html).
[53] Dalton, D. & Ortegren, M. (2011). "Gender differences in ethics research: The importance of controlling for the social desirability response bias". Journal of Business Ethics 103 (1): 73–93. doi:10.1007/s10551-011-0843-8.
[54] Kahneman, Knetsch & Thaler 1991, p. 193.
[55] Baron 1994, p. 382.
[56] Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4), 547–567.
[57] "Penn Psychologists Believe 'Unit Bias' Determines The Acceptable Amount To Eat" (http://www.sciencedaily.com/releases/2005/11/051121163748.htm). ScienceDaily (Nov. 21, 2005).
[58] Meegan, Daniel V. (2010). "Zero-Sum Bias: Perceived Competition Despite Unlimited Resources". Frontiers in Psychology 1. doi:10.3389/fpsyg.2010.00191. ISSN 1664-1078.
[59] Chernev, Alexander (2007). "Jack of All Trades or Master of One? Product Differentiation and Compensatory Reasoning in Consumer Choice". Journal of Consumer Research 33 (4): 430–444. doi:10.1086/510217. ISSN 0093-5301.
[60] Morris, Errol (2010-06-20). "The Anosognosic's Dilemma: Something's Wrong but You'll Never Know What It Is (Part 1)" (http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/). Opinionator: Exclusive Online Commentary From The Times. New York Times. Retrieved 2011-03-07.
[61] Marks, Gary; Miller, Norman (1987), "Ten years of research on the false-consensus effect: An empirical and theoretical review", Psychological Bulletin (American Psychological Association) 102 (1): 72–90, doi:10.1037/0033-2909.102.1.72.
[62] Sutherland 2007, pp. 138–139.
[63] Baron 1994, p. 275.
[64] Pronin, E.; Kruger, J.; Savitsky, K.; Ross, L. (2001), "You don't know me, but I know you: the illusion of asymmetric insight", Journal of Personality and Social Psychology 81 (4): 639–656, doi:10.1037/0022-3514.81.4.639, PMID 11642351.
[65] Hoorens, Vera (1993), "Self-enhancement and Superiority Biases in Social Comparison", European Review of Social Psychology (Psychology Press) 4 (1): 113–139, doi:10.1080/14792779343000040.
[66] Plous 2006, p. 206.
[67] Hsee, Christopher K.; Reid Hastie (2006), "Decision and experience: why don't we choose what makes us happy?", Trends in Cognitive Sciences 10 (1): 31–37, doi:10.1016/j.tics.2005.11.007, PMID 16318925.
[68] Plous 2006, p. 185.
[69] Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77(2).
[70] Mather, Shafir & Johnson, 2000.
[71] Schacter, Daniel L. (1999). "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience". American Psychologist 54 (3): 182–203. doi:10.1037/0003-066X.54.3.182. PMID 10199218.
[72] Attneave, F. (1953). Psychological probability as a function of experienced frequency. Journal of Experimental Psychology, 46(2), 81–86.
[73] Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 552–564. doi:10.1037/0096-1523.3.4.552
[74] Cacioppo, John (2002). Foundations in social neuroscience. Cambridge, Mass: MIT Press. pp. 130–132. ISBN 026253195X.
[75] Walker, W. Richard; John J. Skowronski, Charles P. Thompson (2003). "Life Is Pleasant – and Memory Helps to Keep It That Way!" (http://www.apa.org/journals/releases/gpr72203.pdf). Review of General Psychology (Educational Publishing Foundation) 7 (2): 203–210. doi:10.1037/1089-2680.7.2.203. Retrieved 2009-08-27.
[76] Fiedler, K. (1991). The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations. Journal of Personality and Social Psychology, 60(1), 24–36.
[77] Koriat, A.; M. Goldsmith, A. Pansky (2000). "Toward a Psychology of Memory Accuracy". Annual Review of Psychology 51 (1): 481–537. doi:10.1146/annurev.psych.51.1.481. PMID 10751979.
[78] Craik & Lockhart, 1972.
[79] Kinnell, Angela; Dennis, S. (2011). "The list length effect in recognition memory: an analysis of potential confounds". Memory & Cognition (Adelaide, Australia: School of Psychology, University of Adelaide) 39 (2): 348–63.
[80] e.g., Shushaka, 1958.
[81] e.g., Slamecka, 1968.
[82] Nelson, D. L.; U. S. Reed, J. R. Walling (1976). "Pictorial superiority effect". Journal of Experimental Psychology: Human Learning & Memory 2: 523–528.
[83] Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. doi:10.1037/0022-3514.77.6.1121
[84] Martin, G. Neil; Neil R. Carlson, William Buskist (2007). Psychology (3rd ed.). Pearson Education. pp. 309–310. ISBN 978-0-273-71086-8.
[85] Rubin, Wetzler & Nebes, 1986; Rubin, Rahhal & Poon, 1998.
[86] Morton, Crowder & Prussin, 1971.
[87] Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4), 547–567.
[88] Poppenk, Walia, Joanisse, Danckert, & Köhler, 2006.
[89] von Restorff, 1933.
[90] Kahneman, Daniel; Shane Frederick (2002), "Representativeness Revisited: Attribute Substitution in Intuitive Judgment", in Thomas Gilovich, Dale Griffin, Daniel Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 49–81, ISBN 978-0-521-79679-8, OCLC 47364085.
[91] Slovic, Paul; Melissa Finucane, Ellen Peters, Donald G. MacGregor (2002), "The Affect Heuristic", in Thomas Gilovich, Dale Griffin, Daniel Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, pp. 397–420, ISBN 0-521-79679-2.
[92] Flyvbjerg, B., 2008, "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice." European Planning Studies, vol. 16, no. 1, January, pp. 3–21. (http://www.sbs.ox.ac.uk/centres/bt/Documents/Curbing Optimism Bias and Strategic Misrepresentation.pdf)
References
Baron, Jonathan (1994), Thinking and deciding (2nd ed.), Cambridge University Press, ISBN 0-521-43732-6
Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5
Bishop, Michael A.; J. D. Trout (2004), Epistemology and the Psychology of Human Judgment, New York: Oxford University Press, ISBN 0-19-516229-3
Gilovich, Thomas (1993), How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life, New York: The Free Press, ISBN 0-02-911706-2
Gilovich, Thomas; Dale Griffin, Daniel Kahneman (2002), Heuristics and biases: The psychology of intuitive judgment, Cambridge, UK: Cambridge University Press, ISBN 0-521-79679-2
Greenwald, A. (1980), "The Totalitarian Ego: Fabrication and Revision of Personal History", American Psychologist (American Psychological Association) 35 (7), ISSN 0003-066X
Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 978-1-4051-2398-3
Kahneman, Daniel; Paul Slovic, Amos Tversky (1982), Judgment under Uncertainty: Heuristics and Biases, Cambridge, UK: Cambridge University Press, ISBN 0-521-28414-7
Kahneman, Daniel; Knetsch, Jack L.; Thaler, Richard H. (1991), "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias", The Journal of Economic Perspectives (American Economic Association) 5 (1): 193–206
Plous, Scott (1993), The Psychology of Judgment and Decision Making, New York: McGraw-Hill, ISBN 0-07-050477-6
Schacter, Daniel L. (1999), "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience", American Psychologist (American Psychological Association) 54 (3): 182–203, doi:10.1037/0003-066X.54.3.182, ISSN 0003-066X, PMID 10199218
Sutherland, Stuart (2007), Irrationality, Pinter & Martin, ISBN 978-1-905177-07-3
Tetlock, Philip E. (2005), Expert Political Judgment: how good is it? how can we know?, Princeton: Princeton University Press, ISBN 978-0-691-12302-8
Virine, L.; M. Trumper (2007), Project Decisions: The Art and Science, Vienna, VA: Management Concepts, ISBN 978-1-56726-217-9
Ambiguity effect
The ambiguity effect is a cognitive bias where decision making is affected by a lack of information, or "ambiguity".
The effect implies that people tend to select options for which the probability of a favorable outcome is known, over
an option for which the probability of a favorable outcome is unknown. The effect was first described by Daniel
Ellsberg in 1961.
As an example, consider a bucket containing 30 balls. The balls are colored red, black and white. Ten of the balls are
red, and the remaining 20 are some combination of black and white, with all combinations of black and white being
equally likely. In option X, drawing a red ball wins a person $100, and in option Y, drawing a black ball wins them
$100. The probability of picking a winning ball is the same for both options X and Y. In option X, the probability of
selecting a winning ball is 1 in 3 (10 red balls out of 30 total balls). In option Y, despite the fact that the number of
black balls is uncertain, the probability of selecting a winning ball is also 1 in 3. This is because the number of black
balls is equally distributed among all possibilities between 0 and 20, so the probability of there being (10 - n) black
balls is the same as there being (10 + n) black balls. The difference between the two options is that in option X, the
probability of a favorable outcome is known, but in option Y, the probability of a favorable outcome is unknown
("ambiguous").
In spite of the equal probability of a favorable outcome, people have a greater tendency to select a ball under option
X, where the probability of selecting a winning ball is perceived to be more certain. The uncertainty as to the number
of black balls means that option Y tends to be viewed less favorably. Despite the fact that there could possibly be
twice as many black balls as red balls, people tend not to want to take the opposing risk that there may be fewer than
10 black balls. The "ambiguity" behind option Y means that people tend to favor option X, even when the
probability is equivalent.
One possible explanation of the effect is that people have a rule of thumb (heuristic) to avoid options where
information is missing (Frisch & Baron, 1988; Ritov & Baron, 1990). This will often lead them to seek out the
missing information. In many cases, though, the information cannot be obtained. The effect is often the result of
calling some particular missing piece of information to the person's attention.
However, not all people act this way. In Wilkinson's Modes of Leadership, what he describes as Mode Four individuals do not require such disambiguation and actively look for ambiguity, especially in business and other situations where an advantage might be found. This response appears to be linked to an individual's understanding of complexity and the search for emergent properties.
References
Baron, J. (2000). Thinking and deciding (3rd ed.). New York: Cambridge University Press.
Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643–669.
Frisch, D., & Baron, J. (1988). Ambiguity and rationality. Journal of Behavioral Decision Making, 1, 149–157.
Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263–277.
Wilkinson, D. J. (2006). The Ambiguity Advantage: What great leaders are great at. London: Palgrave Macmillan.
Anchoring
Anchoring or focalism is a cognitive bias that describes the common human tendency to rely too heavily on the first
piece of information offered (the "anchor") when making decisions. During decision making, anchoring occurs when
individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgments
are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the
anchor. For example, the initial price offered for a used car sets the standard for the rest of the negotiations, so that
prices lower than the initial price seem more reasonable even if they are still higher than what the car is really worth.
Focusing effect
The focusing effect (or focusing illusion) is a cognitive bias that occurs when people place too much importance on one aspect of an event, causing an error in accurately predicting the utility of a future outcome.

People focus on notable differences, excluding those that are less conspicuous, when making predictions about happiness or convenience. For example, when people were asked how much happier they believe Californians are compared to Midwesterners, Californians and Midwesterners both said Californians must be considerably happier, when, in fact, there was no difference between the actual happiness rating of Californians and Midwesterners. The bias lies in that most people asked focused on and overweighed the sunny weather and ostensibly easy-going lifestyle of California and devalued and underrated other aspects of life and determinants of happiness, such as low crime rates and safety from natural disasters like earthquakes (both of which large parts of California lack).[1]
A rise in income has only a small and transient effect on happiness and well-being, but people consistently overestimate this effect. Kahneman et al. proposed that this is a result of a focusing illusion, with people focusing on conventional measures of achievement rather than on everyday routine.[2]
Anchoring and adjustment heuristic
Anchoring and adjustment is a psychological heuristic that influences the way people intuitively assess
probabilities. According to this heuristic, people start with an implicitly suggested reference point (the "anchor") and
make adjustments to it to reach their estimate. A person begins with a first approximation (anchor) and then makes
incremental adjustments based on additional information. These adjustments are usually insufficient, giving the
initial anchor a great deal of influence over future assessments.
(Image caption: Daniel Kahneman, one of the first researchers to study anchoring.)
The anchoring and adjustment heuristic was first theorized by Amos Tversky and Daniel Kahneman. In one of their first studies, participants were asked to compute the product of the numbers one through eight, presented either in ascending order as 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8 or in descending order as 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1. The anchor was the number shown first in the sequence, either 1 or 8. When 1 was the anchor, the average estimate was 512; when 8 was the anchor, the average estimate was 2,250. The correct answer is 40,320, indicating that both groups made insufficient adjustments away from the initial anchor. In another study by Tversky and Kahneman, participants observed a roulette wheel that was predetermined to stop on either 10 or 65. Participants were then asked to guess the percentage of African nations that were members of the United Nations. Participants whose wheel stopped on 10 guessed lower values (25% on average) than participants whose wheel stopped on 65 (45% on average).
[3]
The pattern has held in other experiments for a wide variety of different subjects of estimation.
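The arithmetic behind the multiplication demonstration is easy to check. The following Python sketch is an illustration of ours, using the group averages quoted above rather than any study data:

    # Illustrative check of the multiplication example; the two "estimates"
    # are the group averages reported in the text, not computed values.
    import math

    true_product = math.prod(range(1, 9))  # 1 x 2 x ... x 8 = 40320
    ascending_estimate = 512               # average when the sequence began with 1
    descending_estimate = 2250             # average when the sequence began with 8

    print(true_product)                        # 40320
    print(true_product - ascending_estimate)   # 39808: huge undershoot
    print(true_product - descending_estimate)  # 38070: smaller undershoot
    # Both groups undershoot badly, but the group anchored on the larger
    # leading number lands closer -- the signature of insufficient adjustment.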
As a second example, in a study by Dan Ariely, audience members were first asked to write down the last two digits of their social security number and to consider whether they would pay that number of dollars for items whose value they did not know, such as wine, chocolate and computer equipment. They were then asked to bid for these items, with the result that audience members with higher two-digit numbers submitted bids that were between 60 percent and 120 percent higher than those with lower social security numbers, which had become their anchor.
[4]
Difficulty of avoiding anchoring
Various studies have shown that anchoring is very difficult to avoid. For example, in one study students were given
anchors that were obviously wrong. They were asked whether Mahatma Gandhi died before or after age 9, or before
or after age 140. Clearly neither of these anchors can be correct, but the two groups still guessed significantly
differently (average age of 50 vs. average age of 67).
[5]
Other studies have tried to eliminate anchoring much more directly. In a study exploring the causes and properties of
anchoring, participants were exposed to an anchor and asked to guess how many physicians were listed in the local
phone book. In addition, they were explicitly informed that anchoring would "contaminate" their responses, and that
they should do their best to correct for that. A control group received no anchor and no explanation. Regardless of
how they were informed and whether they were informed correctly, all of the experimental groups reported higher
estimates than the control group. Thus, despite being expressly aware of the anchoring effect, participants were still
unable to avoid it.
[6]
A later study found that even when offered monetary incentives, people are unable to effectively
adjust from an anchor.
[7]
Causes
Several theories have been put forth to explain what causes anchoring. Although some explanations are more popular than others, there is no consensus as to which is best.
[8]
In a study on possible causes of anchoring, two authors
described anchoring as easy to demonstrate, but hard to explain.
[5]
At least one group of researchers has argued that
multiple causes are at play, and that what is called "anchoring" is actually several different effects.
[9]
Anchoring-and-adjusting
In their original study, Tversky and Kahneman put forth a view later termed anchoring-and-adjusting. According
to this theory, once an anchor is set, people adjust away from it to get to their final answer; however, they adjust
insufficiently, resulting in their final guess being closer to the anchor than it would be otherwise.
[10]
Other
researchers also found evidence supporting the anchoring-and-adjusting explanation.
[11]
However, later researchers criticized this model, saying that it only works when the initial anchor is outside the range
of acceptable answers. To use an earlier example, since Mahatma Gandhi obviously did not die at age 9, people will adjust upward from that anchor. If a reasonable number were given, though (e.g. age 60), adjustment would not explain
the anchoring effect.
[12]
Another study found that the anchoring effect holds even when the anchor is subliminal.
According to Tversky and Kahneman's theory, this is impossible, since anchoring is only the result of conscious
adjustment.
[13]
Because of arguments like these, anchoring-and-adjusting has fallen out of favor.
Selective accessibility
In the same study that criticized anchoring-and-adjusting, the authors proposed an alternate explanation regarding
selective accessibility, which is derived from a theory called "confirmatory hypothesis testing". In short, selective
accessibility proposes that when given an anchor, a judge (i.e. a person making some judgment) will evaluate the
hypothesis that the anchor is a suitable answer. Assuming it is not, the judge moves on to another guess, but not
before accessing all the relevant attributes of the anchor itself. Then, when evaluating the new answer, the judge
looks for ways in which it is similar to the anchor, resulting in the anchoring effect.
[12]
Various studies have found
empirical support for this hypothesis.
[14]
This explanation assumes that the judge considers the anchor to be a
plausible value so that it is not immediately rejected, which would preclude considering its relevant attributes.
Attitude change
More recently, a third explanation of anchoring has been proposed concerning attitude change. According to this
theory, providing an anchor changes someone's attitudes to be more favorable to the particular attributes of that
anchor, biasing future answers to have similar characteristics as the anchor. Leading proponents of this theory
consider it to be an alternate explanation in line with prior research on anchoring-and-adjusting and selective
accessibility.
[15][16]
Factors that influence anchoring
Mood
A wide range of research has linked sad or depressed moods with more extensive and accurate evaluation of
problems.
[17]
As a result of this, earlier studies hypothesized that people with more depressed moods would tend to
use anchoring less than those with happier moods. However, more recent studies have shown the opposite effect: sad people are more likely to use anchoring than people in a happy or neutral mood.
[18]
Experience
Early research found that experts (those with high knowledge, experience, or expertise in some field) were more
resistant to the anchoring effect.
[19]
Since then, however, numerous studies have demonstrated that while experience
can sometimes reduce the effect, even experts are susceptible to anchoring. In a study concerning the effects of
anchoring on judicial decisions, researchers found that even experienced legal professionals were affected by
anchoring. This remained true even when the anchors provided were arbitrary and unrelated to the case in
question.
[20]
Personality
Research has correlated susceptibility to anchoring with most of the Big Five personality traits. People high in
agreeableness and conscientiousness are more likely to be affected by anchoring, while those high in extroversion
are less likely to be affected.
[21]
Another study found that those high in openness to new experiences were more
susceptible to the anchoring effect.
[22]
Cognitive ability
The impact of cognitive ability on anchoring is contested. A recent study on willingness to pay for consumer goods
found that anchoring decreased in those with greater cognitive ability, though it did not disappear.
[23]
Another study,
however, found that cognitive ability had no significant effect on how likely people were to use anchoring.
[24]
Anchoring in negotiations
In negotiations, anchoring refers to setting a starting point that outlines the basic constraints for the negotiation; the anchoring effect is then the tendency for estimates of the true value of the item at hand to stay close to that starting point.
[25]
In addition to the initial research conducted by Tversky and Kahneman, multiple other studies
have shown that anchoring can greatly influence the estimated value of an object.
[26]
For instance, although
negotiators can generally appraise an offer based on multiple characteristics, studies have shown that they tend to focus on only one aspect. In this way, a deliberate starting point can strongly affect the range of possible counteroffers.
[10]
The process of offer and counteroffer ideally results in a mutually beneficial arrangement. However, multiple studies have
shown that initial offers have a stronger influence on the outcome of negotiations than subsequent counteroffers.
[27]
A demonstration of the power of anchoring was conducted during Strategic Negotiation Process Workshops.
During the workshop, a group of participants is divided into two sections: buyers and sellers. Each side receives
identical information about the other party before going into a one-on-one negotiation. Following this exercise, both
sides debrief about their experiences. The results showed that where the participants anchored the negotiation had a significant effect on their success.
[28]
Anchoring affects everyone, even people who are highly knowledgeable in a field. Northcraft and Neale conducted a
study to measure the difference in the estimated value of a house between students and real-estate agents. In this
experiment, both groups were shown a house and then given different listing prices. After making their offer, each
group was then asked to discuss what factors influenced their decisions. In the follow-up interviews, the real-estate
agents denied being influenced by the initial price, but the results showed that both groups were equally influenced
by that anchor.
[29]
Anchoring can have more subtle effects on negotiations as well. Janiszewski and Uy investigated the effects of
precision of an anchor. Participants read an initial price for a beach house, then gave the price they thought it was
worth. They received either a general, seemingly nonspecific anchor (e.g. $800,000) or a more precise and specific
anchor (e.g. $799,800). Participants with a general anchor adjusted their estimate more than those given a precise
anchor ($751,867 vs $784,671). The authors propose that this effect comes from difference in scale; in other words,
the anchor affects not only the starting value, but also the starting scale. When given a general anchor of $20, people
will adjust in large increments ($19, $21, etc.), but when given a more specific anchor like $19.85, people will adjust
on a lower scale ($19.75, $19.95, etc.).
[30]
Thus, a more specific initial price will tend to result in a final price closer
to the initial one.
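A toy model can make the proposed mechanism concrete. The sketch below is our own construction, not Janiszewski and Uy's procedure; it assumes respondents take the same number of adjustment steps, with step size matched to the anchor's granularity:

    # Toy model of the scale effect: adjustment steps track the anchor's
    # apparent precision (our assumption, for illustration only).
    def adjust_down(anchor, step, n_steps=1):
        """Adjust downward from `anchor` by `n_steps` increments of `step`."""
        return anchor - n_steps * step

    coarse = adjust_down(800_000, step=50_000)  # round anchor -> big steps
    fine = adjust_down(799_800, step=100)       # precise anchor -> small steps

    print(coarse, fine)  # 750000 799700
    # The precise anchor leaves the estimate far closer to the starting value,
    # mirroring the $784,671 vs. $751,867 pattern reported above.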
References
[1] Schkade, D.A., & Kahneman, D. (1998). "Does living in California make people happy? A focusing illusion in judgments of life satisfaction". Psychological Science, 9, 340–346.
[2] Kahneman, Daniel; Alan B. Krueger, David Schkade, Norbert Schwarz, Arthur A. Stone (2006-06-30). "Would you be happier if you were richer? A focusing illusion" (http://www.morgenkommichspaeterrein.de/ressources/download/125krueger.pdf). Science 312 (5782): 1908–10. doi:10.1126/science.1129688. PMID 16809528.
[3] Tversky, A. & Kahneman, D. (1974). "Judgment under uncertainty: Heuristics and biases" (http://www.hss.caltech.edu/~camerer/Ec101/JudgementUncertainty.pdf). Science, 185, 1124–1130.
[4] Edward Teach, "Avoiding Decision Traps" (http://www.cfo.com/article.cfm/3014027), CFO (1 June 2004). Retrieved 29 May 2007.
[5] Strack, F., & Mussweiler, T. (1997). "Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility". Journal of
Personality and Social Psychology, 73(3), 437-446.
[6] Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). "A new look at anchoring effects: Basic anchoring and its antecedents".
Journal Of Experimental Psychology, 125(4), 387-402.
[7] Simmons, J., LeBoeuf, R., Nelson, L. (2010). "The effect of accuracy motivation on anchoring and adjustment: Do people adjust from
provided anchors?". Journal of Personality and Social Psychology, 99(6), 917-932.
[8] Furnham, A. & Boo, H. C. (2011). "A literature review of the anchoring effect". Journal of Socio-Economics, 40(1), 35-42.
[9] Epley, N. & Gilovich, T. (2005). "When effortful thinking influences judgmental anchoring: Differential effects of forewarning and incentives on self-generated and externally provided anchors". Journal of Behavioral Decision Making, 18, 199–212.
[10] Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and
Uncertainty, 5, 297-323.
[11] Epley, N. & Gilovich, T. (2001). "Putting adjustment back into the anchoring and adjustment heuristic: Differential processing of
self-generated and experimenter-provided anchors". Psychological Science, 12, 391-396.
[12] Mussweiler, T. & Strack, F. (1999). "Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective
accessibility model". Journal of Experimental Social Psychology, 35, 136-164.
[13] Mussweiler, T. & Englich, B. (2005). "Subliminal anchoring: Judgmental consequences and underlying mechanisms". Organizational
Behavior and Human Decision Processes, 98, 133-143.
[14] Chapman, G. B. & Johnson, E. J. (1999). "Anchoring, activation, and the construction of values". Organizational Behavior and Human
Decision Processes, 79, 139.
[15] Wegener, D. T., Petty, R. E., Detweiler-Bedell, B., & Jarvis, W. B. G. (2001). "Implications of attitude change theories for numerical anchoring: Anchor plausibility and the limits of anchor effectiveness". Journal of Experimental Social Psychology, 37, 62–69.
[16] Blankenship, K. L., Wegener, D. T., Petty, R. E., Detweiler-Bedell, B., & Macy, C. L. (2008). "Elaboration and consequences of anchored estimates: An attitudinal perspective on numerical anchoring". Journal of Experimental Social Psychology, 44, 1465–1476.
[17] Bodenhausen, G. V., Gabriel, S., & Lineberger, M. (2000). "Sadness and susceptibility to judgmental bias: The case of anchoring". Psychological Science, 11, 320–323.
[18] Englich, B., & Soder, K. (2009). "Moody experts: How mood and expertise influence judgmental anchoring". Judgment and Decision Making, 4, 41–50.
[19] Wilson, T. D., Houston, C. E., Etling, K. M., Brekke, N. (1996). "A new look at anchoring effects: Basic anchoring and its antecedents". Journal of Experimental Psychology, 125, 387–402.
[20] Englich, B., Mussweiler, T., & Strack, F. (2006). "Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making". Personality and Social Psychology Bulletin, 32, 188–200.
[21] Eroglu, C., & Croxton, K. L. (2010). "Biases in judgmental adjustments of statistical forecasts: The role of individual differences". International Journal of Forecasting, 26, 116–133.
[22] McElroy, T., & Dowd, K. (2007). "Susceptibility to anchoring effects: How openness-to-experience influences responses to anchoring cues". Judgment and Decision Making, 2, 48–53.
[23] Bergman, O., Ellingsen, T., Johannesson, M., & Svensson, C. (2010). "Anchoring and cognitive ability". Economics Letters, 107, 66–68.
[24] Oechssler, J., Roider, S., & Schmitz, P. W. (2009). "Cognitive abilities and behavioural biases". Journal of Economic Behavior and Organization, 72, 147–152.
[25] Tversky, A. & Kahneman, D. (1974). "Judgment under uncertainty: Heuristics and biases". Science, 185, 1124–1131.
[26] Orr, D. & Guthrie, C. (2005). "Anchoring, information, expertise, and negotiation: New insights from meta-analysis". Ohio State Journal on Dispute Resolution, 21, 597.
[27] Kristensen, H. & Garling, T. (1997). "The effects of anchor points and reference points on negotiation processes and outcomes". Goteborg Psychological Reports, 2, 8:27.
[28] Dietmeyer, B. (2004). Strategic Negotiation: A Breakthrough Four-Step Process for Effective Business Negotiation. New York City: Kaplan Publishing.
[29] Northcraft, G. B., & Neale, M. A. (1987). "Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions". Organizational Behavior and Human Decision Processes, 39, 228–241.
[30] Janiszewski, C., & Uy, D. (2008). "Precision of the anchor influences the amount of adjustment". Psychological Science, 19(2), 121-127.
Attentional bias
Several types of cognitive bias occur due to an attentional bias. One example occurs when a person does not
examine all possible outcomes when making a judgment about a correlation or association. They may focus on one
or two possibilities, while ignoring the rest.
The most commonly studied type of decision for attentional bias is one in which there are two conditions (A and B), which can each be present (P) or not present (N). This leaves four possible combination outcomes: both are present (AP/BP), both are not present (AN/BN), only A is present (AP/BN), and only B is present (AN/BP). This can be better shown in table form:

                   A Present    A Not Present
    B Present      AP/BP        AN/BP
    B Not Present  AP/BN        AN/BN
In everyday life, people are often subject to this type of attentional bias when asking themselves, "Does God answer
prayers?"
[1]
Many would say "Yes" and justify it with "many times I've asked God for something, and He's given it to me." These people accept and overemphasize the data from the present/present (top-left) cell, whereas an unbiased person would counter this logic by also considering data from the present/absent cells: "Has God ever given me something that I didn't ask for?" or "Have I asked God for something and not received it?" This example, too, supports Smedslund's general conclusion that subjects tend to ignore part of the table.
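To make the cell neglect concrete, here is a minimal Python sketch with invented counts (not data from any cited study), showing how using all four cells can reveal that an apparently well-confirmed association is no association at all:

    # Invented counts for the prayer example: 120 observations spread evenly,
    # so there is in fact no association between asking and receiving.
    counts = {
        ("asked", "received"):         30,  # the cell people notice
        ("asked", "not_received"):     30,
        ("not_asked", "received"):     30,
        ("not_asked", "not_received"): 30,
    }

    # Judging only from the top-left cell, 30 "answered prayers" look impressive.
    # An unbiased analysis compares conditional frequencies using all four cells.
    p_received_if_asked = counts[("asked", "received")] / (
        counts[("asked", "received")] + counts[("asked", "not_received")])
    p_received_if_not = counts[("not_asked", "received")] / (
        counts[("not_asked", "received")] + counts[("not_asked", "not_received")])

    print(p_received_if_asked, p_received_if_not)  # 0.5 0.5 -> no association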
Attentional biases can also influence what information people are likely to focus upon. For instance, patients with
anxiety disorders
[2]
and chronic pain
[3]
show increased attention to information representing their concerns (i.e.,
angry and painful facial expressions respectively) in studies using the dot-probe paradigm. It is important to note that
two different forms of attentional bias may be measured. A 'within-subjects' bias occurs when an individual displays
greater bias towards one type of information (e.g., painful faces) when compared to different types of information
(e.g., neutral faces). A 'between-subjects' bias, alternatively, occurs when one group of participants displays greater bias than another group of participants (e.g., chronic pain patients showing greater bias towards painful expressions than healthy control participants). These two types of bias arise from different mechanisms, and both are not always present in the same sample of participants. Another commonly used paradigm for measuring attentional biases is the Stroop paradigm.
Attentional Bias and Smoking
Recent research has found a strong correlation between smoking cues and attentional bias. These studies not only illustrate the importance of attentional bias in addiction and craving, but also change how we look at addiction from a scientific standpoint. The behavioral aspects of craving are extensively covered; recent research also supports a significant role for the perceptual and neurological aspects of attentional bias.
Smoking Cues
The Stroop paradigm is used in attentional bias research to distinguish types of smoking cues and their effects on smokers. Research using the Stroop paradigm has tested smoking-related words such as cigarette, puff, and smoke against negative-affect words such as sick, pain, and guilty, positive-affect words such as safe, glad, and hopeful, and neutral words such as tool, shovel, and hammer. Results showed slower reaction times for both the smoking-related and the negative-affect word lists. A slower reaction time indicates lingering attention, or attentional bias, on the part of the participant. This is significant because the task calls for the participant to focus on the color of the word rather than its meaning, possibly implicating an underlying negative feeling towards their own smoking behavior.
[4]
Smokers show attentional bias even to subliminal images and are therefore more likely to be influenced by environmental cues such as seeing other people smoking, advertisements for cigarettes, or triggers such as coffee or alcohol.
[5]
This further illustrates that the influence of smoking cues implicates attentional bias in reinforcing nicotine dependence. Smokers may also have underlying negative feelings toward smoking: when asked to think of the negative consequences of smoking, they showed less craving than those who were encouraged to smoke.
[6]
This illustrates the influence of attentional bias on environmental smoking cues and could contribute to a smoker's inability to quit.
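As a rough illustration of how such Stroop data are analyzed, the following Python sketch computes an interference score for each word list relative to the neutral baseline; the reaction times are hypothetical, not the cited studies' data:

    # Hypothetical color-naming latencies (ms), one list per word category;
    # none of these numbers come from the cited studies.
    from statistics import mean

    rt_ms = {
        "smoking":  [642, 655, 661, 648],  # cigarette, puff, smoke, ...
        "negative": [640, 652, 659, 645],  # sick, pain, guilty, ...
        "positive": [601, 598, 605, 607],  # safe, glad, hopeful, ...
        "neutral":  [599, 603, 600, 604],  # tool, shovel, hammer, ...
    }

    baseline = mean(rt_ms["neutral"])
    for category, times in rt_ms.items():
        interference = mean(times) - baseline  # extra latency = lingering attention
        print(f"{category:>8}: {interference:+6.1f} ms vs. neutral")
    # Larger interference for the smoking and negative lists indexes attentional
    # bias: word meaning captures attention although the task is color naming.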
Similar Stroop-paradigm studies have suggested that attentional bias does not depend on smoking itself, but rather on the person who smokes. A recent study required one group of smokers to refrain from smoking the night before and another to refrain for less than an hour beforehand. Abstinence from smoking produced slower reaction times, but a smoke break between study sessions speeded reaction times again. Researchers say this shows that nicotine dependence intensifies attentional bias, but that, for lack of evidence, the bias cannot be said to depend directly on smoking itself.
[7]
The longer reaction times suggest that smokers craving a cigarette linger on smoking-related words.
[8]
Smokers and smokers attempting to quit displayed the same slower reaction times for smoking-related words,
[9]
supporting research suggesting that attentional bias is a behavioral mechanism rather than a dependence mechanism.
Neurological Basis
Attentional bias, often seen in eye-tracking movements, is thought to be an underlying factor in addiction. Smokers
linger on smoking cues compared with neutral cues. Researchers found higher activation in the insular cortex, the
orbitofrontal cortex and the amygdala when presented with smoking cues. The orbitofrontal cortex is known to be
coordinated with drug-seeking behavior and the insular cortex and amygdala are involved in the autonomic and
emotional state of an individual.
[10][11]
Neural activity is also known to decrease at the beginning of smoking, focusing the smoker's attention on the upcoming cigarette. Therefore, when smoking cues are nearby, it is harder for a smoker to concentrate on other tasks.
This is seen in the activation of the dorsal anterior cingulate cortex, known for focusing attention on relevant
stimuli.
[12][13]
References
[1] Nisbett, R.E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, N.J.: Prentice-Hall.
[2] Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M.J., & van IJzendoorn, M.H. (2007). Threat-related attentional bias in anxious and non-anxious individuals: A meta-analytic study. Psychological Bulletin.
[3] Schoth, D.E., & Liossi, C. (2010). Attentional bias towards pictorial representations of pain in individuals with chronic headache. The Clinical Journal of Pain, 26 (3): 244–250.
[4] Drobes, David J.; Elibero, Andrea; Evans, David E. (2006). "Attentional bias for smoking and affective stimuli: A Stroop task study". Psychology of Addictive Behaviors 20 (4): 490–495. doi:10.1037/0893-164X.20.4.490. ISSN 1939-1501.
[5] Yan, Xiaodan; Jiang, Yi; Wang, Jin; Deng, Yuan; He, Sheng; Weng, Xuchu (2009). "Preconscious attentional bias in cigarette smokers: a probe into awareness modulation on attentional bias". Addiction Biology 14 (4): 478–488. doi:10.1111/j.1369-1600.2009.00172.x. ISSN 1355-6215.
[6] Szasz, P. L., Szentagotai, A., & Hofmann, S. G. (2012). Effects of emotion regulation strategies on smoking craving, attentional bias, and task persistence. Behaviour Research and Therapy, 50, 333–340.
[7] Canamar, Catherine P.; London, Edythe (2012). "Acute cigarette smoking reduces latencies on a Smoking Stroop test". Addictive Behaviors 37 (5): 627–631. doi:10.1016/j.addbeh.2012.01.017. ISSN 0306-4603.
[8] Field, M., Munafò, M. R., & Franken, I. A. (2009). A meta-analytic investigation of the relationship between attentional bias and subjective craving in substance abuse. Psychological Bulletin, 135(4), 589–607. doi:10.1037/a0015843
[9] Cane, J. E., Sharma, D. D., & Albery, I. P. (2009). The addiction Stroop task: Examining the fast and slow effects of smoking and marijuana-related cues. Journal of Psychopharmacology, 23(5), 510–519. doi:10.1177/0269881108091253
[10] Janes, A. C., Pizzagalli, D. A., Richardt, S., Frederick, B. D. B., Holmes, A. J., Sousa, J., … Kaufman, M. J. (2012). Neural substrates of attentional bias for smoking-related cues: An fMRI study. Neuropsychopharmacology, 35, 2339–2345.
[11] Kang, O-Seok; Chang, Dong-Seon; Jahng, Geon-Ho; Kim, Song-Yi; Kim, Hackjin; Kim, Jong-Woo; Chung, Sun-Yong; Yang, Seung-In et al. (2012). "Individual differences in smoking-related cue reactivity in smokers: An eye-tracking and fMRI study". Progress in Neuro-Psychopharmacology and Biological Psychiatry 38 (2): 285–293. doi:10.1016/j.pnpbp.2012.04.013. ISSN 0278-5846.
[12] Luijten, M., Veltman, D., den Brink, W., Hester, R., Field, M., Smits, M., & Franken, I. (2011). Neurobiological substrate of smoking-related attentional bias. Neuroimage, 54(3), 2374–2381. doi:10.1016/j.neuroimage.2010.09.064
[13] Stippekohl, B., Walter, B., Winkler, M. H., Mucha, R. F., Pauli, P., Vaitl, D., & Stark, R. (2012). An early attentional bias to BEGIN stimuli of the smoking ritual is accompanied with mesocorticolimbic deactivations in smokers. Psychopharmacology, 222, 593–607.
Further reading
Baron, Jonathan. (2000). Thinking and Deciding (3d edition). Cambridge University Press.
Smith, N.K.; Chartrand, T.L.; Larsen, J.T.; Cacioppo, J.T.; Katafiasz, H.A.; Moran, K.E. (2006). "Being bad isn't always good: Affective context moderates the attention bias towards negative information" (http://psychology.uchicago.edu/people/faculty/cacioppo/jtcreprints/beingbadisnt.pdf). Journal of Personality and Social Psychology 90 (2): 210–220. doi:10.1037/0022-3514.90.2.210. PMID 16536647.
Availability heuristic
The availability heuristic is a mental shortcut that occurs when people make judgments about the probability of
events by the ease with which examples come to mind. The availability heuristic operates on the notion that, "if you
can think of it, it must be important." The availability of consequences associated with an action is positively related
to perceptions of the magnitude of the consequences of that action. In other words, the easier it is to recall the
consequences of something, the greater we perceive these consequences to be. Sometimes, this heuristic is
beneficial, but the frequencies with which events come to mind are usually not accurate reflections of their actual
probability in real life.
[1]
For example, if someone asked you whether your college had more students from Colorado
or more from California, under the availability heuristic, you would probably answer the question based on the
relative availability of examples of Colorado students and California students. If you can recall more students you know who come from California, you will be more likely to conclude that more students at your college are from California than from Colorado.
[2]
Overview and History
When faced with the difficult task of judging probability or frequency, people use a limited number of strategies,
called heuristics, to simplify these judgements. One of these strategies, the availability heuristic, is the tendency to
make a judgement about the frequency of an event based on how easy it is to recall similar instances.
[1]
In 1973,
Amos Tversky and Daniel Kahneman first studied this phenomenon and labeled it the availability heuristic. The
availability heuristic is an unconscious process that operates on the notion that, "if you can think of it, it must be
important."
[1]
In other words, how easily an example can be called to mind is related to perceptions about how often
this event occurs. Thus, people tend to base their beliefs about a relatively distant concept on a readily accessible attribute.
[3]
In an experiment to test this heuristic, Tversky and Kahneman presented participants with four lists of names: two
lists with the names of 19 famous women and 20 less famous men, and two lists with the names of 19 famous men
and 20 less famous women. The first group was asked to recall as many names as possible and the second group was
asked to estimate which class of names was more frequent: famous or less famous. The famous names were more easily recalled than the less famous names, and despite the fact that the less famous names were more
frequent, the majority of the participants incorrectly judged that the famous names occurred more often. While the
availability heuristic is an effective strategy in many situations, when judging probability the availability heuristic
can lead to systematic errors.
[1]
Research
In a study by Schwarz et al., participants were asked to describe either 6 or 12 examples of assertive, or unassertive
behavior. Participants were later asked to rate their own assertiveness. The results indicated that participants rated
themselves as more assertive after describing 6, rather than 12, examples for the assertive behavior condition, and
conversely rated themselves as less assertive after describing 6, rather than 12, examples for the unassertive behavior
condition. The study showed that the impact of the recalled content was qualified by the ease with which the content could be brought to mind (it was easier to recall 6 examples than 12).
[4]
In another study, subjects were asked: "If a random word is taken from an English text, is it more likely that the word starts with a K, or that K is the third letter?" Most English-speaking people could immediately think of many words
that begin with the letter "K" (kangaroo, kitchen, kale), but it would take a more concentrated effort to think of any
words where "K" is the third letter (acknowledge). Results indicated that participants overestimated the number of
words that began with the letter K, but underestimated the number of words that had K as the third letter.
Researchers concluded that people answer questions like these by comparing the availability of the two categories
and assessing how easily they can recall these instances. In other words, it is easier to think of words that begin with
"K", than words with "K" as the third letter. Thus, people judge words beginning with a "K" to be a more common
occurrence. In reality, however, a typical text contains at least twice as many words with "K" in the third position as words beginning with "K"; by some counts the ratio is closer to three to one.
[1]
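The claim is straightforward to check against a word list. Here is a hedged Python sketch; the dictionary path is an assumption (any plain one-word-per-line file will do), and note that a dictionary counts word types, whereas Tversky and Kahneman's claim concerns running text:

    # Count word types with "k" first versus third; the file path is an
    # assumption, not a requirement.
    def count_k_positions(path="/usr/share/dict/words"):
        first = third = 0
        with open(path) as f:
            for line in f:
                word = line.strip().lower()
                if word.startswith("k"):
                    first += 1
                if len(word) >= 3 and word[2] == "k":
                    third += 1
        return first, third

    first, third = count_k_positions()
    print(f"K first: {first}, K third: {third}")
    # Caveat: this counts word types; in running text, frequent words like
    # "like" and "make" weight the third-letter position even more heavily.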
Chapman (1967) described a bias in the judgment of the frequency with which two events co-occur. This
demonstration showed that the co-occurrence of paired stimuli resulted in participants overestimating the frequency
of the pairings.
[5]
To test this idea, participants were given information about several hypothetical mental patients.
The data for each patient consisted of a clinical diagnosis and a drawing made by the patient. Later, participants
estimated the frequency with which each diagnosis had been accompanied by various features of the drawing. The
subjects vastly overestimated the frequency of this co-occurrence (such as suspiciousness and peculiar eyes). This
effect was labeled the illusory correlation. Tversky and Kahneman suggested that availability provides a natural
account for the illusory-correlation effect. The strength of the association between two events could provide the basis
for the judgment of how frequently the two events co-occur. When the association is strong, it becomes more likely
to conclude that the events have been paired frequently. Strong associations will be thought of as having occurred
together frequently.
[1]
Research in 1992 used mood manipulation to influence the availability heuristic by placing participants into a sad
mood condition or a happy mood condition. People in the sad mood condition recalled better than those in the happy
mood condition, revealing that the power of the availability heuristic changes in certain conditions.
[6]
Examples
A person claims to a group of friends that those who drive red cars receive more speeding tickets. The group
agrees with the statement because a member of the group drives a red car and frequently receives speeding
tickets. The reality could be that he just drives fast and would receive a speeding ticket regardless of the color of
car that he drove. Even if statistics show fewer speeding tickets were given to red cars than to other colors of cars,
he is an available example which makes the statement seem more plausible.
[7]
Where an anecdote ("I know a Brazilian man who...") is used to "prove" an entire proposition or to support a bias,
the availability heuristic is in play. In these instances the ease of imagining an example or the vividness and
emotional impact of that example becomes more credible than actual statistical probability. Because an example
is easily brought to mind or mentally "available," the single example is considered as representative of the whole
rather than as just a single example in a range of data.
[1]
A specific example of this taking place would be when a
person argues that cigarette smoking is not unhealthy because his grandfather smoked three packs of cigarettes
each day and lived to be 100 years old. The grandfather's health could simply be an unusual case that does not
speak to the health of smokers in general.
[8]
A person sees several news stories of cats leaping out of tall trees and surviving, so he believes that cats must be
robust to long falls. However, these kinds of news reports are far more common than reports where a cat falls out of a tree and dies, which may in fact be the more common event.
[1]
A recent newspaper subscriber might compare the number of newspapers delivered versus those that were not
delivered in order to calculate newspaper delivery failure. In this case, the calculation of delivery failure depends
on the number of incidents recalled. However, it will be hard to recall all specific instances if the subscriber is
trying to recall all newspaper deliveries over an extensive period of time.
[9]
After seeing many news stories of home foreclosures, people may judge that the likelihood of this event is greater.
This may be true because it is easier to think of examples of this event.
[1]
Applications
Media
After seeing news stories about child abductions, people may judge that the likelihood of this event is greater. Media
coverage can help fuel a person's example bias with widespread and extensive coverage of unusual events, such as
homicide or airline accidents, and less coverage of more routine, less sensational events, such as common diseases or
car accidents. For example, when asked to rate the probability of a variety of causes of death, people tend to rate
"newsworthy" events as more likely because they can more readily recall an example from memory. For example, in
the USA, people rate the chance of death by homicide higher than the chance of death by stomach cancer, even though deaths from stomach cancer are five times more frequent than deaths from homicide. Moreover, unusual and vivid events
like homicides, shark attacks, or lightning are more often reported in mass media than common and unsensational
causes of death like common diseases.
[10]
For example, many people think that the likelihood of dying from shark attacks is greater than that of dying from
being hit by falling airplane parts, when more people actually die from falling airplane parts. When a shark attack
occurs, the deaths are widely reported in the media whereas deaths as a result of being hit by falling airplane parts
are rarely reported in the media.
[11]
In a 2010 study exploring how vivid television portrayals are used when forming social reality judgments, people
watching vivid violent media gave higher estimates of the prevalence of crime and police immorality in the real
world than those not exposed to vivid television. These results suggest that television violence does in fact have a direct causal impact on participants' social reality beliefs. Repeated exposure to vivid violence leads to an increase in people's risk estimates about the prevalence of crime and violence in the real world.
[12]
Counter to these findings,
researchers from a similar study argued that these effects may be due to the effects of new information. They tested the new-information effect by showing movies depicting dramatic risk events and measuring participants' risk assessments after the film. Contrary to previous research, there were no effects on risk perception due to exposure to dramatic movies.
[13]
Health
Researchers in the department of psychology at Nancy University, France, examined the impact of the availability heuristic on perceptions of health-related events: lifetime risk of breast cancer, subjective life expectancy, and subjective age of onset of menopause.
[14]
For each event, three conditions were set up: control,
anchoring heuristic, and availability heuristic. The findings revealed that availability and anchoring were being used
to estimate personal health-related events. Hypochondriac tendencies, optimism, depressive mood, subjective health,
internal locus of control and recall of information had a significant impact on judgments of riskiness. Availability
also impacted perceived health risks.
[14]
In another study, risk assessments of contracting breast cancer were based on experiences with an abnormal breast symptom and experiences with affected family members and friends. Researchers analyzed interviews with women talking about their own breast cancer risk. They found that the availability, simulation, representativeness, affect, and perceived-control heuristics, along with a search for a dominance structure, were most frequently used in making risk assessments.
[15]
Researchers examined the role of cognitive heuristics in the AIDS risk-assessment process. 331 physicians reported their worry about on-the-job HIV exposure and their experience with patients who have HIV. The researchers tested whether participants used the availability heuristic by analyzing their responses to questions about talking and reading about AIDS. Availability of AIDS information did not relate strongly to perceived risk, and availability was not significantly related to worry after variance associated with simulation and experience with AIDS was removed.
[16]
Participants in a 1992 study read case descriptions of hypothetical patients who varied on their sex and sexual
preference. These hypothetical patients showed symptoms of two different diseases. Participants were instructed to
indicate which disease they thought the patient had and then they rated patient responsibility and interactional
desirability. Consistent with the availability heuristic, either the more common (influenza) or the more publicized
(AIDS) disease was chosen.
[17]
Business and Economy
A previous study sought to analyze the role of the availability heuristic in financial markets. Researchers defined and
tested two aspects of the availability heuristic:
[18]
1. Outcome availability: the availability of positive and negative investment outcomes
2. Risk availability: the availability of financial risk
[18]
Researchers tested the availability effect on investors' reactions to analyst recommendation revisions and found that
positive stock price reactions to recommendation upgrades are stronger when accompanied by positive stock market
index returns. On the other hand, negative stock price reactions to recommendation downgrades are stronger when
accompanied by negative stock market index returns. On days of substantial stock market moves, abnormal stock
price reactions to upgrades are weaker, and abnormal stock price reactions to downgrades are stronger. These
availability effects are still significant even after controlling for event-specific and company-specific factors.
[18]
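Schematically, the availability test amounts to grouping event-day reactions by the sign of the same-day market return. The following Python sketch shows the comparison with hypothetical numbers, not the study's dataset:

    # Hypothetical (event-day market return %, stock reaction to upgrade %)
    # pairs; none of these values come from the cited study.
    from statistics import mean

    events = [(+1.2, 3.1), (+0.8, 2.7), (+0.3, 2.2),
              (-0.5, 1.4), (-1.1, 1.0), (-0.2, 1.6)]

    up_days = [reaction for market, reaction in events if market > 0]
    down_days = [reaction for market, reaction in events if market <= 0]

    print(f"upgrade reaction on up days:   {mean(up_days):.2f}%")    # 2.67%
    print(f"upgrade reaction on down days: {mean(down_days):.2f}%")  # 1.33%
    # A larger mean reaction on up days is the "outcome availability" pattern:
    # readily available positive market outcomes amplify reactions to good news.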
Similarly, research has pointed out that under the availability heuristic, humans are not reliable because they assess
probabilities by overweighting current or easily recalled information instead of processing all relevant information.
Since information regarding the current state of the economy is readily available, researchers used the properties of business cycles to predict the availability bias in analysts' growth forecasts. The availability heuristic was shown to play a role in these forecasts and, through them, to influence investments.
[19]
Additionally, a study by Hayibor and Wasieleski found that the availability of others who believe that a particular act
is morally acceptable is positively related to others' perceptions of the morality of that act. This suggests that
availability heuristic also has an effect on ethical decision making and ethical behavior in organizations.
[20]
Education
A study by Craig R. Fox provides an example of how the availability heuristic can work in the classroom. Fox tested whether difficulty of recall influences judgment, specifically in course evaluations among college students.
students. In his study he had two groups complete a course evaluation form. He asked the first group to write two
recommended improvements for the course (a relatively easy task) and then write two positives about the class. The
second group was asked to write ten suggestions where the professor could improve (a relatively difficult task) and
then write two positive comments about the course. At the end of the evaluation both groups were asked to rate the
course on a scale from one to seven. The results showed that students asked to write ten suggestions (difficult task)
rated the course less harshly because it was more difficult for them to recall the information. Students asked to do the
easier evaluation with only two complaints had less difficulty in terms of availability of information, resulting in a
harsher rating of the course.
[21]
Criminal Justice
The media usually focus on violent or extreme cases, which are more readily available in the public's mind. This may come into play when it is time for the judicial system to evaluate and determine the proper punishment for a crime. In a
previous study, respondents rated how much they agreed with hypothetical laws and policies such as "Would you
support a law that required all offenders convicted of unarmed muggings to serve a minimum prison term of two
years?" Participants then read cases and rated each case on several questions about punishment. As hypothesized,
respondents more easily recalled from long-term memory those stories that contained severe harm, which seemed to push their sentencing choices toward harsher punishments. This effect can be eliminated by adding highly concrete or contextually distinct details about less severe injuries to the crime stories.
[22]
A similar study asked jurors and college students to choose sentences on four severe criminal cases in which prison
was a possible but not inevitable sentencing outcome. Respondents answering questions about court performance on
a public opinion poll formulated a picture of what the courts do and then evaluated the appropriateness of that behavior. Respondents drew on public information about crime and sentencing. This type of information is incomplete
because the news media present a highly selective and non-representative selection of crime, focusing on the violent
and extreme, rather than the ordinary. This makes most people think that judges are too lenient. But, when asked to
choose the punishments, the sentences given by students were equal to or less severe than those given by judges. In
other words, the availability heuristic made people believe that judges and jurors were too lenient in the courtroom,
but the participants gave similar sentences when placed in the position of the judge, suggesting that the information
they recalled was not correct.
[23]
Researchers in 1989 predicted that mock jurors would rate a witness to be more deceptive if the witness testified
truthfully before lying than when the witness was caught lying first before telling the truth. If the availability
heuristic played a role in this, lying second would remain in jurors' minds and they would be more likely to remember the witness lying than telling the truth. To test the hypothesis, 312 university students played the roles of mock
jurors and watched a videotape of a witness presenting testimony during a trial. Results confirmed the hypothesis, as
mock jurors were most influenced by the most recent act.
[24]
Critiques
Some researchers have suggested that perceived causes or reasons for an event, rather than imagery of the event
itself, influence probability estimates.
[25]
Evidence for this notion stems from a study in which participants either imagined the winner of the 1984 U.S. presidential debate between Ronald Reagan and Walter Mondale, or came up with reasons why one candidate or the other would win. The results of this study showed that imagining Reagan or Mondale
winning the debate had no effect on predictions of who would win the debate. However, imagining and considering
reasons for why Reagan or Mondale would win the debate did significantly affect predictions.
[25]
Other psychologists argue that the classic studies on the availability heuristic are vague and do not explain the
underlying processes.
[26]
For example, regarding the famous Tversky and Kahneman study, Wanke et al. believe that this differential ease of recall may alter subjects' frequency estimates in two different ways. In one way, as the availability heuristic suggests, the subjects may use the subjective experience of ease or difficulty of recall as a basis of judgment. Researchers also assert that if this is done, they would predict a higher frequency if the recall task is
experienced as easy rather than difficult. In a contrasting scenario, researchers suggest that the subjects may recall as
many words of each type as possible within the time given to them and may base their judgment on the recalled
sample of words. If it is easier to recall words which begin with a certain letter, these words would be
over-represented in the recalled sample, again producing a prediction of higher frequency. In the second scenario the
estimate would be based on recalled content rather than on the subjective experience of ease of recall.
[26]
Some researchers have shown concern about confounding variables in the original Tversky and Kahneman study.
[4]
Researchers question if the participants recalling celebrity names were basing frequency estimates on the amount of
content recalled or on the ease of recall. Some researchers suggest that the design of the earlier experiment was
flawed and did not actually determine how the availability heuristic works.
[4]
Recent research has provided some evidence that the availability heuristic is only one of many strategies involved in
frequency judgment.
[27]
Future research should attempt to incorporate all these factors.
References
[1] Tversky, A.; Kahneman, D. (1973). "Availability: A heuristic for judging frequency and probability". Cognitive Psychology 5 (1): 207–233. doi:10.1016/0010-0285(73)90033-9.
[2] Matlin, Margaret (2009). Cognition. Hoboken, NJ: John Wiley & Sons, Inc. p. 413. ISBN 978-0-470-08764-0.
[3] Kahneman, D.; Tversky, A. (January 1982). "The psychology of preferences". Scientific American 246: 160–173.
[4] Schwarz, N.; Strack, F., Bless, H., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). "Ease of retrieval as information: Another look at the availability heuristic". Journal of Personality and Social Psychology 61 (2): 195–202.
[5] Chapman, L.J. (1967). "Illusory correlation in observational report". Journal of Verbal Learning 6: 151–155.
[6] Colin, M.; Campbell, L. (1992). "Memory accessibility and probability of judgements: An experimental evaluation of the availability heuristic". Journal of Personality and Social Psychology 63 (6): 890–902.
[7] Manis, Melvin; Shelder, J., Jonides, J., Nelson, N.E. (1993). "Availability Heuristic in Judgments of Set Size and Frequency of Occurrence". Journal of Personality and Social Psychology 65 (3): 448–457.
[8] Esgate, A.; Groome, D. (2004). An Introduction to Applied Cognitive Psychology. Psychology Press. ISBN 1841693170.
[9] Folkes, Valerie S. (June 1988). "The Availability Heuristic and Perceived Risk". Journal of Consumer Research 15 (1).
[10] Briñol, P.; Petty, R.E., & Tormala, Z.L. (2006). "The malleable meaning of subjective ease". Psychological Science 17: 200–206. doi:10.1111/j.1467-9280.2006.01686.x.
[11] Read, J.D. (1995). "The availability heuristic in person identification: The sometimes misleading consequences of enhanced contextual information". Applied Cognitive Psychology 9: 91–121.
[12] Riddle, Karen (2010). "Always on My Mind: Exploring How Frequent, Recent, and Vivid Television Portrayals Are Used in the Formation of Social Reality Judgments". Media Psychology 13: 155–179. doi:10.1080/15213261003800140.
[13] Sjoberg, Lennart; Engelberg, E. (2010). "Risk Perception and Movies: A Study of Availability as a Factor in Risk Perception". Risk Analysis 30 (1): 95–106. doi:10.1111/j.1539-6924.2009.01335.x.
[14] Gana, K.; Lourel, M., Trouillet, R., Fort, I., Mezred, D., Blaison, C., Boujemadi, V., K'Delant, P., Ledrich, J. (2010). "Judgment of riskiness: Impact of personality, naive theories and heuristic thinking among female students". Psychology and Health 25 (2): 131–147. doi:10.1080/08870440802207975.
[15] Katapodi, M.C.; Facione, N.C., Humphreys, J.C., Dodd, M.J. (2005). "Perceived breast cancer risk: Heuristic reasoning and search for a dominance structure". Social Science & Medicine 60 (2): 421–432.
[16] Heath, Linda; Acklin, M., Wiley, K. (1991). "Cognitive heuristics and AIDS risk assessment among physicians". Journal of Applied Social Psychology 21 (22): 1859–1867.
[17] Triplet, R.G. (1992). "Discriminatory biases in the perception of illness: The application of availability and representativeness heuristics to the AIDS crisis". Basic and Applied Social Psychology 13 (3): 303–322.
[18] Klinger, D.; Kudryavtsev, A. (2010). "The availability heuristic and investors' reactions to company-specific events". The Journal of Behavioral Finance 11: 50–65. doi:10.1080/15427561003591116.
[19] Lee, B.; O'Brien, J., Sivaramakrishnan, K. (2008). "An Analysis of Financial Analysts' Optimism in Long-term Growth Forecasts". The Journal of Behavioral Finance 9: 171–184. doi:10.1080/15427560802341889.
[20] Hayibor, S.; Wasieleski, D.M. (2009). "Effects of the use of availability". Journal of Business Ethics 84: 151–165. doi:10.1007/s10551-008-9690-7.
[21] Fox, Craig R. (July 2006). "The availability heuristic in the classroom: How soliciting more criticism can boost your course ratings". Judgment and Decision Making 1 (1): 86–90.
[22] Stalans, L.J. (1993). "Citizens' crime stereotypes, biased recall, and punishment preferences in abstract cases". Law and Human Behavior 17: 451–469.
[23] Diamond, S.S.; Stalans, L.J. (1989). "The myth of judicial leniency in sentencing". Behavioral Sciences & the Law 7: 73–89.
[24] DeTurck, M.A.; Texter, L.A., Harszlak, J.J. (1989). "Effects of information processing objectives on judgments of deception following perjury". Communication Research 16 (3): 434–452.
[25] Levi, A.; Pryor, J.B. (1987). "Use of the availability heuristic in probability estimates of future events: The effects of imagining outcomes versus imagining reasons". Organizational Behavior & Human Performance 40 (2).
[26] Wanke, M.; Schwarz, N., Bless, H. (1995). "The availability heuristic revisited: Experienced ease of retrieval in mundane frequency estimates". Acta Psychologica 89: 83–90.
[27] Hulme, C.; Roodenrys, S., Brown, G., Mercer, R. (1995). "The role of long-term memory mechanisms in memory span". British Journal of Psychology 86 (4): 527–536. doi:10.1111/j.2044-8295.1995.tb02570.x.
External links
How Belief Works (http://www.tryingtothink.org/wiki/How_Belief_Works) - an article on the origins of the availability bias.
Availability cascade
An availability cascade is a self-reinforcing cycle that explains the development of certain kinds of collective
beliefs. A novel idea or insight, usually one that seems to explain a complex process in a simple or straightforward
manner, gains rapid currency in the popular discourse by its very simplicity and by its apparent insightfulness. Its
rising popularity triggers a chain reaction within the social network: individuals adopt the new insight because other
people within the network have adopted it, and on its face it seems plausible. The reason for this increased use and
popularity of the new idea involves both the availability of the previously obscure term or idea, and the need of
individuals using the term or idea to appear to be current with the stated beliefs and ideas of others, regardless of
whether they in fact fully believe in the idea that they are expressing. Their need for social acceptance, and the
apparent sophistication of the new insight, overwhelm their critical thinking.
The idea of the availability cascade was first developed by Timur Kuran and Cass Sunstein, building upon the
concept of information cascades and on the availability bias as identified by Daniel Kahneman and Amos Tversky.
The concept has been highly influential in finance theory and regulatory research.
[1]
References
[1] Availability Cascades and Risk Regulation (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=138144)
Confirmation bias
Confirmation bias (also called confirmatory bias or myside bias) is a tendency of people to favor information that
confirms their beliefs or hypotheses.
[1][2]
People display this bias when they gather or remember information
selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for
deeply entrenched beliefs. For example, in reading about current political issues, people usually prefer sources that
affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position.
Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement
becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance
(when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance
on information encountered early in a series) and illusory correlation (when people falsely perceive an association
between two events or situations).
A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later
work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and
ignoring alternatives. In certain situations, this tendency can bias people's conclusions. Explanations for the observed
biases include wishful thinking and the limited human capacity to process information. Another explanation is that
people show confirmation bias because they are weighing up the costs of being wrong, rather than investigating in a
neutral, scientific way.
Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the
face of contrary evidence. Poor decisions due to these biases have been found in military, political, and
organizational contexts.
Types
Confirmation biases are effects in information processing, distinct from the behavioral confirmation effect, also
called "self-fulfilling prophecy", in which people's expectations affect their behaviour to make the expectations come
true.
[3]
Some psychologists use "confirmation bias" to refer to any way in which people avoid rejecting a belief,
whether in searching for evidence, interpreting it, or recalling it from memory. Others restrict the term to selective
collection of evidence.
[4][5]
Biased search for information
(Image caption: Confirmation bias has been described as an internal "yes man", echoing back a person's beliefs like Charles Dickens' character Uriah Heep.[6])
Experiments have repeatedly found that people tend to test hypotheses
in a one-sided way, by searching for evidence consistent with the
hypothesis they hold at a given time.
[7][8]
Rather than searching
through all the relevant evidence, they ask questions that are phrased
so that an affirmative answer supports their hypothesis.
[9]
They look
for the consequences that they would expect if their hypothesis were
true, rather than what would happen if it were false.
[9]
For example,
someone who is trying to identify a number using yes/no questions and
suspects that the number is 3 might ask, "Is it an odd number?" People
prefer this sort of question, called a "positive test", even when a
negative test such as "Is it an even number?" would yield exactly the
same information.
[10]
However, this does not mean that people seek
tests that are guaranteed to give a positive answer. In studies where
subjects could select either such pseudo-tests or genuinely diagnostic
ones, they favored the genuinely diagnostic.
[11][12]
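To see why the two questions are informationally equivalent, note that they induce the same partition of the candidate numbers: a "yes" to one is exactly a "no" to the other. A minimal sketch of this point (the candidate set and code are illustrative, not from the article's sources):

# Either question splits the candidates into the same two groups,
# so the answer eliminates the same hypotheses either way.
candidates = set(range(1, 10))
is_odd = lambda n: n % 2 == 1
is_even = lambda n: n % 2 == 0
survivors_positive = {n for n in candidates if is_odd(n)}       # "Is it odd?" -> yes
survivors_negative = {n for n in candidates if not is_even(n)}  # "Is it even?" -> no
assert survivors_positive == survivors_negative  # identical information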
The preference for positive tests is not itself a bias, since positive tests can be highly informative.[13] However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true.[14] In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior.[8] Thus any search for evidence in favor of a hypothesis is likely to succeed.[14]
One illustration of this is the way the phrasing of a question can significantly change the answer.[8] For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?"[15] Even a small change in the wording of a question can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case.[16] Subjects read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take him or her away for long periods. When asked, "Which parent should have custody of the child?" the subjects looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, but again a majority answered Parent B, implying that Parent A should have custody.[16]
Similar studies have demonstrated how people engage in biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests, where they are available. In an initial experiment, subjects had to rate another person on the introversion-extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them.[17] However, a later version of the experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?"[18] Subjects preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.[18]
Another experiment gave subjects a particularly complex rule-discovery task involving moving objects simulated by a computer.[19] Objects on the computer screen followed specific laws, which the subjects had to figure out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts over a ten-hour session, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing evidence that objectively refuted their working hypotheses, they frequently continued doing the same tests. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect.[19]
Biased interpretation
"Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."
Michael Shermer
[20]
Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information,
the way they interpret it can be biased.
A team at Stanford University ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against.[21][22] Each of these subjects read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of each study's procedure and had to rate how well-conducted and convincing that research was.[21] In fact, the studies were fictional. Half the subjects were told that one kind of study supported the deterrent effect and the other undermined it, while for other subjects the conclusions were swapped.[21][22]

The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Subjects described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways.[21][23] Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, "The research didn't cover a long enough period of time", while an opponent's comment on the same study said, "No strong evidence to contradict the researchers has been presented".[21] The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as "disconfirmation bias", has been supported by other experiments.[24]
[Image: An MRI scanner allowed researchers to examine how the human brain deals with unwelcome information.]
A study of biased interpretation took place during the 2004 US presidential election and involved subjects who described themselves as having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual's statements were inconsistent. There were strong differences in these evaluations, with subjects much more likely to interpret statements by the candidate they opposed as contradictory.[25]

In this experiment, the subjects made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As subjects evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the subjects were actively reducing the cognitive dissonance induced by reading about their favored candidate's irrational or hypocritical behavior.[25]
Biased interpretation is not restricted to emotionally significant topics. In another experiment, subjects were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character's guilt, they rated statements supporting that hypothesis as more important than conflicting statements.[26]
Biased memory
Even if someone has sought and interpreted evidence in a neutral manner, they may still remember it selectively to reinforce their expectations. This effect is called "selective recall", "confirmatory memory" or "access-biased memory".[27] Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled.[28] Some alternative approaches say that surprising information stands out more and so is more memorable.[28] Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.[29]
In one study, subjects read a profile of a woman which described a mix of introverted and extroverted behaviors.[30] They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group were told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the "librarian" group recalling more examples of introversion and the "sales" group recalling more extroverted behavior.[30] A selective memory effect has also been shown in experiments that manipulate the desirability of personality types.[28][31] In one of these, a group of subjects were shown evidence that extroverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extroverted. Each group of subjects provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.[32]
One study showed how selective memory can maintain belief in extrasensory perception (ESP).[33] Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, subjects recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.[33]
Related effects
Backfire effect
A similar cognitive bias is the backfire effect, in which individuals challenged with evidence contradictory to their beliefs tend to reject the evidence and instead become even firmer supporters of the initial belief.[34][35] The phrase was coined by Brendan Nyhan and Jason Reifler in a paper entitled "When Corrections Fail: The persistence of political misperceptions".[36]
Polarization of opinion
When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called "attitude polarization".[37] The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed "bingo baskets". Subjects knew that one basket contained 60% black and 40% red balls; the other, 40% black and 60% red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, subjects in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These subjects tended to grow more confident with each successive draw: whether they initially thought the basket with 60% black balls or the one with 60% red balls was the more likely source, their estimate of the probability increased. Another group of subjects were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.[38]
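The sense in which an alternating sequence favors neither basket can be made precise with Bayes' rule: each black ball multiplies the odds on the mostly-black basket by 0.6/0.4, and each red ball by 0.4/0.6, so successive draws cancel. A minimal sketch of the rational bookkeeping (illustrative code, not the experimental procedure):

# Bayesian update of P(draws come from the 60%-black basket)
# for an alternating black/red sequence.
p = 0.5  # prior: either basket equally likely
for i in range(6):
    ball = "black" if i % 2 == 0 else "red"
    like_a = 0.6 if ball == "black" else 0.4  # basket A is 60% black
    like_b = 0.4 if ball == "black" else 0.6  # basket B is 60% red
    p = p * like_a / (p * like_a + (1 - p) * like_b)
    print(f"after {ball}: P(A) = {p:.2f}")
# The posterior oscillates 0.60, 0.50, 0.60, 0.50, ... with no upward
# trend, so growing confidence in either basket is unwarranted.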
[Image: Strong opinions on an issue such as gun ownership can bias how someone interprets new evidence.]
A less abstract study was the Stanford biased interpretation experiment in which subjects with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes.[21] In later experiments, subjects also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real.[24][37][39] Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases. They found that it was prompted not only by considering mixed evidence, but by merely thinking about the topic.[37]
Charles Taber and Milton Lodge argued that the Stanford team's result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action.[24] They measured the attitudes of their subjects towards these issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read, from a list prepared by the experimenters. For example, they could read the National Rifle Association's and the Brady Anti-Handgun Coalition's arguments on gun control. Even when instructed to be even-handed, subjects were more likely to read arguments that supported their existing attitudes. This biased search for information correlated well with the polarization effect.[24]
Persistence of discredited beliefs
"[B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most
uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction
of their original evidential bases."
Lee Ross and Craig Anderson
[40]
Confirmation biases can be used to explain why some beliefs remain when the initial evidence for them is removed.[41] This belief perseverance effect has been shown by a series of experiments using what is called the "debriefing paradigm": subjects read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.[40] A typical finding is that at least some of the initial belief remains even after a full debrief.[42] In one experiment, subjects had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.[43]
In another study, subjects read job performance ratings of two firefighters, along with their responses to a risk aversion test.[40] These fictional data were arranged to show either a negative or positive association: some subjects were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague.[44] Even if these two case studies had been true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the subjects found them subjectively persuasive.[44] When the case studies were shown to be fictional, subjects' belief in a link diminished, but around half of the original effect remained.[40] Follow-up interviews established that the subjects had understood the debriefing and taken it seriously. Subjects seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.[44]
Preference for early information
Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as "intelligent, industrious, impulsive, critical, stubborn, envious" than when they are given the same words in reverse order.[45] This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace.[45] Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.[41]
One demonstration of irrational primacy involved colored chips supposedly drawn from two urns. Subjects were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them.[45] In fact, the colors appeared in a pre-arranged order. The first thirty draws favored one urn and the next thirty favored the other.[41] The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, subjects favored the urn suggested by the initial thirty.[45]
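To spell out why the full series is neutral: if urn 1 yields black chips with probability p and urn 2 with probability 1 − p, then a series containing thirty black and thirty white draws has likelihood ratio

P(series | urn 1) / P(series | urn 2) = (p / (1 − p))^30 × ((1 − p) / p)^30 = 1,

so a rational observer should finish the sixty draws exactly where they started, at even odds. (A sketch with generic symbols; the original report's exact distributions are not given here.)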
Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide.[45] After each slide, subjects had to state their best guess of what the object was. Subjects whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that other people could readily identify the object.[41]
Illusory association between events
Illusory correlation is the tendency to see non-existent correlations in a set of data.[46] This tendency was first demonstrated in a series of experiments in the late 1960s.[47] In one experiment, subjects read a set of psychiatric case studies, including responses to the Rorschach inkblot test. They reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the case studies were fictional and, in one version of the experiment, had been constructed so that the homosexual men were less likely to report this imagery.[46] In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.[46][47]

Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.[48]
This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior.[49] In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).[50] This parallels the reliance on positive tests in hypothesis testing.[49] It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.[49]
Example

Days            Rain    No rain
Arthritis        14        6
No arthritis      7        2

In the above fictional example, arthritic symptoms are more likely on days with no rain. However, people are likely to focus on the relatively large number of days which have both rain and symptoms. By concentrating on one cell of the table rather than all four, people can misperceive the relationship, in this case associating rain with arthritic symptoms.[51]
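Working through all four cells of the table makes the point concrete:

P(symptoms | rain) = 14 / (14 + 7) ≈ 0.67
P(symptoms | no rain) = 6 / (6 + 2) = 0.75

Attending only to the 14 rain-with-symptoms days, the single largest cell, reverses the apparent direction of the association.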
History
Informal observation

[Image: Francis Bacon]

Before psychological research on confirmation bias, the phenomenon had been observed anecdotally by writers, including the Greek historian Thucydides (c. 460 BC – c. 395 BC), Italian poet Dante Alighieri (1265–1321), English philosopher and scientist Francis Bacon (1561–1626),[52] and Russian author Leo Tolstoy (1828–1910). Thucydides, in the History of the Peloponnesian War, wrote, "it is a habit of mankind... to use sovereign reason to thrust aside what they do not fancy."[53] In the Divine Comedy, St. Thomas Aquinas cautions Dante when they meet in Paradise, "opinion – hasty – often can incline to the wrong side, and then affection for one's own opinion binds, confines the mind."[54]
Bacon, in the Novum Organum, wrote,
The human understanding when it has once adopted an opinion... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.][55]

Bacon said that biased assessment of evidence drove "all superstitions, whether in astrology, dreams, omens, divine judgments or the like".[55]
In his essay "What Is Art?", Tolstoy wrote,

I know that most men – not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems – can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty – conclusions of which they are proud, which they have taught to others, and on which they have built their lives.[56]
Wason's research on hypothesis-testing
The term "confirmation bias" was coined by English psychologist Peter Wason.
[57]
For an experiment published in
1960, he challenged subjects to identify a rule applying to triples of numbers. At the outset, they were told that
(2,4,6) fits the rule. Subjects could generate their own triples and the experimenter told them whether or not each
triple conformed to the rule.
[58][59]
While the actual rule was simply "any ascending sequence", the subjects had a great deal of difficulty in finding it,
often announcing rules that were far more specific, such as "the middle number is the average of the first and
last".
[58]
The subjects seemed to test only positive examplestriples that obeyed their hypothesized rule. For
example, if they thought the rule was, "Each number is two greater than its predecessor", they would offer a triple
that fit this rule, such as (11,13,15) rather than a triple that violates it, such as (11,12,19).
[60]
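A short simulation shows why the positive-test habit is a trap here: every triple drawn from the narrow hypothesis also satisfies the broad true rule, so the experimenter's answer is always "fits", and only a triple that violates the hypothesis can expose it. A minimal sketch (illustrative code, not from any study):

# Wason's actual rule: any ascending triple.
def true_rule(t):
    return t[0] < t[1] < t[2]

# A typical over-specific hypothesis: each number two more than the last.
def hypothesis(t):
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

# Positive tests: triples chosen to fit the hypothesis.
for t in [(1, 3, 5), (10, 12, 14), (100, 102, 104)]:
    assert hypothesis(t) and true_rule(t)  # always "fits the rule": nothing learned

# A triple that violates the hypothesis but still ascends...
t = (11, 12, 19)
assert not hypothesis(t) and true_rule(t)  # ..."fits the rule": hypothesis refuted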
Wason accepted falsificationism, according to which a scientific test of a hypothesis is a serious attempt to falsify it. He interpreted his results as showing a preference for confirmation over falsification, hence the term "confirmation bias".[61][62] Wason also used confirmation bias to explain the results of his selection task experiment.[63] In this task, subjects are given partial information about a set of objects, and have to specify what further information they would need to tell whether or not a conditional rule ("If A, then B") applies. It has been found repeatedly that people perform badly on various forms of this test, in most cases ignoring information that could potentially refute the rule.[64][65]
Klayman and Ha's critique
A 1987 paper by Joshua Klayman and Young-Won Ha argued that the Wason experiments had not actually demonstrated a bias towards confirmation. Instead, Klayman and Ha interpreted the results in terms of a tendency to make tests that are consistent with the working hypothesis.[66] They called this the "positive test strategy".[8] This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute.[2] Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person's prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests.[13] However, in Wason's rule discovery task the answer – three numbers in ascending order – is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels "DAX" and "MED" in place of "fits the rule" and "doesn't fit the rule". This avoided implying that the aim was to find a low-probability rule. Subjects had much more success with this version of the experiment.[67][68]
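Klayman and Ha's point about information content can be stated compactly: a yes/no test is worth at most one bit, and worth almost nothing when its answer is nearly a foregone conclusion. A minimal sketch of that calculation (illustrative numbers, not taken from the 1987 paper):

import math

def expected_bits(p_yes):
    """Entropy (in bits) of a yes/no test whose answer is 'yes' with probability p_yes."""
    if p_yes in (0.0, 1.0):
        return 0.0
    return -p_yes * math.log2(p_yes) - (1 - p_yes) * math.log2(1 - p_yes)

# Broad true rule, as in Wason's task: a positive test almost always gets "yes".
print(expected_bits(0.99))  # ~0.08 bits: nearly uninformative
# Rare, specific target rule: the same style of test is genuinely uncertain.
print(expected_bits(0.5))   # 1.0 bit: maximally informative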
[Diagram: If the true rule (T) encompasses the current hypothesis (H), then positive tests (examining an H to see if it is T) will not show that the hypothesis is false.]
[Diagram: If the true rule (T) overlaps the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.]
[Diagram: When the working hypothesis (H) includes the true rule (T), then positive tests are the only way to falsify H.]
In light of this and other critiques, the focus of research moved away from confirmation versus falsification to examine whether people test hypotheses in an informative way, or an uninformative but positive way. The search for "true" confirmation bias led psychologists to look at a wider range of effects in how people process information.[69]
Explanations
Confirmation bias is often described as a result of automatic, unintentional strategies rather than deliberate deception.[14][70] According to Robert MacCoun, most biased evidence processing occurs through a combination of both "cold" (cognitive) and "hot" (motivated) mechanisms.[71]
Cognitive explanations for confirmation bias are based on limitations in people's ability to handle complex tasks, and the shortcuts, called heuristics, that they use.[72] For example, people may judge the reliability of evidence by using the availability heuristic, i.e. how readily a particular idea comes to mind.[73] It is also possible that people can only focus on one thought at a time, so find it difficult to test alternative hypotheses in parallel.[74] Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.[13][75]
Motivational explanations involve an effect of desire on belief, sometimes called "wishful thinking".[76][77] It is known that people prefer pleasant thoughts over unpleasant ones in a number of ways: this is called the "Pollyanna principle".[78] Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true.[76] According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, "Can I believe this?" for some suggestions and, "Must I believe this?" for others.[79][80] Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information.[76] Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.[81]
Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors.[82] Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates.[83] Yaacov Trope and Akiva Liberman's refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend's honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend's honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way.[84]
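Trope and Liberman's argument is a standard piece of decision theory: when the two errors differ in cost, the belief threshold that minimizes expected loss shifts away from 50%, so a "biased" lean toward the cheaper error can be rational. A minimal sketch with made-up costs (the numbers are illustrative, not from their paper):

# Expected-cost comparison for trusting vs. distrusting a friend,
# with asymmetric error costs (illustrative values).
COST_DISTRUST_HONEST = 10.0  # wrongly suspecting an honest friend
COST_TRUST_DISHONEST = 2.0   # wrongly trusting a dishonest one

def best_action(p_honest):
    expected_cost_trust = (1 - p_honest) * COST_TRUST_DISHONEST
    expected_cost_distrust = p_honest * COST_DISTRUST_HONEST
    return "trust" if expected_cost_trust <= expected_cost_distrust else "distrust"

# Trusting remains optimal even when honesty is improbable:
print(best_action(0.2))  # 'trust': 0.8 * 2.0 = 1.6 < 0.2 * 10.0 = 2.0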
When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic.[85] This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, "Do you feel awkward in social situations?" rather than, "Do you like noisy parties?" The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.[85]
Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to need to justify their position to other people whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification.[86] Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they don't already know.[87] Because those conditions rarely exist, they argue, most people use confirmatory thought most of the time.[88]
Consequences
In finance
Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money.[6][89] In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate's debate performance in a neutral rather than partisan way were more likely to profit.[90] To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint "for the sake of argument".[91] In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.[6]
In physical and mental health
Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine.[92] If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course.[92] Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.[93][94][95]

Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach.[96] According to Beck, biased information processing is a factor in depression.[97] His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks.[52] Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.[98]
In politics and law
[Image: Mock trials allow researchers to examine confirmation biases in a realistic setting.]
Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to.[99] Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials.[100][101] Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.[102]

Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position.[103] On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that US Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.[64][104]
A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into "foxes" who maintained multiple hypotheses, and "hedgehogs" who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias – specifically, their inability to make use of new information that contradicted their existing theories.[105]
In the paranormal
One factor in the appeal of psychic "readings" is that listeners apply a confirmation bias which fits the psychic's statements to their own lives.[106] By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client.[106] Investigator James Randi compared the transcript of a reading to the client's report of what the psychic had said, and found that the client showed a strong selective recall of the "hits".[107]

As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids.[108] There are many different length measurements that can be made of, for example, the Great Pyramid of Giza, and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.[108]
In scientific procedure
A distinguishing feature of scientific thinking is the search for falsifying as well as confirming evidence.[109] However, many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.[109] Previous research has shown that the assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. It has been found several times that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs.[70][110][111] However, assuming that the research question is relevant, the experimental design adequate, and the data clearly and comprehensively described, the results found should matter to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions.[111]
Confirmation bias may thus be especially harmful to objective evaluations of nonconforming results, since biased individuals may regard opposing evidence as weak in principle and give little serious thought to revising their beliefs.[110] Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.[112]

In the context of scientific research, confirmation biases can sustain theories or research programs in the face of inadequate or even contradictory evidence;[64][113] the field of parapsychology has been particularly affected.[114]

An experimenter's confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter's expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias.[115] For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias.[115][116] The social process of peer review is thought to mitigate the effect of individual scientists' biases,[117] even though the peer review process itself may be susceptible to such biases.[111][118]
In self-image
Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases.[119] In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback.[120][121][122] They reduce the impact of such information by interpreting it as unreliable.[120][123][124] Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.[119]
Notes
[1] David Perkins, a cognitive scientist, coined the term "myside bias" referring to a preference for "my" side of an issue. (Baron 2000, p.195)
[2] Plous 1993, p.233
[3] Darley, John M.; Gross, Paget H. (2000), "A Hypothesis-Confirming Bias in Labelling Effects", in Stangor, Charles, Stereotypes and
prejudice: essential readings, Psychology Press, p.212, ISBN978-0-86377-589-5, OCLC42823720
[4] Risen & Gilovich 2007
[5] "Assimilation bias" is another term used for biased interpretation of evidence. (Risen & Gilovich 2007, p.113)
[6] Zweig, Jason (November 19, 2009), "How to Ignore the Yes-Man in Your Head" (http://online.wsj.com/article/SB10001424052748703811604574533680037778184.html), Wall Street Journal (Dow Jones & Company), retrieved 2010-06-13
[7] Nickerson 1998, pp.177178
[8] Kunda 1999, pp.112115
[9] Baron 2000, pp.162164
[10] Kida 2006, pp.162165
[11] Devine, Patricia G.; Hirt, Edward R.; Gehrke, Elizabeth M. (1990), "Diagnostic and confirmation strategies in trait hypothesis testing",
Journal of Personality and Social Psychology (American Psychological Association) 58 (6): 952963, doi:10.1037/0022-3514.58.6.952,
ISSN1939-1315
[12] Trope, Yaacov; Bassok, Miriam (1982), "Confirmatory and diagnosing strategies in social information gathering", Journal of Personality
and Social Psychology (American Psychological Association) 43 (1): 2234, doi:10.1037/0022-3514.43.1.22, ISSN1939-1315
[13] Klayman, Joshua; Ha, Young-Won (1987), "Confirmation, Disconfirmation and Information in Hypothesis Testing" (http://www.stats.org.uk/statistical-inference/KlaymanHa1987.pdf), Psychological Review (American Psychological Association) 94 (2): 211–228, doi:10.1037/0033-295X.94.2.211, ISSN0033-295X, retrieved 2009-08-14
[14] Oswald & Grosjean 2004, pp.8283
[15] Kunda, Ziva; Fong, G.T.; Sanitioso, R.; Reber, E. (1993), "Directional questions direct self-conceptions", Journal of Experimental Social Psychology (Society of Experimental Social Psychology) 29: 62–63, ISSN0022-1031 via Fine 2006, pp.63–65
[16] Shafir, E. (1993), "Choosing versus rejecting: why some options are both better and worse than others", Memory and Cognition 21 (4):
546556, PMID8350746 via Fine 2006, pp.6365
[17] Snyder, Mark; Swann, Jr., William B. (1978), "Hypothesis-Testing Processes in Social Interaction", Journal of Personality and Social
Psychology (American Psychological Association) 36 (11): 12021212, doi:10.1037/0022-3514.36.11.1202 via Poletiek 2001, p.131
[18] Kunda 1999, pp.117118
[19] Mynatt, Clifford R.; Doherty, Michael E.; Tweney, Ryan D. (1978), "Consequences of confirmation and disconfirmation in a simulated
research environment", Quarterly Journal of Experimental Psychology 30 (3): 395406, doi:10.1080/00335557843000007
[20] Kida 2006, p.157
[21] Lord, Charles G.; Ross, Lee; Lepper, Mark R. (1979), "Biased assimilation and attitude polarization: The effects of prior theories on
subsequently considered evidence", Journal of Personality and Social Psychology (American Psychological Association) 37 (11): 20982109,
doi:10.1037/0022-3514.37.11.2098, ISSN0022-3514
[22] Baron 2000, pp.201202
[23] Vyse 1997, p.122
[24] Taber, Charles S.; Lodge, Milton (July 2006), "Motivated Skepticism in the Evaluation of Political Beliefs", American Journal of Political
Science (Midwest Political Science Association) 50 (3): 755769, doi:10.1111/j.1540-5907.2006.00214.x, ISSN0092-5853
[25] Westen, Drew; Blagov, Pavel S.; Harenski, Keith; Kilts, Clint; Hamann, Stephan (2006), "Neural Bases of Motivated Reasoning: An fMRI Study of Emotional Constraints on Partisan Political Judgment in the 2004 U.S. Presidential Election" (http://psychsystems.net/lab/06_Westen_fmri.pdf), Journal of Cognitive Neuroscience (Massachusetts Institute of Technology) 18 (11): 1947–1958, doi:10.1162/jocn.2006.18.11.1947, PMID17069484, retrieved 2009-08-14
[26] Gadenne, V.; Oswald, M. (1986), "Entstehung und Veränderung von Bestätigungstendenzen beim Testen von Hypothesen [Formation and alteration of confirmatory tendencies during the testing of hypotheses]", Zeitschrift für experimentelle und angewandte Psychologie 33: 360–374 via Oswald & Grosjean 2004, p.89
[27] Hastie, Reid; Park, Bernadette (2005), "The Relationship Between Memory and Judgment Depends on Whether the Judgment Task is
Memory-Based or On-Line", in Hamilton, David L., Social cognition: key readings, New York: Psychology Press, p.394,
ISBN0-86377-591-8, OCLC55078722
[28] Oswald & Grosjean 2004, pp.8889
[29] Stangor, Charles; McMillan, David (1992), "Memory for expectancy-congruent and expectancy-incongruent information: A review of the
social and social developmental literatures", Psychological Bulletin (American Psychological Association) 111 (1): 4261,
doi:10.1037/0033-2909.111.1.42
[30] Snyder, M.; Cantor, N. (1979), "Testing hypotheses about other people: the use of historical knowledge", Journal of Experimental Social
Psychology 15 (4): 330342, doi:10.1016/0022-1031(79)90042-8 via Goldacre 2008, p.231
[31] Kunda 1999, pp.225232
[32] Sanitioso, Rasyid; Kunda, Ziva; Fong, G.T. (1990), "Motivated recruitment of autobiographical memories", Journal of Personality and
Social Psychology (American Psychological Association) 59 (2): 229241, doi:10.1037/0022-3514.59.2.229, ISSN0022-3514,
PMID2213492
[33] Russell, Dan; Jones, Warren H. (1980), "When superstition fails: Reactions to disconfirmation of paranormal beliefs", Personality and
Social Psychology Bulletin (Society for Personality and Social Psychology) 6 (1): 8388, doi:10.1177/014616728061012, ISSN1552-7433
via Vyse 1997, p.121
[34] "backfire effect" (http:/ / www. skepdic.com/ backfireeffect. html). The Skeptic's Dictionary. . Retrieved 26 April 2012.
[35] Silverman, Craig (2011-06-17). "The Backfire Effect" (http:/ / www. cjr. org/ behind_the_news/ the_backfire_effect. php?page=all).
Columbia Journalism Review. . Retrieved 2012-05-01. "When your deepest convictions are challenged by contradictory evidence, your beliefs
get stronger."
[36] Nyhan, Brendan; Reifler, Jason (2010). "When Corrections Fail: The Persistence of Political Misperceptions" (http:/ / www. dartmouth. edu/
~nyhan/ nyhan-reifler.pdf). Political Behavior 32 (2): 303330. doi:10.1007/s11109-010-9112-2. . Retrieved 1 May 2012.
[37] Kuhn, Deanna; Lao, Joseph (March 1996), "Effects of Evidence on Attitudes: Is Polarization the Norm?", Psychological Science (American
Psychological Society) 7 (2): 115120, doi:10.1111/j.1467-9280.1996.tb00340.x
[38] Baron 2000, p.201
[39] Miller, A.G.; McHoskey, J.W.; Bane, C.M.; Dowd, T.G. (1993), "The attitude polarization phenomenon: Role of response measure, attitude
extremity, and behavioral consequences of reported attitude change", Journal of Personality and Social Psychology 64 (4): 561574,
doi:10.1037/0022-3514.64.4.561
[40] Ross, Lee; Anderson, Craig A. (1982), "Shortcomings in the attribution process: On the origins and maintenance of erroneous social
assessments", in Kahneman, Daniel; Slovic, Paul; Tversky, Amos, Judgment under uncertainty: Heuristics and biases, Cambridge University
Press, pp.129152, ISBN978-0-521-28414-1, OCLC7578020
[41] Nickerson 1998, p.187
[42] Kunda 1999, p.99
[43] Ross, Lee; Lepper, Mark R.; Hubbard, Michael (1975), "Perseverance in self-perception and social perception: Biased attributional
processes in the debriefing paradigm", Journal of Personality and Social Psychology (American Psychological Association) 32 (5): 880892,
doi:10.1037/0022-3514.32.5.880, ISSN0022-3514, PMID1185517 via Kunda 1999, p.99
[44] Anderson, Craig A.; Lepper, Mark R.; Ross, Lee (1980), "Perseverance of Social Theories: The Role of Explanation in the Persistence of
Discredited Information", Journal of Personality and Social Psychology (American Psychological Association) 39 (6): 10371049,
doi:10.1037/h0077720, ISSN0022-3514
[45] Baron 2000, pp.197200
[46] Fine 2006, pp.6670
[47] Plous 1993, pp.164166
[48] Redelmeir, D. A.; Tversky, Amos (1996), "On the belief that arthritis pain is related to the weather", Proceedings of the National Academy
of Sciences 93 (7): 28952896, doi:10.1073/pnas.93.7.2895 via Kunda 1999, p.127
[49] Kunda 1999, pp.127130
[50] Plous 1993, pp.162164
[51] Adapted from Fiedler, Klaus (2004), "Illusory correlation", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, p.103, ISBN978-1-84169-351-4, OCLC55124398
[52] Baron 2000, pp.195196
[53] Thucydides; Crawley, Richard (trans) (431 BCE), "XIV" (http://classics.mit.edu/Thucydides/pelopwar.mb.txt), The History of the Peloponnesian War, The Internet Classics Archive, retrieved 2010-05-27
[54] Alighieri, Dante. Paradiso canto XIII: 118–120. Trans. Allen Mandelbaum
[55] Bacon, Francis (1620). Novum Organum. reprinted in Burtt, E.A., ed. (1939), The English philosophers from Bacon to Mill, New York:
Random House, p.36 via Nickerson 1998, p.176
[56] Tolstoy, Leo. What is Art? p. 124 (http://books.google.com/books?id=0SYVAAAAYAAJ&pg=PA124&vq=falsity&dq=tolstoy+++"what+is+art"&output=html&source=gbs_search_r&cad=1) (1899). In The Kingdom of God Is Within You (1893), he similarly declared, "The most difficult subjects can be explained to the most slow-witted man if he has not formed any idea of them already; but the simplest thing cannot be made clear to the most intelligent man if he is firmly persuaded that he knows already, without a shadow of doubt, what is laid before him" (ch. 3). Translated from the Russian by Constance Garnett, New York, 1894. Project Gutenberg edition (http://www.gutenberg.org/etext/4602) released November 2002. Retrieved 2009-08-24.
[57] Gale, Maggie; Ball, Linden J. (2002), "Does Positivity Bias Explain Patterns of Performance on Wason's 2-4-6 task?", in Gray, Wayne D.;
Schunn, Christian D., Proceedings of the Twenty-Fourth Annual Conference of the Cognitive Science Society, Routledge, p.340,
ISBN978-0-8058-4581-5, OCLC469971634
[58] Wason, Peter C. (1960), "On the failure to eliminate hypotheses in a conceptual task", Quarterly Journal of Experimental Psychology
(Psychology Press) 12 (3): 129140, doi:10.1080/17470216008416717, ISSN1747-0226
[59] Nickerson 1998, p.179
[60] Lewicka 1998, p.238
[61] Wason also used the term "verification bias". (Poletiek 2001, p.73)
[62] Oswald & Grosjean 2004, pp.7996
[63] Wason, Peter C. (1968), "Reasoning about a rule", Quarterly Journal of Experimental Psychology (Psychology Press) 20 (3): 27328,
doi:10.1080/14640746808400161, ISSN1747-0226
[64] Sutherland, Stuart (2007), Irrationality (2nd ed.), London: Pinter and Martin, pp.95103, ISBN978-1-905177-07-3, OCLC72151566
[65] Barkow, Jerome H.; Cosmides, Leda; Tooby, John (1995), The adapted mind: evolutionary psychology and the generation of culture,
Oxford University Press US, pp.181184, ISBN978-0-19-510107-2, OCLC33832963
[66] Oswald & Grosjean 2004, pp.8182, 8687
[67] Lewicka 1998, p.239
[68] Tweney, Ryan D.; Doherty, Michael E.; Worner, Winifred J.; Pliske, Daniel B.; Mynatt, Clifford R.; Gross, Kimberly A.; Arkkelin, Daniel L. (1980), "Strategies of rule discovery in an inference task", The Quarterly Journal of Experimental Psychology (Psychology Press) 32 (1): 109–123, doi:10.1080/00335558008248237, ISSN1747-0226 (Experiment IV)
[69] Oswald & Grosjean 2004, pp.8689
[70] Hergovich, Schott & Burger 2010
[71] MacCoun 1998
[72] Friedrich 1993, p.298
[73] Kunda 1999, p.94
[74] Nickerson 1998, pp.198–199
[75] Nickerson 1998, p.200
[76] Nickerson 1998, p.197
[77] Baron 2000, p.206
[78] Matlin, Margaret W. (2004), "Pollyanna Principle", in Pohl, Rdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in
Thinking, Judgement and Memory, Hove: Psychology Press, pp.255272, ISBN978-1-84169-351-4, OCLC55124398
[79] Dawson, Erica; Gilovich, Thomas; Regan, Dennis T. (October 2002), "Motivated Reasoning and Performance on the Wason Selection Task"
(http:/ / comp9. psych. cornell. edu/ sec/ pubPeople/ tdg1/ Dawson. Gilo. Regan. pdf), Personality and Social Psychology Bulletin (Society for
Personality and Social Psychology) 28 (10): 13791387, doi:10.1177/014616702236869, , retrieved 2009-09-30
[80] Ditto, Peter H.; Lopez, David F. (1992), "Motivated skepticism: use of differential decision criteria for preferred and nonpreferred
conclusions", Journal of personality and social psychology (American Psychological Association) 63 (4): 568584,
doi:10.1037/0022-3514.63.4.568, ISSN0022-3514
[81] Nickerson 1998, p.198
[82] Oswald & Grosjean 2004, pp.9193
[83] Friedrich 1993, pp.299, 316317
[84] Trope, Y.; Liberman, A. (1996), "Social hypothesis testing: cognitive and motivational mechanisms", in Higgins, E. Tory; Kruglanski, Arie
W., Social Psychology: Handbook of basic principles, New York: Guilford Press, ISBN978-1-57230-100-9, OCLC34731629 via Oswald &
Grosjean 2004, pp.9193
[85] Dardenne, Benoit; Leyens, Jacques-Philippe (1995), "Confirmation Bias as a Social Skill", Personality and Social Psychology Bulletin
(Society for Personality and Social Psychology) 21 (11): 12291239, doi:10.1177/01461672952111011, ISSN1552-7433
[86] Sandra L. Schneider, ed. (2003). Emerging perspectives on judgment and decision research (http://www.amazon.com/Emerging-Perspectives-Judgment-Decision-Cambridge/dp/052152718X/ref=sr_1_1?s=books&ie=UTF8&qid=1337999737&sr=1-1). Cambridge [u.a.]: Cambridge University Press. p.445. ISBN052152718X.
[87] Haidt, Jonathan (2012). The Righteous Mind: Why Good People are Divided by Politics and Religion. New York: Pantheon Books. pp.14734 (e-book edition). ISBN0307377903.
[88] Fiske, Susan T.; Gilbert, Daniel T.; Lindzey, Gardner, eds. (2010). The handbook of social psychology (http://www.amazon.com/Handbook-Social-Psychology-Susan-Fiske/dp/0470137495/ref=sr_1_1?s=books&ie=UTF8&qid=1337998939&sr=1-1) (5th ed.). Hoboken, N.J.: Wiley. pp.811. ISBN0470137495.
[89] Pompian, Michael M. (2006), Behavioral finance and wealth management: how to build optimal portfolios that account for investor biases,
John Wiley and Sons, pp.187190, ISBN978-0-471-74517-4, OCLC61864118
[90] Hilton, Denis J. (2001), "The psychology of financial decision-making: Applications to trading, dealing, and investment analysis", Journal
of Behavioral Finance (Institute of Behavioral Finance) 2 (1): 3739, doi:10.1207/S15327760JPFM0201_4, ISSN1542-7579
[91] Krueger, David; Mann, John David (2009), The Secret Language of Money: How to Make Smarter Financial Decisions and Live a Richer
Life, McGraw Hill Professional, pp.112113, ISBN978-0-07-162339-1, OCLC277205993
[92] Nickerson 1998, p.192
[93] Goldacre 2008, p.233
[94] Singh, Simon; Ernst, Edzard (2008), Trick or Treatment?: Alternative Medicine on Trial, London: Bantam, pp.287288,
ISBN978-0-593-06129-9
[95] Atwood, Kimball (2004), "Naturopathy, Pseudoscience, and Medicine: Myths and Fallacies vs Truth", Medscape General Medicine 6 (1): 33
[96] Neenan, Michael; Dryden, Windy (2004), Cognitive therapy: 100 key points and techniques, Psychology Press, p.ix,
ISBN978-1-58391-858-6, OCLC474568621
[97] Blackburn, Ivy-Marie; Davidson, Kate M. (1995), Cognitive therapy for depression & anxiety: a practitioner's guide (2 ed.),
Wiley-Blackwell, p.19, ISBN978-0-632-03986-9, OCLC32699443
[98] Harvey, Allison G.; Watkins, Edward; Mansell, Warren (2004), Cognitive behavioural processes across psychological disorders: a
transdiagnostic approach to research and treatment, Oxford University Press, pp.172173, 176, ISBN978-0-19-852888-3,
OCLC602015097
[99] Nickerson 1998, pp.191193
[100] Myers, D.G.; Lamm, H. (1976), "The group polarization phenomenon", Psychological Bulletin 83 (4): 602627,
doi:10.1037/0033-2909.83.4.602 via Nickerson 1998, pp.193194
[101] Halpern, Diane F. (1987), Critical thinking across the curriculum: a brief edition of thought and knowledge, Lawrence Erlbaum
Associates, p.194, ISBN978-0-8058-2731-6, OCLC37180929
[102] Roach, Kent (2010), "Wrongful Convictions: Adversarial and Inquisitorial Themes", North Carolina Journal of International Law and
Commercial Regulation 35, SSRN1619124, "Both adversarial and inquisitorial systems seem subject to the dangers of tunnel vision or
confirmation bias."
[103] Baron 2000, pp.191, 195
[104] Kida 2006, p.155
[105] Tetlock, Philip E. (2005), Expert Political Judgment: How Good Is It? How Can We Know?, Princeton, N.J.: Princeton University Press,
pp.125128, ISBN978-0-691-12302-8, OCLC56825108
[106] Smith, Jonathan C. (2009), Pseudoscience and Extraordinary Claims of the Paranormal: A Critical Thinker's Toolkit, John Wiley and
Sons, pp.149151, ISBN978-1-4051-8122-8, OCLC319499491
[107] Randi, James (1991), James Randi: psychic investigator, Boxtree, pp.5862, ISBN978-1-85283-144-8, OCLC26359284
[108] Nickerson 1998, p.190
[109] Nickerson 1998, pp.192194
[110] Koehler 1993
[111] Mahoney 1977
[112] Horrobin 1990
[113] Proctor, Robert W.; Capaldi, E. John (2006), Why science matters: understanding the methods of psychological research, Wiley-Blackwell,
p.68, ISBN978-1-4051-3049-3, OCLC318365881
[114] Sternberg, Robert J. (2007), "Critical Thinking in Psychology: It really is critical", in Sternberg, Robert J.; RoedigerIII, Henry L.; Halpern,
Diane F., Critical Thinking in Psychology, Cambridge University Press, p.292, ISBN0-521-60834-1, OCLC69423179, "Some of the worst
examples of confirmation bias are in research on parapsychology... Arguably, there is a whole field here with no powerful confirming data at
all. But people want to believe, and so they find ways to believe."
[115] Shadish, William R. (2007), "Critical Thinking in Quasi-Experimentation", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, p. 49, ISBN 978-0-521-60834-3
[116] Jüni, P.; Altman, D. G.; Egger, M. (2001). "Systematic reviews in health care: Assessing the quality of controlled clinical trials". BMJ (Clinical research ed.) 323 (7303): 42–46. PMC 1120670. PMID 11440947.
[117] Shermer, Michael (July 2006), "The Political Brain" (http://www.scientificamerican.com/article.cfm?id=the-political-brain), Scientific American, ISSN 0036-8733, retrieved 2009-08-14
[118] Emerson, G. B.; Warme, W. J.; Wolf, F. M.; Heckman, J. D.; Brand, R. A.; Leopold, S. S. (2010). "Testing for the Presence of Positive-Outcome Bias in Peer Review: A Randomized Controlled Trial". Archives of Internal Medicine 170 (21): 1934–1939. doi:10.1001/archinternmed.2010.406. PMID 21098355.
[119] Swann, William B.; Pelham, Brett W.; Krull, Douglas S. (1989), "Agreeable Fancy or Disagreeable Truth? Reconciling Self-Enhancement and Self-Verification", Journal of Personality and Social Psychology (American Psychological Association) 57 (5): 782–791, doi:10.1037/0022-3514.57.5.782, ISSN 0022-3514, PMID 2810025
[120] Swann, William B.; Read, Stephen J. (1981), "Self-Verification Processes: How We Sustain Our Self-Conceptions", Journal of Experimental Social Psychology (Academic Press) 17 (4): 351–372, doi:10.1016/0022-1031(81)90043-3, ISSN 0022-1031
[121] Story, Amber L. (1998), "Self-Esteem and Memory for Favorable and Unfavorable Personality Feedback", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 24 (1): 51–64, doi:10.1177/0146167298241004, ISSN 1552-7433
[122] White, Michael J.; Brockett, Daniel R.; Overstreet, Belinda G. (1993), "Confirmatory Bias in Evaluating Personality Test Information: Am I Really That Kind of Person?", Journal of Counseling Psychology (American Psychological Association) 40 (1): 120–126, doi:10.1037/0022-0167.40.1.120, ISSN 0022-0167
[123] Swann, William B.; Read, Stephen J. (1981), "Acquiring Self-Knowledge: The Search for Feedback That Fits", Journal of Personality and Social Psychology (American Psychological Association) 41 (6): 1119–1128, ISSN 0022-3514
[124] Shrauger, J. Sidney; Lund, Adrian K. (1975), "Self-evaluation and reactions to evaluations from others", Journal of Personality (Duke University Press) 43 (1): 94–108, doi:10.1111/j.1467-6494.1975.tb00574, PMID 1142062
References
Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5, OCLC 316403966
Fine, Cordelia (2006), A Mind of its Own: how your brain distorts and deceives, Cambridge, UK: Icon Books, ISBN 1-84046-678-2, OCLC 60668289
Friedrich, James (1993), "Primary error detection and minimization (PEDMIN) strategies in social cognition: a reinterpretation of confirmation bias phenomena", Psychological Review (American Psychological Association) 100 (2): 298–319, doi:10.1037/0033-295X.100.2.298, ISSN 0033-295X, PMID 8483985
Goldacre, Ben (2008), Bad Science, London: Fourth Estate, ISBN 978-0-00-724019-7, OCLC 259713114
Hergovich, Andreas; Schott, Reinhard; Burger, Christoph (2010), "Biased Evaluation of Abstracts Depending on Topic and Conclusion: Further Evidence of a Confirmation Bias Within Scientific Psychology" (http://www.springerlink.com/content/20162475422jn5x6/), Current Psychology 29 (3): 188–209, doi:10.1007/s12144-010-9087-5
Horrobin, David F. (1990), "The philosophical basis of peer review and the suppression of innovation" (http://jama.ama-assn.org/cgi/content/abstract/263/10/1438), Journal of the American Medical Association 263 (10): 1438–1441, doi:10.1001/jama.263.10.1438, PMID 2304222
Kida, Thomas E. (2006), Don't believe everything you think: the 6 basic mistakes we make in thinking, Amherst, New York: Prometheus Books, ISBN 978-1-59102-408-8, OCLC 63297791
Koehler, Jonathan J. (1993), "The influence of prior beliefs on scientific judgments of evidence quality" (http://ideas.repec.org/a/eee/jobhdp/v56y1993i1p28-55.html), Organizational Behavior and Human Decision Processes 56: 28–55, doi:10.1006/obhd.1993.1044
Kunda, Ziva (1999), Social Cognition: Making Sense of People, MIT Press, ISBN 978-0-262-61143-5, OCLC 40618974
Lewicka, Maria (1998), "Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?", in Kofta, Mirosław; Weary, Gifford; Sedek, Grzegorz, Personal control in action: cognitive and motivational mechanisms,
Springer, pp. 233–255, ISBN 978-0-306-45720-3, OCLC 39002877
Maccoun, Robert J. (1998), "Biases in the interpretation and use of research results" (http://socrates.berkeley.edu/~maccoun/MacCoun_AnnualReview98.pdf), Annual Review of Psychology 49: 259–87, doi:10.1146/annurev.psych.49.1.259, PMID 15012470
Mahoney, Michael J. (1977), "Publication prejudices: an experimental study of confirmatory bias in the peer review system" (http://www.springerlink.com/content/g1l56241734kq743/), Cognitive Therapy and Research 1 (2): 161–175, doi:10.1007/BF01173636
Nickerson, Raymond S. (1998), "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises", Review of General Psychology (Educational Publishing Foundation) 2 (2): 175–220, doi:10.1037/1089-2680.2.2.175, ISSN 1089-2680
Oswald, Margit E.; Grosjean, Stefan (2004), "Confirmation Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 79–96, ISBN 978-1-84169-351-4, OCLC 55124398
Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 978-0-07-050477-6, OCLC 26931106
Poletiek, Fenna (2001), Hypothesis-testing behaviour, Hove, UK: Psychology Press, ISBN 978-1-84169-159-6, OCLC 44683470
Risen, Jane; Gilovich, Thomas (2007), "Informal Logical Fallacies", in Sternberg, Robert J.; Roediger III, Henry L.; Halpern, Diane F., Critical Thinking in Psychology, Cambridge University Press, pp. 110–130, ISBN 978-0-521-60834-3, OCLC 69423179
Vyse, Stuart A. (1997), Believing in magic: The psychology of superstition, New York: Oxford University Press, ISBN 0-19-513634-9, OCLC 35025826
Further reading
Stanovich, Keith (2009), What Intelligence Tests Miss: The Psychology of Rational Thought, New Haven, CT: Yale University Press, ISBN 978-0-300-12385-2, Lay summary (http://web.mac.com/kstanovich/Site/YUP_Reviews_files/TICS_review.pdf) (21 November 2010)
Westen, Drew (2007), The political brain: the role of emotion in deciding the fate of the nation, PublicAffairs, ISBN 978-1-58648-425-5, OCLC 86117725
External links
Skeptic's Dictionary: confirmation bias (http://skepdic.com/confirmbias.html) by Robert T. Carroll
Teaching about confirmation bias (http://www.devpsy.org/teaching/method/confirmation_bias.html), class handout and instructor's notes by K. H. Grobman
Confirmation bias learning object (http://hosted.xamai.ca/confbias/), interactive number triples exercise by Rod McFarland, Simon Fraser University
Brief summary of the 1979 Stanford assimilation bias study (http://faculty.babson.edu/krollag/org_site/soc_psych/lord_death_pen.html) by Keith Rollag, Babson College
"Morton's demon" (http://www.talkorigins.org/origins/postmonth/feb02.html), Usenet post by Glenn Morton, February 2, 2002
Bandwagon effect
The bandwagon effect is a well documented form of groupthink in behavioral science and has many applications. The general rule is that conduct or beliefs spread among people, as fads and trends clearly do, with "the probability of any individual adopting it increasing with the proportion who have already done so".[1] As more people come to believe in something, others also "hop on the bandwagon" regardless of the underlying evidence. The tendency to follow the actions or beliefs of others can occur because individuals directly prefer to conform, or because individuals derive information from others. Both explanations have been used to account for conformity in psychological experiments. For example, social pressure has been used to explain Asch's conformity experiments,[2] and information has been used to explain Sherif's autokinetic experiment.[3]
The concept
In layman's terms, the bandwagon effect refers to people doing certain things because other people are doing them, regardless of their own beliefs, which they may ignore or override. The perceived "popularity" of an object or person may affect how it is viewed as a whole. For instance, once a product becomes popular, more people tend to "get on the bandwagon" and buy it, too. The bandwagon effect has wide implications but is commonly seen in politics and in consumer and social behavior. The effect is especially noticeable among today's youth: if people see many of their friends buying a particular phone, for instance, they may become more interested in buying that product themselves.
When individuals make rational choices based on the information they receive from others, economists have proposed that information cascades can quickly form in which people decide to ignore their personal information signals and follow the behavior of others.[4] Cascades explain why behavior is fragile: people understand that cascades are based on very limited information. As a result, fads form easily but are also easily dislodged. Such informational effects have been used to explain political bandwagons.[5]
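To make the cascade mechanism concrete, here is a minimal simulation sketch in Python (the agent count, signal accuracy, and simple majority rule are illustrative assumptions, not parameters from the studies cited above): each agent receives a noisy private signal about which option is better, observes all earlier choices, and follows whichever option the combined evidence favors.

import random

def simulate_cascade(n_agents=20, signal_accuracy=0.7, true_state=1, seed=0):
    """Toy informational cascade in the spirit of Bikhchandani et al. (1992).

    Each agent gets a private signal that equals the true state with
    probability signal_accuracy, counts the choices of all predecessors
    plus their own signal, and adopts the option that count favors.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        votes_for_1 = sum(choices) + (signal == 1)
        votes_for_0 = (len(choices) - sum(choices)) + (signal == 0)
        if votes_for_1 > votes_for_0:
            choices.append(1)
        elif votes_for_0 > votes_for_1:
            choices.append(0)
        else:
            choices.append(signal)  # ties are broken by the private signal
    return choices

# With seed 0 the first two signals happen to mislead, and the whole crowd
# cascades onto option 0 even though the true state is 1.
print(simulate_cascade())

Once one option gains a lead of two or more choices, every later agent's private signal is outweighed and the cascade locks in, which is why such conformity can rest on very little actual information.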
Origin of the phrase
Literally, a bandwagon is a wagon which carries the band in a parade, circus or other entertainment.[6] The phrase "jump on the bandwagon" first appeared in American politics in 1848 when Dan Rice, a famous and popular circus clown of the time, used his bandwagon and its music to gain attention for his political campaign appearances. As his campaign became more successful, other politicians strove for a seat on the bandwagon, hoping to be associated with his success. Later, during the time of William Jennings Bryan's 1900 presidential campaign, bandwagons had become standard in campaigns,[7] and "jump on the bandwagon" was used as a derogatory term, implying that people were associating themselves with the success without considering what they associated themselves with.
Use in politics
The bandwagon effect occurs in voting: some people vote for those candidates or parties who are likely to succeed (or are proclaimed as such by the media), hoping to be on the "winner's side" in the end.[8] The bandwagon effect has been applied to situations involving majority opinion, such as political outcomes, where people alter their opinions to match the majority view (McAllister and Studlar 721). Such a shift in opinion can occur because individuals draw inferences from the decisions of others, as in an informational cascade.
Because of time zones, election results are broadcast in the eastern parts of the United States while polls are still open in the west. This difference has led to research on how the behavior of voters in the western United States is influenced by news about the decisions of voters in other time zones. In 1980, NBC News declared Ronald Reagan to be the winner of the presidential race on the basis of exit polls several hours before the voting booths closed in the west.
The bandwagon effect is also said to be important in American presidential primary elections. States all vote at different times, spread over some months, rather than all on one day. Some states (Iowa, New Hampshire) have special precedence to go early, while others have to wait until a certain date. This is often said to give undue influence to these states: a win in these early states is said to give a candidate the "Big Mo" (momentum) and has propelled many candidates to win the nomination. Because of this, other states often try front-loading (going as early as possible) to make their say as influential as they can. In the 2008 presidential primaries, two states had all or some of their delegates banned from the convention by the central party organizations for voting too early.[9][10]
Several studies have tested this theory of the bandwagon effect in political decision making. In a 1994 study by Robert K. Goidel and Todd G. Shields in The Journal of Politics, 180 students at the University of Kentucky were randomly assigned to nine groups and were asked questions about the same set of election scenarios. About 70% of subjects received information about the expected winner (Goidel and Shields 807). Independents, those who do not vote based on the endorsement of any party and are ultimately neutral, were influenced strongly in favor of the person expected to win (Goidel and Shields 807-808). Expectations played a significant role throughout the study: independents were twice as likely to vote for the Republican candidate when the Republican was expected to win. The results also showed that when the Democrat was expected to win, independent Republicans and weak Republicans were more likely to vote for the Democratic candidate (Goidel and Shields 808).
A study by Albert Mehrabian, reported in The Journal of Applied Social Psychology (1998), tested the relative
importance of the bandwagon (rally around the winner) effect versus the underdog (empathic support for those
trailing) effect. Bogus poll results presented to voters prior to the 1996 Republican primary clearly showed the
bandwagon effect to predominate on balance. Indeed, approximately 6% of the variance in the vote was explained in
terms of the bogus polls, showing that poll results (whether accurate or inaccurate) can significantly influence
election results in closely contested elections. In particular, assuming that one candidate "is an initial favorite by a
slim margin, reports of polls showing that candidate as the leader in the race will increase his or her favorable
margin" (Mehrabian, 1998, p.2128). Thus, as poll results are repeatedly reported, the bandwagon effect will tend to
snowball and become a powerful aid to leading candidates.
During the 1992 U.S. presidential election, Vicki G. Morwitz and Carol Pluzinski conducted a study, which was
published in The Journal of Consumer Research. At a large northeastern university, some of 214 volunteer business
students were given the results of student and national polls indicating that Bill Clinton was in the lead. Others were
not exposed to the results of the polls. Several students who had intended to vote for Bush changed their minds after
seeing the poll results (Morwitz and Pluzinski 58-64).
Additionally, British polls have shown increasing public exposure to poll results: 68% of voters had heard the results of opinion polls during the 1979 general election campaign, and by 1987 the share of voters aware of the results had increased to 74% (McAllister and Studlar 725). According to British studies, there is a consistent pattern of apparent bandwagon effects for the leading party.
Use in microeconomics
In microeconomics, the bandwagon effect describes an interaction of demand and preference.[11] The bandwagon effect arises when people's preference for a commodity increases as the number of people buying it increases. This interaction potentially disturbs the normal results of the theory of supply and demand, which assumes that consumers make buying decisions solely based on price and their own personal preference. The fashion industry is a good example of the way tastes can be manipulated for profit: with the help of Madison Avenue, stylists create a bandwagon effect that makes people feel uncomfortable if they do not conform to the current dress code. Gary Becker has even argued that the bandwagon effect could be so strong as to make the demand curve slope upward.[12]
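A minimal sketch of this demand interaction (the uniform valuation distribution and the feedback strength k are illustrative assumptions, not drawn from Leibenstein or Becker): each consumer buys if her base valuation plus a bandwagon bonus proportional to the adoption share exceeds the price, and the market share is the fixed point of that condition.

def bandwagon_demand(price, k=0.6, iterations=100):
    """Share of consumers buying at a given price when willingness to pay
    is (base value + k * adoption share), with base values uniform on [0, 1].

    The share s must satisfy s = P(base + k*s >= price),
    i.e. s = clamp(1 - price + k*s, 0, 1); found by fixed-point iteration.
    """
    s = 0.0
    for _ in range(iterations):
        s = min(1.0, max(0.0, 1.0 - price + k * s))
    return s

for p in (0.9, 0.8, 0.7):
    plain = max(0.0, 1.0 - p)  # demand with no bandwagon feedback (k = 0)
    print(f"price={p}: demand {plain:.2f} without vs {bandwagon_demand(p):.2f} with bandwagon")
# The feedback multiplies the response to a price cut by 1/(1-k) = 2.5x here.

With k below 1 the feedback merely amplifies ordinary demand responses to price changes; pushing the interaction further is the flavor of Becker's argument that demand could even slope upward over some range.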
A further explanation of the bandwagon effect in the adoption of new products (marketing) could lie in the "relative" performance of the adopters versus the people remaining with the established solution. If the innovation creates value added capable of shifting the ranking of an adopter within a group of customers, then when the adopter improves his own ranking, he worsens everyone else's ranking. This creates an incentive for those remaining to switch to the innovation. (This concept is adapted from The Art of Strategy, Dixit and Nalebuff, pp. 271-272.)
References
[1] Colman, Andrew (2003). Oxford Dictionary of Psychology. New York: Oxford University Press. p. 77. ISBN 0-19-280632-7.
[2] Asch, S. E. (1955). "Opinions and social pressure". Scientific American 193: 31–35. doi:10.1038/scientificamerican1155-31.
[3] Sherif, M. (1936). The psychology of social norms. New York: Harper Collins.
[4] Bikhchandani, Sushil; Hirshleifer, David; Welch, Ivo (1992). "A Theory of Fads, Fashion, Custom, and Cultural Change as Informational Cascades". Journal of Political Economy 100 (5): 992–1026. doi:10.1086/261849. JSTOR 2138632.
[5] Lohmann, S. (1994). "The Dynamics of Informational Cascades: The Monday Demonstrations in Leipzig, East Germany, 1989-91". World Politics 47 (1): 42–101. JSTOR 2950679.
[6] "Bandwagon" (http://dictionary.reference.com/browse/bandwagon). Dictionary.com. Archived (http://web.archive.org/web/20070312031504/http://dictionary.reference.com/browse/bandwagon) from the original on 12 March 2007. Retrieved 2007-03-09.
[7] "Bandwagon Effect" (http://www.wordwizard.com/ch_forum/topic.asp?TOPIC_ID=6642&SearchTerms=bandwagon,effect). Retrieved 2007-03-09.
[8] Nadeau, Richard; Cloutier, Edouard; Guay, J.-H. (1993). "New Evidence About the Existence of a Bandwagon Effect in the Opinion Formation Process". International Political Science Review 14 (2): 203–213. doi:10.1177/019251219301400204.
[9] (2007, October 22). 5 States May Use Half Of GOP [Supplemental Material]. USA Today. Retrieved from http://www.usatoday.com/news/politics/election2008/2007-10-22-gop-delegates_N.htm
[10] "Florida Democrats Stripped of Convention Delegates Due to Early Primary" (http://www.foxnews.com/story/0,2933,294601,00.html). FOX News. 2007-08-26.
Further reading
Goidel, Robert K.; Shields, Todd G. (1994). "The Vanishing Marginals, the Bandwagon, and the Mass Media". The Journal of Politics 56: 802–810. doi:10.2307/2132194.
McAllister, Ian; Studlar, Donley T. (1991). "Bandwagon, Underdog, or Projection? Opinion Polls and Electoral Choice in Britain, 1979-1987". The Journal of Politics 53: 720–740. doi:10.2307/2131577.
Mehrabian, Albert (1998). "Effects of Poll Reports on Voter Preferences". Journal of Applied Social Psychology 28: 2119–2130. doi:10.1111/j.1559-1816.1998.tb01363.x.
Morwitz, Vicki G.; Pluzinski, Carol (1996). "Do Polls Reflect Opinions or Do Opinions Reflect Polls?". Journal of Consumer Research 23 (1): 53–65. JSTOR 2489665.
External links
Definition at Investopedia (http://www.investopedia.com/terms/b/bandwagon-effect.asp#ixzz272aXkAwV)
Description at Wisegeek (http://www.wisegeek.com/what-is-a-bandwagon-effect.htm)
Base rate fallacy
The base rate fallacy, also called base rate neglect or base rate bias, is an error that occurs when the conditional probability of some hypothesis H given some evidence E is assessed without taking into account the prior probability ("base rate") of H and the total probability of evidence E.[1] The conditional probability can be expressed as P(H|E), the probability of H given E. The base rate error happens when the values of sensitivity and specificity, which depend only on the test itself, are used in place of positive predictive value and negative predictive value, which depend on both the test and the baseline prevalence of the event.
Example
In a city of 1 million inhabitants there are 100 terrorists and 999,900 non-terrorists. To simplify the example, it is
assumed that the only people in the city are inhabitants. Thus, the base rate probability of a randomly selected
inhabitant of the city being a terrorist is 0.0001, and the base rate probability of that same inhabitant being a
non-terrorist is 0.9999. In an attempt to catch the terrorists, the city installs an alarm system with a surveillance
camera and automatic facial recognition software. The software has two failure rates of 1%:
1. The false negative rate: If the camera sees a terrorist, a bell will ring 99% of the time, and it will fail to ring 1% of the time.
2. The false positive rate: If the camera sees a non-terrorist, a bell will not ring 99% of the time, but it will ring 1% of the time.
Suppose now that an inhabitant triggers the alarm. What is the chance that the person is a terrorist? In other words,
what is P(T|B), the probability that a terrorist has been detected given the ringing of the bell? Someone making the
'base rate fallacy' would infer that there is a 99% chance that the detected person is a terrorist. Although the inference
seems to make sense, it is actually bad reasoning, and a calculation below will show that the chances they are a
terrorist are actually near 1%, not near 99%.
The fallacy arises from confusing the natures of two different failure rates. The 'number of non-bells per 100
terrorists' and the 'number of non-terrorists per 100 bells' are unrelated quantities. One does not necessarily equal the
other, and they don't even have to be almost equal. To show this, consider what happens if an identical alarm system
were set up in a second city with no terrorists at all. As in the first city, the alarm sounds for 1 out of every 100
non-terrorist inhabitants detected, but unlike in the first city, the alarm never sounds for a terrorist. Therefore 100%
of all occasions of the alarm sounding are for non-terrorists, but a false negative rate cannot even be calculated. The
'number of non-terrorists per 100 bells' in that city is 100, yet P(T|B) = 0%. There is zero chance that a terrorist has
been detected given the ringing of the bell.
Imagine that the city's entire population of one million people pass in front of the camera. About 99 of the 100 terrorists will trigger the alarm, and so will about 9,999 of the 999,900 non-terrorists. Therefore, about 10,098 people will trigger the alarm, among which about 99 will be terrorists. So the probability that a person triggering the alarm is actually a terrorist is only about 99 in 10,098, which is less than 1% and very far below our initial guess of 99%.
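The arithmetic can be checked in a few lines (a sketch using only the numbers given in this example):

population = 1_000_000
terrorists = 100
non_terrorists = population - terrorists   # 999,900

true_positive_rate = 0.99   # bell rings when the camera sees a terrorist
false_positive_rate = 0.01  # bell rings when the camera sees a non-terrorist

# Expected alarms if everyone passes the camera once:
alarms_from_terrorists = terrorists * true_positive_rate       # ~99
alarms_from_innocents = non_terrorists * false_positive_rate   # ~9,999

p_terrorist_given_alarm = alarms_from_terrorists / (
    alarms_from_terrorists + alarms_from_innocents)
print(f"P(terrorist | bell) = {p_terrorist_given_alarm:.4f}")  # ~0.0098, about 1%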
The base rate fallacy is so misleading in this example because there are many more non-terrorists than terrorists. If, instead, the city had about as many terrorists as non-terrorists, and the false-positive rate and the false-negative rate were nearly equal, then the probability of misidentification would be about the same as the false-positive rate of the device. These special conditions hold sometimes: for instance, about half the women undergoing a pregnancy test are actually pregnant, and some pregnancy tests give about the same rates of false positives and of false negatives. In this case, the rate of false positives per positive test will be nearly equal to the rate of false positives per non-pregnant woman. This is why it is very easy to fall into this fallacy: by coincidence it gives the correct answer in many common situations.
In many real-world situations, though, particularly problems like detecting criminals in a largely law-abiding
population, the small proportion of targets in the large population makes the base rate fallacy very applicable. Even a
very low false-positive rate will result in so many false alarms as to make such a system useless in practice.
Mathematical formalism
In the above example, where P(T|B) means the probability of T given B, the base rate fallacy is committed by assuming that P(terrorist|bell) equals P(bell|terrorist) and then adding the premise that P(bell|terrorist) = 99%. Now, is it true that P(terrorist|bell) equals P(bell|terrorist)?
Well, no. Instead, the correct calculation uses Bayes' theorem to take into account the prior probability of any randomly selected inhabitant in the city being a terrorist and the total probability of the bell ringing:

    P(T|B) = P(B|T) P(T) / [ P(B|T) P(T) + P(B|not T) P(not T) ]
           = (0.99 × 0.0001) / (0.99 × 0.0001 + 0.01 × 0.9999)
           ≈ 0.0098

Thus, in the example, the probability was overestimated by more than 100 times, owing to the failure to take into account the fact that there are about 10,000 times more non-terrorists than terrorists (in other words, a failure to take into account the "prior probability" of being a terrorist).
Findings in psychology
In experiments, people have been found to prefer individuating information over general information when the former is available.[2][3][4]
In some experiments, students were asked to estimate the grade point averages (GPAs) of hypothetical students. When given relevant statistics about the GPA distribution, students tended to ignore them if given descriptive information about the particular student, even if the new descriptive information was obviously of little or no relevance to school performance.[3] This finding has been used to argue that interviews are an unnecessary part of the college admissions process because interviewers are unable to pick successful candidates better than basic statistics.
Psychologists Daniel Kahneman and Amos Tversky attempted to explain this finding in terms of the representativeness heuristic.[3] Richard Nisbett has argued that some attributional biases, like the fundamental attribution error, are instances of the base rate fallacy: people underutilize "consensus information" (the "base rate") about how others behaved in similar situations and instead prefer simpler dispositional attributions.[5]
Kahneman considers base rate neglect to be a specific form of extension neglect.[6]
References
[1] http://www.fallacyfiles.org/baserate.html
[2] Bar-Hillel, Maya (1980). "The base-rate fallacy in probability judgments". Acta Psychologica 44: 211–233.
[3] Kahneman, Daniel; Amos Tversky (1973). "On the psychology of prediction". Psychological Review 80: 237–251. doi:10.1037/h0034747.
[4] Kahneman, Daniel; Amos Tversky (1985). "Evidential impact of base rates". In Daniel Kahneman, Paul Slovic & Amos Tversky (Eds.). Judgment under uncertainty: Heuristics and biases. pp. 153–160.
[5] Nisbett, Richard E.; E. Borgida, R. Crandall & H. Reed (1976). "Popular induction: Information is not always informative". In J. S. Carroll & J. W. Payne (Eds.). Cognition and social behavior. 2. pp. 227–236.
[6] Kahneman, Daniel (2000). "Evaluation by moments, past and future". In Daniel Kahneman and Amos Tversky (Eds.). Choices, Values and Frames.
External links
The Base Rate Fallacy (http://www.fallacyfiles.org/baserate.html) at The Fallacy Files
Psychology of Intelligence Analysis: Base Rate Fallacy (https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art15.html#ft145)
The base rate fallacy explained visually (http://www.youtube.com/watch?v=D8VZqxcu0I0) (video)
Belief bias
Belief bias is a cognitive bias in which people evaluate the validity of a given conclusion by accepting or rejecting it depending on whether it is consistent with their everyday knowledge (prior beliefs) (Evans & Curtis-Holmes, 2005).[1] The decision is thus affected by the conclusion's believability rather than by its logical validity (Dube, Rotello & Heit, 2010).[2] Belief bias occurs whenever responses are given on the basis of a conclusion's believability, despite instructions stressing that responses should be made on the basis of logical validity (Quayle & Ball, 2000).[3]
Syllogisms Within Reasoning
Belief bias arises from a conflict between belief and logical reasoning (Sá, West & Stanovich, 1999).[4] Demonstrations of belief bias rest on syllogisms and their premises: a syllogism is a form of reasoning in which a conclusion is drawn from two given premises (Morely, Evans & Handley, 2004).[5] For example, a syllogism can be broken down as follows:
Major premise: All dogs are animals
Minor premise: All animals have four legs
Conclusion: All dogs have four legs
Belief bias occurs when a person's personal beliefs and knowledge do not agree with the conclusion given (Markovits, Saelen & Forgues, 2009).[6] The example above is logically valid, and its conclusion is believable. The example below, by contrast, is logically invalid: the conclusion does not follow from the premises, even though the conclusion itself may seem believable:
Major premise: All poodles are dogs
Minor premise: Some dogs are animals
Conclusion: Some poodles are animals
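One way to see that the second syllogism is invalid is to exhibit a counterexample: a possible world in which both premises are true but the conclusion is false. The tiny made-up sets below are purely illustrative assumptions; Python's set operations play the role of the logical quantifiers.

# A tiny model: three categories of things in a made-up world.
poodles = {"fifi", "rex"}
dogs    = {"fifi", "rex", "spot"}   # all poodles are dogs   (premise 1: True)
animals = {"spot"}                  # some dogs are animals  (premise 2: True)

premise1 = poodles <= dogs                   # "All poodles are dogs"
premise2 = len(dogs & animals) > 0           # "Some dogs are animals"
conclusion = len(poodles & animals) > 0      # "Some poodles are animals"

print(premise1, premise2, conclusion)  # True True False -> the form is invalid

Because such a world exists, the argument form is invalid even though its conclusion happens to be true of real poodles; this gap between believability and logical validity is exactly what belief bias trades on.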
Evidence of Belief Bias: Belief vs. Logic
In a series of experiments by Evans, Barston and Pollard (1983),[7] participants were presented with evaluation task paradigms containing two premises and a conclusion. In other words, the participants were asked to make an evaluation of logical validity. The subjects, however, exhibited belief bias, evidenced by their tendency to reject valid arguments with unbelievable conclusions and to endorse invalid arguments with believable conclusions. It seems that instead of following directions and assessing logical validity, the subjects based their assessments on personal beliefs.[1]
These results demonstrated a greater acceptance of believable (80%) than unbelievable (33%) conclusions. Participants also showed evidence of logical competence: acceptance was higher for valid (73%) than for invalid (41%) conclusions. Additionally, there is a small difference between believable and valid conclusions (89%) in comparison to unbelievable and invalid conclusions (56%) (Evans, Barston & Pollard, 1983; Morley, Evans & Handley, 2004).[8][9]
It has been argued that using more realistic content in syllogisms can facilitate more normative performance from participants, and it has been suggested that the use of more abstract, artificial content will have a biasing effect on performance. Therefore, more research is required to understand fully how and why belief bias occurs and whether certain mechanisms are responsible for it. There is also evidence of clear individual differences in normative responding that are predicted by the response times of participants.[10]
Notes
[1] Evans, J. St. B. T.; Curtis-Holmes, J. (2005). "Rapid responding increases belief bias: Evidence for the dual-process theory of reasoning". Thinking and Reasoning 11 (4): 382–389. doi:10.1080/13546780542000005.
[2] Dube, C.; Rotello, C. M.; Heit, E. (2010). "Assessing the belief bias effect with ROCs: It's a response bias effect". Psychological Review 117 (3): 831–863. doi:10.1037/a0019634. PMID 20658855.
[3] Quayle, J. D.; Ball, L. J. (2000). "Working memory, metacognitive uncertainty and belief bias in syllogistic reasoning" (http://www.psych.lans.ac.uk/people/uploads/LindenBall20031001T093815.pdf). The Quarterly Journal of Experimental Psychology 53A (4).
[4] Sá, W. C.; West, R. F.; Stanovich, K. E. (1999). "The domain specificity and generality of belief bias: Searching for a generalizable critical thinking skill". Journal of Educational Psychology 91 (3). doi:10.1037/0022-0663.91.3.497.
[5] Morely, N. J.; Evans, J. St. B. T.; Handley, S. J. (2004). "Belief bias and figural bias in syllogistic reasoning". The Quarterly Journal of Experimental Psychology 57A (4): 666–692. doi:10.1080/02724980343000440.
[6] Markovits, H.; Saelen, C.; Forgues, H. L. (2009). "An inverse belief bias effect: More evidence for the role of inhibitory processes in logical reasoning". Journal of Experimental Psychology 56 (2): 112–120. doi:10.1027/1618-3169.56.2.112.
[7] Evans, J. St. B. T.; Barston, J. L.; Pollard, P. (1983). "On the conflict between logic and belief in syllogistic reasoning". Memory and Cognition 11 (3): 285–306. doi:10.3758/BF03196976.
[8] Evans, J. St. B. T.; Barston, J. L.; Pollard, P. (1983). "On the conflict between logic and belief in syllogistic reasoning". Memory and Cognition 11: 295–306.
[9] Morely, N. J.; Evans, J. St. B. T.; Handley, S. J. (2004). "Belief bias and figural bias in syllogistic reasoning". The Quarterly Journal of Experimental Psychology 57A (4): 666–692. doi:10.1080/02724980343000440.
[10] Stupple, E. J. N.; Ball, L. J.; Evans, J. St. B. T.; Kamal-Smith, E. (2011). "When logic and belief collide: Individual differences in reasoning times support a selective processing model". Journal of Cognitive Psychology 23 (8): 931–941. doi:10.1080/20445911.2011.589381.
Further reading
Markovits, H.; Nantel, G. (1989). "The belief-bias effect in the production and evaluation of logical conclusions". Memory and Cognition 17 (1): 11–17. doi:10.3758/BF03199552.
Klauer, K. C.; Musch, J.; Naumer, B. (2000). "On belief bias in syllogistic reasoning". Psychological Review 107 (4): 852–884. doi:10.1037/0033-295X.107.4.852. PMID 11089409.
Dube, C.; Rotello, C. M.; Heit, E. (2010). "Assessing the belief bias effect with ROCs: It's a response bias effect". Psychological Review 117 (3): 831–863. doi:10.1037/a0019634. PMID 20658855.
External links
Changing Minds: Belief Bias (http://changingminds.org/explanations/theories/belief_bias.htm)
Bias blind spot
The bias blind spot is the cognitive bias of failing to compensate for one's own cognitive biases. The term was created by Emily Pronin, a social psychologist from Princeton University's Department of Psychology, with colleagues Daniel Lin and Lee Ross.[1] The bias blind spot is named after the visual blind spot.
Pronin and her co-authors explained to subjects the better-than-average effect, the halo effect, self-serving bias and many other cognitive biases. According to the better-than-average effect, specifically, people are likely to inaccurately see themselves as "better than average" on possible positive traits and "less than average" on negative traits. When subsequently asked how biased they themselves were, subjects rated themselves as being much less vulnerable to those biases than the average person.
Role of introspection
Emily Pronin and Matthew Kugler have argued that this phenomenon is due to the introspection illusion.[2] In their experiments, subjects had to make judgments about themselves and about other subjects.[3] They displayed standard biases, for example rating themselves above the others on desirable qualities (demonstrating illusory superiority). The experimenters explained cognitive bias, and asked the subjects how it might have affected their judgment. The subjects rated themselves as less susceptible to bias than others in the experiment (confirming the bias blind spot). When they had to explain their judgments, they used different strategies for assessing their own and others' bias.
Pronin and Kugler's interpretation is that when people decide whether someone else is biased, they use overt behaviour. On the other hand, when assessing whether or not they themselves are biased, people look inward, searching their own thoughts and feelings for biased motives.[2] Since biases operate unconsciously, these introspections are not informative, but people wrongly treat them as a reliable indication that they themselves, unlike other people, are immune to bias.[3]
Pronin and Kugler tried to give their subjects access to others' introspections. To do this, they made audio recordings of subjects who had been told to say whatever came into their heads as they decided whether their answer to a previous question might have been affected by bias.[3] Although subjects persuaded themselves they were unlikely to be biased, their introspective reports did not sway the assessments of observers.
References
[1] Emily Pronin, Center for Behavioral Decision Research (http://cbdr.cmu.edu/event.asp?eventID=15)
[2] Gilovich, Thomas; Epley, Nicholas; Hanko, Karlene (2005). "Shallow Thoughts About the Self: The Automatic Components of Self-Assessment". In Mark D. Alicke, David A. Dunning, Joachim I. Krueger. The Self in Social Judgment. Studies in Self and Identity. New York: Psychology Press. p. 77. ISBN 978-1-84169-418-4.
[3] Pronin, Emily; Kugler, Matthew B. (July 2007). "Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot". Journal of Experimental Social Psychology (Elsevier) 43 (4): 565–578. doi:10.1016/j.jesp.2006.05.011. ISSN 0022-1031.
Choice-supportive bias
In cognitive science, choice-supportive bias is the tendency to retroactively ascribe positive attributes to an option one has selected. It is a cognitive bias.
What is remembered about a decision can be as important as the decision itself, especially in determining how much regret or satisfaction one experiences.[1] Research indicates that the process of making and remembering choices yields memories that tend to be distorted in predictable ways.[1] In cognitive science, one predictable way that memories of choice options are distorted is that positive aspects tend to be remembered as part of the chosen option, whether or not they originally were part of that option, and negative aspects tend to be remembered as part of rejected options.[1] Once an action has been taken, the ways in which we evaluate the effectiveness of what we did may be biased,[2] and this is believed to influence our future decision-making. These biases may be stored as memories, which are attributions that we make about our mental experiences based on their subjective qualities, our prior knowledge and beliefs, our motives and goals, and the social context. True and false memories arise by the same mechanism because, when the brain processes and stores information, it cannot tell where the information came from.[3]
General definition
Choice-supportive bias is the tendency to remember one's choices as better than they actually were: people tend to over-attribute positive features to options they chose and negative features to options not chosen.[1]
Theory
Experiments in cognitive science and social psychology have revealed a wide variety of biases in areas such as statistical reasoning, social attribution, and memory.[2]
Choice-supportive memory distortion is thought to occur during the time of memory retrieval and to result from the belief that "I chose this option, therefore it must have been the better option."[4] It is also possible that choice-supportive memories arise because an individual was only paying attention to certain pieces of information when making a decision, or from post-choice cognitive dissonance.[4] In addition, biases can arise because they are closely related to high-level cognitive operations and complex social interactions.[5]
Memory distortions may sometimes serve a purpose, because it may be in our interest not to remember some details of an event, or to forget others altogether.[6]
Making a selection
The objective of a choice is generally to pick the best option. Thus, after making a choice, a person is likely to
maintain the belief that the chosen option was better than the options rejected. Every choice has an upside and a
downside. The process of making a decision mostly relies upon previous experiences. Therefore, a person will
remember not only the decision made but also the reasoning behind making that decision.
Motivation
Motivation may also play a role in this process because when a person remembers the option that they chose as being
the best option, it should help reduce regret about their choice. This may represent a positive illusion that promotes
well-being.
Cases when the individual is not in control
There are cases where an individual is not in control of which options are received. People often end up with options that were not chosen but instead were assigned by others, such as job assignments made by bosses, course instructors assigned by a registrar, or vacation spots selected by other family members.[7] However, being assigned (randomly or not) to an option leads to a different set of cognitions and memory attributions that tend to favor the alternative (non-received) option and may emphasize regret and disappointment.[8]
Assigned options: Making a choice, or having a choice made for you by other people in your best interest, can prompt memory attributions that support that choice. Current experiments show no choice-supportive memory bias for assigned options.[7] However, choices made on a person's behalf in their best interest do show a tendency toward choice-supportive memory bias.
Random selection: People do not show choice-supportive biases when choices are made randomly for them.[9] This is because choice-supportive memory bias tends to arise during the act of making the decision.
How choice-supportive bias relates to self
People's conception of who they are is shaped by the memories of the choices they make: the college favored over the one renounced, the job chosen over the one rejected, the candidate elected instead of another not selected.[10] Memories of chosen as well as forgone alternatives can affect one's sense of well-being. Regret for options not taken can cast a shadow, whereas satisfaction at having made the right choice can make a good outcome seem even better.[10]
Positive illusions
Choice-supportive bias often results in memories that depict the self in an overly favorable light. In general, cognitive biases loosen our grasp on reality, because the line between reality and fantasy can become fuzzy if one's brain has failed to remember a particular event.[5] Positive illusions are generally mild and are important contributors to our sense of well-being; however, we all need to be aware that they exist as part of human nature.[5]
Memory storage
Human beings have intelligent and complex minds, which allow us to remember the past, optimize the present, and plan for the future. Remembering involves a complex interaction between the current environment, what one expects to remember, and what is retained from the past.[5] The mechanisms of the brain that allow memory storage and retrieval serve us well most of the time, but occasionally get us into trouble.
Memories change over time
There is now abundant evidence that memory content can undergo systematic changes. After some period of time, if a memory is not used often, it may be forgotten.
Memory retention: Retention is best for experiences that are pleasant, intermediate for experiences that are unpleasant, and worst for experiences that are neutral. Generic memories provide the basis for inferences that can bring about distortions. These distortions in memory do not appear to displace an individual's specific memories; rather, they supplement and fill in the gaps when the memories are lost.[11] It has been shown that a wide variety of strategic and systematic processes are used to activate different areas of the brain in order to retrieve information.
Credibility of a memory: People have a way of self-checking memories, in which a person may consider the plausibility of a retrieved memory by asking whether the event is even possible.[3] For example, if a person remembers seeing a pig fly, they must conclude that it was from a dream, because pigs cannot fly in the real world. Memory does not provide people with perfect reproductions of what happened; it consists only of constructions and reconstructions of what happened.[3]
Brain areas of interest
There is extensive evidence that the amygdala is involved in affectively influencing memory.[12] Emotional arousal, usually fear based, activates the amygdala and results in the modulation of memory storage occurring in other brain regions. The forebrain is one of the targets of the amygdala: it receives input from the amygdala, calculates the emotional significance of the stimulus, generates an emotional response, and transmits it to the cerebral cortex. This can alter the way neurons respond to future input, which is how cognitive biases such as choice-supportive bias can influence future decisions.
Stress hormones affect memory
The effects of stress-related hormones, such as epinephrine and glucocorticoids, are mediated by influences involving the amygdala.[12] It has been shown in experiments that rats given systemic injections of epinephrine while being trained to perform a task show an enhanced memory of performing the task. In effect, the stronger the emotion that is tied to a memory, the more likely the individual is to remember it. Therefore, if a memory is stored and retrieved properly, it is less likely to be distorted.
Brain mapping
A PET scan or fMRI can be used to identify different regions of the brain that are activated during specific memory
retrieval.
fMRI study
True versus false memories: One study asked subjects to remember a series of events while being monitored by an fMRI to see which areas "light up". When an individual remembered a greater number of true memories than false memories, a cluster spanning the right superior temporal gyrus and lateral occipital cortex showed activation. However, when the reverse occurred (when an individual remembered a greater number of false memories than true), the area that showed activation was the left insula.[13] These findings may provide some insight as to which areas of the brain are involved with storing memories and later retrieving them.
Choice-supportive bias increases with age
Studies now show that as people age, their process of memory retrieval changes. Although general memory
problems are common to everyone because no memory is perfectly accurate, older adults are more likely than
younger adults to show choice-supportive biases.
Aging of the brain
Normal aging may be accompanied by neuropathy in the frontal brain regions. Frontal regions help people encode or use specific memorial attributes to make source judgments, and they support personality and the ability to plan for events. Deterioration in these areas can contribute to memory distortions and to changes in the regulation of emotion.
Regulation of emotion
In general, older adults are more likely to remember the emotional aspects of situations than are younger adults. For example, on a memory characteristic questionnaire, older adults rated remembered events as having more associated thoughts and feelings than did younger adults. As a person ages, regulating personal emotion becomes a higher priority, whereas knowledge acquisition becomes a less powerful motive. Choice-supportive bias would therefore arise because older adults' focus is on how they felt about a choice rather than on the factual details of the options. Studies have shown that when younger adults are encouraged to remember the emotional aspects of a choice, they too are more likely to show choice-supportive bias. This may be related to older adults' greater tendency to show a positivity effect in memory.
Rely on familiarity
Older adults rely more than younger adults on categorical or general knowledge about an event to recognize particular elements from the event.[1] Older adults are also less likely to correctly remember contextual features of events, such as their color or location. This may be because older adults remember (or rely on) fewer source-identifying characteristics than the young. Consequently, older adults must more often guess or base a response on less specific information, such as familiarity.[14] As a result, if they can't remember something, they are more likely to fill in the missing gaps with things that are familiar to them.[5]
Getting the 'gist'
Older adults are more reliant on gist-based retrieval. A number of studies suggest that using stereotypes or general knowledge to help remember an event is less cognitively demanding than relying on other types of memorial information and thus might require less reflective activity. This shift towards gist-based processes might occur as compensation for age-related decrements in verbatim memory.[15]
Inhibition
Episodic memory and inhibition accounts have both been offered for age-related increases in false memories. Inhibition of a memory may be related to an individual's hearing capacity and attention span: if a person cannot hear what is going on around them, or is not paying much attention, the memory cannot be properly stored and therefore cannot be accurately retrieved.
Examples of choice-supportive bias
Deciding between two used cars
Henkel and Mather tested the role of beliefs at the time of retrieval about which option was chosen by giving participants several hypothetical choices, such as deciding between two used cars.[4] After making several choices, participants left and were asked to return a week later. At that point, Henkel and Mather reminded them which option they had chosen for each choice and gave them a list of the features of the two options; some new positive and negative features were mixed in with the old features. Next, participants were asked to indicate whether each feature was new, had been associated with the option they chose, or had been associated with the option they rejected. In their memories, participants favored whichever option Henkel and Mather had told them they had chosen.[4] These findings show that beliefs at the time of retrieval about which option was chosen shape both which features are attributed to the options and how vividly those features are remembered.
Remembering high school grades
One study looked at accuracy and distortion in memory for high school grades. The relation between accuracy and distortion of autobiographical memory content was examined by verifying 3,220 high school grades recalled by 99 freshman college students.[11] Most errors inflated the actual high school grade, meaning that these distortions reflect memory reconstructions in a positive and emotionally gratifying direction. In addition, the findings indicate that the process of distortion is not caused by the actual loss of the unpleasant memory of getting the bad grade, because no correlation was found between the percentage of accurate recall and the degree of asymmetry, or distortion.[11] This shows that the distortion in memories of high school grades arises through another mechanism, after the content has been forgotten.
A 50 year study of college grades
Many similar studies have been performed, such as a fifty-year study of memory for college grades. In this study, one to 54 years after graduation, 276 alumni correctly recalled 3,025 of 3,967 college grades. The number of omission errors increased with the retention interval, and better students made fewer errors.[16] The accuracy of recall increased with confidence in recall. Eighty-one percent of errors of commission inflated the actual grade. These data suggest that distortions occur soon after graduation, remain constant during the retention interval, and are greater for better students and for the courses students enjoyed most. Therefore, the distortion may arise sometime between when the memory is stored and when it is retrieved later.[16]
Methods for testing choice-supportive bias
Written scenario memory tests
Researchers have used written scenarios in which participants are asked to make a choice between two options.
Later, on a memory test, participants are given a list of positive and negative features, some of which were in the
scenario and some of which are new. A choice-supportive bias is seen when both correct and incorrect attributions
tend to favor the chosen option, with positive features more likely to be attributed to the chosen option and negative
features to the rejected option.
Deception: Henkel and Mather (2007) found that giving people false reminders about which option they chose in
a previous experiment session led people to remember the option they were told they had chosen as being better
than the other option. This reveals that choice-supportive biases arise in large part when remembering past
choices, rather than being the result of biased processing at the time of the choice.
Deese–Roediger–McDermott paradigm
The Deese–Roediger–McDermott (DRM) paradigm consists of a participant listening to an experimenter read lists of thematically related words (e.g. table, couch, lamp, desk); then, after some period of time, the experimenter asks whether a given word was presented in the list. Participants often report that related but non-presented words (e.g. chair) were included in the encoding series, essentially suggesting that they heard the experimenter say these non-presented words (or critical lures). Incorrect "yes" responses to critical lures, often referred to as false memories, are remarkably high under standard DRM conditions.[17]
Relation to cognitive dissonance
The theory of cognitive dissonance proposes that people have a motivational drive to reduce dissonance.
Choice-supportive bias is potentially related to the aspect of cognitive dissonance explored by Jack Brehm (1956) as
postdecisional dissonance. Within the context of cognitive dissonance, choice-supportive bias would be seen as
reducing the conflict between "I prefer X" and "I have committed to Y".
Debiasing
A study of the Lady Macbeth effect showed reduced choice-supportive bias by having participants engage in washing.[18]
References
[1] Mather, M.; Johnson, M. K. (2000). "Choice-supportive source monitoring: Do our decisions seem better to us as we age?". Psychology and Aging 15: 596–606.
[2] "But That's Crazy! Cognitive Bias in Decision-making", Duncan Pierce (http://duncanpierce.org/cognitive_bias_workshop). Retrieved 18 Sept. 2010.
[3] Johnson, Marcia K. (2006). "Memory and Reality". American Psychologist: 760–71 (http://www.erin.utoronto.ca/~jnagel/2111/Johnson2006.pdf). Retrieved 18 Sept. 2010.
[4] "Memory Distortion in Decision Making". University of Southern California (http://www.usc.edu/projects/matherlab/s/memorydistortionchoices.html). Retrieved 18 Sept. 2010.
[5] Schacter, Daniel L. The Seven Sins of Memory: How the Mind Forgets and Remembers. Boston: Houghton Mifflin, 2002. Print.
[6] Gordon, Ruthanna; Franklin, Nancy; Beck, Jennifer (2005). "Wishful Thinking and Source Monitoring". Memory & Cognition 33 (3): 418–29. doi:10.3758/BF03193060.
[7] Mather, M.; Shafir, E.; Johnson, M. (2003). "Remembering chosen and assigned options". Memory & Cognition 31 (3): 422–433. doi:10.3758/BF03194400.
[8] Stoll Benney, Kristen; Henkel, Linda (2006). "The Role of Free Choice in Memory for Past Decisions". Memory 14 (8): 1001–1011. doi:10.1080/09658210601046163.
[9] Henkel, L. A.; Mather, M. (2007). "Memory attributions for choices: How beliefs shape our memories" (http://www.usc.edu/projects/matherlab/pdfs/HenkelMather2007.pdf) (PDF). Journal of Memory and Language 57 (2): 163–176. doi:10.1016/j.jml.2006.08.012.
[10] Mather, M.; Shafir, E.; Johnson, M. K. (2000). "Misremembrance of options past: Source monitoring and choice". Psychological Science 11 (2): 132–138. doi:10.1111/1467-9280.00228. PMID 11273420.
[11] Bahrick, Harry P.; Hall, Lynda K.; Berger, Stephanie A. (1996). "Accuracy and Distortion in Memory for High School Grades". Psychological Science 7 (5): 265–71. doi:10.1111/j.1467-9280.1996.tb00372.x.
[12] McGaugh, James L.; Cahill, Larry; Roozendaal, Benno (1996). "Involvement of the Amygdala in Memory Storage: Interaction with Other Brain Systems". Proceedings of the National Academy of Sciences 93 (24): 13508–13514. doi:10.1073/pnas.93.24.13508.
[13] Baym, C. L.; Gonsalves, B. D. (2010). "Comparison of neural activity that leads to true memories, false memories, and forgetting: An fMRI study of the misinformation effect". Cogn Affect Behav Neurosci 10 (3): 339–348. doi:10.3758/CABN.10.3.339. PMID 20805535.
[14] Dodson, C.; Bawa, S.; Slotnick, S. (2007). "Aging, source memory, and misrecollections". Journal of Experimental Psychology: Learning, Memory and Cognition 33: 169–181. doi:10.1037/0278-7393.33.1.169.
[15] Lovden, M. (2003). "The episodic memory and inhibition accounts of age-related increases in false memories: A consistency check". Journal of Memory and Language 49 (2): 268–283. doi:10.1016/S0749-596X(03)00069-X.
[16] Bahrick, H. P.; Hall, L. K.; Da Costa, L. A. (2008). "Fifty years of memory of college grades: Accuracy and distortions". Emotion 8 (1): 13–22. doi:10.1037/1528-3542.8.1.13. PMID 18266512.
[17] Foley, M.; Hughes, K.; Librot, H.; Paysnick, A. (2009). "Imagery Encoding Effects on Memory in the DRM Paradigm: A Test of Competing Predictions". Applied Cognitive Psychology 23 (6): 828–848. doi:10.1002/acp.1516.
[18] Lee, Spike W. S.; Norbert Schwarz (2010). "Washing away postdecisional dissonance" (http://www.sciencemag.org/content/328/5979/709.abstract). Science 328 (5979): 709. doi:10.1126/science.1186799. PMID 20448177.
Mather, M.; Johnson, M.K. (2000). "Choice-supportive source monitoring: Do our decisions seem better to us as we age?" (http://www.usc.edu/projects/matherlab/pdfs/MatherJohnson2000.pdf) (PDF). Psychology and Aging 15 (4): 596–606. doi:10.1037/0882-7974.15.4.596. PMID 11144319.
Mather, M.; Shafir, E.; Johnson, M.K. (2000). "Misremembrance of options past: Source monitoring and choice" (http://www.usc.edu/projects/matherlab/pdfs/Matheretal2000.pdf) (PDF). Psychological Science 11 (2): 132–138. doi:10.1111/1467-9280.00228. PMID 11273420.
External links
Memory distortion for past choices (http://www.usc.edu/projects/matherlab/s/memorydistortionchoices.html)
Clustering illusion
The clustering illusion refers to the tendency to erroneously perceive small samples from random distributions as having significant "streaks" or "clusters", caused by a human tendency to underpredict the amount of variability likely to appear, due to chance, in a small sample of random or semi-random data.[1]
Thomas Gilovich found that most people thought that the sequence OXXXOXXXOXXOOOXOOXXOO[2] looked non-random, when, in fact, it has several characteristics maximally probable for a pseudorandom stream, such as an equal number of each result and an equal number of adjacent results with the same outcome for both possible outcomes. In sequences like this, people seem to expect to see a greater number of alternations than one would predict statistically. The probability of an alternation in a sequence of independent random binary events is 0.5, yet people seem to expect an alternation rate of about 0.7.[3][4][5]
In fact, in a short number of trials, variability and non-random-looking "streaks" are quite probable.
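This point is easy to check by simulation. The following Python sketch (illustrative only: the sequence length of 21 and the trial count are arbitrary choices, not parameters from Gilovich's study) generates short random O/X sequences and shows that the average alternation rate is about 0.5, while runs of four or more identical outcomes are nonetheless common:

    import random

    def alternation_rate(seq):
        # Fraction of adjacent pairs whose outcomes differ.
        changes = sum(1 for a, b in zip(seq, seq[1:]) if a != b)
        return changes / (len(seq) - 1)

    def longest_streak(seq):
        # Length of the longest run of identical outcomes.
        best = run = 1
        for a, b in zip(seq, seq[1:]):
            run = run + 1 if a == b else 1
            best = max(best, run)
        return best

    random.seed(1)
    trials = ["".join(random.choice("OX") for _ in range(21)) for _ in range(10000)]
    print(sum(alternation_rate(t) for t in trials) / len(trials))    # close to 0.5
    print(sum(longest_streak(t) >= 4 for t in trials) / len(trials)) # streaks of 4+ are frequent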
Daniel Kahneman and Amos Tversky explained this kind of misprediction as being caused by the representativeness heuristic[4] (which they themselves first proposed). Gilovich argues that a similar effect occurs for other types of random dispersions, including two-dimensional data, such as seeing clusters in the locations of impact of V-1 flying bombs on London during World War II or seeing streaks in stock market price fluctuations over time.[1][4]
Although Londoners developed specific theories about the pattern of impacts within London, a statistical analysis by R. D. Clarke, originally published in 1946, showed that the impacts of V-2 rockets on London were a close fit to the Poisson distribution, meaning that they closely resembled the expected result of a chance dispersion.[6][7][8][9][10] This analysis later became a plot point in Thomas Pynchon's novel Gravity's Rainbow.
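A Clarke-style analysis can be mimicked with a short simulation. In the sketch below (a rough illustration rather than a reproduction of Clarke's data; the 576 squares and 537 impacts are the figures usually quoted from his study and are assumed here), impacts scattered uniformly at random over a grid produce per-square counts that closely track the Poisson distribution, clusters included:

    import random
    from collections import Counter
    from math import exp, factorial

    random.seed(2)
    n_squares, n_impacts = 576, 537  # grid size and impact count, assumed from Clarke (1946)
    hits = Counter(random.randrange(n_squares) for _ in range(n_impacts))
    observed = Counter(hits[s] for s in range(n_squares))  # how many squares received k impacts

    lam = n_impacts / n_squares  # Poisson rate: average impacts per square
    for k in range(6):
        expected = n_squares * exp(-lam) * lam**k / factorial(k)
        print(k, observed[k], round(expected, 1))  # simulated counts sit close to the Poisson prediction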
The clustering illusion is central to the "hot hand fallacy", the first study of which was reported by Gilovich, Robert Vallone and Amos Tversky. They found that the idea that basketball players shoot successfully in "streaks", sometimes described by sportscasters as having a "hot hand" and widely believed by Gilovich et al.'s subjects, was false. In the data they collected, if anything the success of a previous throw very slightly predicted a subsequent miss rather than another success.[5]
A 2008 study by Jennifer Whitson and Adam Galinsky found that subjects were more likely to report meaningful clusters in semi-random pictures after they had been primed to feel out of control, or had been induced to reminisce about an experience in which they felt out of control.[10][11][12]
Using this cognitive bias in causal reasoning may result in the Texas sharpshooter fallacy. More general forms of erroneous pattern recognition are pareidolia and apophenia. Related biases are the illusion of control, to which the clustering illusion may contribute, and insensitivity to sample size, in which people do not expect greater variation in smaller samples. A different cognitive bias involving misunderstanding of chance streams is the gambler's fallacy.
References
[1] Gilovich, Thomas (1991). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press. ISBN 0-02-911706-2.
[2] Gilovich, 1991 p. 16
[3] Tune, G. S. (1964). "Response preference: A review of some relevant literature". Psychological Bulletin 61: 286–302. doi:10.1037/h0048618. PMID 14140335.
[4] Kahneman, Daniel; Amos Tversky (1972). "Subjective probability: A judgment of representativeness". Cognitive Psychology 3: 430–454. doi:10.1016/0010-0285(72)90016-3.
[5] Gilovich, Thomas; Robert Vallone & Amos Tversky (1985). "The hot hand in basketball: On the misperception of random sequences" (http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Vallone.Tversky.pdf). Cognitive Psychology 17: 295–314.
[6] Clarke, R. D. (1946). "An application of the Poisson distribution" (http://www.actuaries.org.uk/__data/assets/pdf_file/0016/26053/0481.pdf). Journal of the Institute of Actuaries 72: 481.
[7] Gilovich, 1991 p. 19
[8] Mori, Kentaro. "Seeing patterns" (http://forgetomori.com/2009/skepticism/seeing-patterns/). Retrieved 3 March 2012.
[9] "Bombing London" (http://www.dur.ac.uk/stat.web/bomb.htm). Retrieved 3 March 2012.
[10] Tierney, John (3 October 2008). "See a pattern on Wall Street?" (http://tierneylab.blogs.nytimes.com/2008/10/03/see-a-pattern-here/). TierneyLab (New York Times). Retrieved 3 March 2012.
[11] Whitson, Jennifer A.; Adam D. Galinsky (2008). "Lacking control increases illusory pattern perception" (http://rifters.com/real/articles/Science_LackingControlIncreasesIllusoryPatternPerception.pdf). Science 322: 115–117.
[12] Yong, Ed. "Lacking control drives false conclusions, conspiracy theories and superstitions" (http://scienceblogs.com/notrocketscience/2008/12/lacking_control_drives_false_conclusions_conspiracy_theories.php). Retrieved 3 March 2012.
External links
Skeptic's Dictionary: The clustering illusion (http://skepdic.com/clustering.html)
Hot Hand website: Statistical analysis of sports streakiness (http://thehothand.blogspot.com/)
Congruence bias
Congruence bias is a type of cognitive bias similar to confirmation bias. Congruence bias occurs due to people's
overreliance on direct testing of a given hypothesis, and a neglect of indirect testing.
Examples
Suppose that in an experiment, a subject is presented with two buttons, and is told that pressing one of those buttons,
but not the other, will open a door. The subject adopts the hypothesis that the button on the left opens the door in
question. A direct test of this hypothesis would be pressing the button on the left; an indirect test would be pressing
the button on the right. The latter is still a valid test because once the result of the door's remaining closed is found,
the left button is proven to be the desired button. (This example is parallel to Bruner, Goodnow, and Austin's
example in the psychology classic A Study of Thinking.)
We can take this idea of direct and indirect testing and apply it to more complicated experiments in order to explain the presence of a congruence bias in people. In an experiment, a subject will tend to test his or her own (usually naive) hypothesis again and again instead of trying to disprove it.
The classic example of subjects' congruence bias is found in Wason (1960, 1968b). Here, the experimenter gave
subjects the number sequence "2, 4, 6," telling the subjects that this sequence followed a particular rule and
instructing subjects to find the rule underlying the sequence logic. Subjects provided their own number sequences as
tests to see if they could ascertain the rule dictating which numbers could be included in the sequence and which
could not. Most subjects respond to the task by quickly deciding that the underlying rule is "numbers ascending by
2," and provide as tests only sequences concordant with this rule, such as "3, 5, 7," or even "pi plus 2, plus 4, plus 6."
Each of these sequences follows the underlying rule the experimenter is thinking of, though "numbers ascending by 2" is not the actual criterion being used. However, because subjects succeed at repeatedly testing the same singular
principle, they naively believe their chosen hypothesis is correct. When a subject offers up to the experimenter the
hypothesis "numbers ascending by 2" only to be told he is wrong, much confusion usually ensues. At this point,
many subjects attempt to change the wording of the rule without changing its meaning, and even those who switch to
indirect testing have trouble letting go of the "+ 2" convention, producing potential rules as idiosyncratic as "the first
two numbers in the sequence are random, and the third number is the second number plus two." Many subjects never
realize that the actual rule the experimenter was using was simply to list ascending numbers, because of the
subjects' inability to consider indirect tests of their hypotheses.
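The logic of the task is easy to make concrete in code. In this hypothetical sketch (the rule functions and test triples are illustrative, not taken from Wason's materials), every direct test of the "+ 2" hypothesis also satisfies the experimenter's broader rule, so confirming tests can never expose the mismatch, while an indirect test such as (1, 2, 3) can:

    def true_rule(seq):
        # The experimenter's actual rule: any strictly ascending sequence.
        return all(a < b for a, b in zip(seq, seq[1:]))

    def hypothesis(seq):
        # The subject's naive hypothesis: numbers ascending by 2.
        return all(b - a == 2 for a, b in zip(seq, seq[1:]))

    direct_tests = [(3, 5, 7), (10, 12, 14), (-4, -2, 0)]  # chosen to fit the hypothesis
    indirect_tests = [(1, 2, 3), (5, 10, 20), (3, 2, 1)]   # could falsify it

    for t in direct_tests + indirect_tests:
        print(t, "fits '+2' hypothesis:", hypothesis(t), "experimenter says yes:", true_rule(t))
    # Every direct test prints True/True, so it cannot tell the two rules apart;
    # (1, 2, 3) prints False/True, revealing that the hypothesis is too narrow.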
Wason attributed this failure of subjects to an inability to consider alternative hypotheses, which is the root of the
congruence bias. Jonathan Baron explains that subjects could be said to be using a "congruence heuristic," wherein a
hypothesis is tested only by thinking of results that would be found if that hypothesis is true. This heuristic, which
many people seem to use, ignores alternative hypotheses.
To avoid falling into the trap of the congruence bias, Baron suggests that the following two heuristics be used:
1. Ask "How likely is a yes answer, if I assume that my hypothesis is false?" Remember to choose a test that has a
high probability of giving some answer if the hypothesis is true, and a low probability if it is false.
2. "Try to think of alternative hypotheses; then choose a test most likely to distinguish them a test that will
probably give different results depending on which is true." An example of the need for the heuristic could be
seen in a doctor attempting to diagnose appendicitis. In that situation, assessing a white blood cell count would
not assist in diagnosis, because an elevated white blood cell count is associated with a number of maladies.
External links
Jonathan Baron's Website (http://www.sas.upenn.edu/~baron/)
Thinking and Deciding, an introduction to Decision Theory (http://www.amazon.com/gp/product/0521659728)
Conjunction fallacy
I am particularly fond of this example [the Linda problem] because I know that the [conjoint] statement is least probable, yet a little homunculus in my head continues to jump up and down, shouting at me – "but she can't just be a bank teller; read the description."
– Stephen J. Gould[1]
The conjunction fallacy is an informal fallacy that occurs when it is assumed that specific conditions are more
probable than a single general one.
The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman:[2]
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was
deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear
demonstrations.
Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
90% of those asked chose option 2. However, the probability of two events occurring together (in "conjunction") is always less than or equal to the probability of either one occurring alone – formally, for two events A and B this inequality can be written as Pr(A ∧ B) ≤ Pr(A) and Pr(A ∧ B) ≤ Pr(B).
For example, even choosing a very low probability of Linda being a bank teller, say Pr(Linda is a bank teller) = 0.05, and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming independence, Pr(Linda is a bank teller and Linda is a feminist) = 0.05 × 0.95 = 0.0475, lower than Pr(Linda is a bank teller).
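The arithmetic of this bound is easy to verify. A minimal Python sketch, using the same illustrative probabilities as above:

    # Conjunction bound: Pr(A and B) can never exceed Pr(A) or Pr(B).
    p_teller = 0.05    # assumed probability that Linda is a bank teller
    p_feminist = 0.95  # assumed probability that Linda is a feminist

    p_both = p_teller * p_feminist  # under independence
    print(p_both)  # 0.0475, below p_teller
    assert p_both <= p_teller and p_both <= p_feminist
    # Independence is not required: Pr(A and B) = Pr(A) * Pr(B | A) <= Pr(A),
    # since Pr(B | A) is at most 1.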
Tversky and Kahneman argue that most people get this problem wrong because they use the representativeness heuristic to make this kind of judgment: option 2 seems more "representative" of Linda based on the description of her, even though it is clearly mathematically less likely.[3]
In other demonstrations, they argued that a specific scenario seemed more likely because of representativeness, but that each added detail would actually make the scenario less and less likely. In this way the fallacy resembles the misleading vividness or slippery slope fallacies. More recently, Kahneman has argued that the conjunction fallacy is a type of extension neglect.[4]
Joint versus separate evaluation
In some experimental demonstrations, the conjoint option is evaluated separately from its basic option. In other words, one group of participants is asked to rank-order the likelihood that Linda is a bank teller, a high school teacher, and several other options, and another group is asked to rank-order whether Linda is a bank teller and active in the feminist movement versus the same set of options (without "Linda is a bank teller" as an option). In this type of demonstration, different groups of subjects rank-order "Linda is a bank teller and active in the feminist movement" more highly than "Linda is a bank teller".[3]
Separate evaluation experiments preceded the earliest joint evaluation experiments, and Kahneman and Tversky were surprised when the effect was still observed under joint evaluation.[5]
In separate evaluation, the term "conjunction effect" may be preferred.[3]
Criticism of the Linda problem
Critics such as Gerd Gigerenzer and Ralph Hertwig have criticized the Linda problem on grounds such as its wording and framing. The question may violate conversational maxims in that people assume the question obeys the maxim of relevance. Gigerenzer argues that some of the terminology used has polysemous meanings, the alternatives of which he claimed were more "natural". He argues that one meaning of "probable" – "what happens frequently" – corresponds to the mathematical probability people are supposed to be tested on, but that other meanings – "what is plausible" and "whether there is evidence" – do not.[6] The term "and" has even been argued to have relevant polysemous meanings.[7] Many techniques have been developed to control for this possible misinterpretation (see Moro, 2009, and Tentori & Crupi, 2012), but none of them has dissipated the effect.
Many variations in the wording of the Linda problem were studied by Tversky and Kahneman.[3] If the first option is changed to obey conversational relevance, i.e., "Linda is a bank teller whether or not she is active in the feminist movement", the effect is decreased, but the majority (57%) of the respondents still commit the conjunction error. It has been reported that if the probability is changed to a frequency format (see the debiasing section below) the effect is reduced or eliminated. However, studies exist in which indistinguishable conjunction fallacy rates have been observed with stimuli framed in terms of probabilities versus frequencies (see, for example, Tentori, Bonini, & Osherson, 2004 or Wedell & Moro, 2008).
The wording criticisms may be less applicable to the conjunction effect in separate evaluation.[8] The "Linda problem" has been studied and criticized more than other types of demonstration of the effect (some of which are described below).[9]
Other demonstrations
Policy experts were asked to rate the probability that the Soviet Union would invade Poland and the United States would break off diplomatic relations, all in the following year. They rated it on average as having a 4% probability of occurring. Another group of experts was asked to rate the probability simply that the United States would break off relations with the Soviet Union in the following year. They gave it an average probability of only 1%.[3]
In an experiment conducted in 1980, respondents were asked the following:
Suppose Bjorn Borg reaches the Wimbledon finals in 1981. Please rank order the following outcomes from most to least likely.
Borg will win the match
Borg will lose the first set
Borg will lose the first set but win the match
Borg will win the first set but lose the match
On average, participants rated "Borg will lose the first set but win the match" more highly than "Borg will lose the first set".[3]
In another experiment, participants were asked:
Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you choose appears on successive rolls of the die.
1. RGRRR
2. GRGRRR
3. GRRRRR
65% of participants chose the second sequence, even though option 1 is contained within it (and is shorter than the other options), which makes option 1 at least as likely to appear. In a version where the $25 bet was only hypothetical the results did not significantly differ. Tversky and Kahneman argued that sequence 2 appears "representative" of a chance sequence[3] (compare to the clustering illusion).
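Because RGRRR occurs inside GRGRRR, any run of rolls containing sequence 2 necessarily contains sequence 1, so sequence 1 must be at least as likely to appear. A small Monte Carlo sketch (the trial count is an arbitrary choice) makes the ordering visible:

    import random

    random.seed(3)
    sequences = ["RGRRR", "GRGRRR", "GRRRRR"]
    wins = {s: 0 for s in sequences}
    n = 100_000
    for _ in range(n):
        rolls = "".join(random.choice("GGGGRR") for _ in range(20))  # 4 green faces, 2 red
        for pattern in sequences:
            if pattern in rolls:
                wins[pattern] += 1
    for pattern in sequences:
        print(pattern, wins[pattern] / n)  # RGRRR comes out the most likely of the three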
Debiasing
Drawing attention to set relationships, using frequencies instead of probabilities, and/or thinking diagrammatically sharply reduce the error in some forms of the conjunction fallacy.[10]
In one experiment the question of the Linda problem was reformulated as follows:
There are 100 persons who fit the description above (that is, Lindas). How many of them are:
Bank tellers? __ of 100
Bank tellers and active in the feminist movement? __ of 100
Whereas previously 85% of participants gave the wrong answer (bank teller and active in the feminist movement), in experiments done with this questioning none of the participants gave a wrong answer.[11]
Notes
[1] Gould (1988)
[2] Tversky & Kahneman (1982, 1983)
[3] Tversky & Kahneman (1983)
[4] Kahneman (2003)
[5] Kahneman (2011) chapter 15
[6] Gigerenzer (1996), Hertwig & Gigerenzer (1999)
[7] Mellers, Hertwig & Kahneman (2001)
[8] Gigerenzer (1996)
[9] Kahneman (2011) ch. 15, Kahneman & Tversky (1996), Mellers, Hertwig & Kahneman (2001)
[10] Tversky & Kahneman (1983), Gigerenzer (1991), Hertwig & Gigerenzer (1999), Mellers, Hertwig & Kahneman (2001)
[11] Gigerenzer (1991)
References
Tversky, A. and Kahneman, D. (October 1983). "Extension versus intuitive reasoning: The conjunction fallacy in probability judgment" (http://content2.apa.org/journals/rev/90/4/293). Psychological Review 90 (4): 293–315. doi:10.1037/0033-295X.90.4.293.
Tversky, A. and Kahneman, D. (1982). "Judgments of and by representativeness". In D. Kahneman, P. Slovic & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge, UK: Cambridge University Press.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103, 582–591.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychological Review, 103, 592–596.
Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond "heuristics and biases". European Review of Social Psychology, 2, 83–115.
Gould, Stephen J. (1988). "The Streak of Streaks" (http://www.nybooks.com/articles/archives/1988/aug/18/the-streak-of-streaks/?pagination=false). The New York Review of Books.
Hertwig, Ralph; Gerd Gigerenzer (1999). "The Conjunction Fallacy Revisited: How Intelligent Inferences Look Like Reasoning Errors". Journal of Behavioral Decision Making 12: 275–305.
Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kahneman, Daniel (2000). "Evaluation by moments, past and future". In Daniel Kahneman and Amos Tversky (Eds.), Choices, Values and Frames.
Mellers, Barbara; Ralph Hertwig & Daniel Kahneman (2001). "Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration" (http://cds.unibas.ch/~hertwig/pdfs/2001/Mellersetal2001_frequency_eliminate_conjunction.pdf). Psychological Science 12 (4): 269–275.
Moro, R. (2009). On the nature of the conjunction fallacy. Synthese, 171, 1–24.
Tentori, K. & Crupi, V. (2012). On the conjunction fallacy and the meaning of "and", yet again: A reply to Hertwig, Benz, and Krauss (2008). Cognition, 122, 123–134.
Tentori, K., Bonini, N., & Osherson, D. (2004). The conjunction fallacy: A misunderstanding about conjunction? Cognitive Science, 28, 467–477.
External links
Fallacy files: Conjunction fallacy (http://www.fallacyfiles.org/conjunct.html)
Conservatism (belief revision)
In cognitive psychology and decision science, conservatism or conservatism bias is a bias in human information
processing. This bias describes human belief revision in which persons over-weight the prior distribution (base rate)
and under-weigh new sample evidence when compared to Bayesian belief-revision.
According to the theory, "opinion change is very orderly, and usually proportional to numbers calculated from Bayes's Theorem – but it is insufficient in amount".[1] In other words, persons update their prior beliefs as new evidence becomes observed, but they do so more slowly than they would if they used Bayes's theorem.
This bias was discussed by Ward Edwards in 1968,[1] who reported on experiments like the following one:
This bookbag contains 1000 poker chips. I started out with two such bags, one containing 700 red and 300 blue chips, the other containing 300 red and 700 blue. [...] Now, you sample, randomly, with replacement after each chip. In 12 samples, you get 8 reds and 4 blues. [...] what is the probability that this is the predominantly red bag?
Most subjects chose an answer around .7. The correct answer according to Bayes' theorem is closer to .97. Edwards suggested that people update their beliefs in the direction prescribed by Bayes's theorem, but by too little: they moved from the .5 prior toward the correct answer, yet stopped well short of it, a bias observed in several experiments.[1]
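The normative answer follows from a short Bayes computation, sketched here in Python:

    # Posterior probability that the bag is the predominantly red one,
    # after drawing 8 reds and 4 blues with replacement, starting from a 50/50 prior.
    prior = 0.5
    like_red = 0.7**8 * 0.3**4   # likelihood of the sample if the bag is red-majority
    like_blue = 0.3**8 * 0.7**4  # likelihood if the bag is blue-majority

    posterior = like_red * prior / (like_red * prior + like_blue * (1 - prior))
    print(round(posterior, 3))  # about 0.967, far above the typical answer of 0.7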
In finance
In finance, evidence has been found that investors under-react to corporate events, consistent with conservatism. This includes announcements of earnings, changes in dividends, and stock splits.[2]
Possible explanations
A recent study suggests that belief-revision conservatism can be explained by an information-theoretic generative mechanism that assumes a noisy conversion of objective evidence (observation) into subjective estimates (judgment).[3] The study explains that estimates of conditional probabilities are conservative because of noise in the retrieval of information from memory, where noise is defined as the mixing of evidence: memories of high likelihoods are mixed with evidence of low likelihood, so the resulting estimate is lower than it should be, while retrieved memories of low likelihoods come out higher than they should be. The result is conservatism: low estimates are not low enough, high estimates are not high enough, and the overall judgment is not extreme enough.
References
[1] Edwards, Ward. "Conservatism in Human Information Processing (excerpted)". In Daniel Kahneman, Paul Slovic and Amos Tversky (Eds.) (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press. ISBN 978-0521284141. Original work published 1968.
[2] Kadiyala, Padmaja; Rau, P. Raghavendra (2004). "Investor Reaction to Corporate Event Announcements: Under-reaction or Over-reaction?" (http://www.jstor.org/stable/10.1086/381273). The Journal of Business 77 (4): 357–386. JSTOR 10.1086/381273. Earlier version at doi:10.2139/ssrn.249979.
[3] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin 138 (2): 211–237; free access to the study here: http://www.martinhilbert.net/HilbertPsychBull.pdf
Contrast effect
A contrast effect is the enhancement or diminishment, relative to normal, of perception, cognition and related
performance as a result of immediately previous or simultaneous exposure to a stimulus of lesser or greater value in
the same dimension. (Here, normal perception or performance is that which would be obtained in the absence of the comparison stimulus – i.e., one based on all previous experience.)
Contrast effects are ubiquitous throughout human and non-human animal perception, cognition, and resultant
performance. A weight is perceived as heavier than normal when "contrasted" with a lighter weight. It is perceived
as lighter than normal when contrasted with a heavier weight. An animal works harder than normal for a given
amount of reward when that amount is contrasted with a lesser amount and works less energetically for that given
amount when it is contrasted with a greater amount. A person appears more appealing than normal when contrasted
with a person of less appeal and less appealing than normal when contrasted with one of greater appeal.
Types
Simultaneous contrast
Simultaneous contrast, identified by Michel Eugène Chevreul, refers to the manner in which the colors of two different objects affect each other. The effect is more noticeable when shared between objects of complementary color.[1]
In the image here, the two inner rectangles are exactly the same shade
of grey, but the upper one appears to be a lighter grey than the lower
one due to the background provided by the outer rectangles.
This is a different concept from contrast, which by itself refers to one
object's difference in color and luminance compared to its surroundings
or background.
Successive contrast
Successive contrast occurs when the perception of currently viewed
stimuli is modulated by previously viewed stimuli.
For example, when one stares at the dot in the center of one of the two
colored disks on the top row for a few seconds and then looks at the
dot in the center of the disk on the same side in the bottom row, the
two lower disks, though identically colored, appear to have different
colors for a few moments.
One type of contrast that involves both time and space is metacontrast and paracontrast. When one half of a circle is lit for 10 milliseconds, it is at its maximum intensity. If the other half is displayed 20–50 ms later, there is a mutual inhibition: the left side is darkened by the right half (metacontrast), and the center may be completely obliterated. At the same time, there is a slight darkening of the right side due to the first stimulus; this is paracontrast.[2]
Domains
The contrast effect was noted by the 17th-century philosopher John Locke, who observed that lukewarm water can feel hot or cold depending on whether the hand touching it was previously in hot or cold water.[3] In the early 20th century, Wilhelm Wundt identified contrast as a fundamental principle of perception, and since then the effect has been confirmed in many different areas.[3] Contrast effects shape not only visual qualities like color and brightness, but other kinds of perception, including the perception of weight.[4] One experiment found that thinking of the name "Hitler" led to subjects rating a person as more friendly.[5] Whether a piece of music is perceived as good or bad can depend on whether the music heard before it was unpleasant or pleasant.[6] For the effect to work, the objects being compared need to be similar to each other: a television reporter can seem to shrink when interviewing a tall basketball player, but not when standing next to a tall building.[4]
References
[1] Colour: Why the World Isn't Grey, Hazel Rossotti, Princeton University Press, Princeton, NJ, 1985, pp. 135–136. ISBN 0-691-02386-7.
[2] "eye, human." Encyclopædia Britannica. 2008. Encyclopædia Britannica 2006 Ultimate Reference Suite DVD.
[3] Kushner, Laura H. (2008). Contrast in judgments of mental health (http://books.google.com/books?id=TYn5VHp9jioC&pg=PA1). ProQuest. p. 1. ISBN 978-0-549-91314-6. Retrieved 24 March 2011.
[4] Plous, Scott (1993). The psychology of judgment and decision making (http://books.google.com/books?id=xvWOQgAACAAJ). McGraw-Hill. pp. 38–41. ISBN 978-0-07-050477-6. Retrieved 24 March 2011.
[5] Moskowitz, Gordon B. (2005). Social cognition: understanding self and others (http://books.google.com/books?id=_-NLW8Ynvp8C&pg=PA421). Guilford Press. p. 421. ISBN 978-1-59385-085-2. Retrieved 24 March 2011.
[6] Popper, Arthur N. (30 November 2010). Music Perception (http://books.google.com/books?id=ZYXd3CF1_vkC&pg=PA150). Springer. p. 150. ISBN 978-1-4419-6113-6. Retrieved 24 March 2011.
External links
WebExhibits – Simultaneous Contrast (http://webexhibits.org/colorart/contrast.html)
Example of simultaneous contrast with simple gray objects (http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#NTSC)
Interactive classic black-and-white example of simultaneous contrast (http://colorisrelative.com/bwbox.html)
Curse of knowledge
The curse of knowledge is a cognitive bias according to which better-informed agents find it extremely difficult to think about problems from the perspective of lesser-informed agents. The term was coined by Robin Hogarth.[1]
In one experiment, one group of subjects "tapped" a well-known song on a table while another listened and tried to identify the song. Some "tappers" described a rich sensory experience in their minds as they tapped out the melody. Tappers on average estimated that 50% of listeners would identify the specific tune; in reality, only 2.5% of listeners could do so.[2][3] Related to this finding is the phenomenon experienced by players of charades: the actor may find it frustratingly hard to believe that his or her teammates keep failing to guess the secret phrase, known only to the actor, conveyed by pantomime.
It has been argued that the curse of knowledge could contribute to the difficulty of teaching.[4]
References
[1] Camerer, Colin; George Loewenstein & Mark Weber (1989). "The curse of knowledge in economic settings: An experimental analysis". Journal of Political Economy 97: 1232–1254.
[2] Heath, Chip; Dan Heath (2007). Made to Stick. Random House.
[3] Ross, L., & Ward, A. (1996). Naive realism in everyday life: Implications for social conflict and misunderstanding. In T. Brown, E. S. Reed & E. Turiel (Eds.), Values and knowledge (pp. 103–135). Hillsdale, NJ: Erlbaum.
[4] Wieman, Carl (2007). "The 'Curse of Knowledge', or Why Intuition About Teaching Often Fails" (http://www.aps.org/publications/apsnews/200711/backpage.cfm). APS News. The Back Page 16 (10). Retrieved 8 March 2012.
Decoy effect
In marketing, the decoy effect (or asymmetric dominance effect) is the phenomenon whereby consumers will tend
to have a specific change in preference between two options when also presented with a third option that is
asymmetrically dominated. An option is asymmetrically dominated when it is inferior in all respects to one option;
but, in comparison to the other option, it is inferior in some respects and superior in others. In other words, in terms
of specific attributes determining preferability, it is completely dominated by (i.e., inferior to) one option and only
partially dominated by the other. When the asymmetrically dominated option is present, a higher percentage of
consumers will prefer the dominating option than when the asymmetrically dominated option is absent. The
asymmetrically dominated option is therefore a decoy serving to increase preference for the dominating option. The
decoy effect is also an example of the violation of the independence of irrelevant alternatives axiom of decision
theory.
Example
For example, if there is a consideration set involving MP3 players, consumers will generally see higher storage
capacity (number of GB) and lower price as positive attributes; while some consumers may want a player that can
store more songs, other consumers will want a player that costs less. In Consideration Set 1, two devices are
available:
Consideration Set 1
A B
price $400 $300
storage 30GB 20GB
In this case, some consumers will prefer A for its greater storage capacity, while others will prefer B for its lower
price.
Now suppose that a new player, C, is added to the market; it is more expensive than both A and B and has more
storage than B but less than A:
Consideration Set 2
A B C
price $400 $300 $450
storage 30GB 20GB 25GB
The addition of C – which consumers would presumably avoid, given that a lower price can be paid for a model with more storage – causes A, the dominating option, to be chosen more often than if only the two choices in Consideration Set 1 existed; C affects consumer preferences by acting as a basis of comparison for A and B. Because A is better than C in both respects, while B is only partially better than C, more consumers will prefer A now than did before. C is therefore a decoy whose sole purpose is to increase sales of A.
Conversely, suppose that instead of C, a player D is introduced that has less storage than both A and B, and that is
more expensive than B but not as expensive as A:
Consideration Set 3
A B D
price $400 $300 $350
storage 30GB 20GB 15GB
The result here is similar: consumers will not prefer D, because it is not as good as B in any respect. However,
whereas C increased preference for A, D has the opposite effect, increasing preference for B.
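The dominance relations driving the effect can be stated mechanically. A minimal sketch in Python (the option dictionaries simply encode the consideration sets above):

    def dominates(x, y):
        # x dominates y if it is at least as good on every attribute
        # (lower price, higher storage) and strictly better on at least one.
        weakly = x["price"] <= y["price"] and x["storage"] >= y["storage"]
        strictly = x["price"] < y["price"] or x["storage"] > y["storage"]
        return weakly and strictly

    A = {"price": 400, "storage": 30}
    B = {"price": 300, "storage": 20}
    C = {"price": 450, "storage": 25}  # decoy in Consideration Set 2
    D = {"price": 350, "storage": 15}  # decoy in Consideration Set 3

    print(dominates(A, C), dominates(B, C))  # True False: C is asymmetrically dominated, boosting A
    print(dominates(B, D), dominates(A, D))  # True False: D is asymmetrically dominated, boosting B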
References
J. Huber et al. (June 1982). "Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis". The Journal of Consumer Research 9 (1): 90ff. doi:10.1086/208899.
Vedantam, Shankar (April 2, 2007). "The Decoy Effect, or How to Win an Election" (http://www.washingtonpost.com/wp-dyn/content/article/2007/04/01/AR2007040100973.html). The Washington Post. Retrieved 2007-04-02.
National Public Radio (April 26, 2007). "Measuring 'the Decoy Effect' in Political Races – interview with Shankar Vedantam" (http://www.npr.org/templates/story/story.php?storyId=9585221). NPR. Retrieved 2007-04-26.
Denomination effect
The denomination effect is a theoretical form of cognitive bias relating to currency, whereby people are less likely to spend larger bills than their equivalent value in smaller bills. It was proposed by Priya Raghubir and Joydeep Srivastava in their 2009 paper "Denomination Effect".[1][2]
In an experiment conducted by Raghubir and Srivastava, university students were given a dollar, either in quarters or as a single dollar bill. The students were then given the option either to save the money they had been given or to spend it on candy. Consistent with the theory, the students given the quarters were more likely to spend the money they were given.[3]
Notes
[1] Priya Raghubir and Joydeep Srivastava, "Denomination Effect", Journal of Consumer Research, 7 April 2009, http://www.journals.uchicago.edu/doi/abs/10.1086/599222
[2] Joffe-Walt, Chana (12 May 2009). Why We Spend Coins Faster Than Bills (http://www.npr.org/templates/story/story.php?storyId=104063298), NPR.
[3] Priya Raghubir, Joydeep Srivastava. "Denomination Effect" (http://w4.stern.nyu.edu/news/docs/Denomination_Round_4-1.pdf). Retrieved 5 December 2012.
Distinction bias
Distinction bias, a concept of decision theory, is the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.
The concept was advanced by Hsee and Zhang as an explanation for differences in evaluations of options between
joint evaluation mode and separate evaluation mode (2004). Evaluation mode is a contextual feature in decision
making. Joint evaluation mode is when options are evaluated simultaneously, and separate evaluation mode is when
each option is evaluated in isolation (e.g., Hsee, 1998; Hsee & Leclerc, 1998). Research shows that evaluation mode
affects the evaluation of options, such that options presented simultaneously are evaluated differently than the same
options presented separately.
Hsee and Zhang (2004) offered a number of potential explanations for this change in preferences from joint
evaluation to separate evaluation, including the distinction bias. The distinction bias suggests that comparing two
options (as done in joint evaluation) makes even small differences between options salient. In other words, viewing
options simultaneously makes them seem more dissimilar than when viewing and evaluating each in isolation.
Understanding the differences between joint evaluation and separate evaluation is important because while
preferences are often formed and decisions made through distinction, options are generally experienced separately.
This results in a mismatch in which the best decision in the choice context may not provide the best experience. For
example, when televisions are displayed next to each other on the sales floor, the difference in quality between two
very similar, high-quality televisions may appear great. A consumer may pay a much higher price for the
higher-quality television, even though the difference in quality is imperceptible when the televisions are viewed in
isolation. Because the consumer will likely be watching only one television at a time, the lower-cost television would
have provided a similar experience at a lower cost.
References
Hsee, C.K. (1998). "Less is better: When low-value options are valued more highly than high-value options". Journal of Behavioral Decision Making 11 (2): 107–121. doi:10.1002/(SICI)1099-0771(199806)11:2<107::AID-BDM292>3.0.CO;2-Y.
Hsee, C.K.; Leclerc, F. (1998). "Will products look more attractive when presented separately or together?". The Journal of Consumer Research 25 (2): 175–186. doi:10.1086/209534.
Hsee, C.K.; Zhang, J. (2004). "Distinction bias: Misprediction and mischoice due to joint evaluation". Journal of Personality and Social Psychology 86 (5): 680–695. doi:10.1037/0022-3514.86.5.680. PMID 15161394.
Duration neglect
Duration neglect is a cognitive bias that occurs when the duration of an episode insufficiently affects its valuation. It is a subtype of extension neglect and a component of the peak-end rule and affective forecasting.
In one study, Daniel Kahneman and Barbara Fredrickson showed subjects pleasant or aversive film clips. When reviewing the clips mentally at a later time, subjects did not appear to take the length of the stimuli into account, treating them instead as if they were a series of affective "snapshots".[1]
In another demonstration, Kahneman and Fredrickson, with other collaborators, had subjects place their hands in painfully cold water. Under one set of instructions, they had to keep their hand in the water for an additional 30 seconds as the water was slowly heated to a warmer but still uncomfortably cold level, and under another set of instructions they were to remove their hand immediately. Otherwise both experiences were the same. Most subjects chose to repeat the longer experience. Subjects apparently judged the experience according to the peak-end rule – in other words, according to its worst and final moments only – paying little attention to duration.[2]
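The cold-water result can be expressed as a toy calculation. In the sketch below (the moment-by-moment discomfort ratings are invented for illustration, not data from the study), the longer episode contains strictly more total discomfort yet scores better under the peak-end rule:

    # Discomfort rated moment by moment (10 = worst pain).
    short_episode = [7, 8, 8, 9, 9, 9]        # ends at peak discomfort
    long_episode = short_episode + [6, 5, 4]  # same start, plus a milder ending

    def peak_end(ratings):
        # Retrospective score under the peak-end rule: mean of the worst and final moments.
        return (max(ratings) + ratings[-1]) / 2

    print(peak_end(short_episode), sum(short_episode))  # 9.0, total 50
    print(peak_end(long_episode), sum(long_episode))    # 6.5, total 65
    # Duration-sensitive total discomfort is higher for the long episode, but its
    # peak-end score is lower, matching subjects' preference to repeat it.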
Debiasing
Some forms of duration neglect may be reduced or eliminated by having participants answer in a graphical format, or give a rating for every five minutes.[3]
References
[1] Fredrickson, Barbara L.; Daniel Kahneman (1993). "Duration neglect in retrospective evaluations of affective episodes". Journal of Personality and Social Psychology 65 (1): 45–55. doi:10.1037/0022-3514.65.1.45. PMID 8355141.
[2] Kahneman, Daniel; Barbara L. Fredrickson, Charles A. Schreiber & Donald A. Redelmeier (1993). "When More Pain Is Preferred to Less: Adding a Better End". Psychological Science 4 (6): 401–405.
[3] Liersch, M. J.; C. R. M. McKenzie (2009). "Duration neglect by numbers – and its elimination by graphs" (http://psy2.ucsd.edu/~mckenzie/Liersch&McKenzie2009OBHDP.pdf). Organizational Behavior and Human Decision Processes 108: 303–314.
Empathy gap
A hot-cold empathy gap is a cognitive bias in which a person underestimates the influence of visceral drives and instead attributes behavior primarily to other, nonvisceral factors.
The term hot-cold empathy gap was coined by Carnegie Mellon University psychologist George Loewenstein. Hot-cold empathy gaps are one of Loewenstein's major contributions to behavioral economics. The crux of this idea is that human understanding is "state-dependent". For example, when one is angry, it is difficult to understand what it is like to be happy, and vice versa; when one is blindly in love with someone, it is difficult to understand what it is like not to be (or to imagine the possibility of not being blindly in love in the future). Importantly, an inability to minimize one's gap in empathy can lead to negative outcomes in medical settings (e.g., when a doctor needs to accurately diagnose the physical pain of a patient)[1] or in workplace settings (e.g., when an employer needs to assess the need for an employee's bereavement leave).[2]
Areas of study
Implications of the empathy gap have been explored in the realm of sexual decision-making, where young men in an unaroused "cold state" failed to predict that in an aroused "hot state" they would be more likely to make risky sexual decisions (e.g., not using a condom).[3]
The empathy gap has also been an important idea in research about the causes of bullying.[4] In one study examining a central theory that "only by identifying with a victim's social suffering can one understand its devastating effects,"[5] researchers created five experiments. The first four examined the degree to which participants in a game who were not excluded could estimate the social pain of those participants who were excluded. The findings were that those who were not socially excluded consistently underestimated the pain felt by those who were excluded. A survey included in the study, directed at teachers' opinions of school policy toward bullying, found that teachers with an experience of social pain caused by bullying often rated the pain experienced by those facing bullying or social exclusion as higher than teachers without such experience, and further, that teachers who had experienced social pain were more likely to punish students for bullying.[6]
Power and empathy gap
Studies of bargaining games suggest that if one party is completely powerless, the party proposing the offer will behave less strategically, with the ironic result that the powerless party receives higher outcomes. This is because of the egocentric empathy gap.[7] In general, people have difficulty taking the perspective of another party when making decisions; often, the false-consensus attribution bias leads them to overestimate how similar the other party's perspective is to their own.[8]
Van Boven experiment[9]
Van Boven divided the students in the experiment into sellers and buyers of a coffee mug. He collected the prices that the buyers were willing to pay and the prices at which the sellers were willing to sell the mug. To test the empathy gap, the buyers of the coffee mug were also asked to predict the price that the sellers would offer.[10]
Result
The predictions of both sellers and buyers were close to the prices they themselves had proposed and depended on their own evaluation of the mug. This leads to the conclusion that an empathy gap does exist, since each party was unable to take the other party's perspective. Being the weaker party encourages more strategic behavior, since the weaker party fears that its outcome is under threat; in fact, this leads to decisions that favor the weaker party.[11]
General conclusion about the results
A further conclusion can be made about the empathy gap and power: the weaker party often doesn't realize that being in the weaker position can actually push them to think and decide more strategically, leading to better outcomes. The weaker party underestimates what it is capable of and assumes that being more powerful would be more advantageous, whereas the powerful party acts less strategically and obtains poorer outcomes. Trials of experiments with different combinations of the ultimatum game led to the conclusion that participants prefer being powerful to being powerless. Because this reasoning is not necessarily sound, owing to the false-consensus attribution bias, the powerful participants did not abuse their power: although one might assume that power leads to its abuse, in these experiments it instead called for more pro-social behavior and responsibility.[12]
Influence of power on empathy gap
In certain situations, the party with the most power has the strongest influence, so intense that people often fail to reason about the situation before it occurs. The recipient of the powerful party's offer (the weaker party) experiences a discrepancy between the situational choices and their behavior, which is consistent with the egocentric empathy gap because of how each side interprets the situation. The powerless party made higher allocations of social responsibility when the offer came from the powerful party. The recipients of power often had different expectations of the powerful party, such as more retaliatory power and orders; however, since the powerful party acted less strategically, its behavior contradicted those expectations. This paradigm of mismatched expectations is how the concept of power comes to influence the empathy gap.[13]
Smokers and empathy gap
George F. Loewenstein explored visceral factors related to addictions such as smoking. These factors involve drive states that are essential for living, for example sleepiness and hunger. Addiction, although a behavior disorder, comes to be miscategorized as one of these essential drive states.[14] From these findings emerged new discoveries about the hot-cold empathy gap and its important role in drug addictions, such as smoking.
Cold to hot empathy gap in smokers
A study done in 2008 explored the empathy gap within smokers.[15]
The experiment
98 smokers, aged 18 to 40, who had smoked at least 10 cigarettes per day for the previous 12 days and were not interested in quitting, were recruited through paper advertisements. For the experiment, smokers were asked to abstain from smoking for two days. The participants started with session one and then moved on to session two.[16]
1. Session One: In this session, smokers were first asked simply to imagine themselves in pain, which encouraged them to occupy themselves with thoughts irrelevant to smoking and let the experimenters determine whether each participant was in the hot or the cold stage of the empathy gap. The participants in the cold state were assigned to the control cue: they were asked to remove a plastic cover from a tray after 20 seconds of exposure to it, revealing a roll of tape underneath. They were asked to stare at the tape roll and were then surveyed. The participants in the hot state were assigned to the smoking cue. They performed a similar ritual of removing the plastic cover, but underneath it were a pack of cigarettes, a lighter and an ashtray. These participants were asked to pick up a cigarette, light it with the lighter, and stare at it without smoking it, and were later surveyed. Participants in both conditions were then asked to state the minimum amount of money they would need in order to delay smoking "right now".[17]
2. Session Two: The participants then went through a session similar to session one. The only differences were that participants in the smoking cue were asked to state their minimum price to delay smoking both before and after removing the plastic cover from the tray, and participants were informed that there was a 50% chance that the compensation they stated would be used to determine the real compensation given at the end of the study. In the end, all participants were rewarded five dollars for their participation.[18]
The result of the experiment
WATC, or willingness to accept craving, is a measure based on the money that participants received in previous smoking research. The results indicate that the compensation demanded to delay smoking increased from the first session to the second for those in the control cue, and decreased for those in the hot (smoking) cue.[19]
Influence of empathy gap in smoking
The cold-cue participants under-predicted the monetary compensation they would need to delay smoking, whereas the hot-cue participants over-predicted it. This shows the gap between the two groups in different stages of empathy, and it suggests that smokers will be misinformed about high-risk situations. For example, many smokers at parties will probably underestimate how much they will smoke, and their actual consumption may be higher than predicted. Those who would like to quit smoking may expect quitting to be easy, yet while actually quitting they may find it incredibly difficult to control the urge to smoke. High-craving situations lead to a higher chance that a person will smoke, whereas those who are not in a state of craving have no idea what it is like to intensely crave a cigarette.[20]
Empathy gap and memory
The hot-cold empathy gap also depends on the person's memory of visceral experience. As such, it is very common to underestimate a visceral state due to restrictive memory. In general, people in a cold state are more likely to underestimate the effect of pain than those in a hot state.
The experiment
Nordgren, van der Pligt and van Harreveld assessed the impact of pain on subjects' performance on a memory test. In the assessment process, participants were asked how pain and other factors affected their performance.
The result of the experiment
The results revealed that participants in the pain-free, or cold, state undervalued the impact of pain on their performance, whereas participants undergoing pain accurately assessed the effect of pain on their performance.[21]
References
[1] Loewenstein, George (2005). "Hot-cold empathy gaps and medical decision making". Health Psychology 24 (4): S49–S56. doi:10.1037/0278-6133.24.4.S49.
[2] Nordgren, Loran F.; Banas, Kasia; MacDonald, Geoff (2011). "Empathy gaps for social pain: Why people underestimate the pain of social suffering". Journal of Personality and Social Psychology 100 (1): 120–128.
[3] Ariely, D.; Loewenstein, G.F. (2006). "The Heat of the Moment: The Effect of Sexual Arousal on Sexual Decision Making". Journal of Behavioral Decision Making 19: 87–98.
[4] Simone Robers et al., Indicators of School Crime and Safety: 2010 (http://bjs.ojp.usdoj.gov/content/pub/pdf/iscs10.pdf), National Center for Education Statistics (2010): iv.
[5] http://www.physorg.com/news/2011-01-empathy-gap-bullying.html
[6] Nordgren, Loran F.; Banas, Kasia; MacDonald, Geoff (2011). "Empathy gaps for social pain: Why people underestimate the pain of social suffering". Journal of Personality and Social Psychology 100 (1): 120–128.
[7] Handgraaf, Michel J. J.; Van Dijk, Eric; Vermunt, Riël C.; Wilke, Henk A. M.; De Dreu, Carsten K. W. (2008). "Less power or powerless? Egocentric empathy gaps and the irony of having little versus no power in social decision making". Journal of Personality and Social Psychology 95 (5): 1136–1149. doi:10.1037/0022-3514.95.5.1136.
[8] Carlson, Neil R., et al. (2009). Psychology: the science of behaviour (4th Canadian ed.). Toronto: Pearson. p. 480. ISBN 978-0-205-64524-4.
[9] http://psych.colorado.edu/~vanboven/VanBoven/Home.html
[10] Van Boven, Leaf; George Loewenstein; David Dunning (2003). "Mispredicting the endowment effect: underestimation of owners' selling prices by buyers' agents" (http://psych.colorado.edu/~vanboven/research/publications/vb_loew_dun_jebo.pdf). Journal of Economic Behavior & Organization 51: 363.
[11] Van Boven, Leaf; George Loewenstein; David Dunning (2003). "Mispredicting the endowment effect: underestimation of owners' selling prices by buyers' agents" (http://psych.colorado.edu/~vanboven/research/publications/vb_loew_dun_jebo.pdf). Journal of Economic Behavior & Organization 51: 363.
[12] Handgraaf, Michel J. J.; Van Dijk, Eric; Vermunt, Riël C.; Wilke, Henk A. M.; De Dreu, Carsten K. W. (2008). "Less power or powerless? Egocentric empathy gaps and the irony of having little versus no power in social decision making". Journal of Personality and Social Psychology 95 (5): 1136–1149. doi:10.1037/0022-3514.95.5.1136.
[13] Handgraaf, Michel J. J.; Van Dijk, Eric; Vermunt, Riël C.; Wilke, Henk A. M.; De Dreu, Carsten K. W. (2008). "Less power or powerless? Egocentric empathy gaps and the irony of having little versus no power in social decision making". Journal of Personality and Social Psychology 95 (5): 1136–1149. doi:10.1037/0022-3514.95.5.1136.
[14] Loewenstein, George (1999). "A Visceral Account of Addiction".
[15] Sayette, Michael A.; Loewenstein, George; Griffin, Kasey M.; Black, Jessica J. (2008). "Exploring the Cold-to-Hot Empathy Gap in Smokers". Psychological Science 19 (9): 926–932. doi:10.1111/j.1467-9280.2008.02178.x. PMC 2630055. PMID 18947359.
[16] Sayette, Michael A.; Loewenstein, George; Griffin, Kasey M.; Black, Jessica J. (2008). "Exploring the Cold-to-Hot Empathy Gap in Smokers". Psychological Science 19 (9): 926–932. doi:10.1111/j.1467-9280.2008.02178.x. PMC 2630055. PMID 18947359.
[17] Sayette, Michael A.; Loewenstein, George; Griffin, Kasey M.; Black, Jessica J. (2008). "Exploring the Cold-to-Hot Empathy Gap in Smokers". Psychological Science 19 (9): 926–932. doi:10.1111/j.1467-9280.2008.02178.x. PMC 2630055. PMID 18947359.
[18] Sayette, Michael A.; Loewenstein, George; Griffin, Kasey M.; Black, Jessica J. (2008). "Exploring the Cold-to-Hot Empathy Gap in Smokers". Psychological Science 19 (9): 926–932. doi:10.1111/j.1467-9280.2008.02178.x. PMC 2630055. PMID 18947359.
[19] Sayette, Michael A.; Loewenstein, George; Griffin, Kasey M.; Black, Jessica J. (2008). "Exploring the Cold-to-Hot Empathy Gap in Smokers". Psychological Science 19 (9): 926–932. doi:10.1111/j.1467-9280.2008.02178.x. PMC 2630055. PMID 18947359.
[20] Sayette, Michael A.; Loewenstein, George; Griffin, Kasey M.; Black, Jessica J. (2008). "Exploring the Cold-to-Hot Empathy Gap in Smokers". Psychological Science 19 (9): 926–932. doi:10.1111/j.1467-9280.2008.02178.x. PMC 2630055. PMID 18947359.
[21] Nordgren, Loran F.; Banas, Kasia; MacDonald, Geoff (2011). "Empathy gaps for social pain: Why people underestimate the pain of social suffering". Journal of Personality and Social Psychology 100 (1): 120–128. doi:10.1037/a0020938.
Endowment effect
In behavioral economics, the endowment effect (also known as divestiture aversion) is the hypothesis that a person's willingness to accept (WTA) compensation for a good is greater than their willingness to pay (WTP) for it once their property right to it has been established. People will pay more to retain something they own than to obtain something owned by someone else – even when there is no cause for attachment, or even if the item was obtained only minutes ago. This is because once a person owns the item, forgoing it feels like a loss, and humans are loss-averse. The endowment effect contradicts the Coase theorem, and has been described as inconsistent with standard economic theory, which asserts that a person's willingness to pay (WTP) for a good should equal their willingness to accept (WTA) compensation to be deprived of the good, a hypothesis that underlies consumer theory and indifference curves.
Examples
One of the most famous examples of the endowment effect in the literature is a study by Kahneman, Knetsch and Thaler (1990), in which participants were given a mug and then offered the chance to sell it or trade it for an equally priced alternative good (pens). Kahneman et al. found that participants' WTA compensation for the mug, once their ownership of it had been established, was approximately twice as high as their WTP for it.
Other examples of the endowment effect include work by Carmon and Ariely (2000), who found that participants' hypothetical selling price (WTA) for NCAA Final Four tournament tickets was 14 times higher than their hypothetical buying price (WTP). Work by Hossain and List (working paper), discussed in The Economist (2010) (http://www.economist.com/node/15271260), showed that workers worked harder to retain ownership of a provisionally awarded bonus than they did for a bonus framed as a potential, yet-to-be-awarded gain. Beyond these examples, the endowment effect has been observed in a wide range of populations using different goods (see Hoffman and Spitzer, 1993, for a review), including children (Harbaugh et al., 2001), great apes (Kanngiesser, Santos, Hood and Call, 2011), and Old World monkeys (Lakshminaryanan, Chen and Santos, 2008).
Background
Psychologists first noted the difference between consumers' WTP and WTA as early as the 1960s (Coombs, Bezembinder and Goode, 1967; Slovic and Lichtenstein, 1968). The term "endowment effect", however, was first explicitly coined by the economist Richard Thaler (1980), in reference to the under-weighting of opportunity costs as well as the inertia introduced into a consumer's choice processes when goods included in their endowment become more highly valued than goods that are not. In the years that followed, extensive investigations into the endowment effect were conducted, producing a wealth of empirical and theoretical findings (see Hoffman and Spitzer, 1993, for a review).
Theoretical Explanations
Reference Dependent Accounts
According to reference-dependent theories, consumers first evaluate the potential change in question as either a gain or a loss. In line with prospect theory (Kahneman and Tversky, 1979), changes that are framed as losses are weighed more heavily than changes framed as gains. Thus an individual owning amount "A" of a good, asked how much he/she would be willing to pay to reach amount "B", would state a WTP lower than the WTA he/she would demand to sell the same (B − A) units, simply because the value function for gains is less steep than the value function for losses.
Figure 1 presents this explanation in graphical form. An individual at point A, asked how much he/she would be willing to accept (WTA) as compensation to sell X units and move to point C, would demand greater compensation for that loss than he/she would be willing to pay (WTP) for an equivalent gain of X units that moves him/her to point B. The difference between (B − A) and (C − A) thus accounts for the endowment effect: a person demands more money to sell a given quantity of goods than he/she would pay to buy the same quantity.
Figure 1 : Prospect Theory and the Endowment Effect
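To make the reference-dependent account concrete, the following minimal sketch (ours, not part of the original article) evaluates the piecewise prospect-theory value function using the parameter estimates Tversky and Kahneman reported in 1992 (alpha ≈ 0.88, lambda ≈ 2.25); valuing the good and money on the same scale is a simplifying assumption of the sketch.

# A minimal sketch of the reference-dependent account, assuming the
# standard prospect-theory value function: v(x) = x^a for gains,
# v(x) = -lambda * (-x)^a for losses (Tversky & Kahneman, 1992).
ALPHA = 0.88    # diminishing sensitivity
LAMBDA = 2.25   # loss aversion: losses weigh ~2.25x as much as gains

def value(x):
    """Prospect-theory value of a change x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

def wtp(units):
    """Max payment p for gaining `units`: solves value(units) + value(-p) = 0."""
    return units / (LAMBDA ** (1 / ALPHA))

def wta(units):
    """Min compensation p for losing `units`: solves value(-units) + value(p) = 0."""
    return units * (LAMBDA ** (1 / ALPHA))

x = 10.0
print(f"WTP to acquire: {wtp(x):.2f}")            # ~3.98
print(f"WTA to give up: {wta(x):.2f}")            # ~25.13
print(f"WTA/WTP ratio:  {wta(x) / wtp(x):.2f}")   # lambda^(2/alpha), ~6.3

Because this toy model runs money through the same loss-averse value function as the good, it overstates the roughly two-to-one WTA/WTP gap observed in the mug experiments; the qualitative point, that WTA exceeds WTP whenever losses loom larger than gains, is what Figure 1 depicts.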
Neoclassical Explanations
Hanemann (1991) develops a neoclassical explanation for the endowment effect, accounting for the effect without invoking prospect theory.
Figure 2 presents this explanation in graphical form. In the figure, two indifference curves for a particular good X and wealth are given. An individual asked how much he/she would be willing to pay to move from point A, where he/she has X0 of good X, to point B, where he/she has the same wealth and X1 of good X, has his/her WTP represented by the vertical distance between C and B, since the individual is indifferent between being at A or C. On the other hand, an individual asked how much he/she would be willing to accept to move from B back to A has his/her WTA represented by the vertical distance between A and D, as he/she is indifferent between being at point B or D. Shogren et al. (1994) reported findings that lend support to Hanemann's hypothesis.
Figure 2 : Hanemann's Endowment Effect Explanation
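The following sketch (our construction, not from Hanemann's paper; the Cobb-Douglas utility function and all numbers are assumptions) illustrates the neoclassical point: with a conventional utility function and no loss aversion anywhere in the model, WTP and WTA for the same change in the good still come apart whenever the good is an imperfect substitute for money.

# WTP/WTA divergence from ordinary indifference curves, assuming
# Cobb-Douglas utility u(w, x) = w^(1-B) * x^B over wealth w and good x.
B = 0.5  # assumed preference weight on the good

def wtp(wealth, x0, x1):
    """Payment p such that u(wealth - p, x1) == u(wealth, x0)."""
    return wealth * (1 - (x0 / x1) ** (B / (1 - B)))

def wta(wealth, x0, x1):
    """Compensation c such that u(wealth + c, x0) == u(wealth, x1)."""
    return wealth * ((x1 / x0) ** (B / (1 - B)) - 1)

w, x0, x1 = 100.0, 1.0, 2.0   # moving from 1 unit to 2 units of the good
print(f"WTP = {wtp(w, x0, x1):.2f}")   # 50.00
print(f"WTA = {wta(w, x0, x1):.2f}")   # 100.00: double the WTP, with no loss aversion

The asymmetry here arises purely from the curvature of the indifference curves, which is Hanemann's point: a WTP/WTA gap is not by itself evidence for the endowment effect.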
Connection-Based Theories
Connection-based theories propose that subjective feelings are responsible for an individual's reluctance to trade (i.e., the endowment effect). For example, receiving a mug may induce a minimal attachment to that item which an individual may be averse to breaking, resulting in an increase in the perceived value of that object. A real-world example would be an individual refusing to part with an old painting for any price because of its "sentimental value". Work by Morewedge, Shu, Gilbert and Wilson (2009) provides some support for these theories, as does work by Maddux et al. (2010). Others have argued that the short duration of ownership, or the highly prosaic items typically used in endowment-effect studies, is not sufficient to produce such a connection, and have conducted research supporting those points (e.g., Liersch and Rottenstreich, working paper).
Evolutionary Arguments
Huck, Kirchsteiger and Oechssler (2005) have raised the hypothesis that natural selection may favor individuals whose preferences embody an endowment effect, given that it may improve one's bargaining position in bilateral trades. In a small tribal society with few alternative sellers (i.e., where the buyer may not have the option of moving to another seller), a predisposition toward the endowment effect may therefore be evolutionarily beneficial. This may be linked with findings (Shogren et al., 1994) suggesting that the endowment effect is weaker when the relatively artificial sense of scarcity induced in experimental settings is lessened.
Criticisms of the endowment effect
Some economists have questioned the effect's existence. Hanemann (1991) noted that economic theory suggests WTP and WTA should be equal only for goods that are close substitutes, so observed differences in these measures for goods such as environmental resources and personal health can be explained without reference to an endowment effect. Shogren et al. (1994) noted that the experimental technique used by Kahneman, Knetsch and Thaler (1990) to demonstrate the endowment effect created a situation of artificial scarcity. They performed a more robust experiment with the same goods (chocolate bars and mugs) and found little evidence of the endowment effect. Others have argued that the use of hypothetical questions and experiments involving small amounts of money tells us little about actual behavior (e.g., Hoffman and Spitzer, 1993, p. 69, n. 23), with some research supporting these points (e.g., Kahneman, Knetsch and Thaler, 1990; Harless, 1989) and other research not (e.g., Knez, Smith and Williams, 1985).
Implications of the endowment effect
Herbert Hovenkamp (1991) has argued that the presence of an endowment effect has significant implications for law and economics, particularly in regard to welfare economics. He argues that the presence of an endowment effect indicates that a person has no indifference curve (see, however, Hanemann, 1991), rendering the neoclassical tools of welfare analysis useless, and concludes that courts should instead use WTA as a measure of value. Fischel (1995), however, raises the counterpoint that using WTA as a measure of value would deter the development of a nation's infrastructure and economic growth.
The endowment effect has also been raised as a possible explanation for the lack of demand for reverse-mortgage opportunities in the United States (contracts in which a homeowner sells back his or her property to the bank in exchange for an annuity) (Huck, Kirchsteiger and Oechssler, 2005).
External links
"The WTP-WTA Gap, the 'Endowment Effect,' Subject Misconceptions, and Experimental Procedures"
[1]
,
Charles Plott et al., American Economic Review 2005
The Endowment Effect's Disappearing Act
[2]
, Larry E. Ribstein, December 4, 2005
"Exchange Asymmetries Incorrectly Interpreted as Evidence of Endowment Effect Theory and Prospect Theory?"
[3]
, Charles Plott et al., American Economic Review 2007
The "Mystery" of the Endowment Effect
[4]
, Per Bylund, December 28, 2011
[5]
References
[1] http:/ / papers. ssrn. com/ sol3/ papers.cfm?abstract_id=615861
[2] http:/ / busmovie. typepad. com/ ideoblog/ 2005/ 12/ the_endowment_e. html
[3] http:/ / search.ssrn. com/ sol3/ papers. cfm?abstract_id=940633
[4] http:/ / mises. org/ daily/ 5839/ The-Mystery-of-the-Endowment-Effect
[5] http:/ / www. economist.com/ node/ 15271260
Essentialism
In philosophy, essentialism is the view that, for any specific entity (such as a group of people), there is a set of attributes which are necessary to its identity and function.[1] According to essentialism, a member of a specific group may possess other characteristics that are neither needed to establish its membership nor preclude its membership; essences do not simply reflect ways of grouping objects, but are also taken to give rise to the properties of the object.[2] This view is contrasted with non-essentialism, which states that, for any given kind of entity, there are no specific traits which entities of that kind must possess.
Anthropology professor Lawrence Hirschfeld[3] gives an example of what constitutes the essence of a tiger, regardless of whether it is striped or albino, or has lost a leg. The essential properties of a tiger are those without which it is no longer a tiger. Other properties, such as stripes or number of legs, are considered inessential or "accidental".[4] Biologist Ernst Mayr captures the effect of such an essentialist reading of Platonic forms in biology: "Flesh-and-blood rabbits may vary, but their variations are always to be seen as flawed deviation from the ideal essence of rabbit". For Mayr, the healthful antithesis of essentialism in biology is "population thinking".[5]
In philosophy
An essence characterizes a substance or a form, in the sense of the Forms or Ideas in Platonic idealism. It is permanent, unalterable, and eternal, and present in every possible world. Classical humanism has an essentialist conception of the human being, which means that it believes in an eternal and unchangeable human nature. The idea of an unchangeable human nature has been criticized by Kierkegaard, Marx, Heidegger, Sartre, and many other existential thinkers.
In Plato's philosophy (in particular, the Timaeus and the Philebus), things were said to come into being in this world
by the action of a demiurge who works to form chaos into ordered entities. From Aristotle onward the definition, in
philosophical contexts, of the word "essence" is very close to the definition of form (Gr. morphe). Many definitions
of essence hark back to the ancient Greek hylomorphic understanding of the formation of the things of this world.
According to that account, the structure and real existence of any thing can be understood by analogy to an artifact
produced by a craftsman. The craftsman requires hyle (timber or wood) and a model, plan or idea in his own mind
according to which the wood is worked to give it the indicated contour or form (morphe). Aristotle was the first to
use the terms hyle and morphe. According to his explanation, all entities have two aspects, "matter" and "form". It is
the particular form imposed that gives some matter its identity, its quiddity or "whatness" (i.e., its "what it is").
Plato was one of the first essentialists, believing in the concept of ideal forms, an abstract entity of which individual objects are mere facsimiles. To give an example: the ideal form of a circle is a perfect circle, something that is physically impossible to make manifest, yet the circles that we draw and observe clearly have some idea in common, and this idea is the ideal form. Plato believed that these ideas are eternal and vastly superior to their manifestations in the world, and that we understand these manifestations in the material world by comparing and relating them to their respective ideal form. Plato's forms are regarded as patriarchs to essentialist dogma simply because they are a case of what is intrinsic and a-contextual of objects: the abstract properties that make them what they are. For more on forms, read Plato's parable of the cave.
Karl Popper splits the ambiguous term realism into essentialism and realism. He uses essentialism whenever he means the opposite of nominalism, and realism only as opposed to idealism. Popper himself is a realist as opposed to an idealist, but a methodological nominalist as opposed to an essentialist. For example, statements like "a puppy is a young dog" should be read from right to left, as an answer to "What shall we call a young dog?", never from left to right as an answer to "What is a puppy?"[6]
Metaphysical essentialism
Essentialism, in its broadest sense, is any philosophy that acknowledges the primacy of Essence. Unlike
Existentialism, which posits "being" as the fundamental reality, the essentialist ontology must be approached from a
metaphysical perspective. Empirical knowledge is developed from experience of a relational universe whose
components and attributes are defined and measured in terms of intellectually constructed laws. Thus, for the
scientist, reality is explored as an evolutionary system of diverse entities, the order of which is determined by the
principle of causality. Because Essentialism is a conceptual worldview that is not dependent on objective facts and
measurements, it is not limited to empirical understanding or the objective way of looking at things.
Despite the metaphysical basis for the term, academics in science, aesthetics, heuristics, psychology, and gender-based sociological studies have advanced their causes under the banner of essentialism. Possibly the clearest definition for this philosophy was offered by gay/lesbian rights advocate Diana Fuss, who wrote: "Essentialism is most commonly understood as a belief in the real, true essence of things, the invariable and fixed properties [of] which define the 'whatness' of a given entity".[7] Metaphysical essentialism stands diametrically opposed to existential realism in that finite existence is only differentiated appearance, whereas "ultimate reality" is held to be absolute essence.
Although the Greek philosophers believed that the true nature of the universe was perfect, they attributed the observed imperfections to man's limited perception. For Plato, this meant that there had to be two different realities: the "essential" and the "perceived". Plato's dialectical protégé Aristotle (384–322 B.C.) applied the term "essence" to the one common characteristic that all things belonging to a particular category have in common and without which they could not be members of that category; hence, the idea of rationality as the essence of man. This notion carried over into all facets of reality, including species of living creatures. For contemporary essentialists, however, the characteristic that all existents have in common is the power to exist, and this potentiality defines the "uncreated" Essence.
It was the Egyptian-born philosopher Plotinus (204–270 CE) who brought Greek idealism to the Roman Empire as Neo-Platonism, and with it the concept that not only do all existents emanate from a "primary essence" but that the mind plays an active role in shaping or ordering the objects of perception, rather than passively receiving experiential data. But with the Empire's fall to the Goths in A.D. 476, Neo-Platonism gave way to the spread of Christianity in the Western world, leaving Aristotle's multiple "essences" unchallenged to dominate philosophical thought throughout the Middle Ages and into the era of modern science.
In psychology
Paul Bloom attempts to explain why people will pay more in an auction for the clothing of celebrities if the clothing is unwashed. He believes the answer to this and many other questions is that people cannot help but think of objects as containing a sort of "essence" that can be influenced.[8]
There is a difference between metaphysical essentialism (see above) and psychological essentialism, the latter referring not to an actual claim about the world but to a claim about a way of representing entities in cognition (Medin, 1989).[9] Influential in this area is Susan Gelman, who has outlined many domains in which children and adults construe classes of entities, particularly biological entities, in essentialist terms, i.e., as if they had unobservable underlying essences which can be used to predict observable surface characteristics (Toosi & Ambady, 2011).[10] This causal relationship is unidirectional; an observable feature of an entity does not define the underlying essence (Dar-Nimrod & Heine, 2011).[11]
In developmental psychology
Essentialism has emerged as an important concept in psychology, particularly developmental psychology.[12][13] Gelman and Kremer (1991) studied the extent to which children from 4 to 7 years old demonstrate essentialism. Children were able to identify the cause of behaviour in living and non-living objects. Children understood that underlying essences predicted observable behaviours: participants could correctly describe living objects' behaviour as self-perpetuated and non-living objects' behaviour as the result of an adult influencing the object's actions. This is a biological way of representing essential features in cognition. Understanding the underlying causal mechanism for behaviour suggests essentialist thinking (Rangel and Keller, 2011).[14] Younger children were unable to identify causal mechanisms of behaviour whereas older children were able to. This suggests that essentialism is rooted in cognitive development. It can be argued that there is a shift in the way that children represent entities, from not understanding the causal mechanism of the underlying essence to showing sufficient understanding (Demoulin, Leyens & Yzerbyt, 2006).[15]
There are four key criteria which constitute essentialist thinking. The first facet is the aforementioned individual causal mechanisms (del Rio & Strasser, 2011). The second is innate potential: the assumption that an object will fulfil its predetermined course of development (Kanovsky, 2007).[16] According to this criterion, essences predict developments in entities that will occur throughout their lifespan. The third is immutability (Holtz & Wagner, 2009):[17] altering the superficial appearance of an object does not remove its essence, and observable changes in the features of an entity are not salient enough to alter its essential characteristics. The fourth is inductive potential (Birnbaum, Deeb, Segall, Ben-Eliyahu & Diesendruck, 2010).[18] This suggests that entities may share common features but are essentially different: however similar two beings may be, their characteristics will be at most analogous, differing most importantly in essences.
The implications of psychological essentialism are numerous. Prejudiced individuals have been found to endorse exceptionally essentialist ways of thinking, suggesting that essentialism may perpetuate exclusion among social groups (Morton, Hornsey & Postmes, 2009).[19] This may be due to an over-extension of an essentialist-biological mode of thinking stemming from cognitive development.[20] Paul Bloom of Yale University has stated that "one of the most exciting ideas in cognitive science is the theory that people have a default assumption that things, people and events have invisible essences that make them what they are. Experimental psychologists have argued that essentialism underlies our understanding of the physical and social worlds, and developmental and cross-cultural psychologists have proposed that it is instinctive and universal. We are natural-born essentialists."[21] It is suggested that the categorical nature of essentialist thinking predicts the use of stereotypes and can be targeted in the application of stereotype prevention (Bastian & Haslam, 2006).[22]
In ethics
Classical essentialism claims that some things are wrong in an absolute sense: for example, murder breaks a universal, objective and natural moral law, not merely an adventitious, socially or ethically constructed one. Many modern essentialists claim that right and wrong are moral boundaries which are individually constructed; in other words, things that are ethically right or wrong are actions that the individual deems beneficial or harmful.
In biology
It is often held that before evolution was developed as a scientific theory, there existed an essentialist view of
biology that posited all species to be unchanging throughout time. Some religious opponents of evolution continue to
maintain this view of biology (see creation-evolution controversy).
Recent work by historians of systematics has, however, cast doubt upon this view. Mary P. Winsor, Ron Amundson and Staffan Müller-Wille have each argued that in fact the usual suspects (such as Linnaeus and the ideal morphologists) were very far from being essentialists, and it appears that the so-called "essentialism story" (or "myth") in biology is a result of conflating the views expressed by philosophers from Aristotle onwards through to John Stuart Mill and William Whewell in the immediately pre-Darwinian period, who used biological examples, with the use of terms in biology like species.[23][24][25]
Essentialism and society and politics
The essentialist view on gender, sexuality, race, ethnicity, or other group characteristics is that they are fixed traits,
discounting variation among group members as secondary.
Contemporary proponents of identity politics, including feminism, gay rights, and/or racial equality activists, generally take constructionist viewpoints. For example, they agree with Simone de Beauvoir that "one is not born, but becomes a woman".[26] As "essence" may imply permanence, some argue that essentialist thinking tends towards political conservatism and therefore opposes social change. Yet essentialist claims have provided useful rallying points for radical politics, including feminist, anti-racist, and anti-colonial struggles. In a culture saturated with essentialist modes of thinking, an ironic or strategic essentialism can sometimes be politically expedient.
In social thought, metaphysical essentialism is often conflated with biological reductionism. Most sociologists, for example, employ a distinction between biological sex and gender role. Similar distinctions across disciplines generally fall under the division of "nature versus nurture". However, this has been contested by Monique Wittig, who argued that even biological sex is not an essence, and that the body's physiology is "caught up" in processes of social construction.[27]
In history
Essentialism is used by some historians in listing the essential cultural characteristics of a particular nation or culture, in the belief that a people can be understood in this way. In other cases, the essentialist method has been used by members, or admirers, of an historical community to establish a praiseworthy national identity.[28] Contrastingly, many historians reject essentialism as a form of determinism and prefer to contextualize cultural tropes within a broader lens of historical cause and effect.
References
[1] Cartwright, R. L. (1968). "Some remarks on essentialism". The Journal of Philosophy 65 (20): 615–626.
[2] Günter Radden; H. Cuyckens (2003). Motivation in Language: Studies in Honor of Günter Radden (http://books.google.com/books?id=qzhJ3KpLpQUC&pg=PA275). John Benjamins. p. 275. ISBN 978-1-58811-426-6.
[3] http://www.newschool.edu/nssr/faculty.aspx?id=16186
[4] Lawrence A. Hirschfeld (http://www.newschool.edu/lang/faculty.aspx?id=1724), "Natural Assumptions: Race, Essence, and Taxonomies of Human Kinds" (http://findarticles.com/p/articles/mi_m2267/is_n2_v65/ai_20964256/), Social Research 65 (Summer 1998). Infotrac (December 24, 2003).
[5] Both Mayr quotes in Dawson 2009:24, 25.
[6] The Open Society and Its Enemies, passim.
[7] Fuss, Diana. Essentially Speaking [1989]. ISBN 978-0-415-90132-1.
[8] Paul Bloom, July 2011 TED talk, "The Origins of Pleasure" (http://www.ted.com/talks/paul_bloom_the_origins_of_pleasure.html)
[9] Medin, D. L. (1989). "Concepts and conceptual structure". American Psychologist 44: 1469–1481.
[10] Toosi, N. R.; Ambady, N. (2011). "Ratings of essentialism for eight religious identities". International Journal for the Psychology of Religion 21 (1): 17–29. doi:10.1080/10508619.2011.532441.
[11] Dar-Nimrod, I.; Heine, S. J. (2011). "Genetic essentialism: On the deceptive determinism of DNA". Psychological Bulletin 137 (5): 800–818. doi:10.1037/a0021860.
[12] Gelman, S. The Essential Child: Origins of Essentialism in Everyday Thought. New York: Oxford University Press.
[13] Gelman, S. A.; Kremer, K. E. (1991). "Understanding natural causes: Children's explanations of how objects and their properties originate". Child Development 62 (2): 396–414. doi:10.2307/1131012.
[14] Rangel, U.; Keller, J. (2011). "Essentialism goes social: Belief in social determinism as a component of psychological essentialism". Journal of Personality and Social Psychology 100 (6): 1056–1078. doi:10.1037/a0022401.
[15] Demoulin, S.; Leyens, J-P.; Yzerbyt, V. (2006). "Lay theories of essentialism". Group Processes & Intergroup Relations 9 (1): 25–42. doi:10.1177/1368430206059856.
[16] Kanovsky, M. (2007). "Essentialism and folksociology: Ethnicity again". Journal of Cognition and Culture 7 (3–4): 241–281. doi:10.1163/156853707X208503.
[17] Holtz, P.; Wagner, W. (2009). "Essentialism and attribution of monstrosity in racist discourse: Right-wing internet postings about Africans and Jews". Journal of Community & Applied Social Psychology 19 (6): 411–425. doi:10.1002/casp.1005.
[18] Birnbaum, D.; Deeb, I.; Segall, G.; Ben-Eliyahu, A.; Diesendruck, G. (2010). "The development of social essentialism: The case of Israeli children's inferences about Jews and Arabs". Child Development 81 (3): 757–777. doi:10.1111/j.1467-8624.2010.01432.x.
[19] Morton, T. A.; Hornsey, M. J.; Postmes, T. (2009). "Shifting ground: The variable use of essentialism in contexts of inclusion and exclusion". British Journal of Social Psychology 48 (1): 35–59. doi:10.1348/014466607X270287.
[20] Medin, D. L.; Atran, S. (2004). "The native mind: Biological categorization and reasoning in development and across cultures". Psychological Review 111 (4).
[21] Bloom, P. (2010). "Why we like what we like". Observer 23 (8): 3. Online link (http://www.psychologicalscience.org/index.php/publications/observer/2010/october-10/why-we-like-what-we-like.html).
[22] Bastian, B.; Haslam, N. (2006). "Psychological essentialism and stereotype endorsement". Journal of Experimental Social Psychology 42 (2): 228–235. doi:10.1016/j.jesp.2005.03.003.
[23] Amundson, R. (2005). The Changing Role of the Embryo in Evolutionary Biology: Structure and Synthesis. New York: Cambridge University Press. ISBN 0-521-80699-2.
[24] Müller-Wille, Staffan (2007). "Collection and collation: Theory and practice of Linnaean botany". Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 38 (3): 541–562.
[25] Winsor, M. P. (2003). "Non-essentialist methods in pre-Darwinian taxonomy". Biology & Philosophy 18: 387–400.
[26] Beauvoir, Simone (1974). "Ch. XII: Childhood", The Second Sex. New York: Vintage Books.
[27] Wittig, Monique (1992). "The Category of Sex". Pp. 1–8 in The Straight Mind and Other Essays. Boston: Beacon Press.
[28] Touraj Atabaki, Beyond Essentialism: Who Writes Whose Past in the Middle East and Central Asia? (http://www.iisg.nl/research/beyond-essentialism.pdf), Inaugural Lecture as Extraordinary Professor of the Social History of the Middle East and Central Asia in the University of Amsterdam, 13 December 2002.
Further reading
Runes, Dagobert D. (1972). Dictionary of Philosophy (Littlefield, Adams & Co.). See for instance the articles on "Essence", p. 97; "Quiddity", p. 262; "Form", p. 110; "Hylomorphism", p. 133; "Individuation", p. 145; and "Matter", p. 191.
Barrett, H. C. (2001). "On the functional origins of essentialism" (http://www.sscnet.ucla.edu/anthro/faculty/barrett/essentialism.pdf). Mind and Society 3 (2): 1–30.
Sayer, Andrew (August 1997). "Essentialism, Social Constructionism, and Beyond". Sociological Review 45: 456.
Oderberg, David S. (2007). Real Essentialism. New York: Routledge.
External links
Essentialism (http://philpapers.org/browse/essentialism) at PhilPapers
Essentialism (https://inpho.cogs.indiana.edu/taxonomy/2269) at the Indiana Philosophy Ontology Project
Cliff, Brian (Spring 1996). "Essentialism" (http://www.english.emory.edu/Bahri/Essentialism.html). Emory University. Retrieved 2008-08-29.
Experimenter's bias
In experimental science, experimenter's bias is subjective bias towards a result expected by the human experimenter. David Sackett,[1] in a useful review of biases in clinical studies, states that biases can occur in any one of seven stages of research:
1. in reading-up on the field,
2. in specifying and selecting the study sample,
3. in executing the experimental manoeuvre (or exposure),
4. in measuring exposures and outcomes,
5. in analyzing the data,
6. in interpreting the analysis, and
7. in publishing the results.
The inability of a human being to be fully objective is the ultimate source of this bias. It occurs more often in the sociological and medical sciences, where double-blind techniques are often employed to combat it. But experimenter's bias can also be found in some physical sciences, for instance where the experimenter rounds off measurements.
Classification of experimenter's biases
Modern electronic or computerized data-acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter's bias was not well recognized until the 1950s and 1960s, and then primarily in medical experiments and studies. Sackett (1979) catalogued 56 biases that can arise in sampling and measurement in clinical research, distributed among the first six of the above-stated stages of research, as follows:
1. In reading-up on the field:
   1. the biases of rhetoric
   2. the "all's well" literature bias
   3. one-sided reference bias
   4. positive results bias
   5. hot stuff bias
2. In specifying and selecting the study sample:
   1. popularity bias
   2. centripetal bias
   3. referral filter bias
   4. diagnostic access bias
   5. diagnostic suspicion bias
   6. unmasking (detection signal) bias
   7. mimicry bias
   8. previous opinion bias
   9. wrong sample size bias
   10. admission rate (Berkson) bias
   11. prevalence-incidence (Neyman) bias
   12. diagnostic vogue bias
   13. diagnostic purity bias
   14. procedure selection bias
   15. missing clinical data bias
   16. non-contemporaneous control bias
   17. starting time bias
   18. unacceptable disease bias
   19. migrator bias
   20. membership bias
   21. non-respondent bias
   22. volunteer bias
3. In executing the experimental manoeuvre (or exposure):
   1. contamination bias
   2. withdrawal bias
   3. compliance bias
   4. therapeutic personality bias
   5. bogus control bias
4. In measuring exposures and outcomes:
   1. insensitive measure bias
   2. underlying cause bias (rumination bias)
   3. end-digit preference bias
   4. apprehension bias
   5. unacceptability bias
   6. obsequiousness bias
   7. expectation bias
   8. substitution game
   9. family information bias
   10. exposure suspicion bias
   11. recall bias
   12. attention bias
   13. instrument bias
5. In analyzing the data:
   1. post-hoc significance bias
   2. data dredging bias (looking for the pony)
   3. scale degradation bias
   4. tidying-up bias
   5. repeated peeks bias
6. In interpreting the analysis:
   1. mistaken identity bias
   2. cognitive dissonance bias
   3. magnitude bias
   4. significance bias
   5. correlation bias
   6. under-exhaustion bias
The effects of bias on experiments in the physical sciences have not always been fully recognized.
Statistical background
In principle, if a measurement has a resolution of R, then if the experimenter averages n independent measurements the average will have a resolution of R/√n (this is the central limit theorem of statistics). This is an important experimental technique used to reduce the impact of randomness on an experiment's outcome. It requires that the measurements be statistically independent, and there are several reasons why they may not be. If independence is not satisfied, then the average may not actually be a better statistic, but may merely reflect the correlations among the individual measurements and their non-independent nature.
The most common cause of non-independence is systematic error (error affecting all measurements equally, causing the different measurements to be highly correlated, so the average is no better than any single measurement). Experimenter bias is another potential cause of non-independence.
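A quick simulation (ours; the trial counts and error sizes are arbitrary choices) illustrates both halves of this point: independent errors average down as 1/√n, while a shared systematic error puts a floor under the resolution of the average no matter how many measurements are taken.

# Averaging n independent measurements shrinks the spread by 1/sqrt(n);
# a systematic error shared by all n readings does not average away.
import random
import statistics

def spread_of_mean(trials=2000, n=100, sigma=1.0, systematic=0.0):
    """Std-dev of the mean of n measurements, estimated over many trials."""
    means = []
    for _ in range(trials):
        offset = systematic * random.gauss(0, 1)   # same error in all n readings
        readings = [offset + random.gauss(0, sigma) for _ in range(n)]
        means.append(statistics.mean(readings))
    return statistics.stdev(means)

random.seed(0)
print(f"independent errors only: {spread_of_mean():.3f}")                # ~ 1/sqrt(100) = 0.100
print(f"plus systematic error:   {spread_of_mean(systematic=0.5):.3f}")  # ~ 0.51, stuck near 0.5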
Biological and medical sciences
The complexity of living systems and the ethical impossibility of performing fully controlled experiments on humans and certain species of animals provide a rich, and difficult to control, source of experimental bias. Scientific knowledge about the phenomenon under study, together with the systematic elimination of probable causes of bias by detecting confounding factors, is the only way to isolate true cause-effect relationships. Epidemiology is also the field in which experimenter bias has been studied more thoroughly than in other sciences.
A number of studies of spiritual healing illustrate how the design of a study can introduce experimenter bias into the results. A comparison of two such studies shows that a subtle difference in design, namely whether the intended outcome was framed as positive versus negative or as positive versus neutral, can affect the results.
A 1995 paper by Hodges and Scofield[2] on spiritual healing used the growth rate of cress seeds as the dependent variable in order to eliminate a placebo response or participant bias. The study reported positive results: the test results for each sample were consistent with the healer's intention that healing should or should not occur. However, the healer involved in the experiment was a personal acquaintance of the study's authors, raising the distinct possibility of experimenter bias. A randomized clinical trial published in 2001[3] investigated the efficacy of spiritual healing (both at a distance and face-to-face) in the treatment of chronic pain in 120 patients. Healers were observed by "simulated healers" who then mimicked the healers' movements on a control group while silently counting backwards in fives, a neutral rather than "should not heal" intention. The study found a decrease in pain in all patient groups but "no statistically significant differences between healing and control groups ... it was concluded that a specific effect of face-to-face or distant healing on chronic pain could not be demonstrated."
Physical sciences
If the signal being measured is actually smaller than the rounding error and the data are over-averaged, a positive
result for the measurement can be found in the data where none exists (i.e. a more precise experimental apparatus
would conclusively show no such signal). If an experiment is searching for a sidereal variation of some
measurement, and if the measurement is rounded-off by a human who knows the sidereal time of the measurement,
and if hundreds of measurements are averaged to extract a "signal" which is smaller than the apparatus' actual
resolution, then it should be clear that this "signal" can come from the non-random round-off, and not from the
apparatus itself. In such cases a single-blind experimental protocol is required; if the human observer does not know
the sidereal time of the measurements, then even though the round-off is non-random it cannot introduce a spurious
sidereal variation.
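This mechanism is easy to reproduce numerically. In the toy simulation below (our construction; the noise level, resolution, and sample sizes are arbitrary), the true signal is exactly zero, yet an observer who knows the phase and rounds toward the expected signal manufactures a spurious "signal" well below the instrument's one-unit resolution, while a blind observer does not.

# Spurious sub-resolution "signal" from expectation-driven round-off.
import math
import random
import statistics

def record(phase, informed):
    reading = random.gauss(0, 0.3)   # true signal is exactly zero
    if informed:
        # round up when a positive signal is expected at this phase, else down
        return math.ceil(reading) if math.sin(phase) > 0 else math.floor(reading)
    return round(reading)            # blind, expectation-free round-off

random.seed(1)
for informed in (False, True):
    peak = [record(0.5 * math.pi, informed) for _ in range(20000)]
    trough = [record(1.5 * math.pi, informed) for _ in range(20000)]
    amplitude = (statistics.mean(peak) - statistics.mean(trough)) / 2
    print(f"informed rounding = {informed}: apparent amplitude = {amplitude:+.3f}")
# blind: ~0.000; informed: ~+0.500, a fake variation created by round-off alone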
Social sciences
The experimenter may introduce cognitive bias into a study in several ways. First, in what is called the observer-expectancy effect, the experimenter may subtly communicate their expectations for the outcome of the study to the participants, causing them to alter their behavior to conform to those expectations. Second, after the data are collected, bias may be introduced during data interpretation and analysis. For example, in deciding which variables to control in analysis, social scientists often face a trade-off between omitted-variable bias and post-treatment bias.[4]
Forensic sciences
Observer effects are rooted in the universal human tendency to interpret data in a manner consistent with one's expectations.[5] This tendency is particularly likely to distort the results of a scientific test when the underlying data are ambiguous and the scientist is exposed to domain-irrelevant information that engages emotions or desires.[6] Despite impressions to the contrary, forensic DNA analysts often must resolve ambiguities, particularly when interpreting difficult evidence samples such as those that contain mixtures of DNA from two or more individuals, degraded or inhibited DNA, or limited quantities of DNA template. The full potential of forensic DNA testing can only be realized if observer effects are minimized.[7]
References
[1] Sackett, D. L. (1979). "Bias in analytic research". Journal of Chronic Diseases 32 (1–2): 51–63. doi:10.1016/0021-9681(79)90012-2. PMID 447779.
[2] Hodges, R. D.; Scofield, A. M. (1995). "Is spiritual healing a valid and effective therapy?". Journal of the Royal Society of Medicine 88 (4): 203–207. PMC 1295164. PMID 7745566.
[3] Abbot, N. C.; Harkness, E. F.; Stevinson, C.; Marshall, F. P.; Conn, D. A.; Ernst, E. (2001). "Spiritual healing as a therapy for chronic pain: a randomized, clinical trial". Pain 91 (1–2): 79–89. doi:10.1016/S0304-3959(00)00421-8. PMID 11240080.
[4] King, Gary. "Post-Treatment Bias in Big Social Science Questions" (http://gking.harvard.edu/gking/talks/bigprobP.pdf), accessed February 7, 2011.
[5] Rosenthal, R. (1966). Experimenter Effects in Behavioral Research. NY: Appleton-Century-Crofts.
[6] Risinger, D. M.; Saks, M. J.; Thompson, W. C.; Rosenthal, R. (2002). "The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion". Calif. L. Rev. 90 (1): 1–56. doi:10.2307/3481305. JSTOR 3481305.
[7] Krane, D.; Ford, S.; Gilder, J.; Inman, K.; Jamieson, A.; Koppl, R.; Kornfield, I.; Risinger, D.; Rudin, N.; Taylor, M.; Thompson, W. C. (2008). "Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation" (http://www.bioforensics.com/articles/sequential_unmasking.html). Journal of Forensic Sciences 53 (4): 1006–1007. doi:10.1111/j.1556-4029.2008.00787.x. PMID 18638252.
False-consensus effect
Fundamentalists and political radicals often
overestimate the number of people who share
their values and beliefs, because of the
false-consensus effect.
In psychology, the false-consensus effect or false-consensus bias is a cognitive bias whereby a person tends to overestimate how much other people agree with him or her. There is a tendency for people to assume that their own opinions, beliefs, preferences, values and habits are "normal" and that others think the same way that they do.[1] This cognitive bias tends to lead to the perception of a consensus that does not exist, a "false consensus". This false consensus is significant because it increases self-esteem. The need to be "normal" and fit in with other people is underlined by a desire to conform and be liked by others in a social environment.
Within the realm of personality psychology, the false-consensus effect does not have significant effects, because it relies heavily on the social environment and how a person interprets that environment. Instead of looking at situational attributions, personality psychology evaluates a person with dispositional attributions, making the false-consensus effect relatively irrelevant in that domain. A person's personality could therefore potentially affect the degree to which the person exhibits the false-consensus effect, but not its existence.
The false-consensus effect is not necessarily restricted to cases where people believe that their values are shared by the majority. It is also evidenced when people overestimate the extent to which their particular belief is correlated with the beliefs of others. Thus, fundamentalists do not necessarily believe that the majority of people share their views, but their estimates of the number of people who share their point of view will tend to exceed the actual number.
This bias is especially prevalent in group settings where one thinks the collective opinion of one's own group matches that of the larger population. Since the members of a group reach a consensus and rarely encounter those who dispute it, they tend to believe that everybody thinks the same way.
Additionally, when confronted with evidence that a consensus does not exist, people often assume that those who do not agree with them are defective in some way.[2] There is no single cause for this cognitive bias; the availability heuristic, self-serving bias, and naïve realism have been suggested as at least partial underlying factors.
The false-consensus effect can be contrasted with pluralistic ignorance, an error in which people privately disapprove
but publicly support what seems to be the majority view (regarding a norm or belief), when the majority in fact
shares their (private) disapproval. While the false consensus effect leads people to wrongly believe that they agree
with the majority (when the majority, in fact, openly disagrees with them), the pluralistic ignorance effect leads
people to wrongly believe that they disagree with the majority (when the majority, in fact, covertly agrees with
them). Pluralistic ignorance might, for example, lead a student to engage in binge drinking because of the mistaken
belief that most other students approve of it, while in reality most other students disapprove, but behave in the same
way because they share the same mistaken (but collectively self-sustaining) belief. In a parallel example of false
consensus, a student who likes binge drinking would believe that a majority also likes it, while in reality, most others
dislike it and openly say so.
Major theoretical approaches
The false-consensus effect can be traced back to two parallel theories of social perception, "the study of how we form impressions of and make inferences about other people".[3] The first is the idea of social comparison. The principal claim of Leon Festinger's (1954) social comparison theory was that individuals evaluate their thoughts and attitudes based on other people.[4] This may be motivated by a desire for confirmation and the need to feel good about oneself. As an extension of this theory, people may use others as sources of information to define social reality and guide behavior. This is called informational social influence.[5][6] The problem, though, is that people are often unable to accurately perceive the social norm and the actual attitudes of others. In other words, research has shown that people are surprisingly poor "intuitive psychologists" and that our social judgments are oftentimes inaccurate.[4] This finding helped to lay the groundwork for an understanding of biased processing and inaccurate social perception. The false-consensus effect is just one example of such an inaccuracy.[6]
The second influential theory is projection, the idea that people project their own attitudes and beliefs onto others. The idea of projection is not a new concept: it can be found in Sigmund Freud's work on the defense mechanism of projection (1956), D. S. Holmes' work on "attributive projection" (1968), and Gustav Ichheisser's work on social perception (1970).[7] D. S. Holmes, for example, described social projection as the process by which people "attempt to validate their beliefs by projecting their own characteristics onto other individuals".[4]
Here, a connection can be made between the two stated theories of social comparison and projection. First, as social comparison theory explains, individuals constantly look to peers as a reference group and are motivated to do so in order to seek confirmation for their own attitudes and beliefs.[4] In order to guarantee confirmation and a higher self-esteem, though, an individual might unconsciously project his or her own beliefs onto the others (the targets of the comparisons). The final outcome is the false-consensus effect. In summary, the false-consensus effect can be seen as stemming from both social comparison theory and the concept of projection.
The false-consensus effect, as defined by Ross, Greene, and House in 1977, came to be the culmination of the many related theories that preceded it. In their well-known series of four studies, Ross and associates hypothesized and then demonstrated that people tend to overestimate the popularity of their own beliefs and preferences.[8] In each of the studies, subjects or "raters" were asked to choose one of a few mutually exclusive responses. They would then predict the popularity of each of their choices among other participants, referred to as "actors". Taking this a step further, Ross and associates also proposed and tested a related bias in social inference: they found that raters in an experiment estimated their own response to be not only common, but also not very revealing of the actors' "distinguishing personal dispositions".[9] On the other hand, alternative or opposite responses were perceived as much more revealing of the actors as people. In general, the raters made more "extreme predictions" about the personalities of the actors who did not share the raters' own preference. In fact, the raters may have even thought that there was something wrong with the people expressing the alternative response.[2]
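As a minimal sketch of how studies in this tradition quantify the bias (the numbers below are made up and the variable names are ours): endorsers of each option estimate the prevalence of one and the same choice, and the false-consensus index is the gap between the two groups' estimates.

# False-consensus index from hypothetical questionnaire data: each tuple
# is (participant's own choice, estimated % of peers choosing "A").
import statistics

responses = [
    ("A", 75), ("A", 80), ("A", 65), ("A", 70),   # A-choosers' estimates of %A
    ("B", 40), ("B", 35), ("B", 50), ("B", 45),   # B-choosers' estimates of %A
]

actual_pct_a = 100 * sum(1 for choice, _ in responses if choice == "A") / len(responses)
est_by_a = statistics.mean(est for choice, est in responses if choice == "A")
est_by_b = statistics.mean(est for choice, est in responses if choice == "B")

print(f"actual % choosing A:        {actual_pct_a:.0f}")   # 50
print(f"A-choosers' estimate of %A: {est_by_a:.1f}")       # 72.5 (overestimate)
print(f"B-choosers' estimate of %A: {est_by_b:.1f}")       # 42.5 (underestimate)
print(f"false-consensus index:      {est_by_a - est_by_b:.1f}")  # 30.0 points

Each group sees its own choice as more common than members of the other group do; a positive index is the signature of the effect.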
In the ten years after the influential Ross et al. study, close to 50 papers were published with data on the false-consensus effect.[10] Theoretical approaches were also expanded. The theoretical perspectives of this era can be divided into four categories: (a) selective exposure and cognitive availability, (b) salience and focus of attention, (c) logical information processing, and (d) motivational processes.[10] In general, the researchers and designers of these theories believe that there is not a single right answer. Instead, they admit that there is overlap among the theories and that the false-consensus effect is most likely due to a combination of these factors.[11]
Selective exposure and cognitive availability
This theory is closely tied to the availability heuristic, which suggests that perceptions of similarity (or difference) are affected by how easily those characteristics can be recalled from memory.[10] As one might expect, similarities between oneself and others are more easily recalled than differences, in part because people usually associate with those who are similar to themselves. This selective exposure to similar people may bias or restrict the "sample of information about the true diversity of opinion in the larger social environment".[12] As a result of selective exposure and the availability heuristic, it is natural for the similarities to prevail in one's thoughts.[13]
Botvin, Baker, Dusenbury, and Goldberg (1992) conducted a widely cited study on the false-consensus effect (FCE) within a specific adolescent community, in an effort to determine whether students show a higher level of FCE among their direct peers than in society at large.[14] The participants were 203 college students ranging in age from 18 to 25 (with an average age of 18.5). They were given a questionnaire covering a variety of social topics and asked, for each topic, how they felt about it and to estimate the percentage of their peers who would agree with them. The results showed that the FCE was extremely prevalent when participants were describing the rest of their college community: of the twenty topics considered, sixteen prominently demonstrated the effect. The high levels of FCE seen in this study can be attributed to the group studied; because the participants were asked to compare themselves to a group of peers that they are constantly around (and view as very similar to themselves), the levels of FCE increased.[15]
Salience and focus of attention
This theory suggests that when an individual focuses solely on his or her own preferred position, he or she is more likely to fall victim to the false-consensus effect and overestimate its popularity,[12] because that position is the only one in immediate consciousness. Performing an action that promotes the position will make it more salient and may increase the false-consensus effect. If, however, more positions are presented to the individual, the degree of the false-consensus effect might decrease significantly.[12]
Logical information processing
This theory assumes that active and seemingly rational thinking underlies an individual's estimates of similarity among others.[12] This is manifested in one's causal attributions. For instance, if an individual makes an external attribution for a belief, he or she will likely view the experience of the thing in question as a matter of objective fact. For example, a few movie-goers may falsely assume that the quality of a film is a purely objective property; to explain their dissatisfaction, the viewers may say that it was simply a bad movie (an external attribution). Based on this (perhaps erroneous) assumption of objectivity, it seems rational or "logical" to assume that everyone else will have the same experience; consensus should be high. On the other hand, someone in the same situation who makes an internal attribution (perhaps a film aficionado who is well aware of his or her especially high standards) will realize the subjectivity of the experience and be drawn to the opposite conclusion; consensus will be much lower. Though they result in two opposite outcomes, both paths of attribution rely on an initial assumption which then leads to a "logical" conclusion. By this logic, then, the false-consensus effect can be seen as a reflection of the fundamental attribution error (specifically the actor-observer bias), in which people prefer situational/external attributions over internal/dispositional ones to justify their own behaviors.
In a study by Fox, Yinon, and Mayraz, researchers attempted to determine whether the level of the false-consensus effect changes across age groups. Two hundred participants, split into four age groups, took part; gender was not considered a factor. As in the study mentioned previously, this study used a questionnaire as its main source of information. The results showed that the false-consensus effect was extremely prevalent in all groups, but was most prevalent in the oldest age group, the participants labeled "old-age home residents", who showed the effect in all 12 areas they were questioned about. The increase seen in the oldest age group can be credited to their high level of "logical" reasoning behind their decisions: having lived the longest, they feel they can project their beliefs onto all age groups on the strength of their (seemingly objective) past experiences and wisdom. The younger age groups cannot logically relate to those older than them because they have not had that experience and do not claim to know these "objective" truths. These results demonstrate a tendency for older people to rely more heavily on situational attributions (life experience) as opposed to internal attributions.[16]
Motivational processes
This theory stresses the benefits of the false-consensus effect: namely, the perception of increased social validation, social support, and self-esteem. It may also be useful to exaggerate similarities in social situations in order to increase liking.[17] It is possible that these benefits serve as positive reinforcement for false-consensus thinking.
Applications
The false-consensus effect is an important attribution bias to take into consideration when conducting business and in everyday social interactions. Essentially, people are inclined to believe that the general population agrees with their opinions and judgments which, true or not, gives them a feeling of greater assurance and security in their decisions. This could be an important phenomenon to either exploit or avoid in business dealings. For example, if a man doubted whether he wanted to buy a new tool, breaking down his notion that others share his doubt would be an important step in persuading him to purchase it. By convincing the customer that other people in fact do want to buy the tool, the seller could perhaps make a sale that he would not have made otherwise. In this way, the false-consensus effect is closely related to conformity, the effect in which an individual is influenced to match the beliefs or behaviors of a group. There are two differences between the false-consensus effect and conformity; most importantly, conformity is matching the behaviors, beliefs or attitudes of a real group, while the false-consensus effect is perceiving that others share one's behaviors, beliefs or attitudes, whether or not they really do. By making the customer feel that the opinion of others (society) is to buy the tool, the seller makes the customer more confident about the purchase and more likely to believe that other people would have made the same decision.
Similarly, any elements of society affected by public opinion, such as elections, advertising, and publicity, are very much influenced by the false-consensus effect. This is partially because the way people develop their perceptions involves "differential processes of awareness".[18] That is to say, while some people are motivated to reach correct conclusions, others may be motivated to reach preferred conclusions. The latter category will more often produce a false consensus, because the subject is likely to search actively for like-minded supporters and may discount or ignore the opposition.
Uncertainties
There is ambiguity about several facets of the false-consensus effect and of its study. First, it is unclear exactly which factors play the largest role in the strength and prevalence of the effect in individuals: two individuals in the same group and with very similar social standing could show very different levels of false consensus, but it is unclear which social, personality, or perceptual differences between them play the largest role in causing this disparity.
Additionally, it can be difficult to obtain accurate survey data about the false-consensus effect (as well as other psychological biases) because the search for consistent, reliable groups to be surveyed (often over an extended period of time) tends to yield groups whose dynamics differ slightly from those of the "real world". For example, many of the studies referenced in this article examined college students, who might show an especially high level of the effect both because they are surrounded by their peers (and perhaps subject to the availability heuristic) and because they often assume they are similar to their peers. This may result in distorted data from some studies of the false-consensus effect.
References
Notes
[1] "False Consensus & False Uniqueness" (http://www.psychologycampus.com/social-psychology/false-consensus.html). PsychologyCampus.com. Retrieved 2007-11-13.
[2] "Why We All Stink as Intuitive Psychologists: The False Consensus Bias" (http://www.spring.org.uk/2007/11/why-we-all-stink-as-intuitive.php). PsyBlog. Retrieved 2007-11-13.
[3] Aronson, Wilson & Akert 2005, p. 84
[4] Bauman & Geher 2002, p. 294
[5] Aronson, Wilson & Akert 2005, p. 215
[6] Bauman & Geher 2002, p. 293
[7] Gilovich, Thomas (1990). "Differential construal and the false-consensus effect". Journal of Personality and Social Psychology 59 (4): 623–634. doi:10.1037/0022-3514.59.4.623. ISSN 0022-3514.
[8] (Ross et al. 1)
[9] (Ross)
[10] Marks & Miller 1987, p. 72
[11] (Marks)
[12] Marks & Miller 1987, p. 73
[13] (Marks 1)
[14] (Bauman et al. 1)
[15] (Bauman)
[16] Yinon, Yoel; Mayraz, Avigail; Fox, Shaul (1994). "Age and the False-Consensus Effect". The Journal of Social Psychology 134 (6): 717–725. doi:10.1080/00224545.1994.9923006. ISSN 0022-4545.
[17] Marks & Miller 1987, p. 74
[18] Nir, L. (2011). "Motivated reasoning and public opinion perception". Public Opinion Quarterly (Oxford University Press). doi:10.1093/poq/nfq076. ISSN 0033-362X.
Sources
Aronson, Elliot; Wilson, Timothy D.; Akert, Robin M. (2005). Social Psychology (7th ed.). Boston: Prentice Hall.
Bauman, Kathleen P.; Geher, Glenn (2002). "We think you agree: The detrimental impact of the false consensus effect on behavior". Current Psychology 21 (4): 293–318. doi:10.1007/s12144-002-1020-0. ISSN 0737-8262.
Marks, Gary; Miller, Norman (1987). "Ten years of research on the false-consensus effect: An empirical and theoretical review". Psychological Bulletin 102 (1): 72–90. doi:10.1037/0033-2909.102.1.72. ISSN 1939-1455.
Ross, L. (1977). "The false consensus effect: An egocentric bias in social perception and attribution processes". Journal of Experimental Social Psychology 13 (3): 279–301. doi:10.1016/0022-1031(77)90049-X. ISSN 0022-1031.
Further reading
Kunda, Ziva (1999). Social Cognition: Making Sense of People. MIT Press. pp. 396–401. ISBN 978-0-262-61143-5. OCLC 40618974.
Fields, James M.; Schuman, Howard (1976). "Public Beliefs About the Beliefs of the Public". Public Opinion Quarterly 40 (4): 427. doi:10.1086/268330. ISSN 0033-362X.
External links
Changing minds: the false consensus effect (http://changingminds.org/explanations/theories/false_consensus.htm)
Overcoming Bias: Mind Projection Fallacy (http://www.overcomingbias.com/2008/03/mind-projection.html)
Functional fixedness
Functional fixedness is a cognitive bias that limits a person to using an object only in the way it is traditionally used. The concept of functional fixedness originated in Gestalt psychology, a movement in psychology that emphasizes holistic processing. Karl Duncker defined functional fixedness as a "mental block against using an object in a new way that is required to solve a problem" (Duncker, 1945). This "block" limits the ability of individuals to use the components given to them to complete a task, because they cannot move past the original purpose of those components. For example, someone who needs a paperweight but has only a hammer may not see how the hammer could serve as a paperweight; this inability to see the hammer as anything other than a tool for pounding nails is functional fixedness.
When tested, 5-year-old children show no signs of functional fixedness. It has been argued that this is because at age
5, any goal to be achieved with an object is equivalent to any other goal. However, by age 7, children have acquired
the tendency to treat the originally intended purpose of an object as special (German & Defeyter, 2000).
Examples in Research
Experimental paradigms typically involve solving problems in novel situations in which the subject has the use of a familiar object in an unfamiliar context. The object may be familiar from the subject's past experience or from previous tasks within an experiment.
Duncker (1945): Candle box
[Figure: candle box problem diagram]
In a classic experiment demonstrating functional fixedness, Duncker (1945) gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Duncker found that participants tried to attach the candle directly to the wall with the tacks, or to glue it to the wall by melting it. Very few of them thought of using the inside of the box as a candle-holder and tacking this to the wall. In Duncker's terms, the participants were fixated on the box's normal function of holding thumbtacks and could not re-conceptualize it in a manner that allowed them to solve the problem. For instance, participants presented with an empty tack box were twice as likely to solve the problem as those presented with the tack box used as a container (Adamson, 1952).
More recently, Michael C. Frank and Michael Ramscar gave a written version of the candle box problem to undergraduates at Stanford. When the problem was given with instructions identical to those in the original experiment, 23% of students were able to solve it. For another group of students, noun phrases such as "book of matches" were underlined, and for a third group only the nouns (e.g., "box") were underlined. For these two groups, 55% and 47%, respectively, were able to solve the problem. In a follow-up experiment, all the nouns except "box" were underlined, and similar results were produced. The authors concluded that students' performance was contingent on their representation of the lexical concept "box" rather than on instructional manipulations. The ability to overcome functional fixedness depended on having a flexible representation of the word "box", which allowed students to see that the box could be used when attaching a candle to a wall.
Adamson
When Adamson (1952) replicated Duncker's box experiment, he split participants into two experimental groups: preutilization and no preutilization. With preutilization, objects were presented to participants in a traditional manner (the materials were in the box, so the box was serving as a container), and participants were less likely to consider the box for any other use. With no preutilization (the boxes were presented empty), participants were more likely to think of other uses for the box.
Birch and Rabinowitz
Birch and Rabinowitz (1951) adapted the two-cord problem from Maier (1930, 1931), in which subjects were given two cords hanging from the ceiling and two heavy objects in the room. They were told they must connect the cords, but the cords were just far enough apart that one could not reach the other easily. The solution was to tie one of the heavy objects to a cord to serve as a weight, swing that cord as a pendulum, catch it as it swung while holding the other cord, and then tie the two together. Participants were split into three groups: Group R, which completed a pretask of completing an electrical circuit by using a relay; Group S, which completed the circuit with a switch; and Group C, a control group given no pretask experience. Group R participants were more likely to use the switch as the weight, and Group S participants were more likely to use the relay. Both groups did so because their previous experience led them to use each object in a certain way, and functional fixedness did not allow them to see the objects as usable for another purpose.
Current Conceptual Relevance
Is functional fixedness universal?
Researchers have investigated whether functional fixedness is affected by culture. In a recent study, preliminary evidence supporting the universality of functional fixedness was found (German & Barrett, 2005). The study's purpose was to test whether individuals from non-industrialized societies, specifically those with low exposure to high-tech artifacts, demonstrated functional fixedness. The study tested the Shuar, hunter-horticulturalists of the Amazon region of Ecuador, and compared them to a control group from an industrial culture.
The Shuar community had been exposed to only a limited number of industrialized artifacts, such as machetes, axes, cooking pots, nails, shotguns, and fishhooks, all considered low-tech. Two tasks were given to participants: the box task, in which participants had to build a tower to help a character from a fictional storyline reach another character, using a limited set of varied materials; and the spoon task, in which participants were given a problem to solve based on a fictional story of a rabbit that had to cross a river (materials were used to represent settings) and were given varied materials including a spoon. In the box task, participants were slower to select the materials than participants in control conditions, but no difference in time to solve the problem was seen. In the spoon task, participants were slower in both selection and completion of the task. The results showed that individuals from non-industrial (technologically sparse) cultures were susceptible to functional fixedness: they were faster to use artifacts without priming than when the design function was explained to them. This occurred even though the participants had had little exposure to industrially manufactured artifacts, and even though the few artifacts they did use were used in multiple ways regardless of their design (German & Barrett, 2005).
Following the Wrong Footsteps: Fixation Effects of Pictorial Examples in a Design Problem-Solving Task
In two experiments, investigators examined "whether the inclusion of examples with inappropriate elements, in addition to the instructions for a design problem, would produce fixation effects in students naive to design tasks" (Chrysikou & Weisberg, 2005). They examined the inclusion of examples of inappropriate elements by explicitly depicting problematic aspects of the problem presented to the students through example designs. They tested non-expert participants under three problem conditions: standard instruction, fixated (with inclusion of a problematic design), and defixated (inclusion of a problematic design accompanied by helpful methods). They were able to support their hypothesis by finding that (a) problematic design examples produce significant fixation effects, and (b) fixation effects can be diminished with the use of defixating instructions.
One of the three problems used in the experiments illustrates the procedure. In "The Disposable Spill-Proof Coffee Cup Problem", adapted from Jansson & Smith (1991), participants were asked to construct as many designs as possible for an inexpensive, disposable, spill-proof coffee cup. Standard-condition participants were presented only with instructions. In the fixated condition, participants were presented with instructions, a problematic example design, and the problems they should be aware of. Finally, in the defixated condition, participants were presented with the same materials as the fixated condition, in addition to suggestions of design elements they should avoid using. The other two problems involved building a bike rack and designing a container for cream cheese.
Techniques to Avoid Functional Fixedness
Overcoming Functional Fixedness in Science Classrooms with Analogical Transfer
Based on the assumption that students are functionally fixed, a study on analogical transfer in the science classroom yielded data that suggest a technique for overcoming functional fixedness. The findings indicate that students show positive transfer (improved performance) on problem solving after being presented with analogies of a certain structure and format (Solomon, 1994). The study expanded on Duncker's experiments from 1945 by trying to demonstrate that when students were "presented with a single analogy formatted as a problem, rather than as a story narrative, they would orient the task of problem-solving and facilitate positive transfer" (Solomon, 1994). A total of 266 freshman students from a high school science class participated in the study. The experiment used a 2x2 design in which "task context" (type and format) and "prior knowledge" (specific vs. general) were tested. Students were classified into 5 groups: 4 according to their prior science knowledge (ranging from specific to general), and 1 serving as a control group (no analog presentation). The 4 experimental groups were then classified into "analog type and analog format" conditions: structural or surface types, and problem or story formats. Evidence for positive analogical transfer based on prior knowledge was inconclusive, although the groups did demonstrate variability. The problem format and the structural type of analog presentation showed the highest positive transfer to problem solving. The researcher suggested that a well-thought-out and planned analogy, relevant in format and type to the problem-solving task to be completed, can help students overcome functional fixedness. The study not only brought new knowledge about the human mind at work but also provided tools for educational purposes and possible changes that teachers can apply to lesson plans (Solomon, 1994).
Uncommitting
One study suggests that functional fixedness can be combated by uncommitting design decisions from functionally fixed designs so that the essence of the design is kept (Latour, 1994). This helps subjects who have created functionally fixed designs understand how to solve general problems of this type, rather than reusing a fixed solution to a specific problem. Latour investigated this by having software engineers analyze a fairly standard piece of code, the quicksort algorithm, and use it to create a partitioning function. Part of the quicksort algorithm involves partitioning a list into subsets so that it can be sorted; the experimenters wanted to reuse the code from within the algorithm to do just the partitioning. To do this, they abstracted each block of code in the function, discerning its purpose and deciding whether it was needed for the partitioning algorithm. This abstraction allowed them to reuse the code from the quicksort algorithm to create a working partition algorithm without having to design it from scratch (Latour, 1994).
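The idea can be illustrated in a few lines of code. The sketch below is a minimal illustration, not Latour's original material: the partition step is abstracted out of quicksort so that it can be reused on a problem that has nothing to do with sorting.

```python
def partition(items, pivot):
    # The reusable kernel hiding inside quicksort: split a list into
    # the elements below the pivot and the elements at or above it.
    below = [x for x in items if x < pivot]
    at_or_above = [x for x in items if x >= pivot]
    return below, at_or_above

def quicksort(items):
    # Quicksort re-expressed in terms of the extracted partition step.
    if len(items) <= 1:
        return list(items)
    pivot, rest = items[0], items[1:]
    below, at_or_above = partition(rest, pivot)
    return quicksort(below) + [pivot] + quicksort(at_or_above)

# Once abstracted, the same partition step solves an unrelated problem,
# e.g. splitting exam scores at a pass/fail cutoff:
failing, passing = partition([55, 91, 40, 78], pivot=60)
```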
Overcoming Prototypes
A comprehensive study exploring several classical functional fixedness experiments showed an overarching theme of overcoming prototypes. Those who successfully completed the tasks had the ability to look beyond the prototype, the original intended use of the item in question. Conversely, those who could not create a successful finished product could not move beyond the original use of the item. The same seemed to hold for functional fixedness categorization studies: reorganizing seemingly unrelated items into new categories was easier for those who could look beyond intended function. There is therefore a need to overcome the prototype in order to avoid functional fixedness. Carnevale (1998) suggests analyzing the object and mentally breaking it down into its components. After that is completed, it is essential to explore the possible functions of those parts. In doing so, individuals may familiarize themselves with new ways to use the items available to them in the given situation. Individuals are thus thinking creatively and overcoming the prototypes that limit their ability to complete the functional fixedness problem (Carnevale, 1998).
The Generic Parts Technique
For each object, one needs to decouple its function from its form. McCaffrey (2012) presents a highly effective technique for doing so. As you break an object into its parts, ask yourself two questions. "Can I subdivide the current part further?" If yes, do so. "Does my current description imply a use?" If yes, create a more generic description involving its shape and material. For example, a candle initially divides into its parts: wick and wax. The word "wick" implies a use: burning to emit light. So it is described more generically as a string, which brings to mind using the wick to tie things together (once it is extracted from the wax). Since "string" still implies a use, it is described more generically yet, as interwoven fibrous strands, which suggests further uses (such as making a wig for a hamster). Since "interwoven fibrous strands" does not imply a use, one can stop working on the wick and start working on the wax. People trained in this technique solved 67% more problems that suffered from functional fixedness than a control group did. The technique systematically strips away all the layers of associated uses from an object and its parts; a toy sketch of this loop follows below.
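The two-question loop can be written down as a small procedure. The sketch below is a hypothetical illustration of McCaffrey's questions as code; the part tree and the description chain are the candle example from the text, not data from the paper.

```python
def generic_parts(obj, subparts, more_generic):
    # Walk the object's part tree; for each leaf part, keep re-describing
    # it ("Does my current description imply a use?") until the
    # description mentions only shape and material.
    generic_descriptions = []
    stack = [obj]
    while stack:
        part = stack.pop()
        children = subparts.get(part, [])
        if children:                 # "Can I subdivide further?" -> yes
            stack.extend(children)
            continue
        while part in more_generic:  # description still implies a use
            part = more_generic[part]  # strip the use, keep shape/material
        generic_descriptions.append(part)
    return generic_descriptions

# The candle example from the text:
subparts = {"candle": ["wick", "wax"]}
more_generic = {"wick": "string", "string": "interwoven fibrous strands"}
print(generic_parts("candle", subparts, more_generic))
# -> ['wax', 'interwoven fibrous strands']
```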
References
Adamson, R.E. (1952). "Functional fixedness as related to problem solving: A repetition of three experiments". Journal of Experimental Psychology, 44, 288–291.
Birch, H.G., & Rabinowitz, H.S. (1951). "The negative effect of previous experience on productive thinking". Journal of Experimental Psychology, 41, 121–125.
Carnevale, Peter J. (1998). "Social values and social conflict in creative problem solving and categorization". Journal of Personality and Social Psychology, 74(5), 1300.
Coon, D. (2004). Introduction to Psychology: Gateways to Mind and Behavior (10th ed.). Wadsworth/Thompson Learning. http://www.wadsworth.com
Duncker, K. (1945). "On problem solving". Psychological Monographs, 58:5 (Whole No. 270).
Frank, Michael C., and Michael Ramscar. "How do presentation and context influence representation for functional fixedness tasks?" Proceedings of the 25th Annual Meeting of the Cognitive Science Society, 2003.
German, T.P., & Barrett, H.C. (2005). "Functional fixedness in a technologically sparse culture". Psychological Science, 16, 1–5. Full text [1]
German, T.P., & Defeyter, M.A. (2000). "Immunity to functional fixedness in young children". Psychonomic Bulletin & Review, 7(4), 707–712.
Mayer, R. E. (1992). Thinking, Problem Solving, Cognition. New York: W. H. Freeman and Company.
McCaffrey, T. (2012). "Innovation relies on the obscure: A key to overcoming the classic functional fixedness problem". Psychological Science, 23(3), 215–218.
Solomon, I. (1994). "Analogical transfer and functional fixedness in the science classroom". Journal of Educational Research, 87(6), 371–377.
Latour, Larry (1994). "Controlling Functional Fixedness: the Essence of Successful Reuse". http://www.cs.umaine.edu/~larry/latour/ECAI/paper-sent/paper-sent.html
Pink, Dan (2009). "Dan Pink on the surprising science of motivation". http://www.ted.com/talks/lang/eng/dan_pink_on_motivation.html
External links
Controlling Functional Fixedness: the Essence of Successful Reuse [2]
Adaptations for Tool Use: The Artifact Concept and Inferences about Function [3]
Functional Fixedness in a Technologically Sparse Culture [4]
Dan Pink on the surprising science of motivation [5]
References
[1] http://www.sscnet.ucla.edu/anthro/faculty/barrett/german-barrett-PS.pdf
[2] http://www.umcs.maine.edu/~larry/latour/ECAI/paper-sent/paper-sent.html
[3] http://www.psych.ucsb.edu/research/cep/topics/tools.htm
[4] http://www.anthro.ucla.edu/faculty/barrett/german-barrett-PS.pdf
[5] http://www.ted.com/talks/dan_pink_on_motivation.html
Forer effect
The Forer effect (also called the Barnum effect after P. T. Barnum's observation that "we've got something for everyone") is the observation that individuals will give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people. This effect can provide a partial explanation for the widespread acceptance of some beliefs and practices, such as astrology, fortune telling, graphology, and some types of personality tests.
A related and more general phenomenon is that of subjective validation.[1] Subjective validation occurs when two unrelated or even random events are perceived to be related because a belief, expectancy, or hypothesis demands a relationship. Thus people seek a correspondence between their perception of their personality and the contents of a horoscope.
Forer's demonstration
In 1948, psychologist Bertram R. Forer gave a personality test to his students. He told his students they were each receiving a unique personality analysis based on the test's results, and asked them to rate their analysis on a scale of 0 (very poor) to 5 (excellent) according to how well it applied to them. In reality, each received the same analysis:
"You have a great need for other people to like and admire you. You have a tendency to be critical of yourself. You have a great deal of unused capacity which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them. Your sexual adjustment has presented problems for you. Disciplined and self-controlled outside, you tend to be worrisome and insecure inside. At times you have serious doubts as to whether you have made the right decision or done the right thing. You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations. You pride yourself as an independent thinker and do not accept others' statements without satisfactory proof. You have found it unwise to be too frank in revealing yourself to others. At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved. Some of your aspirations tend to be pretty unrealistic. Security is one of your major goals in life."
On average, the students rated the analysis 4.26; only after the ratings were turned in was it revealed that each student had received identical copies assembled by Forer from various horoscopes.[2] As can be seen from the profile, there are a number of statements that could apply equally to anyone. These statements later became known as Barnum statements, after P. T. Barnum.
In another study examining the Barnum effect, students took the MMPI personality assessment and researchers evaluated their responses. The researchers wrote accurate evaluations of the students' personalities, but gave the students both the accurate assessment and a fake assessment using vague generalities. Students were then asked to choose which personality assessment they believed was their own. More than half of the students (59%) chose the fake assessment over the real one.[3]
The Forer effect is more frequently referred to as "the Barnum effect". This term was coined in 1956 by American psychologist Paul Meehl in his essay "Wanted - A Good Cookbook". He relates the vague personality descriptions used in certain "pseudo-successful" psychological tests to those given by entertainer and businessman P. T. Barnum, who was a notorious hoaxer.[4][5]
Repeating the study
Two factors are important in ensuring that the study is replicable. The content of the description offered is important, with specific emphasis on the ratio of positive to negative trait assessments. The other important factor is that the subject trusts the person giving the feedback to base it on an honest assessment.[6][7]
The effect is so consistent because the statements are so vague. People are able to read their own meaning into the statements they receive, and thus the statement becomes "personal" to them. The most effective statements are built around the phrase "at times", such as: "At times you feel very sure of yourself, while at other times you are not as confident." This phrase can apply to almost anybody, and thus each person can read their own meaning into it. Keeping statements vague in this manner ensures high rates of reliability when repeating the study.[8]
Variables influencing the effect
Studies have shown that the Barnum effect is seemingly universal: it has been observed in people from many different cultures and geographic locations. In 2009, psychologists Paul Rogers and Janice Soule conducted a study that compared the tendencies of Westerners to accept Barnum personality profiles with the tendencies of Chinese people. They were unable to find any significant differences.[9]
However, later studies have found that subjects give higher accuracy ratings if the following are true:
the subject believes that the analysis applies only to him or her, and thus applies their own meaning to the statements;[10]
the subject believes in the authority of the evaluator;
the analysis lists mainly positive traits.
See Dickson and Kelly for a review of the literature.[11]
Sex has also been shown to play a role in how accurate the subject believes the description to be: women are more likely than men to believe that the vague statement is accurate.[12]
The method in which the Barnum personality profiles are presented can also affect the extent to which people accept them as their own. For instance, Barnum profiles that are more personalized (perhaps containing a specific person's name) are more likely to yield higher acceptability ratings than those that could be applied to anyone.[13]
Recent research
Belief in the paranormal
There is evidence that prior belief in the paranormal leads to greater influence of the effect.[14] Subjects who, for example, believe in the accuracy of horoscopes have a greater tendency to believe that the vague generalities of the response apply specifically to them. Schizotypy, which encompasses paranormal beliefs such as belief in magical powers, spiritual happenings, or other influences, has been shown to correlate strongly with susceptibility to the Barnum effect.[15] However, Rogers and Soule's 2009 study (see "Variables influencing the effect" above) also tested subjects' astrological beliefs, and both the Chinese and Western skeptics were more likely to identify the ambiguity within the Barnum profiles. This suggests that individuals who do not believe in astrology may be influenced less by the effect.
Self-serving bias
Self-serving bias has been shown to cancel the Barnum effect. According to the self-serving bias, subjects accept positive attributes about themselves while rejecting negative ones. In one study, subjects were given one of three personality reports: one contained Barnum profiles with socially desirable personality traits, one contained profiles full of negative traits (also called "common faults"), and the last contained a mixture of the two. Subjects who received the socially desirable and mixed reports were far more likely to agree with the personality assessments than subjects who received negative reports, though there was no significant difference between the first two groups. In another study, subjects were given a list of traits instead of the usual "fake" personality assessment and were asked to rate how much they felt these traits applied to them. In line with the self-serving bias, the majority of subjects agreed with positive traits about themselves and disagreed with negative ones. The study concluded that the self-serving bias is powerful enough to cancel out the usual Barnum effect.[16]
In popular culture
A similar experiment was conducted during the second episode of the seventh season of the TV show Penn & Teller: Bullshit!. The episode was about astrology and also discussed confirmation bias. The results were similar to Forer's study.
A version of the original experiment was performed by illusionist Derren Brown. He described the experiment in his
book Tricks of the Mind.
References
[1] Marks, David F. (2000). The Psychology of the Psychic (2nd ed.). Amherst, New York: Prometheus Books. p. 41. ISBN 1-57392-798-8.
[2] Forer, B.R. (1949). "The fallacy of personal validation: A classroom demonstration of gullibility". Journal of Abnormal and Social Psychology (American Psychological Association) 44 (1): 118–123. doi:10.1037/h0059240.
[3] Cline, Austin. "Flaws in Reasoning and Arguments: Barnum Effect & Gullibility" (http://atheism.about.com/od/logicalflawsinreasoning/a/barnum.htm). About.com. Retrieved 12 November 2012.
[4] Meehl, Paul (1956). "Wanted - A Good Cookbook" (http://psycnet.apa.org/index.cfm?fa=fulltext.journal&jcode=amp&vol=11&issue=6&page=263&format=PDF). p. 266.
[5] Dutton, Denis. "The Cold Reading Technique" (http://denisdutton.com/cold_reading.htm). Retrieved 28 November 2012.
[6] Claridge, G.; Clark, K.; Powney, E.; Hassan, E. (2008). "Schizotypy and the Barnum effect". Personality and Individual Differences 44 (2): 436–444.
[7] "Something for Everyone - The Barnum Effect" (http://thearticulateceo.typepad.com/my-blog/2012/01/something-for-everyone-the-barnum-effect.html). The Articulate CEO. Retrieved 25 November 2012.
[8] Krauss-Whitbourne, Susan. "When it comes to personality tests, skepticism is a good thing" (http://www.psychologytoday.com/blog/fulfillment-any-age/201008/when-it-comes-personality-tests-dose-skepticism-is-good-thing). Psychology Today. Retrieved 25 November 2012.
[9] Rogers, Paul; Soule, Janice (2009). "Cross-Cultural Differences in the Acceptance of Barnum Profiles Supposedly Derived From Western Versus Chinese Astrology" (http://jcc.sagepub.com/content/40/3/381.full.pdf+html). Journal of Cross-Cultural Psychology. Retrieved 11 November 2012.
[10] Krauss-Whitbourne, Susan. "When it comes to personality tests, skepticism is a good thing" (http://www.psychologytoday.com/blog/fulfillment-any-age/201008/when-it-comes-personality-tests-dose-skepticism-is-good-thing). Psychology Today. Retrieved 25 November 2012.
[11] Dickson, D.H.; Kelly, I.W. (1985). "The 'Barnum Effect' in Personality Assessment: A Review of the Literature". Psychological Reports (Missoula) 57 (1): 367–382. ISSN 0033-2941. OCLC 1318827.
[12] Piper-Terry, M.L.; Downey, J.L. (1998). "Sex, gullibility, and the Barnum effect". Psychological Reports 82: 571–575.
[13] Farley-Icard, Roberta Lynn (2007). Factors that influence the Barnum Effect: Social desirability, base rates and personalization.
[14] "Balance-Today - Astrology" (http://balance-today.org/bias/bias_examples/astrology.html). Retrieved 28 November 2012.
[15] Claridge, G.; Clark, K.; Powney, E.; Hassan, E. (2008). "Schizotypy and the Barnum effect". Personality and Individual Differences 44 (2): 436–444.
[16] MacDonald, D.J.; Standing, L.G. (2002). "Does self-serving bias cancel the Barnum effect?". Social Behavior and Personality 30 (6): 625–630.
External links
The Fallacy of Personal Validation: A Classroom Demonstration of Gullibility, by Bertram R. Forer (full text) (http://www.scribd.com/doc/17378132/The-Fallacy-of-Personal-Validation-a-Classroom-Demonstration-of-Gullibility)
An autotest (http://homepage.bluewin.ch/Ysewijn/english_Barnum.htm)
Online test demonstrating the effect (http://forer.netopti.net/)
Skeptic's Dictionary: the Forer effect (http://www.skepdic.com/forer.html)
Framing effect (psychology)
The framing effect is an example of cognitive bias, in which people react differently to a particular choice depending on whether it is presented as a loss or as a gain.[1]
Experiments
Amos Tversky and Daniel Kahneman (1981) explored how different phrasing affected participants' responses to a
choice in a hypothetical life and death situation.
Participants were asked to choose between two treatments for 600 people affected by a deadly disease. Treatment A was predicted to result in 400 deaths, whereas treatment B had a 33% chance that no one would die but a 66% chance that everyone would die. This choice was then presented to participants either with positive framing (how many people would live) or with negative framing (how many people would die).

Framing | Treatment A | Treatment B
Positive | "Saves 200 lives" | "A 33% chance of saving all 600 people, 66% possibility of saving no one."
Negative | "400 people will die" | "A 33% chance that no people will die, 66% probability that all 600 will die."
Treatment A was chosen by 72% of participants when it was presented with positive framing ("saves 200 lives"), dropping to only 22% when the same choice was presented with negative framing ("400 people will die").
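Note that the two treatments are numerically equivalent; only the description changes. Here is a quick illustrative check, using the exact one-third and two-thirds odds that the 33% and 66% figures approximate:

```python
# Expected number of survivors out of 600 under each treatment.
certain_saved = 200                        # Treatment A: 200 live for sure
risky_saved = (1 / 3) * 600 + (2 / 3) * 0  # Treatment B: all or nothing
print(certain_saved, risky_saved)          # 200 200.0 -> identical expectations
```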
This effect has been shown in other contexts:
93% of PhD students registered early when a penalty fee for late registration was emphasised, with only 67% doing so when this was presented as a discount for earlier registration.[2]
62% of people disagreed with allowing "public condemnation of democracy", but only 46% of people agreed that it was right to "forbid public condemnation of democracy". (Rugg, as cited in Plous, 1993)
More people will support an economic policy if the employment rate is emphasised than when the associated unemployment rate is highlighted.[3]
It has been argued that pretrial detention may increase a defendant's willingness to accept a plea bargain, since imprisonment, rather than freedom, will be his baseline, and pleading guilty will be viewed as an event that will cause his earlier release rather than as an event that will put him in prison.[4]
Applications
Frame analysis has been a significant part of scholarly work on topics like social movements and political opinion formation in both sociology and political science.
Political polls will often be framed to encourage a response beneficial to the organisation that has commissioned the poll. The effect is so pronounced that it has been suggested that political polls may be discredited by such framing.[5]
Amelioration
One of the dangers of framing effects is that people are often provided options within the context of only one of the two frames.[6] Furthermore, framing effects may persist even when monetary incentives are provided.[7] Thus, individuals' decisions may be malleable through manipulation with the framing effect, and the consequences of framing effects may seem inescapable. However, Druckman (2001b) argues that framing effects and their societal implications may be emphasized more than they should be. He demonstrated that the effects of framing can be reduced, or even eliminated, if ample, credible information is provided to people.[8]
Causes
Framing impacts people because individuals perceive losses and gains differently, as illustrated in prospect theory (Tversky & Kahneman, 1981). The value function, founded in prospect theory, illustrates an important factor underlying the framing effect: a loss is more devastating than the equivalent gain is gratifying (Tversky & Kahneman, 1981). Thus, people tend to avoid risk when a positive frame is presented but seek risk when a negative frame is presented (Tversky & Kahneman, 1981). Additionally, the value function takes on a sigmoid shape, which indicates that gains at smaller values are psychologically larger than equivalent increases at larger quantities (Tversky & Kahneman, 1981). Another important factor contributing to framing is the certainty effect and pseudocertainty effect, in which a sure gain is favored over a probabilistic gain (Clark, 2009), but a probabilistic loss is preferred to a definite loss.[9] For example, in Tversky and Kahneman's (1981) experiment, treatment A in the first problem, which saved a certain 200 people, was favored due to the certainty effect.
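The shape of the value function can be made concrete. The sketch below uses the parametric form and parameter estimates from Tversky and Kahneman's later (1992) cumulative prospect theory work (exponent about 0.88, loss-aversion coefficient about 2.25); it is an illustration, not part of the 1981 paper discussed here.

```python
def value(x, alpha=0.88, loss_aversion=2.25):
    # Prospect-theory value function: concave for gains, steeper and
    # convex for losses, so a loss looms larger than an equal gain.
    if x >= 0:
        return x ** alpha
    return -loss_aversion * (-x) ** alpha

print(round(value(100), 1))   # 57.5: the value of gaining 100
print(round(value(-100), 1))  # -129.5: the larger sting of losing 100
```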
According to fuzzy-trace theory, this phenomenon is attributed to categorical "gist" attitudes, a tendency to generalize percentages into categories such as high, some, or none. In the Asian disease problem, one is more likely to choose the sure option in the gain frame to avoid the risk of no one being saved. In the loss frame, we are more likely to choose the risky option, because we generalize the options into "some people will die" (sure option) versus "some people will die or no one will die" (risky option).[10]
References
Druckman, J. (2001a). "Evaluating framing effects". Journal of Economic Psychology 22: 96–101.
Druckman, J. (2001b). "Using credible advice to overcome framing effects". Journal of Law, Economics, and Organization 17: 62–82. doi:10.1093/jleo/17.1.62.
Clark, D. (2009). Framing effects exposed. Pearson Education.
Gächter, S., Orzen, H., Renner, E., & Starmer, C. (in press). "Are experimental economists prone to framing effects? A natural field experiment". Journal of Economic Behavior & Organization.
Plous, Scott (1993). The psychology of judgment and decision making. McGraw-Hill. ISBN 978-0-07-050477-6.
Tversky, Amos; Kahneman, Daniel (1981). "The framing of decisions and the psychology of choice". Science 211 (4481): 453–458. doi:10.1126/science.7455683. PMID 7455683.
Kühberger, Anton; Tanner, Carmen (2010). "Risky choice framing: Task versions and a comparison of prospect theory and fuzzy-trace theory". Journal of Behavioral Decision Making 23 (3): 314–329. doi:10.1002/bdm.656.
References
[1] Plous, 1993
[2] Gächter, Orzen, Renner, & Starmer, in press
[3] Druckman, 2001b
[4] Stephanos Bibas (June 2004). "Plea Bargaining outside the Shadow of Trial". Harvard Law Review 117: pp. 2463–2547.
[5] Druckman, 2001b
[6] Druckman, 2001a
[7] Tversky & Kahneman, 1981
[8] Druckman, 2001b
[9] Tversky & Kahneman, 1981
[10] Kühberger, 2010
Gambler's fallacy
The gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened at the Monte Carlo Casino in 1913)[1][2] and also referred to as the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely.
An example: coin-tossing
[Figure: simulation of coin tosses. Each frame, a coin that is red on one side and blue on the other is flipped, and the result of each flip is added as a colored dot in the corresponding column. As the accompanying pie chart shows, the proportion of red versus blue approaches 50-50 (the law of large numbers), but the difference between red and blue does not systematically decrease to zero.]
The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly 1/2 (one in two). It follows that the probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if we let A_i be the event that toss i of a fair coin comes up heads, then we have

Pr(A_1) = Pr(A_2) = ... = Pr(A_i) = 1/2.

Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only 1/32 (one in thirty-two), a believer in the gambler's fallacy might believe that this next flip is less likely to be heads than to be tails. However, this is not correct, and is a manifestation of the gambler's fallacy; the event of 5 heads in a row and the event of "first 4 heads, then a tails" are equally likely, each having probability 1/32. Given that the first four tosses turn up heads, the probability that the next toss is a head is in fact

Pr(A_5 | A_1 ∩ A_2 ∩ A_3 ∩ A_4) = Pr(A_5) = 1/2.

While a run of five heads has probability 1/32 = 0.03125, this holds only before the coin is first tossed. After the first four tosses the results are no longer unknown, so their probabilities equal 1. Reasoning that it is more likely that the next toss will be a tail than a head due to the past tosses, that a run of luck in the past somehow influences the odds in the future, is the fallacy.
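This can be checked empirically. The following is a small simulation sketch (illustrative, not from the original article): among random five-toss sequences whose first four tosses are heads, the fifth toss still comes up heads about half the time.

```python
import random

random.seed(0)
fifth_heads = runs_of_four = 0
for _ in range(1_000_000):
    flips = [random.random() < 0.5 for _ in range(5)]
    if all(flips[:4]):             # keep only sequences starting with 4 heads
        runs_of_four += 1
        fifth_heads += flips[4]    # True counts as 1
print(fifth_heads / runs_of_four)  # ~0.5, despite the streak
```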
Explaining why the probability is 1/2 for a fair coin
We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1/2. This is an application of Bayes' theorem.
This can also be seen without knowing that 20 heads have occurred for certain (without applying Bayes' theorem). Consider the following two probabilities, assuming a fair coin:

probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21

The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip 20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, these two probabilities are equally as likely as any other 21-flip combination that can be obtained (there are 2,097,152 in total); all 21-flip combinations have probability 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based on prior trials (flips), because every outcome observed will always have been as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore, just as Bayes' theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
Other examples
There is another way to emphasize the fallacy. As already mentioned, the fallacy is built on the notion that previous failures indicate an increased probability of success on subsequent attempts. This is, in fact, the inverse of what actually happens, even for a fair chance of a successful event, given a set number of iterations. Assume a fair 16-sided die, where a win is defined as rolling a 1, and assume a player is given 16 rolls to obtain at least one win (1 - p(rolling no ones)). The low winning odds are just to make the change in probability more noticeable. The probability of having at least one win in the 16 rolls is:

1 - (15/16)^16 ≈ 64.4%

However, assume now that the first roll was a loss (a 93.75% chance of that, i.e. 15/16). The player now has only 15 rolls left and, according to the fallacy, should have a higher chance of winning since one loss has occurred. His chances of having at least one win are now:

1 - (15/16)^15 ≈ 62.0%

Simply by losing one roll, the player's probability of winning has dropped by about 2 percentage points. By the time this reaches 5 losses (11 rolls left), his probability of winning on one of the remaining rolls will have dropped to ~50%.
The player's odds for at least one win in those 16 rolls has not increased given a series of losses; his odds have
decreased because he has fewer iterations left to win. In other words, the previous losses in no way contribute to the
odds of the remaining attempts, but there are fewer remaining attempts to gain a win, which results in a lower
probability of obtaining it.
The player becomes more likely to lose in a set number of iterations as he fails to win, and eventually his probability
of winning will again equal the probability of winning a single toss, when only one toss is left: 6.25% in this
instance.
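The falling probabilities are easy to verify directly (a small illustrative check):

```python
# Probability of at least one win (rolling a 1 on a fair 16-sided die)
# in the rolls that remain, as losses accumulate.
for rolls_left in (16, 15, 11, 1):
    p_at_least_one_win = 1 - (15 / 16) ** rolls_left
    print(rolls_left, f"{p_at_least_one_win:.2%}")
# 16 -> 64.39%, 15 -> 62.02%, 11 -> 50.83%, 1 -> 6.25%
```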
Some lottery players will choose the same numbers every time, or intentionally change their numbers, but both are
equally likely to win any individual lottery draw. Copying the numbers that won the previous lottery draw gives an
equal probability, although a rational gambler might attempt to predict other players' choices and then deliberately
avoid these numbers. Low numbers (below 31 and especially below 12) are popular because people play birthdays as
their so-called lucky numbers; hence a win in which these numbers are over-represented is more likely to result in a
shared payout.
A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides
to always bring a bomb with him. "The chances of an aircraft having a bomb on it are very small," he reasons, "and
certainly the chances of having two are almost none!" A similar example is in the book The World According to
Garp when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the
chances of another plane hitting the house have just dropped to zero.
Reverse fallacy
The reversal is also a fallacy (not to be confused with the inverse gambler's fallacy) in which a gambler may instead
decide that tails are more likely out of some mystical preconception that fate has thus far allowed for consistent
results of tails. Believing the odds to favor tails, the gambler sees no reason to change to heads. Again, the fallacy is
the belief that the "universe" somehow carries a memory of past results which tend to favor or disfavor future
outcomes.
Caveats
In most illustrations of the gambler's fallacy and the reversed gambler's fallacy, the trial (e.g. flipping a coin) is assumed to be fair. In practice, this assumption may not hold.
For example, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152 (above). If the coin is fair, then the probability of the next flip being heads is 1/2. However, because the odds of flipping 21 heads in a row are so slim, it may well be that the coin is somehow biased towards landing on heads, or that it is being controlled by hidden magnets, or similar.[3] In this case, the smart bet is "heads", because the empirical evidence (21 "heads" in a row) suggests that the coin is likely to be biased toward "heads", contradicting the general assumption that the coin is fair.
Childbirth
Instances of the gambler's fallacy applied to childbirth can be traced all the way back to 1796, in Pierre-Simon Laplace's A Philosophical Essay on Probabilities. Laplace wrote of the ways men calculated their probability of having sons: "I have seen men, ardently desirous of having a son, who could learn only with anxiety of the births of boys in the month when they expected to become fathers. Imagining that the ratio of these births to those of girls ought to be the same at the end of each month, they judged that the boys already born would render more probable the births next of girls." In short, the expectant fathers feared that if more sons were born in the surrounding community, then they themselves would be more likely to have a daughter.[4]
Some expectant parents believe that, after having multiple children of the same sex, they are "due" to have a child of the opposite sex. While the Trivers-Willard hypothesis predicts that birth sex is dependent on living conditions (i.e. more male children are born in "good" living conditions, while more female children are born in poorer living conditions), the probability of having a child of either gender is still regarded as 50/50.
Monte Carlo Casino
The most famous example happened in a game of roulette at the Monte Carlo Casino in the summer of 1913, when the ball fell on black 26 times in a row. This was an extremely uncommon occurrence, though no more or less common than any of the other 67,108,863 sequences of 26 red or black (neglecting the 0 slot on the wheel), and gamblers lost millions of francs betting against black after the streak began. They reasoned incorrectly that the streak was causing an "imbalance" in the randomness of the wheel, and that it had to be followed by a long streak of red.[1]
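The sequence count quoted above is simple arithmetic (a one-line illustrative check):

```python
# 26 independent red/black outcomes give 2**26 equally likely sequences;
# all but the one that actually occurred are "the others".
total_sequences = 2 ** 26
print(total_sequences, total_sequences - 1)  # 67108864 67108863
```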
Non-examples of the fallacy
There are many scenarios where the gambler's fallacy might superficially seem to apply but actually does not. When the probability of different events is not independent, the probability of future events can change based on the outcome of past events (see statistical permutation). Formally, the system is said to have memory. An example of this is cards drawn without replacement. For example, if an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank. The odds for drawing another ace, assuming that it was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%). This type of effect is what allows card counting schemes to work (for example in the game of blackjack).
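These shifting odds can be computed exactly (an illustrative sketch):

```python
from fractions import Fraction

# Drawing without replacement: the deck has memory. After one ace is
# drawn and set aside, 3 aces remain among 51 cards, while each other
# rank still has all 4 of its cards in the smaller deck.
ace_before = Fraction(4, 52)
ace_after = Fraction(3, 51)
other_rank_after = Fraction(4, 51)
for name, p in [("ace before", ace_before), ("ace after", ace_after),
                ("other rank after", other_rank_after)]:
    print(name, f"{float(p):.2%}")  # 7.69%, 5.88%, 7.84%
```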
Meanwhile, the reversed gambler's fallacy may appear to apply in the story of Joseph Jagger, who hired clerks to
record the results of roulette wheels in Monte Carlo. He discovered that one wheel favored nine numbers and won
large sums of money until the casino started rebalancing the roulette wheels daily. In this situation, the observation
of the wheel's behavior provided information about the physical properties of the wheel rather than its "probability"
in some abstract sense, a concept which is the basis of both the gambler's fallacy and its reversal. Even a biased
wheel's past results will not affect future results, but the results can provide information about what sort of results the
wheel tends to produce. However, if it is known for certain that the wheel is completely fair, then past results provide
no information about future ones.
The outcome of future events can be affected if external factors are allowed to change the probability of the events
(e.g., changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced
player's success may decrease after opposing teams discover his weaknesses and exploit them. The player must then
attempt to compensate and randomize his strategy. (See Game theory).
Many riddles trick the reader into believing that they are an example of the gambler's fallacy, such as the Monty Hall
problem.
Non-example: unknown probability of event
When the probability of repeated events is not known, outcomes may not be equally probable. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased towards heads increases. If one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude a high probability of bias towards heads, and hence conclude that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that when the long-run proportions of the different outcomes are unknown but exchangeable (meaning that the random process from which they are generated may be biased, but is equally likely to be biased in any direction), previous observations demonstrate the likely direction of the bias, such that the outcome which has occurred most often in the observed data is the most likely to occur again.[5]
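A minimal Bayesian sketch of this reasoning follows; it assumes a uniform Beta(1, 1) prior over the coin's heads-probability, which is one simple choice of exchangeable model, not the only one.

```python
# Posterior predictive probability that the next flip is heads after
# observing 21 heads and 0 tails, under a uniform Beta(1, 1) prior.
# The Beta posterior mean reduces to Laplace's rule of succession.
heads, tails = 21, 0
p_next_heads = (heads + 1) / (heads + tails + 2)
print(round(p_next_heads, 3))  # 0.957: the streak points to a heads bias
```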
Psychology behind the fallacy
Origins
Gambler's fallacy arises out of a belief in the law of small numbers, or the erroneous belief that small samples must be representative of the larger population. According to the fallacy, "streaks" must eventually even out in order to be representative.[6] Amos Tversky and Daniel Kahneman first proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic, which states that people evaluate the probability of a certain event by assessing how similar it is to events they have experienced before, and how similar the events surrounding those two processes are.[7][8] According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red",[9] so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance (insensitivity to sample size);[10] Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.[11] The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.[12]
The gambler's fallacy can also be attributed to the mistaken belief that gambling (or even chance itself) is a fair process that can correct itself in the event of streaks, otherwise known as the just-world hypothesis.[13] Other researchers believe that individuals with an internal locus of control (that is, people who believe that gambling outcomes are the result of their own skill) are more susceptible to the gambler's fallacy because they reject the idea that chance could overcome skill or talent.[14]
Variations of the gambler's fallacy
Some researchers believe that there are actually two types of gambler's fallacy: Type I and Type II. Type I is the "classic" gambler's fallacy, in which individuals believe that a certain outcome is "due" after a long streak of another outcome. Type II gambler's fallacy, as defined by Gideon Keren and Charles Lewis, occurs when a gambler underestimates how many observations are needed to detect a favorable outcome (such as watching a roulette wheel for a length of time and then betting on the numbers that appear most often). Detecting a bias that will lead to a favorable outcome takes an impractically large amount of time and is very difficult, if not impossible, to do; therefore people fall prey to the Type II gambler's fallacy.[15] The two types differ in that Type I wrongly assumes that gambling conditions are fair and perfect, while Type II assumes that the conditions are biased and that this bias can be detected after a certain amount of time.
Another variety, known as the retrospective gambler's fallacy, occurs when individuals judge that a seemingly rare event must come from a longer sequence than a more common event does. For example, people believe that an imaginary sequence of die rolls is more than three times as long when a set of three 6's is observed as opposed to when there are only two 6's. This effect can be observed in isolated instances, or even sequentially. A real-world example: when a teenager becomes pregnant after having unprotected sex, people assume that she has been engaging in unprotected sex for longer than someone who has been engaging in unprotected sex and is not pregnant.[16]
Relationship to hot-hand fallacy
Another psychological perspective states that gambler's fallacy can be seen as the counterpart to basketball's
Hot-hand fallacy. In the hot-hand fallacy, people tend to predict the same outcome of the last event (positive
recency) - that a high scorer will continue to score. In gambler's fallacy, however, people predict the opposite
outcome of the last event (negative recency) - that, for example, since the roulette wheel has landed on black the last
six times, it is due to land on red the next. Ayton and Fischer have theorized that people display positive recency for
the hot-hand fallacy because the fallacy deals with human performance, and that people do not believe that an
inanimate object can become "hot."
[17]
Human performance is not perceived as "random," and people are more likely
to continue streaks when they believe that the process generating the results is nonrandom.
[6]
Usually, when a person
exhibits the gambler's fallacy, they are more likely to exhibit the hot-hand fallacy as well, suggesting that one
construct is responsible for the two fallacies.
[18]
The difference between the two fallacies is also represented in economic decision-making. A study by Huber,
Kirchler, and Stockl (2010) examined how the hot hand and the gambler's fallacy are exhibited in the financial
market. The researchers gave their participants a choice: they could either bet on the outcome of a series of coin
tosses, use an "expert" opinion to sway their decision, or choose a risk-free alternative instead for a smaller financial
reward. Participants turned to the "expert" opinion to make their decision 24% of the time based on their past
experience of success, which exemplifies the hot-hand. If the expert was correct, 78% of the participants chose the
expert's opinion again, as opposed to 57% doing so when the expert was wrong. The participants also exhibited the
gambler's fallacy, with their selection of either heads or tails decreasing after noticing a streak of that outcome. This
experiment helped bolster Ayton and Fischer's theory that people put more faith in human performance than they do
in seemingly random processes.
[19]
Neurophysiology
While the representativeness heuristic and other cognitive biases are the most commonly cited cause of the gambler's
fallacy, research suggests that there may be a neurological component to it as well. Functional magnetic resonance
imaging has revealed that, after losing a bet or gamble (a "risk-loss"), the frontoparietal network of the brain is activated, resulting in more risk-taking behavior. In contrast, there is decreased activity in the amygdala, caudate and ventral striatum after a risk-loss. Activation in the amygdala is negatively correlated with gambler's fallacy: the more activity exhibited in the amygdala, the less likely an individual is to fall prey to the gambler's fallacy. These results
suggest that gambler's fallacy relies more on the prefrontal cortex (responsible for executive, goal-directed
processes) and less on the brain areas that control affective decision-making.
The desire to continue gambling or betting is controlled by the striatum, which supports a choice-outcome
contingency learning method. The striatum processes the errors in prediction and the behavior changes accordingly.
After a win, the positive behavior is reinforced and after a loss, the behavior is conditioned to be avoided. In
individuals exhibiting the gambler's fallacy, this choice-outcome contingency method is impaired, and they continue to take risks after a series of losses.
[20]
Possible solutions
The gambler's fallacy is a deep-seated cognitive bias and therefore very difficult to eliminate. For the most part,
educating individuals about the nature of randomness has not proven effective in reducing or eliminating any
manifestation of the gambler's fallacy. Participants in an early study by Beach and Swensson (1967) were shown a
shuffled deck of index cards with shapes on them, and were told to guess which shape would come next in a
sequence. The experimental group of participants was informed about the nature and existence of the gambler's
fallacy, and were explicitly instructed not to rely on "run dependency" to make their guesses. The control group was
not given this information. Even so, the response styles of the two groups were similar, indicating that the
experimental group still based their choices on the length of the run sequence. Clearly, instructing individuals about randomness is not sufficient to lessen the gambler's fallacy.
[21]
It does appear, however, that an individual's susceptibility to the gambler's fallacy decreases with age. Fischbein and
Schnarch (1997) administered a questionnaire to five groups: students in grades 5, 7, 9, 11, and college students
specializing in teaching mathematics. None of the participants had received any prior education regarding
probability. The question was, "Ronni flipped a coin three times and in all cases heads came up. Ronni intends to flip
the coin again. What is the chance of getting heads the fourth time?" The results indicated that the older the students were, the less likely they were to answer with "smaller than the chance of getting tails," which would indicate
a negative recency effect. 35% of the 5th graders, 35% of the 7th graders, and 20% of the 9th graders exhibited the
negative recency effect. Only 10% of the 11th graders answered this way, however, and none of the college students
did. Fischbein and Schnarch therefore theorized that an individual's tendency to rely on the representativeness
heuristic and other cognitive biases can be overcome with age.
[22]
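The normatively correct answer to the question, that the fourth flip is still a 50/50 proposition, can be checked with a minimal simulation (the trial count is arbitrary):

```python
import random

# Estimate P(heads on the 4th flip | the first three flips were heads)
# for a fair coin by conditioning on runs of three heads.
trials, heads_on_fourth = 0, 0
while trials < 100_000:
    flips = [random.random() < 0.5 for _ in range(4)]
    if all(flips[:3]):            # keep only sequences starting with three heads
        trials += 1
        heads_on_fourth += flips[3]
print(f"P(heads on 4th | 3 heads) ~ {heads_on_fourth / trials:.3f}")   # ~0.500
```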
Another possible solution that could be seen as more proactive comes from Roney and Trick, Gestalt psychologists
who suggest that the fallacy may be eliminated as a result of grouping. When a future event (e.g. a coin toss) is
described as part of a sequence, no matter how arbitrarily, a person will automatically consider the event as it relates
to the past events, resulting in the gambler's fallacy. When a person considers every event as independent, however,
the fallacy can be greatly reduced.
[23]
In their experiment, Roney and Trick told participants that they were betting on either two blocks of six coin tosses,
or on two blocks of seven coin tosses. The fourth, fifth, and sixth tosses all had the same outcome, either three heads
or three tails. The seventh toss was grouped with either the end of one block, or the beginning of the next block.
Participants exhibited the strongest gambler's fallacy when the seventh trial was part of the first block, directly after
the sequence of three heads or tails. Additionally, the researchers pointed out how insidious the fallacy can be: the participants who did not show the gambler's fallacy were less confident in their bets and bet fewer times than the participants who picked "with" the gambler's fallacy. However, when the seventh trial was grouped with the second
block (and was therefore perceived as not being part of a streak), the gambler's fallacy did not occur.
Roney and Trick argue that a solution to gambler's fallacy could be, instead of teaching individuals about the nature
of randomness, training people to treat each event as if it is a beginning and not a continuation of previous events.
This would prevent people from gambling when they are losing in the vain hope that their chances of winning are
due to increase.
References
[1] Lehrer, Jonah (2009). How We Decide. New York: Houghton Mifflin Harcourt. p. 66. ISBN 978-0-618-62011-1.
[2] "Fallacy Files" blog (http://www.fallacyfiles.org/gamblers.html): What happened at Monte Carlo in 1913.
[3] Martin Gardner, Entertaining Mathematical Puzzles, Dover Publications, pp. 69–70.
[4] Barron, G. and Leider, S. (2010). The role of experience in the gambler's fallacy. Journal of Behavioral Decision Making, 23, 117–129.
[5] O'Neill, B. and Puza, B.D. (2004). Dice have no memories but I do: A defence of the reverse gambler's belief. (http://cbe.anu.edu.au/research/papers/pdf/STAT0004WP.pdf) Reprinted in abridged form as O'Neill, B. and Puza, B.D. (2005). In defence of the reverse gambler's belief. The Mathematical Scientist 30(1), pp. 13–16.
[6] Burns, B.D. and Corpus, B. (2004). Randomness and inductions from streaks: "Gambler's fallacy" versus "hot hand". Psychonomic Bulletin and Review, 11, 179–184.
[7] Tversky, Amos; Kahneman, Daniel (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
[8] Tversky, Amos; Kahneman, Daniel (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105–110. doi:10.1037/h0031322.
[9] Tversky & Kahneman, 1974.
[10] Tune, G.S. (1964). "Response preferences: A review of some relevant literature". Psychological Bulletin 61 (4): 286–302. doi:10.1037/h0048618. PMID 14140335.
[11] Tversky & Kahneman, 1971.
[12] Gilovich, Thomas (1991). How We Know What Isn't So. New York: The Free Press. pp. 16–19. ISBN 0-02-911706-2.
[13] Rogers, P. (1998). The cognitive psychology of lottery gambling: A theoretical review. Journal of Gambling Studies, 14, 111–134.
[14] Sundali, J. and Croson, R. (2006). Biases in casino betting: The hot hand and the gambler's fallacy. Judgment and Decision Making, 1, 1–12.
[15] Keren, G. and Lewis, C. (1994). The two fallacies of gamblers: Type I and Type II. Organizational Behavior and Human Decision Processes, 60, 75–89.
[16] Oppenheimer, D.M. and Monin, B. (2009). The retrospective gambler's fallacy: Unlikely events, constructing the past, and multiple universes. Judgment and Decision Making, 4, 326–334.
[17] Ayton, P.; Fischer, I. (2004). "The hot hand fallacy and the gambler's fallacy: Two faces of subjective randomness?". Memory and Cognition 32: 1369–1378.
[18] Sundali, J.; Croson, R. (2006). "Biases in casino betting: The hot hand and the gambler's fallacy". Judgment and Decision Making 1: 1–12.
[19] Huber, J.; Kirchler, M.; Stockl, T. (2010). "The hot hand belief and the gambler's fallacy in investment decisions under risk". Theory and Decision 68: 445–462.
[20] Xue, G.; Lu, Z.; Levin, I.P.; Bechara, A. (2011). "An fMRI study of risk-taking following wins and losses: Implications for the gambler's fallacy". Human Brain Mapping 32: 271–281.
[21] Beach, L.R.; Swensson, R.G. (1967). "Instructions about randomness and run dependency in two-choice learning". Journal of Experimental Psychology 75: 279–282.
[22] Fischbein, E.; Schnarch, D. (1997). "The evolution with age of probabilistic, intuitively based misconceptions". Journal for Research in Mathematics Education 28: 96–105.
[23] Roney, C.J.; Trick, L.M. (2003). "Grouping and gambling: A gestalt approach to understanding the gambler's fallacy". Canadian Journal of Experimental Psychology 57: 69–75.
Hindsight bias
Hindsight bias, also known as the knew-it-all-along effect or creeping determinism, is the inclination to see
events that have already occurred as being more predictable than they were before they took place.
[1]
It is a
multifaceted phenomenon that can affect different stages of designs, processes, contexts, and situations.
[2]
Hindsight
bias may cause memory distortion, where the recollection and reconstruction of content can lead to false theoretical
outcomes. It has been suggested that the effect can cause extreme methodological problems while trying to analyze,
understand, and interpret results in experimental studies. A basic example of the hindsight bias is when, after
viewing the outcome of a potentially unforeseeable event, a person believes he or she "knew it all along." Such
examples are present in the writings of historians describing outcomes of battles, physicians recalling clinical trials,
and in judicial systems trying to attribute responsibility and predictability of accidents.
[3]
History
The hindsight bias, although not hitherto named as such, was not a new concept when it emerged in psychological
research in the 1970s. In fact it had been indirectly described numerous times by historians, philosophers and
physicians.
[3]
In 1973 Baruch Fischhoff attended a seminar where Paul E. Meehl observed that clinicians often overestimate their ability to have foreseen the outcome of a particular case, as they claim to have known it all along. [4] Fischhoff, a psychology graduate student at the time, saw an opportunity in psychological research to explain these observations.
[4]
In the early seventies, investigation of heuristics and biases was a large area
of study in psychology, led by Amos Tversky and Daniel Kahneman.
[4]
Two heuristics developed by Tversky and Kahneman were of immediate
importance in the development of the hindsight bias, and these were the
availability heuristic and the representativeness heuristic.
[5]
In an
elaboration of these heuristics, Beyth and Fischhoff devised the first
experiment directly testing the hindsight bias.
[6]
They asked participants to judge the likelihood of several outcomes of U.S. President Richard Nixon's upcoming visit to Peking (now romanized as Beijing) and Moscow. Some time after President Nixon's return, participants were asked to recall, or reconstruct, the probabilities they had assigned to each possible outcome, and the likelihoods they recalled were greater, or overestimated, for events that actually had occurred.
[6]
This study is frequently referred to in definitions of the hindsight bias, and the title of the paper, "I knew it would happen", may have contributed to the hindsight bias being used interchangeably with the term "knew-it-all-along hypothesis".
In 1975 Fischhoff developed another method for investigating the hindsight bias, which at the time was referred to as
the "creeping determinism hypothesis".
[3]
This method involves giving participants a short story with four possible outcomes, one of which they are told is true; they are then asked to assign the likelihood of each particular outcome.
[3]
Participants frequently assign a higher likelihood of occurrence to whichever outcome they have been
told is true.
[3]
Remaining relatively unmodified, this method is still used in psychological and behavioural
experiments investigating aspects of the hindsight bias. Having evolved from the heuristics of Tversky and
Kahneman into the creeping determinism hypothesis and finally into the hindsight bias as we now know it, the
concept has many practical applications and is still at the forefront of research today. Recent studies involving the
hindsight bias have investigated the effect age has on the bias, how hindsight may impact interference and confusion,
and how it may affect banking and investment strategies.
[7][8][9]
Function
The hindsight bias is defined as a tendency to change a recollection from an original thought to something different
because of newly provided information.
[10]
Since 1973, when Fischhoff started the hindsight bias research, there has
been a focus on two main explanations of the bias: distorted event probabilities and distorted memory for judgments
of factual knowledge.
[11]
In tests for hindsight bias a person is asked to remember a specific event from the past or
recall some descriptive information that they had been tested on earlier. In between the first test and final test they
are given the correct information about the event or knowledge. At the final test, people report that they knew the answer all along, when in fact they have changed their answer to fit the correct information they were given after the initial test.
that the person is familiar with) and hypothetical situations (made up events where the person must imagine being
involved). More recently it has been found that hindsight bias also exists in recall with visual material.
[11]
When tested on initially blurry images, subjects who learn what the true image was after the fact then remember having seen a clear, recognizable picture.
Cognitive models
To understand how a person can so easily change the foundation of knowledge and belief for events after receiving new information, three cognitive models of hindsight bias have been reviewed.
[12]
The three models are SARA
(Selective Activation and Reconstructive Anchoring), RAFT (Reconstruction After Feedback with Take the Best)
and CMT (Causal Model Theory). SARA and RAFT focus on distortions or changes in a memory process while
CMT focuses on probability judgments of hindsight bias.
The SARA model explains hindsight bias for descriptive information in memory and hypothetical situations and was created by Rüdiger Pohl and associates. [12][13] SARA assumes that people have a set of images to draw their memories from. They suffer from the hindsight bias due to selective activation or biased sampling of that set of images. Basically, people remember only small, select amounts of information, and when asked to recall it at a later time they use that biased image to support their own opinions about the situation. The set of images is originally processed in the brain when first experienced. When remembered, this image is reactivated and becomes open to editing and alteration, which is what happens in hindsight bias when new and correct information is presented: the person comes to believe that this new information, when remembered at a later time, is their original memory. Due to this reactivation in the brain, a more permanent memory trace can be created. The new information acts as a memory anchor, causing retrieval impairment.
[14]
The RAFT model [15] explains hindsight bias with comparisons of objects, using knowledge-based probabilities and then applying interpretations to those probabilities. [12] When given two choices, a person will recall the information on both topics and make assumptions based on how reasonable they find the information to be. An example would be comparing two cities to determine which is larger. If one city is well known (e.g. for a popular sporting team) while the other is not as recognizable, the person's mental cues for the more popular city will increase. They will then 'take the best' option in their assessment of their own probabilities: having recognized a city because of its sports team, they assume that city is the more populated. 'Take the best' refers to the cue that is viewed as most valid and becomes support for the person's interpretations. RAFT is a by-product of adaptive learning: feedback information updates a person's knowledge base. This can leave a person unable to retrieve the initial information, since the information cue has been replaced by a cue that they thought was more fitting. The 'best' cue has been replaced, and the person remembers only the answer that seems most likely, believing that they thought this was the best point the whole time. [12]
Both SARA and RAFT descriptions include a memory trace impairment or cognitive distortion that is caused by
feedback of information and reconstruction of memory.
CMT is a non-formal theory based on work by many researchers to create a collaborative process model for
hindsight bias that involves event outcomes.
[12]
People try to make sense of an event that has not turned out how
they expected by creating causal reasoning for the starting event conditions. This can give that person the idea that
the event outcome was inevitable and there was nothing that could take place to prevent it from happening. CMT can
be caused by a discrepancy between a person's expectation of the event and the reality of the outcome. They
consciously want to make sense of what has happened and selectively retrieve memory that supports the current
outcome. The causal attribution can be motivated by wanting to feel more positive about the outcome and possibly
themselves.
[16]
Whether people are lying or genuinely tricking themselves into believing that they knew the right answer, these models show that memory distortions and personal bias play a role.
Memory distortions
Hindsight bias has similarities to other memory distortions such as misinformation effect and false autobiographical
memory.
[10]
Misinformation effect occurs after an event is witnessed; new information received after the fact
influences how the person remembers the event, and can be called post-event misinformation. This is an important
issue with eyewitness testimony. False autobiographical memory takes place when suggestions or additional outside
information is provided to distort and change memory of events; this can also lead to false memory syndrome. At
times this can lead to creation of new memories that are completely false and have not taken place. All three of these
memory distortions contain a three-stage procedure.
[10]
The details of each procedure are different but can result in
some psychological manipulation and alteration of memory. Stage one differs between the three paradigms, although all involve an event: an event that has taken place (misinformation effect), an event that has not taken place (false autobiographical memory), or a judgment made by a person about an event that must be remembered (hindsight bias). Stage two consists of more information that is received by the person after the event has taken
place. The new information given in hindsight bias is correct and presented up front to the person, while the extra
information for the other two memory distortions is wrong and presented in an indirect and possibly manipulative
way. The third stage consists of recalling the starting information. The person must recall the original information
with hindsight bias and misinformation effect while a person that has a false autobiographical memory is expected to
remember the incorrect information as a true memory.
[10]
For a false autobiographical memory to be created, the person must believe a memory that is not real. To seem real,
the information given must be influenced by their own personal judgments. There is no real episode of an event to
remember, so this memory construction must be logical to that person's knowledge base. Hindsight bias and
misinformation effect recall a specific time and event; this is called an episodic memory process.
[10]
These two
memory distortions both use memory-based mechanisms that involve a memory trace that has been changed.
Hippocampus activation takes place when an episodic memory is recalled.
[17]
The memory is then available for
alteration by new information. The person believes that the remembered information is the original memory trace,
not an altered memory. This new memory is made from accurate information and therefore the person does not have
much motivation to admit they were wrong originally by remembering the original memory. This can lead to
motivated forgetting.
Motivated forgetting
Following a negative outcome of a situation, people do not want to accept blame. Instead of accepting their role in the
event, they view themselves as caught up in a situation that was unforeseeable and therefore they are not the culprit,
which is referred to as defensive processing, or view the situation as inevitable and that there was nothing that could
be done to prevent it, which is retroactive pessimism.
[18]
Defensive processing involves less hindsight bias, as the person plays ignorant of the event. Retroactive pessimism makes use of hindsight bias after a negative, unwanted
outcome. Events in life can be hard to control or predict. It is no surprise that people want to view themselves in a
more positive light and do not want to take responsibility for situations they could have altered. This leads to
hindsight bias in the form of retroactive pessimism to inhibit upward counterfactual thinking, instead interpreting the
outcome as succumbing to an inevitable fate.
[19]
This memory inhibition, preventing a person from recalling what
really happened, may lead to failure to accept one's mistakes and therefore to be unable to learn and grow to prevent
a similar mistake from taking place in the future.
[18]
Hindsight bias can also lead to overconfidence in one's decisions
without considering other options.
[20]
Such people see themselves as persons who remember correctly, even though
they are just forgetting that they were wrong. Avoiding responsibility is common among the human population.
Examples will be discussed below to show the regularity and severity of hindsight bias in society.
Elimination
Research shows that people still exhibit the bias even when they are informed about it.
[21]
Researchers' attempts to decrease the bias in participants have failed, leading one to think that hindsight bias has an automatic source in
cognitive reconstruction. This supports the Causal Model Theory and the use of sense-making to understand event
outcomes.
[12]
The only observable way to decrease hindsight bias in testing is to increase accountability of the
participant's answer.
[20]
Related disorders
Schizophrenia
Schizophrenia is an example of a disorder that directly affects the hindsight bias. The hindsight bias has a stronger
effect on schizophrenic individuals compared to individuals from the general public.
[22]
The hindsight bias effect is a paradigm that demonstrates how recently acquired knowledge influences the
recollection of past information. Recently acquired knowledge has a strange but strong influence on schizophrenic individuals in relation to information previously learned. New information, combined with the weak influence of past reality-based memories, can disconfirm the behaviour and delusional beliefs that typify patients suffering from schizophrenia. [22] This can cause faulty memory, which can lead to hindsight thinking and believing that they know something they don't.
[22]
Delusion-prone individuals suffering from schizophrenia can falsely jump to
conclusions.
[23]
Jumping to conclusions can lead to hindsight, which strongly influences the delusional conviction in
schizophrenic individuals.
[23]
In numerous studies, cognitive functional deficits in schizophrenic individuals impair
their ability to represent and uphold contextual processing.
[24]
Post-traumatic stress disorder
Post-traumatic stress disorder is the re-experiencing and avoidance of trauma-related stressors, emotions and memories from a past event or events that have a dramatizing cognitive impact on an individual.
[25]
PTSD can be attributed to the functional impairment of the prefrontal cortex (PFC) structure. Dysfunctions of
cognitive processing of context and abnormalities that PTSD patients suffer from can affect hindsight thinking such
as in combat soldiers perceiving they could have altered outcomes of events in war.
[26]
The Prefrontal Cortex (PFC)
and dopamine (DA) systems are parts of the brain that can be responsible for the impairment in cognitive control
processing of context information. The PFC is well known for controlling the thought process involved in hindsight bias, the belief that something would happen when it evidently did not. Brain impairment in certain brain regions can also affect the
thought process of an individual who may engage in hindsight thinking.
[27]
Cognitive flashbacks and other associated features from a traumatic event can trigger severe stress and negative
emotions such as unpardonable guilt. For example, studies were done on trauma-related guilt characteristics of war veterans with chronic PTSD.
[28]
Although there has been limited research, significant data suggest that hindsight bias, in terms of guilt and responsibility from traumatic events of war, has an effect on war veterans' personal
perception of wrongdoing. They blame themselves and in hindsight, perceive that they could have prevented what
happened.
Examples
Health care system
Accidents are prone to happen in any human undertaking, but accidents occurring within the health care system seem
more salient and severe due to their profound effect on the lives of those involved, sometimes resulting in the death
of a patient. Hindsight bias has been shown to be a disadvantage of nearly all methods of measuring error and
adverse events within the healthcare system.
[29]
These methods include morbidity and mortality conferences and
autopsy, case analysis, medical malpractice claims analysis, staff interviews and even patient observation.
Furthermore, studies of injury or death rates as a result of error and virtually all incident review procedures used in
healthcare today fail to control for hindsight bias, severely limiting the generalizability and integrity of the
research.
[30]
Physicians who are primed with a possible diagnosis before evaluating the symptoms of a patient
themselves are more likely to arrive at the primed diagnosis than physicians who were only given the symptoms of
the patient.
[31]
According to Harvard Medical Practice Studies, 44,000–98,000 deaths in the United States each year
are a result of safety incidents within the healthcare system.
[29]
Many of these deaths are viewed to be preventable
after the fact, clearly indicating the presence and importance of a hindsight bias in this field.
Judicial system
Hindsight bias results in litigants being held to a higher standard in court. The defense is particularly susceptible to these
effects since their actions are the ones being scrutinized by the jury. Due to the hindsight bias, defendants will be
judged as being capable of preventing the bad outcome.
[32]
Though much stronger for the defendants, hindsight bias
also affects the plaintiffs. In cases where there is an assumption of risk, hindsight bias may contribute to the jurors
perceiving the event as riskier due to the poor outcome. This may lead the jury to feel that the plaintiff should have
exercised greater caution in the situation. Both of these effects can be minimized if attorneys put the jury in a
position of foresight rather than hindsight through the use of language and timelines. Encouraging people to
explicitly think about the counterfactuals was an effective means of reducing the hindsight bias.
[33]
In other words,
people became less attached to the actual outcome and were more open to consider alternative lines of reasoning
prior to the event. Judges involved in fraudulent transfer litigation cases were subject to the hindsight bias as well,
resulting in an unfair advantage for the plaintiff.
[34]
This shows that jurors are not the only ones sensitive to the
effects of the hindsight bias in the courtroom.
References
[1] Hoffrage, U., & Pohl, R. (2003). Hindsight Bias: A Special Issue of Memory. Champlain, NY: Psychology Press.
[2] Rudiger, F. (2007). Ways to Assess Hindsight Bias. Social Cognition, 25(1), 14–31.
[3] Fischhoff, B. (2003). Hindsight ≠ foresight: the effect of outcome knowledge on judgement under uncertainty. Qual Saf Health Care, 12, 304–312.
[4] Fischhoff, B. (2007). An early history of hindsight research. Social Cognition, 25, 10–13.
[5] Tversky, A., Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.
[6] Fischhoff, B., and Beyth, R. (1975). "I knew it would happen": Remembered probabilities of once-future things. Organizational Behaviour and Human Performance, 13, 1–16.
[7] Bernstein, D. M., Erdfelder, E., Meltzoff, A. N., Peria, W., Loftus, G. R. (2011). Hindsight bias from 3 to 95 years of age. J Exp Psychol Learn Mem Cogn, 2, 378–391.
[8] Marks, A. Z., and Arkes, H. R. (2010). The effects of mental contamination on the hindsight bias: Source confusion determines success in disregarding knowledge. J. Behav. Dec. Making, 23, 131–160.
[9] Biasi, B., Weber, M. (2009). Hindsight bias, risk perception and investment performance. Journal of Management Science, 55, 1018–1029.
[10] Mazzoni, G., & Vannucci, M. (2007). Hindsight bias, the misinformation effect, and false autobiographical memories. Social Cognition, 25(1), 203–220.
[11] Blank, H., Musch, J., & Pohl, R. F. (2007). Hindsight Bias: On Being Wise After the Event. Social Cognition, 25(1), 1–9.
[12] Blank, H., & Nestler, S. (2007). Cognitive Process Models of Hindsight Bias. Social Cognition, 25(1), 132–147.
[13] Pohl, R. F., Eisenhauer, M., & Hardt, O. (2003). SARA: A Cognitive Process Model to Simulate the Anchoring Effect and Hindsight Bias. Memory, 11, 337–356.
[14] Loftus, E. F. (1991). Made in Memory: Distortions in Recollection After Misleading Information. The Psychology of Learning and Motivation, 25, 187–215. New York: Academic Press.
[15] Hertwig, R., Fenselow, C., & Hoffrage, U. (2003). Hindsight Bias: Knowledge and Heuristics Affect our Reconstruction of the Past. Memory, 11, 357–377.
[16] Nestler, S., Blank, H., & von Collani, G. (2008). A Causal Model Theory of Creeping Determinism. Social Psychology, 39(3), 182–188.
[17] Nadel, L., Hupbach, A., Hardt, O., & Gomez, R. (2008). Episodic Memory: Reconsolidation. In Dere, D., Easton, A., Nadel, L., & Huston, J. P. (Eds.), Handbook of Episodic Memory (pp. 43–56). The Netherlands: Elsevier.
[18] Pezzo, M., & Pezzo, S. P. (2007). Making Sense of Failure: A Motivated Model of Hindsight Bias. Social Cognition, 25(1), 147–165.
[19] Tykocinski, O. E., & Steinberg, N. (2005). Coping with disappointing outcomes: Retroactive pessimism and motivated inhibition of counterfactuals. Journal of Experimental Social Psychology, 41, 551–558.
[20] Arkes, H., Faust, D., Guilmette, T. J., & Hart, K. (1988). Eliminating the Hindsight Bias. Journal of Applied Psychology, 73(2), 305–307.
[21] Pohl, R. F., & Hell, W. (1996). No Reduction in Hindsight Bias after Complete Information and Repeated Testing. Organizational Behaviour and Human Decision Processes, 67(1), 49–58.
[22] Woodward, T. S., Moritz, S., Arnold, M. M., Cuttler, C., Whitman, J. C., Lindsay, S. (2006). Increased hindsight bias in schizophrenia. Neuropsychology, 20, 462–467.
[23] Freeman, D., Pugh, K., Garety, P. A. (2008). Jumping to conclusions and paranoid ideation in the general population. Schizophrenia Research, 102, 254–260.
[24] Holmes, A. J., MacDonald, A. III, Carter, C. S., Barch, D. M., Stenger, V. A., & Cohen, J. D. (2005). Prefrontal functioning during context processing in schizophrenia and major depression: An event-related fMRI study. Schizophrenia Research, 76(2–3), 199–206.
[25] Brewin, C., Dalgleish, R., & Joseph, S. (1996). A dual representation theory of posttraumatic stress disorder. Psychological Review, 103(4), 670–686.
[26] Richert, K. A., Carrion, V. G., Karchemskiy, A., and Reiss, A. L. (2006). Regional differences of the prefrontal cortex in pediatric PTSD: an MRI study. Depression and Anxiety, 23, 17–25.
[27] Braver, Todd S.; Barch, Deanna M.; Keys, Beth A.; Carter, Cameron S.; Cohen, Jonathan D.; Kaye, Jeffrey A.; Janowsky, Jeri S.; Taylor, Stephan F.; Yesavage, Jerome A.; Mumenthaler, Martin S.; Jagust, William J.; Reed, Bruce R. (2001). Context processing in older adults: Evidence for a theory relating cognitive control to neurobiology in healthy aging. Journal of Experimental Psychology, 130(4), 746–763.
[28] Beckham, Jean C., Feldman, Michelle E., Kirby, Angela C. (1998). Atrocities Exposure in Vietnam Combat Veterans with Chronic Posttraumatic Stress Disorder: Relationship to Combat Exposure, Symptom Severity, Guilt, and Interpersonal Violence. Journal of Traumatic Stress, 11, 777–785.
[29] Hurwitz, B., & Sheikh, A. (2009). Healthcare Errors and Patient Safety. Hoboken, NJ: Blackwell Publishing.
[30] Carayon, P. (2007). Handbook of Human Factors and Ergonomics in Healthcare and Patient Safety. Hoboken, NJ: Wiley Publishing.
[31] Arkes, H. R., Saville, P. D., Harkness, A. R. (1981). Hindsight bias among physicians weighing the likelihood of diagnoses. Journal of Applied Psychology, 66, 252–254.
[32] Starr, V. H., & McCormick, M. (2001). Jury Selection (Third Edition). Aspen Law and Business.
[33] Peterson, R. L. (2007). Inside the Investor's Brain: The Power of Mind over Money. Hoboken, NJ: Wiley Publishing.
[34] Simkovic, M., & Kaminetzky, B. (2010). Leveraged Buyout Bankruptcies, the Problem of Hindsight Bias, and the Credit Default Swap Solution. Seton Hall Public Research Paper, August 29, 2010.
External links
Excerpt from: David G. Myers, Exploring Social Psychology. New York: McGraw-Hill, 1994, pp. 15–19. (http://csml.som.ohio-state.edu/Music829C/hindsight.bias.html) (More discussion of Paul Lazarsfeld's experimental questions.)
Forecasting (Macro and Micro) and Future Concepts (http://www.cxoadvisory.com/gurus/Fisher/article/) Ken Fisher on Market Analysis (4/7/06)
Iraq War Naysayers May Have Hindsight Bias (http://www.washingtonpost.com/wp-dyn/content/article/2006/10/01/AR2006100100784.html). Shankar Vedantam. Washington Post.
Why Hindsight Can Damage Foresight (http://www.forecasters.org/pdfs/foresight/free/Issue17_Goodwin.pdf). Paul Goodwin. Foresight: The International Journal of Applied Forecasting, Spring 2010.
Hostile media effect
The hostile media effect, sometimes called the hostile media phenomenon, refers to the finding that people with
strong biases toward an issue (partisans) perceive media coverage as biased against their opinions, regardless of the
reality. Proponents of the hostile media effect argue that this finding cannot be attributed to the presence of bias in
the news reports, since partisans from opposing sides of an issue rate the same coverage as biased against their side
and biased in favor of the opposing side.
[1]
The phenomenon was first proposed and studied experimentally by
Robert Vallone, Lee Ross and Mark Lepper.
[1][2]
Studies
In the first major study of this phenomenon,
[1]
pro-Palestinian students and pro-Israeli students at Stanford
University were shown the same news filmstrips pertaining to the then-recent (1982) Sabra and Shatila massacre of
Palestinian refugees by Christian Lebanese militia fighters in Beirut during the Lebanese Civil War. On a number of
objective measures, both sides found that these identical news clips were slanted in favor of the other side.
Pro-Israeli students reported seeing more anti-Israel references and fewer favorable references to Israel in the news
report and pro-Palestinian students reported seeing more anti-Palestinian references, and so on. Both sides said a
neutral observer would have a more negative view of their side from viewing the clips, and that the media would
have excused the other side where it blamed their side.
It is important to note that the two sides were not asked questions about subjective generalizations about the media
coverage as a whole, such as what might be expressed as "I thought that the news has been generally biased against
this side of the issue." Instead, when viewing identical news clips, subjects differed along partisan lines on simple,
objective criteria such as the number of references to a given subject. The research suggests the hostile media effect
is not just a difference of opinion but a difference of perception (selective perception).
Studies have also found hostile media effects related to other political conflicts, such as strife in Bosnia [3] and in U.S. presidential elections. [4]
This effect is interesting to psychologists because it appears to be a reversal of the
otherwise pervasive effects of confirmation bias: in this area, people seem to pay more attention to information that
contradicts rather than supports their existing views. This is an example of disconfirmation bias.
An oft-cited forerunner to the Vallone et al. study was conducted by Albert Hastorf and Hadley Cantril in 1954.
[5]
Princeton and Dartmouth students were shown a filmstrip of a controversial Princeton-Dartmouth football game.
Asked to count the number of infractions committed by both sides, students at both universities "saw" many more
infractions committed by the opposing side, in addition to making different generalizations about the game. Hastorf
and Cantril concluded that "there is no such 'thing' as a 'game' existing 'out there' in its own right which people
merely 'observe.' ... For the 'thing' simply is not the same for different people whether the 'thing' is a football game, a
presidential candidate, Communism, or spinach."
[6]
References
[1] Vallone, R.P., Ross, L., & Lepper, M.R. (1985). The hostile media phenomenon: Biased perception and perceptions of media bias in coverage of the "Beirut Massacre". (http://www.ssc.wisc.edu/~jpiliavi/965/hwang.pdf) Journal of Personality and Social Psychology, 49, 577–585. Summary (http://faculty.babson.edu/krollag/org_site/soc_psych/vallone_beirut.html).
[2] Vallone, R.E., Lepper, M.R., & Ross, L. (1981). Perceptions of media bias in the 1980 presidential election. Unpublished manuscript, Stanford University. As cited in Vallone, Ross & Lepper, 1985.
[3] Matheson, K. & Dursun, S. (2001). Social identity precursors to the hostile media phenomenon: Partisan perceptions of coverage of the Bosnian conflict. (http://gpi.sagepub.com/cgi/reprint/4/2/116) Group Processes and Intergroup Relations, 4, 117–126.
[4] Dalton, R. J.; Beck, P. A.; Huckfeldt, R. (1998). "Partisan Cues and the Media: Information Flows in the 1992 Presidential Election". American Political Science Review 92 (1): 111–126. JSTOR 2585932.
[5] Hastorf, A. H.; Cantril, H. (1954). "They Saw a Game: A Case Study". Journal of Abnormal and Social Psychology 49 (1): 129–134. doi:10.1037/h0057880.
[6] Hastorf & Cantril (1954), pp. 132–133. Emphasis as in original.
External links
Ohio State: Think Political News Is Biased? Depends Who You Ask (http://researchnews.osu.edu/archive/talkbias.htm)
Cancelling Each Other Out? Interest Group Perceptions of the News Media (http://www.cjc-online.ca/index.php/journal/article/view/960/866)
Public Perceptions of Bias in the News Media: Taking A Closer Look at the Hostile Media Phenomenon (http://www.uky.edu/AS/PoliSci/Peffley/pdf/MediaBiasMidwest2001_4-04-01_.PDF) (PDF)
Hyperbolic discounting
In economics, hyperbolic discounting is a time-inconsistent model of discounting.
Given two similar rewards, humans show a preference for one that arrives sooner rather than later. Humans are said
to discount the value of the later reward, by a factor that increases with the length of the delay. This process is
traditionally modeled in the form of exponential discounting, a time-consistent model of discounting. A large number of studies have since demonstrated that the constant discount rate assumed in exponential discounting is systematically violated.
[1]
Hyperbolic discounting is a particular mathematical model devised as an improvement over
exponential discounting. Hyperbolic discounting has been observed in humans and animals.
In hyperbolic discounting, valuations fall very rapidly for small delay periods, but then fall slowly for longer delay
periods. This contrasts with exponential discounting, in which valuation falls by a constant factor per unit delay,
regardless of the total length of the delay. The standard experiment used to reveal a test subject's hyperbolic
discounting curve is to compare short-term preferences with long-term preferences. For instance: "Would you prefer
a dollar today or three dollars tomorrow?" or "Would you prefer a dollar in one year or three dollars in one year and
one day?" For certain range of offerings, a significant fraction of subjects will take the lesser amount today, but will
gladly wait one extra day in a year in order to receive the higher amount instead.
[2]
Individuals with such preferences
are described as "present-biased".
Individuals using hyperbolic discounting reveal a strong tendency to make choices that are inconsistent over
time: they make choices today that their future self would prefer not to make, despite using the same reasoning.
This dynamic inconsistency happens because the value of future rewards is much lower under hyperbolic
discounting than under exponential discounting.
[3]
Observations
The phenomenon of hyperbolic discounting is implicit in Richard Herrnstein's "matching law," which states that
when dividing their time or effort between two non-exclusive, ongoing sources of reward, most subjects allocate in
direct proportion to the rate and size of rewards from the two sources, and in inverse proportion to their delays. That
is, subjects' choices "match" these parameters.
After the report of this effect in the case of delay,
[4]
George Ainslie pointed out that in a single choice between a
larger, later and a smaller, sooner reward, inverse proportionality to delay would be described by a plot of value by
delay that had a hyperbolic shape, and that when the larger, later reward is preferred, this preference can be reversed
by reducing both rewards' delays by the same absolute amount. That is, for values of x for which under current
conditions it would be obviously rational to prefer x dollars in (n + 1) days over one dollar in n days (e.g., x = 3), a
large subset of the population would (rationally) prefer the former alternative given large values of n, but even
among this subset, a large (sub-)subset would (irrationally) prefer one dollar in n days when n = 0. Ainslie
demonstrated the predicted reversal to occur among pigeons.
[5]
A large number of subsequent experiments have confirmed that spontaneous preferences by both human and
nonhuman subjects follow a hyperbolic curve rather than the conventional, "exponential" curve that would produce
consistent choice over time.
[6][7]
For instance, when offered the choice between $50 now and $100 a year from now,
many people will choose the immediate $50. However, given the choice between $50 in five years or $100 in six
years almost everyone will choose $100 in six years, even though that is the same choice seen at five years' greater
distance.
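A small numeric sketch of this reversal, using the hyperbolic discount factor 1/(1 + kD) defined below under "Formal model"; the discount parameter k = 1 per year is an illustrative assumption:

```python
def present_value(amount, delay_years, k=1.0):
    """Hyperbolically discounted value: amount / (1 + k * delay)."""
    return amount / (1 + k * delay_years)

# Immediate choice: $50 now at least matches $100 in a year ...
print(present_value(50, 0), present_value(100, 1))      # 50.0 vs 50.0
# ... but with both options pushed five years out, the preference reverses:
print(present_value(50, 5), present_value(100, 6))      # ~8.33 vs ~14.29
```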
Hyperbolic discounting has also been found to relate to real-world examples of self-control. Indeed, a variety of
studies have used measures of hyperbolic discounting to find that drug-dependent individuals discount delayed
consequences more than matched nondependent controls, suggesting that extreme delay discounting is a fundamental
behavioral process in drug dependence.
[8][9][10]
Some evidence suggests pathological gamblers also discount delayed
outcomes at higher rates than matched controls.
[11]
Whether high rates of hyperbolic discounting precede addictions
or vice-versa is currently unknown, although some studies have reported that high-rate discounting rats are more
likely to consume alcohol
[12]
and cocaine
[13]
than lower-rate discounters. Likewise, some have suggested that
high-rate hyperbolic discounting makes unpredictable (gambling) outcomes more satisfying.
[14]
The degree of discounting is vitally important in describing hyperbolic discounting, especially in the discounting of
specific rewards such as money. The discounting of monetary rewards varies across age groups due to the varying
discount rate.
[6]
The rate depends on a variety of factors, including the species being observed, age, experience, and
the amount of time needed to consume the reward.
[15][16]
Criticism
An article from 2003 noted that the evidence might be better explained by a similarity heuristic than by hyperbolic
discounting.
[17]
Similarly, a 2011 paper criticized the existing studies for mostly using data collected from university
students and being too quick to conclude that the hyperbolic model of discounting is correct.
[18]
A study by Daniel Read introduces "subadditive discounting": the fact that discounting over a delay increases if the
delay is divided into smaller intervals. This hypothesis may explain the main finding of many studies in support of
hyperbolic discounting, the observation that impatience declines with time, while also accounting for observations
not predicted by hyperbolic discounting.
[19]
Mathematical model
Step-by-step explanation
Suppose that in a study, participants are offered the choice between taking x dollars immediately or taking y dollars n
days later. Suppose further that one participant in that study employs exponential discounting and another employs
hyperbolic discounting.
Each participant will realize that a) s/he should take x dollars immediately if s/he can invest that amount in a different
venture that will yield more than y dollars n days later and b) s/he will be indifferent between the choices (selecting
one at random) if the best available alternative will likewise yield y dollars n days later. (Assume, for the sake of
simplicity, that the values of all available investments are compounded daily.) Each participant correctly understands
the fundamental question being asked: "For any given value of y dollars and n days, what is the minimum amount of
money, i.e., the minimum value for x dollars, that I should be willing to accept? In other words, how many dollars
would I need to invest today to get y dollars n days from now?" Each will take x dollars if x is greater than the
answer that s/he calculates, and each will take y dollars n days from now if x is smaller than that answer. However,
the methods that they use to calculate that amount and the answers that they get will be different, and only the
exponential discounter will use the correct method and get a reliably correct result:
The exponential discounter will think "The best alternative investment available (that is, the best investment available in the absence of this choice) gives me a return of r percent per day; in other words, once a day it adds to its value r percent of the value that it had the previous day. That is, every day it multiplies its value once by (100% + r%). So if I hold the investment for n days, its value will have multiplied itself by this amount n times, making that value (100% + r%)^n of what it was at the start, that is, (1 + r%)^n times what it was at the start. So to figure out how much I would need to start with today to get y dollars n days
from now, I need to divide y dollars by ([1 + r%]^n). If my other choice of how much money to take is greater
than this result, then I should take the other amount, invest it in the other venture that I have in mind, and get even
more at the end. If this result is greater than my other choice, then I should take y dollars n days from now,
because it turns out that by giving up the other choice I am essentially investing that smaller amount of money to
get y dollars n days from now, meaning that I'm getting an even greater return by waiting n days for y dollars,
making this my best available investment."
The hyperbolic discounter, however, will think "If I want y dollars n days from now, then the amount that I need
to invest today is y divided by n, because that amount times n equals y dollars. [There lies the hyperbolic
discounter's error.] If my other choice is greater than this result, I should take it instead, because x times n will be greater than y; if it is less than this result, then I should wait n days for y dollars."
Where the exponential discounter reasons correctly and the hyperbolic discounter goes wrong is that as n becomes
very large, the value of ([1 + r%]^n) becomes much larger than the value of n, with the effect that the value of (y/[1 + r%]^n) becomes much smaller than the value of (y/n). Therefore, the minimum value of x (the number of dollars in the immediate choice) that suffices to be greater than that amount will be much smaller than the hyperbolic discounter thinks, with the result that s/he will perceive x-values in the range from (y/[1 + r%]^n) to (y/n) (inclusive
at the low end) as being too small and, as a result, irrationally turn those alternatives down when they are in fact the
better investment.
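The gap between the two rules above can be made concrete with a short sketch; the values of y, r, and n are illustrative assumptions:

```python
# Minimum immediate amount x that each discounter should demand in place
# of y dollars in n days, following the two reasoning styles above.
y, r = 1000.0, 0.01           # assumed: $1000 later, 1% available daily return

def exponential_threshold(n):
    return y / (1 + r) ** n   # correct compound-interest reasoning

def hyperbolic_threshold(n):
    return y / n              # the flawed "y divided by n" rule from the text

for n in (10, 100, 1000, 2000):
    print(f"n={n:>4}: exponential >= {exponential_threshold(n):12.6g}, "
          f"hyperbolic >= {hyperbolic_threshold(n):8.4g}")
# For large n, (1 + r)**n dwarfs n, so the correct threshold falls far below
# y/n -- exactly the range of offers the hyperbolic discounter wrongly rejects.
```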
Formal model
[Figure: Comparison of the discount factors of hyperbolic and exponential discounting, with the same value of k in both cases. Hyperbolic discounting is shown to value future assets higher than exponential discounting.]
Hyperbolic discounting is mathematically described as:

f(D) = 1 / (1 + kD)

where f(D) is the discount factor that multiplies the value of the reward, D is the delay in the reward, and k is a parameter governing the degree of discounting. This is compared with the formula for exponential discounting:

f(D) = exp(-kD)
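A minimal sketch of the two discount factors just defined, with an illustrative k = 0.1 per period:

```python
import math

def hyperbolic(D, k=0.1):
    return 1 / (1 + k * D)

def exponential(D, k=0.1):
    return math.exp(-k * D)

# With the same k, the hyperbolic factor stays well above the exponential one
# at long delays, as in the figure caption above.
for D in (0, 1, 5, 10, 50, 100):
    print(f"D={D:>3}: hyperbolic={hyperbolic(D):.3f}  exponential={exponential(D):.5f}")
```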
Simple derivation
If f(n) = 2^(-n) is taken as an exponential discounting function and g(n) = 1/(1 + n) as a hyperbolic function (with n the number of weeks), then the exponential discounting a week later from "now" (n = 0) is f(1)/f(0) = 1/2, and the exponential discounting a week from week n is f(n + 1)/f(n) = 1/2, which means they are the same. For g(n), g(1)/g(0) = 1/2, which is the same as for f, while g(n + 1)/g(n) = (1 + n)/(2 + n). From this one can see that the two types of discounting are the same "now", but when n is much greater than 1, for instance 52 (one year), (1 + n)/(2 + n) will tend to go to 1, so that the hyperbolic discounting of a week in the far future is virtually zero, while the exponential is still 1/2.
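The same derivation can be checked numerically; the sketch below prints the one-week discount ratios for both functions:

```python
def f(n):                     # exponential: halves each week
    return 2.0 ** -n

def g(n):                     # hyperbolic
    return 1.0 / (1 + n)

for n in (0, 1, 4, 52):
    print(f"week {n:>2}: f(n+1)/f(n) = {f(n + 1) / f(n):.2f}, "
          f"g(n+1)/g(n) = {g(n + 1) / g(n):.2f}")
# f's ratio is always 0.50; g's ratio starts at 0.50 but approaches 1 (0.98 at n=52).
```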
Quasi-hyperbolic approximation
The "quasi-hyperbolic" discount function, proposed by Laibson (1997),
[3]
approximates the hyperbolic discount
function above in discrete time by
and
where and are constants between 0 and 1; and again D is the delay in the reward, and f(D) is the discount factor.
The condition f(0) = 1 is stating that rewards taken at the present time are not discounted.
Quasi-hyperbolic time preferences are also referred to as "beta-delta" preferences. They retain much of the analytical
tractability of exponential discounting while capturing the key qualitative feature of discounting with true
hyperbolas.
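A minimal sketch of the beta-delta function; the parameter values β = 0.7 and δ = 0.95 are illustrative assumptions:

```python
def quasi_hyperbolic(D, beta=0.7, delta=0.95):
    """Beta-delta discount factor: 1 at D = 0, beta * delta**D thereafter."""
    return 1.0 if D == 0 else beta * delta ** D

# The one-off beta penalty hits every delayed reward equally, producing the
# present bias that true hyperbolas exhibit:
for D in (0, 1, 2, 10):
    print(f"D={D:>2}: factor = {quasi_hyperbolic(D):.3f}")
```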
Explanations
Uncertain risks
Notice that whether discounting future gains is rational or not, and at what rate such gains should be discounted, depends greatly on circumstances. Many examples exist in the financial world, for example, where it is reasonable to assume that there is an implicit risk that the reward will not be available at the future date, and furthermore that this risk increases with time. Consider: paying $50 for dinner today or delaying payment for sixty years but paying $100,000. In this case, the restaurateur would be reasonable to discount the promised future value, as there is significant risk that it might not be paid (e.g. due to the death of the restaurateur or the diner).

Uncertainty of this type can be quantified with Bayesian analysis. [20] For example, suppose that the probability for the reward to be available after time t is, for known hazard rate λ,

P(R|t) = exp(-λt),

but the rate is unknown to the decision maker. If the prior probability distribution of λ is the exponential density

p(λ) = exp(-λ/k)/k,

then the decision maker will expect that the probability of the reward after time t is

P(R|t) = ∫ exp(-λt) exp(-λ/k)/k dλ (integrated over λ from 0 to ∞) = 1/(1 + kt),

which is exactly the hyperbolic discount rate. Similar conclusions can be obtained from other plausible distributions for λ. [20]
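A numeric spot-check of this result, using a crude Riemann sum for the average over the prior; the values of k, t, and the integration grid are arbitrary choices:

```python
import math

def expected_survival(t, k, d_lam=1e-4, lam_max=50.0):
    """Average exp(-lam*t) over the exponential prior p(lam) = exp(-lam/k)/k."""
    total, lam = 0.0, 0.0
    while lam < lam_max:
        total += math.exp(-lam * t) * math.exp(-lam / k) / k * d_lam
        lam += d_lam
    return total

k, t = 0.5, 3.0
print(expected_survival(t, k))   # ~0.4000 (numerical integration)
print(1 / (1 + k * t))           # 0.4     (hyperbolic closed form)
```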
Applications
More recently these observations about discount functions have been used to study saving for retirement, borrowing
on credit cards, and procrastination. It has frequently been used to explain addiction.
[21][22]
Hyperbolic discounting
has also been offered as an explanation of the divergence between privacy attitudes and behaviour.
[23]
Present Values of Annuities
Present Value of a Standard Annuity
The present value of a series of equal annual cash flows in arrears discounted hyperbolically is:

V = P ln(1 + kD) / k

where V is the present value, P is the annual cash flow, D is the number of annual payments and k is the factor governing the discounting.
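A short sketch comparing the closed form reconstructed above with a direct sum of discretely discounted payments; P, D and k are illustrative, and the gap reflects the closed form treating the cash flows as a continuous stream:

```python
import math

def annuity_pv_closed(P, D, k):
    """Closed form from above: V = P * ln(1 + k*D) / k."""
    return P * math.log(1 + k * D) / k

def annuity_pv_sum(P, D, k):
    """Direct sum of D annual payments in arrears, each discounted by 1/(1 + k*t)."""
    return sum(P / (1 + k * t) for t in range(1, D + 1))

P, D, k = 1000, 20, 0.1
print(annuity_pv_closed(P, D, k))   # ~10986
print(annuity_pv_sum(P, D, k))      # ~10660
```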
References
[1] Frederick, Shane; Loewenstein, George; O'Donoghue, Ted (2002). "Time Discounting and Time Preference: A Critical Review". Journal of Economic Literature 40 (2): 351–401. doi:10.1257/002205102320161311.
[2] Thaler, R. H. (1981). "Some Empirical Evidence on Dynamic Inconsistency". Economic Letters 8 (3): 201–207. doi:10.1016/0165-1765(81)90067-7.
[3] Laibson, David (1997). "Golden Eggs and Hyperbolic Discounting". Quarterly Journal of Economics 112 (2): 443–477. doi:10.1162/003355397555253.
[4] Chung, S. H.; Herrnstein, R. J. (1967). "Choice and delay of Reinforcement". Journal of the Experimental Analysis of Behavior 10 (1): 67–74. doi:10.1901/jeab.1967.10-67.
[5] Ainslie, G. W. (1974). "Impulse control in pigeons". Journal of the Experimental Analysis of Behavior 21 (3): 485–489. doi:10.1901/jeab.1974.21-485.
[6] Green, L.; Fry, A. F.; Myerson, J. (1994). "Discounting of delayed rewards: A life span comparison". Psychological Science 5 (1): 33–36. doi:10.1111/j.1467-9280.1994.tb00610.x.
[7] Kirby, K. N. (1997). "Bidding on the future: Evidence against normative discounting of delayed rewards". Journal of Experimental Psychology: General 126 (1): 54–70. doi:10.1037/0096-3445.126.1.54.
[8] Bickel, W. K.; Johnson, M. W. (2003). "Delay discounting: A fundamental behavioral process of drug dependence". In Loewenstein, G.; Read, D.; Baumeister, R. F. Time and Decision. New York: Russell Sage Foundation. ISBN 0-87154-549-7.
[9] Madden, G. J.; Petry, N. M.; Bickel, W. K.; Badger, G. J. (1997). "Impulsive and self-control choices in opiate-dependent patients and non-drug-using control participants: Drug and monetary rewards". Experimental and Clinical Psychopharmacology 5: 256–262. PMID 9260073.
[10] Vuchinich, R. E.; Simpson, C. A. (1998). "Hyperbolic temporal discounting in social drinkers and problem drinkers". Experimental and Clinical Psychopharmacology 6 (3): 292–305. doi:10.1037/1064-1297.6.3.292.
[11] Petry, N. M.; Casarella, T. (1999). "Excessive discounting of delayed rewards in substance abusers with gambling problems". Drug and Alcohol Dependence 56 (1): 25–32. doi:10.1016/S0376-8716(99)00010-1.
[12] Poulos, C. X.; Le, A. D.; Parker, J. L. (1995). "Impulsivity predicts individual susceptibility to high levels of alcohol self-administration". Behavioral Pharmacology 6 (8): 810–814. doi:10.1097/00008877-199512000-00006.
[13] Perry, J. L.; Larson, E. B.; German, J. P.; Madden, G. J.; Carroll, M. E. (2005). "Impulsivity (delay discounting) as a predictor of acquisition of i.v. cocaine self-administration in female rats". Psychopharmacology 178 (2–3): 193–201. doi:10.1007/s00213-004-1994-4. PMID 15338104.
[14] Madden, G. J.; Ewan, E. E.; Lagorio, C. H. (2007). "Toward an animal model of gambling: Delay discounting and the allure of unpredictable outcomes". Journal of Gambling Studies 23 (1): 63–83. doi:10.1007/s10899-006-9041-5.
[15] Loewenstein, G.; Prelec, D. (1992). Choices Over Time. New York: Russell Sage Foundation. ISBN 0-87154-558-6.
[16] Raineri, A.; Rachlin, H. (1993). "The effect of temporal constraints on the value of money and other commodities". Journal of Behavioral Decision-Making 6 (2): 77–94. doi:10.1002/bdm.3960060202.
[17] Rubinstein, Ariel (2003). "Economics and Psychology? The Case of Hyperbolic Discounting". International Economic Review 44 (4): 1207–1216.
[18] Andersen, Steffen; Harrison, Glenn W.; Lau, Morten; Rutström, E. Elisabet (2011). "Discounting Behavior: A Reconsideration".
[19] Read, Daniel (2001). "Is time-discounting hyperbolic or subadditive?". Journal of Risk and Uncertainty 23 (1): 5–32. doi:10.1023/A:1011198414683.
[20] Sozou, P. D. (1998). "On hyperbolic discounting and uncertain hazard rates". Proceedings of the Royal Society B: Biological Sciences 265 (1409): 2015. doi:10.1098/rspb.1998.0534.
[21] O'Donoghue, T.; Rabin, M. (1999). "Doing it now or later". The American Economic Review 89: 103–124.
[22] O'Donoghue, T.; Rabin, M. (2000). "The economics of immediate gratification". Journal of Behavioral Decision Making 13: 233–250.
[23] Acquisti, Alessandro; Grossklags, Jens (2004). "Losses, Gains, and Hyperbolic Discounting: Privacy Attitudes and Privacy Behavior". In Camp, J.; Lewis, R. The Economics of Information Security. Kluwer. pp. 179–186.
Further reading
Ainslie, G. W. (1975). "Specious reward: A behavioral theory of impulsiveness and impulsive control". Psychological Bulletin 82 (4): 463–496. doi:10.1037/h0076860. PMID 1099599.
Ainslie, G. (1992). Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person. Cambridge: Cambridge University Press.
Ainslie, G. (2001). Breakdown of Will. Cambridge: Cambridge University Press. ISBN 978-0-521-59694-7.
Rachlin, H. (2000). The Science of Self-Control. Cambridge; London: Harvard University Press.
Illusion of control
The illusion of control is the tendency for people to overestimate their ability to control events, for instance to feel
that they control outcomes that they demonstrably have no influence over.
[1]
The effect was named by psychologist
Ellen Langer and has been replicated in many different contexts.
[2]
It is thought to influence gambling behavior and
belief in the paranormal.
[3]
Along with illusory superiority and optimism bias, the illusion of control is one of the
positive illusions.
The illusion is more common in familiar situations, and in situations where the person knows the desired outcome.
[4]
Feedback that emphasizes success rather than failure can increase the effect, while feedback that emphasizes failure
can decrease or reverse the effect.
[5]
The illusion is weaker for depressed individuals and is stronger when
individuals have an emotional need to control the outcome.
[4]
The illusion is strengthened by stressful and
competitive situations, including financial trading.
[6]
Though people are likely to overestimate their control when the
situations are heavily chance-determined, they also tend to underestimate their control when they actually have it,
which runs contrary to some theories of the illusion and its adaptiveness.
[7]
The illusion might arise because people lack direct introspective insight into whether they are in control of events.
This has been called the introspection illusion. Instead they may judge their degree of control by a process that is
often unreliable. As a result, they see themselves as responsible for events when there is little or no causal link.
Demonstration
The illusion of control is demonstrated by three converging lines of evidence: 1) laboratory experiments, 2) observed
behavior in familiar games of chance such as lotteries, and 3) self-reports of real-world behavior.
[8]
One kind of laboratory demonstration involves two lights marked "Score" and "No Score". Subjects have to try to
control which one lights up. In one version of this experiment, subjects could press either of two buttons.
[9]
Another
version had one button, which subjects decided on each trial to press or not.
[10]
Subjects had a variable degree of
control over the lights, or none at all, depending on how the buttons were connected. The experimenters made clear
that there might be no relation between the subjects' actions and the lights.
[10]
Subjects estimated how much control
they had over the lights. These estimates bore no relation to how much control they actually had, but were related to
how often the "Score" light lit up. Even when their choices made no difference at all, subjects confidently reported
exerting some control over the lights.
[10]
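As a sketch of how objective control in such button-pressing tasks can be quantified (a hypothetical simulation, not data from the cited studies), the contingency ΔP = P(score | press) − P(score | no press) is zero when presses have no effect, no matter how often the "Score" light comes on:

import random

def run_trials(n, p_score=0.7):
    """Zero-contingency task: the 'Score' light is driven by chance
    alone, regardless of whether the subject presses the button."""
    return [(random.random() < 0.5, random.random() < p_score)
            for _ in range(n)]

def delta_p(trials):
    """Objective contingency: P(score | press) - P(score | no press)."""
    press = [s for p, s in trials if p]
    no_press = [s for p, s in trials if not p]
    return sum(press) / len(press) - sum(no_press) / len(no_press)

random.seed(1)
print(delta_p(run_trials(1000)))  # ~0: no actual control, even though
                                  # the "Score" light comes on often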
Ellen Langer's research demonstrated that people were more likely to behave as if they could exercise control in a
chance situation where "skill cues" were present.
[11][12]
By skill cues, Langer meant properties of the situation more
normally associated with the exercise of skill, in particular the exercise of choice, competition, familiarity with the
stimulus and involvement in decisions. One simple form of this effect is found in casinos: when rolling dice in a
craps game people tend to throw harder when they need high numbers and softer for low numbers.
[2][13]
In another experiment, subjects had to predict the outcome of thirty coin tosses. The feedback was rigged so that
each subject was right exactly half the time, but the groups differed in where their "hits" occurred. Some were told
that their early guesses were accurate. Others were told that their successes were distributed evenly through the thirty
trials. Afterwards, they were surveyed about their performance. Subjects with early "hits" overestimated their total
successes and had higher expectations of how they would perform on future guessing games.
[2][12]
This result
resembles the irrational primacy effect in which people give greater weight to information that occurs earlier in a
series.
[2]
Forty percent of the subjects believed their performance on this chance task would improve with practice,
and twenty-five percent said that distraction would impair their performance.
[2][12]
Another of Langer's experimentsreplicated by other researchersinvolves a lottery. Subjects are either given
tickets at random or allowed to choose their own. They can then trade their tickets for others with a higher chance of
paying out. Subjects who had chosen their own ticket were more reluctant to part with it. Tickets bearing familiar
symbols were less likely to be exchanged than others with unfamiliar symbols. Although these lotteries were
random, subjects behaved as though their choice of ticket affected the outcome.
[11][14]
Another way to investigate perceptions of control is to ask people about hypothetical situations, for example their
likelihood of being involved in a motor vehicle accident. On average, drivers regard accidents as much less likely in
"high-control" situations, such as when they are driving, than in "low-control" situations, such as when they are in
the passenger seat. They also rate a high-control accident, such as driving into the car in front, as much less likely
than a low-control accident such as being hit from behind by another driver.
[8][15][16]
Explanations
Ellen Langer, who first demonstrated the illusion of control, explained her findings in terms of a confusion between
skill and chance situations. She proposed that people base their judgments of control on "skill cues". These are
features of a situation that are usually associated with games of skill, such as competitiveness, familiarity and
individual choice. When more of these skill cues are present, the illusion is stronger.
[5][6][17]
Suzanne Thompson and colleagues argued that Langer's explanation was inadequate to explain all the variations in
the effect. As an alternative, they proposed that judgments about control are based on a procedure that they called the
"control heuristic".
[5][18]
This theory proposes that judgments of control depend on two conditions: an intention to
create the outcome, and a relationship between the action and outcome. In games of chance, these two conditions
frequently go together. As well as an intention to win, there is an action, such as throwing a die or pulling a lever on
a slot machine, which is immediately followed by an outcome. Even though the outcome is selected randomly, the
control heuristic would result in the player feeling a degree of control over the outcome.
[17]
Self-regulation theory offers another explanation. To the extent that people are driven by internal goals concerned
with the exercise of control over their environment, they will seek to reassert control in conditions of chaos,
uncertainty or stress. One way of coping with a lack of real control is to falsely attribute control of the situation
to oneself.
[6]
The core self-evaluations (CSE) trait is a stable personality trait composed of locus of control, neuroticism,
self-efficacy, and self-esteem.
[19]
While those with high core self-evaluations are likely to believe that they control
their own environment (i.e., internal locus of control),
[20]
very high levels of CSE may lead to the illusion of control.
Benefits and costs to the individual
Taylor and Brown have argued that positive illusions, including the illusion of control, are adaptive as they motivate
people to persist at tasks when they might otherwise give up.
[21]
This position is supported by Albert Bandura's claim
that "optimistic self-appraisals of capability, that are not unduly disparate from what is possible, can be
advantageous, whereas veridical judgements can be self-limiting".
[22]
His argument is essentially concerned with the
adaptive effect of optimistic beliefs about control and performance in circumstances where control is possible, rather
than perceived control in circumstances where outcomes do not depend on an individual's behavior.
Bandura has also suggested that:
"In activities where the margins of error are narrow and missteps can produce costly or injurious
consequences, personal well-being is best served by highly accurate efficacy appraisal."
[23]
Taylor and Brown argue that positive illusions are adaptive, since there is evidence that they are more common in
normally mentally healthy individuals than in depressed individuals. However, Pacini, Muir and Epstein have shown
that this may be because depressed people overcompensate for a tendency toward maladaptive intuitive processing
by exercising excessive rational control in trivial situations, and note that the difference with non-depressed people
disappears in more consequential circumstances.
[24]
There is also empirical evidence that high self-efficacy can be maladaptive in some circumstances. In a
scenario-based study, Whyte et al. showed that participants in whom they had induced high self-efficacy were
significantly more likely to escalate commitment to a failing course of action.
[25]
Knee and Zuckerman have
challenged the definition of mental health used by Taylor and Brown and argue that lack of illusions is associated
with a non-defensive personality oriented towards growth and learning and with low ego involvement in
outcomes.
[26]
They present evidence that self-determined individuals are less prone to these illusions. In the late
1970s, Abramson and Alloy demonstrated that depressed individuals held a more accurate view than their
non-depressed counterparts in a test which measured illusion of control.
[27]
This finding held true even when the
depression was manipulated experimentally. However, when replicating the findings, Msetfi et al. (2005, 2007) found
that the overestimation of control in nondepressed people showed up only when the intertrial interval was long
enough, implying that nondepressed people take more aspects of a situation into account than their depressed
counterparts.
[28][29]
Also, Dykman et al. (1989) showed that depressed people believe they have no control in
situations where they actually do, so their perception is not more accurate overall.
[30]
Allan et al. (2007)
proposed that the pessimistic bias of depressives results in "depressive realism" when they are asked to estimate
control, because depressed individuals are more likely to say no even when they do have control.
[31]
A number of studies have found a link between a sense of control and health, especially in older people.
[32]
Fenton-O'Creevy et al.
[6]
argue, as do Gollwitzer and Kinney,
[33]
that while illusory beliefs about control may
promote goal striving, they are not conducive to sound decision-making. Illusions of control may cause insensitivity
to feedback, impede learning and predispose toward greater objective risk taking (since subjective risk will be
reduced by illusion of control).
Applications
Psychologist Daniel Wegner argues that an illusion of control over external events underlies belief in psychokinesis,
a supposed paranormal ability to move objects directly using the mind.
[34]
As evidence, Wegner cites a series of
experiments on magical thinking in which subjects were induced to think they had influenced external events. In one
experiment, subjects watched a basketball player taking a series of free throws. When they were instructed to
visualise him making his shots, they felt that they had contributed to his success.
[35]
One study examined traders working in the City of London's investment banks. They each watched a graph being
plotted on a computer screen, similar to a real-time graph of a stock price or index. Using three computer keys, they
had to raise the value as high as possible. They were warned that the value showed random variations, but that the
keys might have some effect. In fact, the fluctuations were not affected by the keys.
[6][16]
The traders' ratings of their
success measured their susceptibility to the illusion of control. This score was then compared with each trader's
performance. Those who were more prone to the illusion scored significantly lower on analysis, risk management
and contribution to profits. They also earned significantly less.
[6][16][36]
Notes
[1] Thompson 1999, pp. 187, 124
[2] Plous 1993, p. 171
[3] Vyse 1997, pp. 129–130
[4] Thompson 1999, p. 187
[5] Thompson 1999, p. 188
[6] Fenton-O'Creevy, Mark; Nigel Nicholson, Emma Soane, Paul Willman (2003), "Trading on illusions: Unrealistic perceptions of control and trading performance", Journal of Occupational and Organizational Psychology (British Psychological Society) 76: 53–68, doi:10.1348/096317903321208880, ISSN 2044-8325
[7] Gino, Francesca; Zachariah Sharek, Don A. Moore (March 2011). "Keeping the illusion of control under control: Ceilings, floors, and imperfect calibration". Organizational Behavior and Human Decision Processes 114 (2): 104–114. doi:10.1016/j.obhdp.2010.10.002. Retrieved 23 April 2011.
[8] Thompson 2004, p. 116
[9] Jenkins, H. H. & Ward, W. C. (1965). Judgement of contingency between responses and outcomes. Psychological Monographs, 79 (1, Whole No. 79).
[10] Allan, L. G.; Jenkins, H. M. (1980), "The judgment of contingency and the nature of the response alternatives", Canadian Journal of Psychology 34: 1–11
[11] Langer, Ellen J. (1975), "The Illusion of Control", Journal of Personality and Social Psychology 32 (2): 311–328
[12] Langer, Ellen J.; Roth, Jane (1975), "Heads I win, tails it's chance: The illusion of control as a function of the sequence of outcomes in a purely chance task", Journal of Personality and Social Psychology 32 (6): 951–955
[13] Henslin, J. M. (1967), "Craps and magic", American Journal of Sociology 73: 316–330
[14] Thompson 2004, p. 115
[15] McKenna, F. P. (1993), "It won't happen to me: Unrealistic optimism or illusion of control?", British Journal of Psychology (British Psychological Society) 84 (1): 39–50, ISSN 0007-1269
[16] Hardman 2009, pp. 101–103
[17] Thompson 2004, p. 122
[18] Thompson, Suzanne C.; Armstrong, Wade; Thomas, Craig (1998), "Illusions of Control, Underestimations, and Accuracy: A Control Heuristic Explanation", Psychological Bulletin (American Psychological Association) 123 (2): 143–161, doi:10.1037/0033-2909.123.2.143, ISSN 0033-2909, PMID 9522682
[19] Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction: A core evaluations approach. Research in Organizational Behavior, 19, 151–188.
[20] Judge, T. A., Kammeyer-Mueller, J. D. (2011). Implications of core self-evaluations for a changing organizational context. Human Resource Management Review, 21, 331–341.
[21] Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103(2), 193–210.
[22] Bandura, A. (1989), "Human Agency in Social Cognitive Theory", American Psychologist 44 (9): 1175–1184
[23] Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman and Company.
[24] Pacini, R., Muir, F., & Epstein, S. (1998). Depressive realism from the perspective of Cognitive-Experiential Self-Theory. Journal of Personality and Social Psychology, 74(4), 1056–1068.
[25] Whyte, G., Saks, A. & Hook, S. (1997). When success breeds failure: The role of self-efficacy in escalating commitment to a losing course of action. Journal of Organizational Behavior, 18, 415–432.
[26] Knee, C. R., & Zuckerman, M. (1998). A nondefensive personality: Autonomy and control as moderators of defensive coping and self-handicapping. Journal of Research in Personality, 32(2), 115–130.
[27] Abramson, L. Y., & Alloy, L. B. (1980). The judgment of contingency: Errors and their implications. In J. Singer and A. Baum (Eds.), Advances in environmental psychology. Vol. II. New York: Erlbaum.
[28] Msetfi RM, Murphy RA, Simpson J (2007). "Depressive realism and the effect of intertrial interval on judgements of zero, positive, and negative contingencies". The Quarterly Journal of Experimental Psychology 60 (3): 461–481. doi:10.1080/17470210601002595. PMID 17366312.
[29] Msetfi RM, Murphy RA, Simpson J, Kornbrot DE (2005). "Depressive realism and outcome density bias in contingency judgments: the effect of the context and intertrial interval" (http://www.lancs.ac.uk/shm/dhr/publications/janesimpson/depressiverealism.pdf) (PDF).
Journal of Experimental Psychology: General 134 (1): 10–22. doi:10.1037/0096-3445.134.1.10. PMID 15702960.
[30] Dykman, B. M., Abramson, L. Y., Alloy, L. B., Hartlage, S. (1989). "Processing of ambiguous and unambiguous feedback by depressed and nondepressed college students: Schematic biases and their implications for depressive realism". Journal of Personality and Social Psychology 56 (3): 431–445. doi:10.1037/0022-3514.56.3.431. PMID 2926638.
[31] Allan, L. G.; Siegel, S.; Hannah, S. (2007). "The sad truth about depressive realism" (http://psych.mcmaster.ca/hannahsd/pubs/AllanSiegelHannah'07.pdf) (PDF). The Quarterly Journal of Experimental Psychology 60 (3): 482–495. doi:10.1080/17470210601002686. PMID 17366313.
[32] Plous 1993, p. 172
[33] Gollwitzer, P. M.; Kinney, R. F. (1989), "Effects of Deliberative and Implemental Mind-Sets On Illusion of Control", Journal of Personality and Social Psychology 56 (4): 531–542
[34] Wegner, Daniel M. (2008), "Self is Magic" (http://isites.harvard.edu/fs/docs/icb.topic67047.files/2_13_07_Wegner.pdf), in John Baer, James C. Kaufman, Roy F. Baumeister, Are we free?: psychology and free will, New York: Oxford University Press, ISBN 978-0-19-518963-6, retrieved 2008-07-02
[35] Pronin, Emily; Daniel M. Wegner, Kimberly McCarthy, Sylvia Rodriguez (2006), "Everyday Magical Powers: The Role of Apparent Mental Causation in the Overestimation of Personal Influence" (http://www.wjh.harvard.edu/~wegner/pdfs/Pronin, Wegner, McCarthy, & Rodriguez (2006).pdf), Journal of Personality and Social Psychology (American Psychological Association) 91 (2): 218–231, doi:10.1037/0022-3514.91.2.218, ISSN 0022-3514, PMID 16881760, retrieved 2009-07-03
[36] Fenton-O'Creevy, M., Nicholson, N., Soane, E., Willman, P. (2005). Traders: Risks, Decisions, and Management in Financial Markets. ISBN 0-19-926948-3.
References
Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5, OCLC 316403966
Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 978-1-4051-2398-3
Plous, Scott (1993), The Psychology of Judgment and Decision Making, McGraw-Hill, ISBN 978-0-07-050477-6, OCLC 26931106
Thompson, Suzanne C. (1999), "Illusions of Control: How We Overestimate Our Personal Influence", Current Directions in Psychological Science (Association for Psychological Science) 8 (6): 187–190, ISSN 0963-7214, JSTOR 20182602
Thompson, Suzanne C. (2004), "Illusions of control", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 115–125, ISBN 978-1-84169-351-4, OCLC 55124398
Vyse, Stuart A. (1997), Believing in Magic: The Psychology of Superstition, Oxford University Press US, ISBN 0-19-513634-9
Further reading
Fast, Nathanael J.; Gruenfeld, Deborah H.; Sivanathan, Niro; Galinsky, Adam D. (2009). "Illusory Control: A Generative Force Behind Power's Far-Reaching Effects". Psychological Science 20 (4): 502–508. doi:10.1111/j.1467-9280.2009.02311.x. ISSN 0956-7976.
Illusion of validity
The illusion of validity is a cognitive bias described by Amos Tversky and Daniel Kahneman in which consistent
evidence persistently leads to confident predictions even after the predictive value of the evidence has been
discredited.
[1]
Kahneman describes this as the first cognitive bias he identified: while evaluating officer candidates for the
Israel Defense Forces using a test he knew to be nearly worthless, he still found it compelling to make strong
predictions from the test results.
[2]
In one study, subjects reported higher confidence in a prediction of the final grade point average of a student after
seeing a first-year record of consistent B's than after seeing a first-year record of an even mix of A's and
C's.
[3]
Consistent patterns may be observed when input variables are highly redundant or correlated, which may increase
subjective confidence. However, a number of highly correlated inputs should not increase confidence much more
than any one of the inputs alone; instead, higher confidence is merited when a number of highly independent
inputs show a consistent pattern.
[4]
For example, some studies have shown a high degree of correlation between IQ and SAT scores,
[5]
so once you
know someone's SAT score, knowing their IQ as well would not add much information and should increase
confidence only a little; and, to whatever degree SAT scores correlate with GPA, GPA could be predicted
from an IQ score nearly as effectively as from an SAT score.
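A small simulation sketch of this point (illustrative numbers only, not data from the cited studies): when two inputs are highly correlated, adding the second barely improves prediction, so it should barely raise confidence.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A latent ability drives both test scores; GPA also depends on it.
ability = rng.normal(size=n)
sat = ability + 0.3 * rng.normal(size=n)  # highly correlated with ability
iq = ability + 0.3 * rng.normal(size=n)   # ... and therefore with SAT
gpa = ability + 1.0 * rng.normal(size=n)

def r_squared(columns, y):
    """R^2 of an ordinary least-squares fit of y on the given columns."""
    X = np.column_stack([np.ones(len(y))] + list(columns))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

print(np.corrcoef(sat, iq)[0, 1])   # ~0.92: the two inputs are redundant
print(r_squared([sat], gpa))        # ~0.46 with SAT alone
print(r_squared([sat, iq], gpa))    # ~0.48: adding IQ changes little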
The illusion of validity may be caused in part by confirmation bias
[6]
and/or the representativeness heuristic and
could in turn cause the overconfidence effect.
[7]
References
Kahneman on YouTube: Explorations of the Mind: Intuition
[8]
[1] Tversky, Amos; Daniel Kahneman (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
[2] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. pp. 209–211.
[3] Tversky & Kahneman, 1974
[4] Tversky & Kahneman, 1974
[5] Frey, Meredith C.; Douglas K. Detterman (2004). "Scholastic Assessment or g?". Psychological Science 15 (6): 373–378. doi:10.1111/j.0956-7976.2004.00687.x. PMID 15147489.
[6] Einhorn, Hillel; Robyn M. Hogarth (1978). "Confidence in judgment: Persistence of the illusion of validity". Psychological Review 85 (5): 395–416.
[7] Einhorn & Hogarth, 1978
[8] http://www.youtube.com/watch?v=dddFfRaBPqg
Illusory correlation
Illusory correlation is the phenomenon of seeing a relationship between variables (typically people, events, or
behaviors) even when no such relationship exists. A common example of this phenomenon is when people
form false associations between membership in a statistical minority group and rare (typically negative) behaviors,
because variables that are novel or deviant tend to capture attention.
[1]
This is one way stereotypes form and endure.
David Hamilton and Terrence Rose (1980) found that stereotypes can lead people to expect certain groups and traits
to fit together, and then to overestimate the frequency with which these correlations actually occur.
[2]
History
"Illusory correlation" was originally coined by Chapman and Chapman (1967) to describe people's tendencies to
overestimate relationships between two groups when distinctive and unusual information is presented.
[3][4]
The
concept was used to question claims about objective knowledge in clinical psychology through the Chapmans'
refutation of many clinicians' widely-used Wheeler signs for homosexuality in Rorschach tests.
[5]
Example
David Hamilton and Robert Gifford (1976) conducted a series of experiments that demonstrated how stereotypic
beliefs regarding minorities could derive from illusory correlation processes.
[6]
To test their hypothesis, Hamilton
and Gifford had research participants read a series of sentences describing either desirable or undesirable behaviors,
which were attributed to either Group A or Group B.
[3]
Abstract groups were used so that no previously established
stereotypes would influence results. Most of the sentences were associated with Group A, and the remaining few
were associated with Group B.
[6]
The following table summarizes the information given.
Behaviors Group A (Majority) Group B (Minority) Total
Desirable 18 (69%) 9 (69%) 27
Undesirable 8 (30%) 4 (30%) 12
Total 26 13 39
Each group had the same proportions of positive and negative behaviors, so there was no real association between
behaviors and group membership. Results of the study show that positive, desirable behaviors were not seen as
distinctive so people were accurate in their associations. On the other hand, when distinctive, undesirable behaviors
were represented in the sentences, the participants overestimated how much the minority group exhibited the
behaviors.
[6]
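A quick arithmetic check (a sketch based on the table above, not part of the original study) confirms that the stimulus set contains no objective association between group and behavior:

# Hamilton and Gifford's stimulus counts from the table above.
desirable = {"A": 18, "B": 9}
undesirable = {"A": 8, "B": 4}

for group in ("A", "B"):
    total = desirable[group] + undesirable[group]
    print(group, round(desirable[group] / total, 3))  # 0.692 for both

# Identical proportions mean zero correlation between group membership
# and behavior; any perceived difference between A and B is illusory.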
A parallel effect occurs when people judge whether two events, such as pain and bad weather, are correlated. They
rely heavily on the relatively small number of cases where the two events occur together, and pay relatively little
attention to the other kinds of observation (of no pain and/or good weather).
[7][8]
Theories
General theory
Most explanations for illusory correlation involve psychological heuristics: information processing short-cuts that
underlie many human judgments.
[9]
One of these is availability: the ease with which an idea comes to mind.
Availability is often used to estimate how likely an event is or how often it occurs.
[10]
This can result in illusory
correlation, because some pairings can come easily and vividly to mind even though they are not especially
frequent.
[9]
Information processing
Martin Hilbert (2012) proposes an information processing mechanism that assumes a noisy conversion of objective
observations into subjective judgments. The theory defines noise as the mixing of these observations during retrieval
from memory.
[11]
According to the model, when noisy memories of otherwise identical objective observations are converted into
subjective judgments, the resulting estimates are regressive: when asked about behavior, participants underestimate
the majority or larger group and overestimate the minority or smaller group, a pattern known as conservatism bias.
These distorted estimates show up as illusory correlations.
Working-memory capacity
In an experimental study done by Eder, Fiedler and Hamm-Eder (2011), the effects of working-memory capacity on
illusory correlations were investigated. They first looked at the individual differences in working memory, and then
looked to see if that had any effect on the formation of illusory correlations. They found that individuals with higher
working memory capacity viewed minority group members more positively than individuals with lower working
memory capacity. In a second experiment, the authors looked into the effects of memory load in working memory on
illusory correlations. They found that increased memory load in working memory led to an increase in the
prevalence of illusory correlations. The experiment was designed to specifically test working memory and not
substantial stimulus memory. This means that the development of illusory correlations was caused by deficiencies in
central cognitive resources caused by the load in working memory, not selective recall.
[12]
Attention theory of learning
Attention theory of learning proposes that features of majority groups are learned first, and then features of minority
groups. This results in an attempt to distinguish the minority group from the majority, leading to these differences
being learned more quickly. The Attention theory also argues that, instead of forming one stereotype regarding the
minority group, two stereotypes, one for the majority and one for the minority, are formed.
[13]
Learning effects on illusory correlations
A study by Murphy et al. (2011) investigated whether increased learning would have any effect on
illusory correlations. It found that educating people about how illusory correlations occur resulted in a decreased
incidence of illusory correlations.
[14]
Age
Johnson and Jacobs (2003) performed an experiment to see how early in life individuals begin forming illusory
correlations. Children in grades 2 and 5 were exposed to a typical illusory correlation paradigm to see if negative
attributes were associated with the minority group. The authors found that both groups formed illusory
correlations.
[15]
A study performed by Primi and Agnoli (2002) found that children also create illusory correlations. In their
experiment, children in grades 1, 3, 5, and 7, and adults, all looked at the same illusory correlation paradigm. The
study found that children did create significant illusory correlations, but those correlations were weaker than those
created by adults. In a second study, groups of shapes with different colors were used. The formation of illusory
correlations persisted, showing that social stimuli are not necessary for creating these correlations.
[16]
Explicit versus implicit attitudes
Two studies performed by Ratliff and Nosek examined whether or not explicit and implicit attitudes affected illusory
correlations. In one study, Ratliff and Nosek had two groups: one a majority and the other a minority. They then had
three groups of participants, all with readings about the two groups. One group of participants received
overwhelming pro-majority readings, one was given pro-minority readings, and one received neutral readings. The
groups that had pro-majority and pro-minority readings favored their respective pro groups both explicitly and
implicitly. The group that had neutral readings favored the majority explicitly, but not implicitly. The second study
was similar, but instead of readings, pictures of behaviors were shown, and the participants wrote a sentence
describing the behavior they saw in the pictures. The findings of both studies supported the authors'
argument that the differences found between explicit and implicit attitudes are a result of interpreting the
covariation and making judgments based on those interpretations (explicit), instead of just accounting for the
covariation (implicit).
[17]
Paradigm structure
Berndsen et al. (1999) wanted to determine if the structure of testing for illusory correlations could lead to the
formation of illusory correlations. The hypothesis was that identifying test variables as Group A and Group B might
be causing the participants to look for differences between the groups, resulting in the creation of illusory
correlations. An experiment was set up in which one set of participants was told the groups were Group A and Group
B, while another set was told the groups were students who graduated in 1993 or in 1994. This study
found that illusory correlations were more likely to be created when the groups were Group A and B, as compared to
students of the class of 1993 or the class of 1994.
[18]
References
Notes
[1] Pelham, Brett (2007). Conducting Research in Psychology: Measuring the Weight of Smoke. Belmont, CA: Wadsworth Publishing. ISBN 0-534-53294-2. OCLC 70836619.
[2] "Stereotypes" (http://www.colorado.edu/conflict/peace/problem/stereoty.htm).
[3] Whitley & Kite 2010
[4] Chapman, L. (1967). "Illusory correlation in observational report". Journal of Verbal Learning and Verbal Behavior 6 (1): 151–155. doi:10.1016/S0022-5371(67)80066-5. ISSN 0022-5371.
[5] Chapman, Loren J. and Jean P. (1969). "Illusory Correlation as an Obstacle to the Use of Valid Psychodiagnostic Signs". Journal of Abnormal Psychology 74 (3): 271–280.
[6] Hamilton, D.; Gifford, R. (1976). "Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments". Journal of Experimental Social Psychology 12 (4): 392–407. doi:10.1016/S0022-1031(76)80006-6. ISSN 0022-1031.
[7] Kunda 1999, pp. 127–130
[8] Plous 1993, pp. 162–164
[9] Plous 1993, pp. 164–167
[10] Plous 1993, p. 121
[11] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin 138 (2): 211–237; free access to the study at martinhilbert.net/HilbertPsychBull.pdf
[12] Eder, Andreas B.; Fiedler, Klaus; Hamm-Eder, Silke (2011). "Illusory correlations revisited: The role of pseudocontingencies and working-memory capacity". The Quarterly Journal of Experimental Psychology 64 (3): 517–532. doi:10.1080/17470218.2010.509917.
[13] Sherman, Jeffrey; Kruschke, Sherman, Percy, Petrocelli and Conrey (2009). "Attentional processes in stereotype formation: A common model for category accentuation and illusory correlation". Journal of Personality and Social Psychology 96 (2): 305–323. doi:10.1037/a0013778.
[14] Murphy, Robin; Schmeer, Stefanie; Vallée-Tourangeau, Frédéric; Mondragón, Esther; Hilton, Denis (2011). "Making the illusory correlation effect appear and then disappear: The effects of increased learning". The Quarterly Journal of Experimental Psychology 64 (1): 24–40. doi:10.1080/17470218.2010.493615.
[15] Johnston, Kristen E.; Jacobs, J. E. (2003). "Children's Illusory Correlations: The role of attentional bias in group impression formation". Journal of Cognition and Development 4 (2): 129–160.
[16] Primi, Caterina; Agnoli (2002). "Children Correlate infrequent behaviors with minority groups: a case of illusory correlation". Cognitive Development 17: 1105–1131.
[17] Ratliff, Kate A.; Nosek, Brian A. (2010). "Creating distinct implicit and explicit attitudes with an illusory correlation paradigm". Journal of Experimental Social Psychology 46: 721–728. doi:10.1016/j.jesp.2010.04.011.
[18] Berndsen, Mariette; Spears, and van der Pligt (1999). "Determinants of intergroup differentiation in the illusory correlation task". British Journal of Psychology 90: 201–220.
Sources
Kunda, Ziva (1999). Social Cognition: Making Sense of People. MIT Press. ISBN 978-0-262-61143-5. OCLC 40618974.
Plous, Scott (1993). The Psychology of Judgment and Decision Making. McGraw-Hill. ISBN 978-0-07-050477-6. OCLC 26931106.
Whitley, Bernard E.; Kite, Mary E. (2010). The Psychology of Prejudice and Discrimination. Belmont, CA: Wadsworth. ISBN 978-0-495-59964-7. OCLC 695689517.
Information bias (psychology)
Information bias is a type of cognitive bias involving a distorted evaluation of information. It occurs because of
people's curiosity and confusion of goals when trying to choose a course of action.
Over-evaluation of information
An example of information bias is believing that the more information that can be acquired to make a decision, the
better, even if that extra information is irrelevant for the decision.
Examples of information bias are prevalent in medical diagnosis. Subjects in experiments concerning medical
diagnostic problems show an information bias in which they seek information that is unnecessary in deciding the
course of treatment.
Globoma experiment
In an experiment,
[1]
subjects considered this diagnostic problem involving fictitious diseases:
A female patient is presenting symptoms and a history which both suggest a diagnosis of globoma, with about 80%
probability. If it isn't globoma, it's either popitis or flapemia. Each disease has its own treatment which is ineffective
against the other two diseases. A test called the ET scan would certainly yield a positive result if the patient has
popitis, and a negative result if she has flapemia. If the patient has globoma, positive and negative results are
equally likely. If the ET scan was the only test you could do, should you do it? Why or why not?
Many subjects answered that they would conduct the ET scan even if it were costly, and even if it were the only test
that could be done. However, the test in question does not affect the course of treatment: because the probability of
globoma is so high (80%), the patient would be treated for globoma no matter what the test says. Globoma is the most
probable disease before and after the ET scan.
In this example, we can calculate the value of the ET scan. Out of 100 patients, a total of 80 people will have
globoma regardless of whether the ET scan is positive or negative. Since it is equally likely for a patient with
globoma to have a positive or negative ET scan result, 40 people will have a positive ET scan and 40 people will
have a negative ET scan, which totals to 80 people having globoma. This means that a total of 20 people will have
either popitis or flapemia regardless of the result of the ET scan. The number of patients with globoma will always
be greater than the number of patients with popitis or flapemia in either case of a positive or negative ET scan so the
ET scan is useless in determining what disease to treat. The ET scan will indicate that globoma should be treated
regardless of the result.
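The same conclusion can be reached with Bayes' rule; a sketch follows (the problem does not give the split between popitis and flapemia, so an equal 10%/10% split is assumed here purely for illustration):

# Priors; the popitis/flapemia split is an assumption for illustration.
prior = {"globoma": 0.80, "popitis": 0.10, "flapemia": 0.10}
p_positive = {"globoma": 0.5, "popitis": 1.0, "flapemia": 0.0}

for result in ("positive", "negative"):
    unnormalized = {
        d: prior[d] * (p_positive[d] if result == "positive"
                       else 1 - p_positive[d])
        for d in prior
    }
    total = sum(unnormalized.values())
    posterior = {d: round(v / total, 2) for d, v in unnormalized.items()}
    print(result, posterior, "-> treat", max(posterior, key=posterior.get))

# Either way globoma stays at probability 0.8 and remains the disease
# to treat, so the ET scan cannot change the decision.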
References
[1] Baron, J. (1988, 1994, 2000). Thinking and Deciding. Cambridge University Press. (http://www.amazon.com/dp/0521659728)
Insensitivity to sample size
Insensitivity to sample size is a cognitive bias that occurs when people judge the probability of obtaining a sample
statistic without respect to the sample size. For example, in one study subjects assigned the same probability to the
likelihood of obtaining a mean height of above six feet [183 cm] in samples of 10, 100, and 1,000 men. In other
words, variation is more likely in smaller samples, but people may not expect this.
[1]
In another example, Amos Tversky and Daniel Kahneman asked subjects
A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and
in the smaller hospital about 15 babies are born each day. As you know, about 50% of all babies are
boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50%,
sometimes lower.
For a period of 1 year, each hospital recorded the days on which more than 60% of the babies born were
boys. Which hospital do you think recorded more such days?
1. The larger hospital
2. The smaller hospital
3. About the same (that is, within 5% of each other)
[1]
56% of subjects chose option 3, and 22% each chose option 1 or option 2. However, according to
sampling theory the larger hospital is much more likely to report a sex ratio close to 50% on a given day than the
smaller hospital (see the law of large numbers).
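A short simulation sketch of the hospital problem (assuming 45 and 15 births per day, a true boy rate of 50%, and a 365-day year):

import random

def days_above_60_percent(births_per_day, days=365, p_boy=0.5):
    """Count days on which more than 60% of the babies born are boys."""
    count = 0
    for _ in range(days):
        boys = sum(random.random() < p_boy for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            count += 1
    return count

random.seed(0)
print("large hospital:", days_above_60_percent(45))  # roughly 25 days
print("small hospital:", days_above_60_percent(15))  # roughly 55 days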
Relative neglect of sample size was also observed in a study of statistically sophisticated psychologists.
[2]
Tversky and Kahneman explained these results as being caused by the representativeness heuristic, according to
which people intuitively judge samples as having similar properties to their population without taking other
considerations into account. A related bias is the clustering illusion, in which people under-expect streaks or runs in
small samples. Insensitivity to sample size is a subtype of extension neglect.
[3]
References
[1] Tversky, Amos; Daniel Kahneman (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124–1131. doi:10.1126/science.185.4157.1124. PMID 17835457.
[2] Tversky, Amos; Daniel Kahneman (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105–110. doi:10.1037/h0031322.
[3] Kahneman, Daniel (2000). "Evaluation by moments, past and future". In Daniel Kahneman and Amos Tversky (Eds.). Choices, Values and Frames. p. 708.
Just-world hypothesis
The just-world hypothesis (or just-world fallacy) is the cognitive bias that human actions eventually yield morally
fair and fitting consequences, so that, ultimately, noble actions are duly rewarded and evil actions are duly punished.
In other words, the just-world hypothesis is the tendency to attribute consequences to, or expect consequences as the
result of, an unspecified power that restores moral balance; the fallacy is that this implies (often unintentionally) the
existence of such a power in terms of some cosmic force of justice, desert, stability, or order in the universe.
The fallacy popularly appears in the English language in various figures of speech, which often imply a negative
reprisal of justice, such as: "You got what was coming to you," "What goes around comes around," and "You reap
what you sow." This fallacy has been widely studied by social psychologists since Melvin J.
Lerner conducted seminal work on the belief in a just world in the early 1960s.
[1]
Since that time, research has
continued, examining the predictive capacity of the hypothesis in various situations and across cultures, and
clarifying and expanding the theoretical understandings of just world beliefs.
[2]
Emergence
The phenomenon of belief in a just world has been observed and considered by many philosophers and social
theorists. Psychologist Melvin Lerner's work made the just world hypothesis a focus of social psychological
research.
Melvin Lerner
Melvin Lerner was prompted to study justice beliefs and the just world hypothesis in the context of social
psychological inquiry into negative social and societal interactions.
[3]
Lerner saw his work as extending Stanley
Milgram's work on obedience. He sought to answer the questions of how regimes that cause cruelty and suffering
maintain popular support, and how people come to accept social norms and laws that produce misery and
suffering.
[4]
Lerner's inquiry was influenced by repeatedly witnessing the tendency of observers to blame victims for their
suffering. During his clinical training as a psychologist, he observed the treatment of mentally ill persons by the
health care practitioners with whom he worked. Although he knew them to be kindhearted, educated people, they
blamed patients for their own suffering.
[5]
He also describes his surprise at hearing his students derogate the poor, seemingly
oblivious to the structural forces that contribute to poverty.
[3]
In a study he was doing on rewards, he observed that
when one of two men was chosen at random to receive a reward for a task, observers' evaluations were more positive
for the man who had been randomly rewarded than for the man who did not receive a reward.
[6][7]
Existing social
psychological theories, including cognitive dissonance, could not fully explain these phenomena.
[7]
The desire to
understand the processes that caused these observed phenomena led Lerner to conduct his first experiments on what
is now called the just world hypothesis.
Early evidence
In 1966, Lerner and his colleagues began a series of experiments that used shock paradigms to investigate observer
responses to victimization. In the first of these experiments conducted at the University of Kansas, 72 female
subjects were made to watch a confederate receiving electrical shocks under a variety of conditions. Initially,
subjects were upset by observing the apparent suffering of the confederate. However, as the suffering continued and
observers remained unable to intervene, the observers began to derogate the victim. Derogation was greater when the
observed suffering from shock treatments was greater. However, under conditions in which subjects were told that
the victim would receive compensation for her suffering, subjects did not derogate the victim.
[4]
Lerner and
colleagues replicated these findings in subsequent studies, as did other researchers.
[6]
Theory
To explain the findings of these studies, Lerner theorized the prevalence of the belief in a just world. A just world is
one in which actions and conditions have predictable, appropriate consequences. These actions and conditions are
typically individuals' behaviors or attributes. The specific conditions that correspond to certain consequences are
socially determined by the norms and ideologies of a society. Lerner presents the belief in a just world as functional:
it maintains the idea that one can impact the world in a predictable way. Belief in a just world functions as a sort of
"contract" with the world regarding the consequences of behavior. This allows people to plan for the future and
engage in effective, goal-driven behavior. Lerner summarized his findings and his theoretical work in his 1980
monograph The Belief in a Just World: A Fundamental Delusion.
[5]
Lerner hypothesized that the belief in a just world is crucially important for people to maintain for their own
well-being. However, people are confronted daily with evidence that the world is not just: people suffer without
apparent cause. Lerner explained that people use strategies to eliminate threats to their belief in a just world. These
strategies can be rational or irrational. Rational strategies include accepting the reality of injustice, trying to prevent
injustice or provide restitution, and accepting one's own limitations. Non-rational strategies include denial or
withdrawal, and reinterpretation of the event.
There are a few modes of reinterpretation that could make an event fit the belief in a just world. One can reinterpret
the outcome, the cause, and/or the character of the victim. In the case of observing the injustice of the suffering of
innocent others, one major way to rearrange the cognition of an event is to interpret the victim of suffering as
deserving of that suffering.
[1]
Specifically, observers can blame victims for their suffering on the basis of their
behaviors and/or their characteristics. This would result in observers both derogating victims and blaming victims for
their own suffering.
[6]
Much psychological research on the belief in a just world has focused on these negative social
phenomena of victim blaming and victim derogation in different contexts.
[2]
An additional effect of this thinking is that individuals experience less personal vulnerability because they do not
believe they have done anything to deserve or cause negative outcomes.
[2]
This is related to the self-serving bias
observed by social psychologists.
[8]
Many researchers have interpreted just world beliefs as an example of causal attribution. In victim blaming, the
causes of victimization are attributed to an individual rather than a situation. Thus, the consequences of belief in a
just world may be related to or explained in terms of particular patterns of causal attribution.
[9]
Alternatives
Veridical judgment
Others have suggested alternative explanations for the derogation of victims. One suggestion is that derogation
effects are based on accurate judgments of a victim's character. In particular, in relation to Lerner's first studies, some
have hypothesized that it would be logical for observers to derogate an individual who would allow herself to be
shocked without reason.
[10]
A subsequent study by Lerner challenged this alternative hypothesis by showing that
individuals are only derogated when they actually suffer; individuals who agreed to undergo suffering but did not
were viewed positively.
[11]
Guilt reduction
Another alternative explanation offered for the derogation of victims early in the development of the just world
hypothesis is that observers derogate victims to reduce their own feelings of guilt. Observers may feel responsible, or
guilty, for a victim's suffering if they themselves are involved in the situation or experiment. In order to reduce the
guilt, they may devalue the victim.
[12][13][14]
Lerner and colleagues claim that there has not been adequate evidence
to support this interpretation. They conducted one study that found derogation of victims occurred even by observers
who were not implicated in the process of the experiment and thus had no reason to feel guilty.
[6]
Additional evidence
Following Lerner's first studies, other researchers replicated these findings in other settings in which individuals are
victimized. This work, which began in the 1970s and continues today, has investigated how observers react to
victims of random calamities, like traffic accidents, as well as rape and domestic violence, illnesses, and poverty.
[1]
Generally, researchers have found that observers of the suffering of innocent victims tend to both derogate victims
and blame victims for their suffering. Thus, observers maintain their belief in a just world by changing their
cognitions about the character of victims.
[15]
In the early 1970s, social psychologists Zick Rubin and Letitia Anne Peplau developed a measure of belief in a just
world.
[16]
This measure and its revised form published in 1975 allowed for the study of individual differences in just
world beliefs.
[17]
Much of the subsequent research on the just world hypothesis utilized these measurement scales.
Violence
Researchers have looked at how observers react to victims of rape and other violence. In a formative experiment on
rape and belief in a just world by Linda Carli and colleagues, researchers gave two groups of subjects a narrative
about interactions between a man and a woman. The description of the interaction was the same until the end; one
group received a narrative that had a neutral ending and the other group received a narrative that ended with the man
raping the woman. Subjects judged the rape ending as inevitable and blamed the woman in the narrative for the rape
on the basis of her behavior, but not her characteristics.
[18]
These findings have been replicated repeatedly, including
using a rape ending and a 'happy ending' (a marriage proposal).
[19][2]
Other researchers have found a similar phenomenon for judgments of battered partners. One study found that
observers' blaming of female victims of relationship violence increases with the intimacy of the relationship.
Observers blamed the perpetrator only in the most significant case of violence, in which a male struck an
acquaintance.
[20]
Bullying
Researchers have employed the just world hypothesis to help understand bullying. Given other research on beliefs in
a just world, it would be expected that observers would derogate and blame victims of bullying. However, the
opposite has been found: individuals high in just world belief have stronger anti-bullying attitudes.
[21]
Other
researchers have found that strong belief in a just world is associated with lower levels of bullying behavior.
[22]
This
finding is in keeping with Lerner's understanding of belief in a just world as functioning as a "contract" that governs
behavior.
[5]
There is additional evidence that belief in a just world is protective of the well-being of children and
adolescents in the school environment,
[23]
as has been shown for the general population.
Illness
Other researchers have found that observers judge sick people as responsible for their illnesses. One experiment
showed that persons suffering from a variety of illnesses were derogated on a measure of attractiveness more so than
healthy individuals were. Victim derogation was found to be higher for those suffering from more severe illnesses,
except in the case of cancer victims.
[24]
Many studies have looked at derogation of AIDS victims specifically. Higher
beliefs in a just world have been found to be related to greater derogation of AIDS victims.
[25]
Poverty
More recently, researchers have explored how people react to poverty through the lens of the just world hypothesis.
High belief in a just world is associated with blaming the poor, and low belief in a just world is associated with
identifying external causes of poverty including world economic systems, war, and exploitation.
[26][27]
The self as victim
Some research on belief in a just world has examined how people react when they themselves are victimized. An
early paper by researcher Ronnie Janoff-Bulman found that rape victims often engage in blaming their own
behaviors, but not their own characteristics, for their victimization.
[28]
It was hypothesized that this may be because
blaming one's own behaviors makes an event more controllable.
These studies on victims of violence, illness, and poverty and others like them have provided consistent support for
the link between observers' just world beliefs and their tendency to blame victims for their suffering.
[1]
As a result,
the just world hypothesis has become widely accepted as a psychological phenomenon.
Theoretical refinement
Subsequent work on measuring belief in a just world has focused on identifying multiple dimensions of the belief.
This work has resulted in the development of new measures of just world belief and additional research.
[2]
Hypothesized dimensions of just world beliefs include belief in an unjust world,
[29]
beliefs in immanent justice and
ultimate justice,
[30]
hope for justice, and belief in one's ability to reduce injustices.
[31]
Other work has focused on
looking at the different domains in which the belief may function; individuals may have different just world beliefs
for the personal domain, the sociopolitical domain, the social domain, etc.
[25]
An especially fruitful distinction is
between the belief in a just world for the self (personal) and the belief in a just world for others (general). These
distinct beliefs are differentially associated with health.
[32]
Correlates
Researchers have used measures of belief in a just world to look at correlates of high and low levels of belief in a just
world.
Limited studies have examined ideological correlates of the belief in a just world. These studies have found
sociopolitical correlates of just world beliefs, including right-wing authoritarianism and the protestant work
ethic.
[33][34]
Studies have also found belief in a just world to be correlated with aspects of religiousness.
[35][36]
Studies of demographic differences, including gender and racial differences, have not shown systematic gender
differences, but do suggest racial differences, with Black and African Americans having the lowest levels of belief
in a just world.
[37][38]
The development of measures of just world beliefs has also allowed researchers to assess cross-cultural differences
in just world beliefs. Much research conducted shows that beliefs in a just world are evident cross-culturally. One
study tested beliefs in a just world of students in 12 countries. This study found that in countries where the majority
of inhabitants are powerless, belief in a just world tends to be weaker than in other countries.
[39]
This supports the
theory of the just world hypothesis because the powerless have had more personal and societal experiences that have
provided evidence that the world is not just and predictable.
[40]
Current research
Positive mental health effects
Though much of the initial work on belief in a just world focused on its negative social effects, other research
suggests that belief in a just world is good, and even necessary, for the mental health of individuals.
[41]
Belief in a just world is associated with greater life satisfaction and well-being and less
depressive affect.
[32][42]
Researchers are actively exploring reasons that belief in a just world might have these
relationships to mental health; it has been suggested that such beliefs could be a personal resource or coping strategy
that buffers stress associated with daily life and with traumatic events.
[43]
This hypothesis suggests that belief in a
just world can be understood as a positive illusion.
[44]
Correlational studies also showed that beliefs in a just world are correlated with internal locus of control.
[17]
Strong
belief in a just world is associated with greater acceptance of and less dissatisfaction with the negative events in one's
life.
[43]
This may be one pathway through which belief in a just world affects mental health. Others have suggested
that this relationship only holds for beliefs in a just world that apply to the self. Beliefs in a just world that apply to
others are related instead to negative social phenomena of victim blaming and victim derogation observed in other
studies.
[45]
International research
Over forty years after Lerner's seminal work on belief in a just world, researchers continue to study the phenomenon.
Work continues primarily in the United States, Europe, Australia, and Asia.
[7]
Researchers in Germany have
contributed disproportionately to recent research.
[3]
Their work resulted in a volume edited by Lerner and a German
researcher entitled Responses to Victimizations and Belief in a Just World.
[46]
References
[1] Lerner, M.J. & Montada, L. (1998). An Overview: Advances in Belief in a Just World Theory and Methods, in Leo Montada & M.J. Lerner (Eds.), Responses to Victimizations and Belief in a Just World (pp. 1–7). Plenum Press: New York.
[2] Furnham, A. (2003). Belief in a just world: research progress over the past decade. Personality and Individual Differences, 34, 795–817.
[3] Montada, L. & Lerner, M.J. (1998). Preface, in Leo Montada & M.J. Lerner (Eds.), Responses to Victimizations and Belief in a Just World (pp. vii–viii). Plenum Press: New York.
[4] Lerner, M. J., & Simmons, C. H. (1966). Observers' reaction to the innocent victim: Compassion or rejection? Journal of Personality and Social Psychology, 4(2), 203–210.
[5] Lerner, M.J. (1980). The Belief in a Just World: A Fundamental Delusion. Plenum: New York.
[6] Lerner, M. J., & Miller, D. T. (1978). Just world research and the attribution process: Looking back and ahead. Psychological Bulletin, 85(5), 1030–1051.
[7] Maes, J. (1998). Eight Stages in the Development of Research on the Construct of BJW?, in Leo Montada & M.J. Lerner (Eds.), Responses to Victimizations and Belief in a Just World (pp. 163–185). Plenum Press: New York.
[8] Linden, M. & Maercker, A. (2011). Embitterment: Societal, psychological, and clinical perspectives. Wien: Springer.
[9] Howard, J. (1984). Societal influences on attribution: Blaming some victims more than others. Journal of Personality and Social Psychology, 47(3), 494–505.
[10] Godfrey, B. & Lowe, C. (1975). Devaluation of innocent victims: An attribution analysis within the just world paradigm. Journal of Personality and Social Psychology, 31, 944–951.
[11] Lerner, M.J. (1970). The desire for justice and reactions to victims. In J. Macaulay & L. Berkowitz (Eds.), Altruism and helping behavior (pp. 205–229). New York: Academic Press.
[12] Davis, K. & Jones, E. (1960). Changes in interpersonal perception as a means of reducing cognitive dissonance. Journal of Abnormal and Social Psychology, 61, 402–410.
[13] Glass, D. (1964). Changes in liking as a means of reducing cognitive discrepancies between self-esteem and aggression. Journal of Personality, 32, 531–549.
[14] Cialdini, R. B., Kenrick, D. T., & Hoerig, J. H. (1976). Victim derogation in the Lerner paradigm: Just world or just justification? Journal of Personality and Social Psychology, 33(6), 719–724.
[15] Reichle, B., Schneider, A., & Montada, L. (1998). How do observers of victimization preserve their belief in a just world cognitively or actionally? In L. Montada & M. J. Lerner (Eds.), Responses to victimization and belief in a just world (pp. 55–86). New York: Plenum.
[16] Rubin, Z. & Peplau, A. (1973). Belief in a just world and reactions to another's lot: A study of participants in the national draft lottery. Journal of Social Issues, 29, 73–93.
[17] Rubin, Z. & Peplau, L.A. (1975). Who believes in a just world? Journal of Social Issues, 31, 65–89.
[18] Janoff-Bulman, R., Timko, C., & Carli, L. L. (1985). Cognitive biases in blaming the victim. Journal of Experimental Social Psychology, 21(2), 161–177.
[19] Carli, L. L. (1999). Cognitive reconstruction, hindsight, and reactions to victims and perpetrators. Personality and Social Psychology Bulletin, 25(8), 966–979.
[20] Summers, G., & Feldman, N. S. (1984). Blaming the victim versus blaming the perpetrator: An attributional analysis of spouse abuse. Symposium: A Quarterly Journal in Modern Foreign Literatures, 2(4), 339–347.
[21] Fox, C. L., Elder, T., Gater, J., & Johnson, E. (2010). The association between adolescents' beliefs in a just world and their attitudes to victims of bullying. The British Journal of Educational Psychology, 80(Pt 2), 183–98.
[22] Correia, I., & Dalbert, C. (2008). School bullying. European Psychologist, 13(4), 248–254.
[23] Correia, I., Kamble, S. V., & Dalbert, C. (2009). Belief in a just world and well-being of bullies, victims and defenders: a study with Portuguese and Indian students. Anxiety, Stress, and Coping, 22(5), 497–508.
[24] Gruman, J. C., & Sloan, R. P. (1983). Disease as justice: Perceptions of the victims of physical illness. Basic and Applied Social Psychology, 4(1), 39–46.
[25] Furnham, A. & Procter, E. (1992). Sphere-specific just world beliefs and attitudes to AIDS. Human Relations, 45, 265–280.
[26] Harper, D. J., Wagstaff, G. F., Newton, J. T., & Harrison, K. R. (1990). Lay causal perceptions of third world poverty and the just world theory. Social Behavior and Personality: An International Journal, 18(2), 235–238. Scientific Journal Publishers.
[27] Harper, D. J., & Manasse, P. R. (1992). The Just World and the Third World: British explanations for poverty abroad. The Journal of Social Psychology, 6. Heldref Publications.
[28] Janoff-Bulman, R. (1979). Characterological versus behavioral self-blame: inquiries into depression and rape. Journal of Personality and Social Psychology, 37(10), 1798–809.
[29] Dalbert, C., Lipkus, I. M., Sallay, H., & Goch, I. (2001). A just and unjust world: Structure and validity of different world beliefs. Personality and Individual Differences, 30, 561–577.
[30] Maes, J. (1998). Immanent justice and ultimate justice: two ways of believing in justice. In L. Montada & M. Lerner (Eds.), Responses to victimization and belief in a just world (pp. 9–40). New York: Plenum Press.
[31] Mohiyeddini, C., & Montada, L. (1998). BJW and self-efficacy in coping with observed victimization. In L. Montada & M. Lerner (Eds.), Responses to victimizations and belief in the just world (pp. 43–53). New York: Plenum.
[32] Lipkus, I. M., Dalbert, C., & Siegler, I. C. (1996). The importance of distinguishing the belief in a just world for self versus for others: Implications for psychological well-being. Personality and Social Psychology Bulletin, 22(7), 666–677.
[33] Lambert, A. J., Burroughs, T., & Nguyen, T. (1999). Perceptions of risk and the buffering hypothesis: The role of just world beliefs and right-wing authoritarianism. Personality and Social Psychology Bulletin, 25(6), 643–656.
[34] Furnham, A. & Procter, E. (1989). Belief in a just world: review and critique of the individual difference literature. British Journal of Social Psychology, 28, 365–384.
[35] Begue, L. (2002). Beliefs in justice and faith in people: just world, religiosity and interpersonal trust. Personality and Individual Differences, 32(3), 375–382.
[36] Kurst, J., Bjorck, J., & Tan, S. (2000). Causal attributions for uncontrollable negative events. Journal of Psychology and Christianity, 19, 47–60.
[37] Calhoun, L., & Cann, A. (1994). Differences in assumptions about a just world: ethnicity and point of view. Journal of Social Psychology, 134, 765–770.
[38] Hunt, M. (2000). Status, religion, and the belief in a just world: comparing African Americans, Latinos, and Whites. Social Science Quarterly, 81, 325–343.
[39] Furnham, A. (1991). Just world beliefs in twelve societies. Journal of Social Psychology, 133, 317–329.
[40] Furnham, A. (1992). Relationship knowledge and attitudes towards AIDS. Psychological Reports, 71, 1149–1150.
[41] Dalbert, C. (2001). The justice motive as a personal resource: dealing with challenges and critical life events. New York: Plenum.
[42] Ritter, C., Benson, D. E., & Snyder, C. (1990). Belief in a just world and depression. Sociological Perspective, 25, 235–252.
[43] Hafer, C., & Olson, J. (1998). Individual differences in beliefs in a just world and responses to personal misfortune. In L. Montada & M. Lerner (Eds.), Responses to victimizations and belief in the just world (pp. 65–86). New York: Plenum.
[44] Taylor, S.E., & Brown, J. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.
[45] Sutton, R., & Douglas, K. (2005). Justice for all, or just for me? More evidence of the importance of the self-other distinction in just-world beliefs. Personality and Individual Differences, 39(3), 637–645.
[46] Montada, L. & Lerner, M. (Eds.) (1998). Responses to victimizations and belief in the just world. New York: Plenum.
Further reading
Hafer, C. L.; Bègue, L. (2005). "Experimental research on just-world theory: problems, developments, and future challenges" (http://www.brocku.ca/psychology/people/Hafer_Begue_05.pdf). Psychological Bulletin 131 (1): 128–167. doi:10.1037/0033-2909.131.1.128.
Lerner, Melvin J. (1980). The Belief in a Just World: A Fundamental Delusion. Perspectives in Social Psychology. New York: Plenum Press. ISBN 978-0-306-40495-5.
Lerner, M.; Simmons, C. H. (1966). "Observers' Reaction to the Innocent Victim: Compassion or Rejection?". Journal of Personality and Social Psychology 4 (2): 203–210. doi:10.1037/h0023562. PMID 5969146.
Montada, Leo; Lerner, Melvin J. (1998). Responses to Victimization and Belief in a Just World. Critical Issues in Social Justice. ISBN 978-0-306-46030-2.
Rubin, Z.; Peplau, L. A. (1975). "Who believes in a just world?" (http://www.peplaulab.ucla.edu/Publications_files/Rubin & Peplau 1975s.pdf). Journal of Social Issues 31 (3): 65–90. Reprinted (1977) in Reflections, XII(1), 1–26.
Rubin, Z.; Peplau, L. A. (1973). "Belief in a just world and reactions to another's lot: A study of participants in the national draft lottery" (http://www.peplaulab.ucla.edu/Publications_files/Rubin_Peplau_73.pdf). Journal of Social Issues 29 (4): 73–94.
External links
The Just World Hypothesis (http://www.units.muohio.edu/psybersite/justworld/index.shtml)
Issues in Ethics: The Just World Theory (http://www.scu.edu/ethics/publications/iie/v3n2/justworld.html)
Less-is-better effect
The less-is-better effect is a type of preference reversal in which the objectively lesser of two options is preferred when the options are evaluated separately, but not when they are evaluated jointly. The term was first proposed by Christopher Hsee. [1] The effect has also been studied by Dan Ariely.
Christopher Hsee demonstrated the effect in a number of experiments, including some which found: [1]
a $45 scarf (expensive for a scarf) was preferred to a $55 coat (cheap for a coat)
7 ounces of ice cream overflowing a small cup was preferred to 8 ounces of ice cream in a much larger cup
a dinnerware set with 24 intact pieces was preferred to a set of the same 24 pieces plus 7 broken pieces
a smaller dictionary was preferred to a larger one with a torn cover
When both options were offered jointly (at the same time) the larger set was preferred, but if they were judged
separately (against control options) the preference was reversed.
Theoretical causes of the less-is-better effect include:
counterfactual thinking. A study found that bronze medalists are happier than silver medalists, apparently because
silver invites comparison to gold whereas bronze invites comparison to not receiving a medal.
[2]
evaluability heuristic and/or fluency heuristic. Hsee hypothesized that subjects evaluated proposals more highly
based on attributes which were easier to evaluate
[1]
(attribute substitution). Another study found that students
preferred funny versus artistic posters according to attributes they could verbalize easily, but the preference was
reversed when they did not need to explain a reason
[3]
(see also introspection illusion).
representativeness heuristic or judgment by prototype. People judge things according to the average of a set more easily than according to its size; neglect of size is a component of extension neglect.
[4]
References
[1] Hsee, Christopher K. (1998). "Less Is Better: When Low-value Options Are Valued More Highly than High-value Options". Journal of Behavioral Decision Making 11: 107–121. doi:10.1002/(SICI)1099-0771(199806)11:2<107::AID-BDM292>3.0.CO;2-Y.
[2] Medvec, V. H.; S. Madey & T. Gilovich (1995). "When less is more: Counterfactual thinking and satisfaction among Olympic medalists". Journal of Personality and Social Psychology 69: 603–610.
[3] Wilson, T. D.; J. W. Schooler (1991). "Thinking too much: Introspection can reduce the quality of preferences and decisions". Journal of Personality and Social Psychology 60: 181–192.
[4] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Loss aversion
[Image: Daniel Kahneman]
In economics and decision theory, loss aversion refers to people's tendency
to strongly prefer avoiding losses to acquiring gains. Some studies suggest
that losses are twice as powerful, psychologically, as gains. Loss aversion
was first convincingly demonstrated by Amos Tversky and Daniel
Kahneman.
[1]
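The roughly two-to-one asymmetry between losses and gains can be made concrete with the value function of prospect theory. Below is a minimal Python sketch using the parameter estimates commonly attributed to Tversky and Kahneman (α = β = 0.88, λ = 2.25); these particular numbers are an illustrative assumption, not figures from the study cited above.

```python
# A minimal sketch of a prospect-theory-style value function.
# alpha/beta capture diminishing sensitivity; lam is the loss-aversion
# coefficient (losses loom a bit more than twice as large as gains).
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to a reference point."""
    if x >= 0:
        return x ** alpha            # concave for gains
    return -lam * ((-x) ** beta)     # steeper (and convex) for losses

print(prospect_value(100))    # ~57.6: the pleasure of gaining $100
print(prospect_value(-100))   # ~-129.6: the pain of losing $100
```

On these illustrative parameters, a $100 loss weighs about 2.25 times as much as a $100 gain, matching the "twice as powerful" rule of thumb above.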
This leads to risk aversion when people evaluate a possible gain, since people prefer avoiding losses to making gains; it explains the curvilinear shape of the prospect theory utility graph in the positive domain. Conversely, people strongly prefer risks that might mitigate a loss (called risk-seeking behavior).
Loss aversion may also explain sunk cost effects.
Loss aversion implies that one who loses $100 will lose more satisfaction
than another person will gain satisfaction from a $100 windfall. In
marketing, the use of trial periods and rebates tries to take advantage of the buyer's tendency to value the good more
after he incorporates it in the status quo.
Note that whether a transaction is framed as a loss or as a gain is very important to this calculation: would you rather get a $5 discount, or avoid a $5 surcharge? The same change in price framed differently has a significant effect on consumer behavior. Traditional economists consider this "endowment effect", and all other effects of loss aversion, to be completely irrational; that irrationality is precisely why loss aversion is so important to the fields of marketing and behavioral finance.
The effect of loss aversion in a marketing setting was demonstrated in a study of consumer reaction to price changes
to insurance policies.
[2]
The study found price increases had twice the effect on customer switching, compared to
price decreases.
Loss aversion and the endowment effect
Loss aversion was first proposed as an explanation for the endowment effect (the fact that people place a higher value on a good that they own than on an identical good that they do not own) by Kahneman, Knetsch, and Thaler (1990). [3] Loss aversion and the endowment effect lead to a violation of the Coase theorem, which holds that "the allocation of resources will be independent of the assignment of property rights when costless trades are possible" (p. 1326).
In several studies, the authors demonstrated that the endowment effect could be explained by loss aversion but not by five alternatives: (1) transaction costs, (2) misunderstandings, (3) habitual bargaining behaviors, (4) income effects,
or (5) trophy effects. In each experiment half of the subjects were randomly assigned a good and asked for the
minimum amount they would be willing to sell it for while the other half of the subjects were given nothing and
asked for the maximum amount they would be willing to spend to buy the good. Since the value of the good is fixed
and individual valuation of the good varies from this fixed value only due to sampling variation, the supply and
demand curves should be perfect mirrors of each other, and thus half the goods should be traded. Kahneman, Knetsch, and Thaler (KKT) also ruled out the explanation that lack of experience with trading would lead to the endowment effect by conducting repeated markets.
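The "half the goods should be traded" benchmark can be illustrated with a small simulation. This is a sketch under simplifying assumptions (independent uniform valuations; a willingness-to-accept markup standing in for the endowment effect), not a reconstruction of the actual experiments:

```python
# Sketch: with buyer and seller valuations drawn from the same distribution,
# about half of all pairings should trade; an inflated willingness-to-accept
# (a stand-in for the endowment effect) suppresses trade. Illustrative only.
import random

def fraction_traded(n_pairs=100_000, wta_markup=1.0):
    trades = 0
    for _ in range(n_pairs):
        seller_value = random.uniform(0, 10)   # owner's underlying valuation
        buyer_value = random.uniform(0, 10)    # non-owner's valuation
        if buyer_value > seller_value * wta_markup:
            trades += 1
    return trades / n_pairs

print(fraction_traded(wta_markup=1.0))  # ~0.50: the no-endowment benchmark
print(fraction_traded(wta_markup=2.0))  # ~0.25: owners ask twice their value
```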
The first two alternative explanationsthat under-trading was due to transaction costs or misunderstandingwere
tested by comparing goods markets to induced-value markets under the same rules. If it was possible to trade to the
optimal level in induced value markets, under the same rules, there should be no difference in goods markets.
The results showed drastic differences between induced-value markets and goods markets. The median prices of
buyers and sellers in induced-value markets matched almost every time leading to near perfect market efficiency, but
goods markets sellers had much higher selling prices than buyers' buying prices. This effect was consistent over
trials, indicating that this was not due to inexperience with the procedure or the market. Since the transaction cost
that could have been due to the procedure was equal in the induced-value and goods markets, transaction costs were
eliminated as an explanation for the endowment effect.
The third alternative explanation was that people have habitual bargaining behaviors, such as overstating their minimum selling price or understating their maximum buying price; these behaviors are useful in strategic interactions but may spill over into the laboratory setting, where they are sub-optimal. An experiment
was conducted to address this by having the clearing prices selected at random. Buyers who indicated a
willingness-to-pay higher than the randomly drawn price got the good, and vice versa for those who indicated a
lower WTP. Likewise, sellers who indicated a lower willingness-to-accept than the randomly drawn price sold the
good and vice versa. This incentive compatible value elicitation method did not eliminate the endowment effect but
did rule out habitual bargaining behavior as an alternative explanation.
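The random clearing-price procedure described above works like the Becker-DeGroot-Marschak (BDM) mechanism, under which reporting one's true valuation is the optimal strategy. A minimal sketch of one buyer round follows (the function name and price range are hypothetical):

```python
# Sketch of a random clearing-price (BDM-style) round for a buyer.
# Because the price paid is the random draw, not the stated amount,
# misreporting one's valuation can only hurt: truth-telling is optimal.
import random

def buyer_round(stated_wtp, price_range=(0.0, 10.0)):
    price = random.uniform(*price_range)    # randomly drawn clearing price
    if stated_wtp >= price:
        return {"got_good": True, "paid": price}
    return {"got_good": False, "paid": 0.0}

print(buyer_round(stated_wtp=4.50))
```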
Income effects were ruled out by giving one third of the participants mugs, one third chocolates, and one third
neither mug nor chocolate. They were then given the option of trading the mug for the chocolate or vice versa and
those with neither were asked to merely choose between mug and chocolate. Thus, wealth effects were controlled for
those groups who received mugs and chocolate. The results showed that 86% of those starting with mugs chose
mugs, 10% of those starting with chocolates chose mugs, and 56% of those with nothing chose mugs. This ruled out
income effects as an explanation for the endowment effect. Also, since all participants in the group had the same
good, it could not be considered a "trophy", eliminating the final alternative explanation.
Thus, the five alternative explanations were eliminated in the following ways:
1 & 2: Induced-value market vs. consumption goods market;
3: Incentive compatible value elicitation procedure;
4 & 5: Choice between endowed or alternative good.
Questions about the existence of loss aversion
Recently, studies have questioned the existence of loss aversion. In several studies examining the effect of losses in
decision making under risk and uncertainty no loss aversion was found.
[4]
There are several explanations for these findings: one is that loss aversion does not exist for small payoff magnitudes; another is that the generality of the loss aversion pattern is lower than previously thought. Finally, losses may have an effect on attention but not on the weighting of outcomes, as suggested, for instance, by the fact that losses lead to more autonomic arousal than gains even in the absence of loss aversion.
[5]
Loss aversion may be more salient when people compete. Gill and Prowse (2012) provide experimental evidence
that people are loss averse around reference points given by their expectations in a competitive environment with
real effort.
[6]
Loss aversion and the endowment effect are often confused. Gal (2006) argued that the endowment effect, previously
attributed to loss aversion, is more parsimoniously explained by inertia than by a loss/gain asymmetry.
Loss aversion in nonhuman subjects
In 2005, experiments were conducted on the ability of capuchin monkeys to use money. After several months of
training, the monkeys began showing behavior considered to reflect understanding of the concept of a medium of
exchange. They exhibited the same propensity to avoid perceived losses demonstrated by human subjects and
investors.
[7]
However, a subsequent study by Silberberg and colleagues suggested that in fact the 2005 results were
not indicative of loss aversion because there was an unequal time delay in the presentation of gains and losses.
Losses were presented with a delay. Hence, the results can also be interpreted as indicating "delay aversion".
Loss aversion within education
Loss aversion experimentation has recently been applied within an educational setting in an effort to improve achievement within the U.S. Results from the 2009 Programme for International Student Assessment (PISA) ranked the U.S. #31 in math and #17 in reading. [8]
In this latest experiment, Fryer et al. posit that framing merit pay in terms of a loss makes it most effective. The study was performed in nine K-8 urban schools in Chicago Heights, which included 3,200 students. Of 160 eligible teachers, 150 participated and were assigned to one of four treatment groups or a control group. Teachers in the incentive groups received rewards based on their students' end-of-year performance on the ThinkLink Predictive Assessment (K-2 students took the Iowa Test of Basic Skills (ITBS) in March). The control group followed the traditional merit pay process of receiving bonus pay at the end of the year based on student performance on standardized exams. The experimental groups, however, received a lump sum at the beginning of the year that would have to be paid back if their students' performance did not warrant it. The bonus was equivalent to approximately 8% of the average teacher salary in Chicago Heights, approximately $8,000.
Gain and loss teachers received identical net payments for a given level of performance; the only difference was the timing and framing of the rewards. With the advance payment and the reframing of the incentive as the avoidance of a loss, the researchers observed treatment effects in excess of 0.20, and some as high as 0.398, standard deviations. According to the authors, "this suggests that there may be significant potential for exploiting loss aversion in the pursuit of both optimal public policy and the pursuit of profits".
[9]
The use of loss aversion, specifically within the realm of education, has received considerable attention in blogs and the mainstream media:
The Washington Post discussed merit pay in a recent article, and specifically the study conducted by Fryer et al. The article discusses the positive results of the experiment and estimates that the testing gains of the loss group are associated with an increase in lifetime earnings of between $37,180 and $77,740. It also notes that it didn't matter much whether the pay was tied to the performance of a given teacher or to the team to which that teacher was assigned, and states that a merit pay regime need not pit teachers in a given school against each other to get results. Washington Post
[10]
Science Daily specifically covers the Fryer study, stating that students gained as much as a 10-percentile increase in their scores compared to students with similar backgrounds if their teacher received a bonus at the beginning of the year, with conditions attached. It also explains that there was no gain for students when teachers were offered the bonus at the end of the school year. Thomas Amadio, superintendent of Chicago Heights Elementary School District 170, where the experiment was conducted, is quoted in the article as saying that the study shows the value of merit pay as an encouragement for better teacher performance. Science Daily
[11]
The Education Gadfly Weekly also weighs in and discusses utilizing loss aversion within education, specifically merit pay. The article states that there are a few noteworthy limitations to the study, particularly relative to scope and sample size; further, the outcome measure was a low-stakes diagnostic assessment, not the state test, so it is unclear whether the findings would look the same if the test were used for accountability purposes. Still, it concludes, Fryer et al. have added an interesting tumbling element to the merit-pay routine. Education Gadfly Weekly
[12]
The Chicago Sun-Times interviewed John List, chairman of the University of Chicago's department of economics. He stated: "It's a deeply ingrained behavioral trait ... all human beings have this underlying phenomenon that I really, really dislike losses, and I will do all I can to avoid losing something." The article also speaks to only one other study using such a plan to enhance performance in a work environment: the only prior field study of a loss aversion payment plan, they said, occurred in Nanjing, China, where it improved productivity among factory workers who made and inspected DVD players and other consumer electronics. The article also covers the reaction of Barnett Berry, president of the Center for Teaching Quality, who stated that the study "seems to suggest that districts pay teachers working with children and adolescents in the same way Chinese factory workers were paid for producing widgets. I think this suggests a dire lack of understanding of the complexities of teaching." Sun-Times
[13]
There has also been other criticism of the notion of loss aversion as an explanation of these effects.
Larry Ferlazzo, in his blog, questioned what kind of positive classroom culture a loss aversion strategy would create with students, and what effect a similar plan with teachers would have on school culture. He states that "the usual kind of teacher merit pay is bad enough, but a threatened take-away strategy might even be more offensive".
[14]
Researchers Nathan Novemsky and Daniel Kahneman also state that there are limits to loss aversion. Their article focuses on individual intentions and how such intentions can produce or inhibit loss aversion. They state that "the coding of outcomes as gains and losses depends on the agent's intentions and not only on the objective state of affairs at the moment of decision". They provide an example of two individuals with different intentions performing a transaction: a consumer who owns a pair of shoes would consider giving them up a loss, because his intention is to keep them; a shoe salesman with different intentions, however, would not be affected by loss aversion if he were to give up the shoes from his store. [15]
Bill Ferriter also posted an article on the Teacher Leaders Network arguing that no matter when external incentives are awarded, they are not effective in education because teachers are already working as hard as they can. [16]
References
[1] Kahneman, D. and Tversky, A. (1984). "Choices, Values, and Frames" (http://dirkbergemann.commons.yale.edu/files/kahnemann-1984-choices-values-frames.pdf). American Psychologist 39 (4): 341–350.
[2] Dawes, J. (2004). "Price Changes and Defection Levels in a Subscription-type Market." Journal of Services Marketing 18 (1).
[3] Kahneman, D., Knetsch, J., & Thaler, R. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy 98(6), 1325–1348.
[4] Erev, Ert & Yechiam, 2008; Ert & Erev, 2008; Harinck, Van Dijk, Van Beest, & Mersmann, 2007; Kermer, Driver-Linn, Wilson, & Gilbert, 2006; Nicolau, 2012; Yechiam & Telpaz, in press
[5] Hochman & Yechiam, 2011
[6] Gill, David and Victoria Prowse (2012). "A structural analysis of disappointment aversion in a real effort competition" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1578847). American Economic Review 102 (1): 469–503.
[7] Dubner, Stephen J.; Levitt, Steven D. (2005-06-05). "Monkey Business" (http://www.nytimes.com/2005/06/05/magazine/05FREAK.html?pagewanted=all). Freakonomics column. New York Times. Retrieved 2010-08-23.
[8] http://www.oecd.org/pisa/pisaproducts/pisa2009/pisa2009keyfindings.htm
[9] Fryer et al., Enhancing the efficacy of teacher incentives through loss aversion (http://www.economics.harvard.edu/faculty/fryer/files/enhancing_teacher_incentives.pdf), Harvard University, 2012
[10] http://www.washingtonpost.com/blogs/wonkblog/wp/2012/07/23/does-teacher-merit-pay-work-a-new-study-says-yes/
[11] http://www.sciencedaily.com/releases/2012/08/120809090335.htm
[12] http://www.edexcellence.net/commentary/education-gadfly-weekly/2012/august-2/enhancing-the-efficacy-of-teacher-incentives-through-loss-aversion.html
[13] http://www.suntimes.com/news/education/14687664-418/cash-upfront-the-way-to-get-teachers-to-rack-up-better-student-test-scores-study.html
[14] http://larryferlazzo.edublogs.org/2012/07/21/if-you-only-have-a-hammer-you-tend-to-see-every-problem-as-a-nail-economists-go-after-schools-again/
[15] http://wolfweb.unr.edu/homepage/pingle/Teaching/BADM%20791/Week%205%20Decision%20Invariance/Kahneman-Novemsky-Loss%20Aversion.pdf
[16] http:/ / teacherleaders.typepad. com/ the_tempered_radical/ 2012/ 07/ what-economists-dont-understand-about-educators. html
Sources
Ert, E., & Erev, I. (2008). The rejection of attractive gambles, loss aversion, and the lemon avoidance heuristic.
Journal of Economic Psychology, 29, 715-723.
Erev, I., Ert, E., & Yechiam, E. (2008). Loss aversion, diminishing sensitivity, and the effect of experience on
repeated decisions. Journal of Behavioral Decision Making, 21, 575-597.
Gal, D. (2006). A psychological law of inertia and the illusion of loss aversion. Judgment and Decision Making,
1, 23-32.
Harinck, F., Van Dijk, E., Van Beest, I., & Mersmann, P. (2007). When gains loom larger than losses: Reversed
loss aversion for small amounts of money. Psychological Science, 18, 1099-1105.
Hochman, G., and Yechiam, E. (2011). Loss aversion in the eye and in the heart: The Autonomic Nervous System's responses to losses. Journal of Behavioral Decision Making, 24, 140-156.
Kahneman, D., Knetsch, J., & Thaler, R. (1990). Experimental Test of the endowment effect and the Coase
Theorem. Journal of Political Economy 98(6), 1325-1348.
Kahneman, D. & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica 47,
263-291.
Kermer, D.A., Driver-Linn, E., Wilson, T.D., & Gilbert, D.T. (2006). Loss aversion is an affective forecasting
error. Psychological Science, 17, 649-653.
McGraw, A.P., Larsen, J.T., Kahneman, D., & Schkade, D. (2010). Comparing gains and losses. Psychological
Science.
Nicolau, J.L. (2012). Battle Royal: Zero-price effect vs relative vs referent thinking, Marketing Letters, 23, 3,
661-669.
Silberberg, A., et al. (2008). On loss aversion in capuchin monkeys. Journal of the experimental analysis of
behavior, 89, 145-155
Tversky, A. & Kahneman, D. (1991). Loss Aversion in Riskless Choice: A Reference Dependent Model.
Quarterly Journal of Economics 106, 1039-1061.
Yechiam, E., and Telpaz, A. (in press). Losses induce consistency in risk taking even without loss aversion.
Journal of Behavioral Decision Making.
Ludic fallacy
The ludic fallacy is a term coined by Nassim Nicholas Taleb in his 2007 book The Black Swan. "Ludic" is from the
Latin ludus, meaning "play, game, sport, pastime."
[1]
It is summarized as "the misuse of games to model real-life
situations."
[2]
Taleb explains the fallacy as "basing studies of chance on the narrow world of games and dice."
[3]
It is a central argument in the book: a rebuttal of the predictive mathematical models used to predict the future, as well as an attack on the idea of applying naïve and simplified statistical models in complex domains. According to Taleb, statistics works only in some domains, like casinos, in which the odds are visible and defined. Taleb's argument centers on the idea that predictive models are based on platonified forms, gravitating towards mathematical purity and failing to take some key ideas into account:
It is impossible to be in possession of all the information.
Very small unknown variations in the data could have a huge impact. Taleb does differentiate his idea from related mathematical notions in chaos theory, e.g. the butterfly effect.
Theories and models based on empirical data are flawed, as they cannot anticipate events that have not taken place before and for which no conclusive explanation or account can be provided.
Examples
Example 1: Suspicious coin
One example given in the book is the following thought experiment. There are two people:
Dr John, who is regarded as a man of science and logical thinking.
Fat Tony, who is regarded as a man who lives by his wits.
A third party asks them: "Assume a fair coin is flipped 99 times, and each time it comes up heads. What are the odds that the 100th flip will also come up heads?"
Dr John says that the odds are not affected by the previous outcomes, so the odds must still be 50:50.
Fat Tony says that the odds of a fair coin coming up heads 99 times in a row are so low (less than 1 in 6.33 × 10^29) that the initial assumption that the coin has a 50:50 chance of coming up heads is most likely incorrect.
The ludic fallacy here is to assume that in real life the rules from the purely hypothetical model (where Dr John is
correct) apply. Would a reasonable person bet on black on a roulette table that has come up red 99 times in a row
(especially as the reward for a correct guess is so low when compared with the probable odds that the game is fixed)?
In classical terms, highly statistically significant (unlikely) events should make one question one's model
assumptions. In Bayesian statistics, this can be modelled by using a prior distribution for one's assumptions on the
fairness of the coin, then Bayesian inference to update this distribution.
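The Bayesian reasoning sketched above can be made concrete with a two-hypothesis toy model: either the coin is fair, or it is rigged to always land heads. The prior of one in a million for a rigged coin is an illustrative assumption, not a number from Taleb's book:

```python
# Toy Bayesian update on the coin's fairness after observing 99 heads.
p_rigged = 1e-6                # illustrative prior: coin always lands heads
p_fair = 1 - p_rigged

like_fair = 0.5 ** 99          # ~1.58e-30, i.e. about 1 in 6.33 * 10**29
like_rigged = 1.0              # a rigged coin always shows heads

posterior_rigged = (p_rigged * like_rigged) / (
    p_rigged * like_rigged + p_fair * like_fair)

print(posterior_rigged)        # ~1.0: Fat Tony's suspicion wins
print(posterior_rigged + (1 - posterior_rigged) * 0.5)  # P(100th flip is heads)
```

Even a vanishingly small prior on a rigged coin overwhelms the fair-coin hypothesis after 99 heads, so the predicted probability of heads on the 100th flip is essentially 1, not 0.5.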
Example 2: Job interview
A man considers going to a job interview. He recently studied statistics and utility theory in college and performed
well in the exams. Considering whether to take the interview, he tries to calculate the probability he will get the job
versus the cost of the time spent.
This young job seeker forgets that real life has more variables than the small set he has chosen to estimate. Even with
a low probability of success, a really good job may be worth the effort of going to the interview. Will he enjoy the
process of the interview? Will his interview technique improve regardless of whether he gets the job or not? Even the
statistics of the job business are non-linear. What other jobs could come the man's way by meeting the interviewer?
Might there be a possibility of a very high pay-off in this company that he has not thought of?
Example 3: Stock returns
Any decision theory based on a fixed universe or model of possible outcomes ignores and minimizes the impact of
events which are "outside model." For instance, a simple model of daily stock market returns may include extreme
moves such as Black Monday (1987) but might not model the market breakdowns following the 2011 Japanese
tsunami and its consequences. A fixed model considers the "known unknowns," but ignores the "unknown
unknowns."
Relation to Platonicity
The ludic fallacy is a specific case of the more general problem of Platonicity defined by Taleb as:
the focus on those pure, well-defined, and easily discernible objects like triangles, or more social
notions like friendship or love, at the cost of ignoring those objects of seemingly messier and less
tractable structures.
References
[1] D.P. Simpson, Cassell's Latin and English Dictionary (New York: Hungry Minds, 1987), p. 134.
[2] Black Swans, the Ludic Fallacy and Wealth Management (http://www.tocqueville.com/article/show/204), François Sicart.
[3] Nassim Taleb, The Black Swan (New York: Random House, 2007), p. 309.
Further reading
The Ludic Fallacy. Chapter from the book The Black Swan (http://www.fooledbyrandomness.com/LudicFallacy.pdf)
Taleb, Nassim N. (2007). The Black Swan. Random House. ISBN 1-4000-6351-5.
Medin, D. & Atran, S. (2004). The native mind: Biological categorization and reasoning in development and across cultures. Psychological Review, 111, 960–983.
Fodor, J. (1983). Modularity of mind. Cambridge, MA: MIT Press.
Tales of the Unexpected, Wilmott Magazine, June 2006, pp. 30–36 (http://www.fooledbyrandomness.com/0603_coverstory.pdf)
"A misplaced question". Taleb at Freakonomics blog (http://freakonomics.blogs.nytimes.com/2007/08/09/freakonomics-quorum-the-economics-of-street-charity/)
Mere-exposure effect
The mere-exposure effect is a psychological phenomenon by which people tend to develop a preference for things
merely because they are familiar with them. In social psychology, this effect is sometimes called the familiarity
principle. The effect has been demonstrated with many kinds of things, including words, Chinese characters,
paintings, pictures of faces, geometric figures, and sounds.
[1]
In studies of interpersonal attraction, the more often a
person is seen by someone, the more pleasing and likeable that person appears to be.
Research
The earliest known research on the effect was conducted by Gustav Fechner in 1876.
[2]
Edward B. Titchener also
documented the effect and described the "glow of warmth" felt in the presence of something that is familiar.
[3]
However, Titchener's hypothesis was rejected once tested: results showed that the enhancement of preferences for objects did not depend on the individual's subjective impressions of how familiar the objects were. The rejection of Titchener's hypothesis spurred further research and the development of current theory.
The scholar who is best known for developing the mere-exposure effect is Robert Zajonc. Before conducting his
research, he observed that exposure to a novel stimulus initially elicits a fear/avoidance response by all organisms.
Each repeated exposure to the novel stimulus causes less fear and more of an approach tactic by the observing
organism. After repeated exposure, the observing organism will begin to react fondly to the once novel stimulus.
This observation led to the research and development of the mere-exposure effect.
Zajonc (1960s)
In the 1960s, a series of laboratory experiments by Robert Zajonc demonstrated that simply exposing subjects to a
familiar stimulus led them to rate it more positively than other, similar stimuli which had not been presented.
[4]
In
the beginning of his research, Zajonc looked at language and the frequency of words used. He found that overall
positive words received more usage than their negative counterparts.
[4]
One experiment conducted to test the mere-exposure effect used fertile chicken eggs as test subjects. Tones of two different frequencies were played to different groups of chicks while they were still unhatched. Once hatched, both tones were played to both groups of chicks. Each set of chicks consistently chose the tone prenatally played to it.
[1]
Zajonc tested the mere-exposure effect by using meaningless Chinese characters on two groups of
individuals. The individuals were then told that these symbols represented adjectives and were asked to rate whether
the symbols held positive or negative connotations. The symbols that had been previously seen by the test subjects
were consistently rated more positively than those unseen. After this experiment, the group with repeated exposure
to certain characters reported being in better moods and felt more positive than those who did not receive repeated
exposure.
[1]
In one variation, subjects were shown an image on a tachistoscope for a very brief duration that could not be
perceived consciously. This subliminal exposure produced the same effect,
[5]
though it is important to note that
subliminal effects are unlikely to occur without controlled laboratory conditions.
[6]
According to Zajonc, the mere-exposure effect is capable of taking place without conscious cognition: "preferences need no inferences".
[7]
This statement by Zajonc has spurred much research in the relationship between
cognition and affect. Zajonc explains that if preferences (or attitudes) were merely based upon information units with
affect attached to them, then persuasion would be fairly simple. He argues that this is not the case: such simple
persuasion tactics have failed miserably.
[7]
Zajonc states that affective responses to stimuli happen much more
quickly than cognitive responses, and that these responses are often made with much more confidence. He states that
thought (cognition) and feeling (affect) are distinct, and that cognitions are not free from affect, nor is affect free of
cognition.
[7]
Zajonc states, "...the form of experience that we came to call feeling accompanies all cognitions, that it
arises early in the process of registration and retrieval, albeit weakly and vaguely, and that it derives from a parallel,
separate, and partly independent system in the organism."
[7]
Regarding the mere-exposure effect and decision making, Zajonc states that there is no empirical proof that cognition precedes any form of decision making. While this is a common assumption, Zajonc argues that the opposite is more likely: decisions are made with little to no cognitive process. He equates deciding upon something with liking it, meaning that we more often cognize reasons to rationalize a decision than decide upon it through reasoning.
[7]
Be that as it may, once we have decided that we 'like' something, it is very difficult to sway that opinion. We are experts on ourselves: we know what we like, whether or not we have formed cognitions to back it up.
Goetzinger (1968)
Charles Goetzinger conducted an experiment using the mere-exposure effect on his class at Oregon State University. Goetzinger had a student come to class in a large black bag with only his feet visible. The black bag sat on a table in the back of the classroom. Goetzinger's experiment was to observe whether the students would treat the black bag in accordance with Zajonc's mere-exposure effect. His hypothesis was confirmed: the students in the class first treated the black bag with hostility, which over time turned into curiosity, and eventually friendship. [4] This experiment confirms Zajonc's mere-exposure effect: by simply presenting the black bag over and over again to the students, their attitudes were changed, or as Zajonc states, "mere repeated exposure of the individual to a stimulus is a sufficient condition for the enhancement of his attitude toward it".
[4]
Bornstein (1989)
A meta-analysis of 208 experiments found that the mere-exposure effect is robust and reliable, with an effect size of
r=0.26. This analysis found that the effect is strongest when unfamiliar stimuli are presented briefly. Mere exposure
typically reaches its maximum effect within 1020 presentations, and some studies even show that liking may
decline after a longer series of exposures. For example, people generally like a song more after they have heard it a
few times, but many repetitions can reduce this preference. A delay between exposure and the measurement of liking
actually tends to increase the strength of the effect. The effect is weaker for children, and for drawings and paintings as compared to other types of stimuli.
[8]
One social psychology experiment showed that exposure to people we
initially dislike makes us dislike them even more.
[9]
Zola-Morgan (2001)
In support of Zajonc's claim that affect does not need cognition to occur, Zola-Morgan conducted experiments on monkeys with lesions to the amygdala (the brain structure that is responsive to affective stimuli). In these experiments, Zola-Morgan showed that lesions to the amygdala impair affective functioning, but not cognitive processes. However, lesions in the hippocampus (the brain structure responsible for memory) impair cognitive functions but leave emotional responses fully functional.
[1]
Two-factor theory
The mere-exposure effect has been explained by a two-factor theory that posits that repeated exposure to a stimulus increases perceptual fluency, the ease with which a stimulus can be processed. Perceptual fluency, in turn, increases positive affect.
[10][11]
Studies showed that repeated exposure increases perceptual fluency, confirming the
first part of the two-factor theory.
[12]
Later studies observed that perceptual fluency is affectively positive,
confirming the second part of the fluency account of the mere-exposure effect.
[13][14]
Application
Advertising
The most obvious application of the mere-exposure effect is found in advertising, but research has been mixed as to
its effectiveness at enhancing consumer attitudes toward particular companies and products. One study tested the
mere-exposure effect with banner ads seen on a computer screen. The study was conducted on college-aged students
who were asked to read an article on the computer while banner ads flashed at the top of the screen. The results
showed that each group exposed to the "test" banner rated the ad more favorably than other ads shown less
frequently or not at all. This research bolsters the evidence for the mere-exposure effect.
A different study showed that higher levels of media exposure are associated with lower reputations for companies,
even when the mere exposure is mostly positive.
[15]
A subsequent review of the research concluded that exposure
leads to ambivalence because it brings about a large number of associations, which tend to be both favorable and
unfavorable.
[16]
Exposure is most likely to be helpful when a company or product is new and unfamiliar to
consumers. An 'optimal' level of exposure to an advertisement may or may not exist. In a third study, experimenters
primed consumers with affective motives. One group of thirsty consumers were primed with a happy face before
being offered a beverage, while a second group was primed with an unpleasant face. The group primed with the
happy face bought more beverages, and were also willing to pay more for the beverage than their unhappy
counterparts. This study bolsters Zajonc's claim that choices are not in need of cognition. Buyers often choose what
they 'like' instead of what they have substantially cognized.
[17]
In the advertising world, the mere-exposure effect suggests that consumers need not cognize advertisements: the
simple repetition is enough to make a 'memory trace' in the consumer's mind and unconsciously affect their
consuming behavior. One scholar explains this relationship as follows: "The approach tendencies created by mere
exposure may be preattitudinal in the sense that they do not require the type of deliberate processing that is required
to form brand attitude."
[18]
Other areas
The mere-exposure effect exists in most areas of human decision making. For example, many stock traders tend to
invest in securities of domestic companies merely because they are more familiar with them despite the fact that
international markets offer similar or even better alternatives.
[19]
The mere-exposure effect also distorts the results of
journal ranking surveys; those academics who previously published or completed reviews for a particular academic
journal rate it dramatically higher than those who did not.
[20]
There are mixed results on the question of whether
mere exposure can promote good relations between different social groups.
[21]
When groups already have negative
attitudes to each other, further exposure can increase hostility.
[21]
A statistical analysis of voting patterns found that a
candidate's exposure has a strong effect on the number of votes they receive, distinct from the popularity of the
policies.
[21]
Another example would be an automotive journalist claiming that his own car is the best car in the world, despite having driven countless cars.
References
[1] Zajonc, R.B. (2001, December). "Mere Exposure: A Gateway to the Subliminal" (http://cdp.sagepub.com/content/10/6/224). Current Directions in Psychological Science 10 (6). doi:10.1111/1467-8721.00154. Retrieved April 10, 2011.
[2] Fechner, G.T. (1876). Vorschule der Aesthetik. Leipzig, Germany: Breitkopf & Härtel.
[3] Titchener, E.B. (1910). Textbook of Psychology. New York: Macmillan.
[4] Zajonc, Robert B. (1968). "Attitudinal Effects of Mere Exposure". Journal of Personality and Social Psychology 9 (2, Pt. 2): 1–27. doi:10.1037/h0025848. ISSN 1939-1315.
[5] Kunst-Wilson, W.; Zajonc, R. (1980). "Affective discrimination of stimuli that cannot be recognized". Science 207 (4430): 557–558. doi:10.1126/science.7352271. ISSN 0036-8075.
[6] De Houwer, J., Hendrickx, H. & Baeyens, F. (1997). Evaluative learning with "subliminally" presented stimuli. Consciousness and Cognition, 6, 87–107.
[7] Zajonc, R.B. (1980, February). "Feeling and thinking: Preferences need no inferences". American Psychologist 35 (2): 151–175.
[8] Bornstein, R.F. (1989). Exposure and affect: overview and meta-analysis of research, 1968–1987. Psychological Bulletin, 106, 265–289.
[9] Swap, W. C. (1977). "Interpersonal Attraction and Repeated Exposure to Rewarders and Punishers". Personality and Social Psychology Bulletin 3 (2): 248–251. doi:10.1177/014616727700300219. ISSN 0146-1672.
[10] Seamon, John G.; Brody, Nathan; Kauff, David M. (1983). "Affective discrimination of stimuli that are not recognized: Effects of shadowing, masking, and cerebral laterality". Journal of Experimental Psychology: Learning, Memory, and Cognition 9 (3): 544–555. doi:10.1037/0278-7393.9.3.544. ISSN 0278-7393.
[11] Bornstein, Robert F.; D'Agostino, Paul R. (1994). "The Attribution and Discounting of Perceptual Fluency: Preliminary Tests of a Perceptual Fluency/Attributional Model of the Mere Exposure Effect". Social Cognition 12 (2): 103–128. doi:10.1521/soco.1994.12.2.103. ISSN 0278-016X.
[12] Jacoby, Larry L.; Dallas, Mark (1981). "On the relationship between autobiographical memory and perceptual learning". Journal of Experimental Psychology: General 110 (3): 306–340. doi:10.1037/0096-3445.110.3.306. ISSN 0096-3445.
[13] Reber, R.; Winkielman, P.; Schwarz, N. (1998). "Effects of Perceptual Fluency on Affective Judgments". Psychological Science 9 (1): 45–48. doi:10.1111/1467-9280.00008. ISSN 0956-7976.
[14] Winkielman, Piotr; Cacioppo, John T. (2001). "Mind at ease puts a smile on the face: Psychophysiological evidence that processing facilitation elicits positive affect". Journal of Personality and Social Psychology 81 (6): 989–1000. doi:10.1037/0022-3514.81.6.989. ISSN 0022-3514.
[15] Fombrun, Charles; Shanley, Mark (1990). "What's in a Name? Reputation Building and Corporate Strategy". The Academy of Management Journal 33 (2): 233. doi:10.2307/256324. ISSN 0001-4273.
[16] Brooks, Margaret E.; Highhouse, Scott (2006). "Familiarity Breeds Ambivalence". Corporate Reputation Review 9 (2): 105–113. doi:10.1057/palgrave.crr.1550016. ISSN 1363-3589.
[17] Tom, Gail; Nelson, Carolyn; Srzentic, Tamara; King, Ryan (2007). "Mere Exposure and the Endowment Effect on Consumer Decision Making". The Journal of Psychology 141 (2): 117–125. doi:10.3200/JRLP.141.2.117-126. ISSN 0022-3980.
[18] Grimes, Anthony; Kitchen, Phillip J. (2007). "Researching Mere Exposure Effects to Advertising". International Journal of Market Research 49 (2): 191–221.
[19] Huberman, G. (2001). "Familiarity Breeds Investment". Review of Financial Studies 14 (3): 659–680. doi:10.1093/rfs/14.3.659. ISSN 1465-7368.
[20] Serenko, A., & Bontis, N. (2011). What's familiar is excellent: The impact of exposure effect on perceived journal quality (http://foba.lakeheadu.ca/serenko/papers/JOI_Serenko_Bontis_Published.pdf). Journal of Informetrics, 5, 219–223.
[21] Bornstein, Robert F.; Craver-Lemley, Catherine (2004). "Mere exposure effect". In Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press. pp. 215–234. ISBN 978-1-84169-351-4. OCLC 55124398.
External links
Changing minds: Mere exposure theory (http://changingminds.org/explanations/theories/mere_exposure.htm)
Money illusion
In economics, money illusion refers to the tendency of people to think of currency in nominal, rather than real, terms. In other words, the numerical face value (nominal value) of money is mistaken for its purchasing power (real value). This is a fallacy: modern fiat currencies have no intrinsic value, and their real value derives from their ability to be exchanged for goods (purchasing power) and used for payment of taxes.
The term was coined by Irving Fisher in Stabilizing the Dollar. It was popularized by John Maynard Keynes in the
early twentieth century, and Irving Fisher wrote an important book on the subject, The Money Illusion, in 1928.
[1]
The existence of money illusion is disputed by monetary economists who contend that people act rationally (i.e.
think in real prices) with regard to their wealth.
[2]
Eldar Shafir, Peter A. Diamond, and Amos Tversky (1997) have
provided compelling empirical evidence for the existence of the effect and it has been shown to affect behaviour in a
variety of experimental and real-world situations.
[3]
Shafir et al.
[3]
also state that money illusion influences economic behaviour in three main ways:
Price stickiness. Money illusion has been proposed as one reason why nominal prices are slow to change even
where inflation has caused real prices or costs to rise.
Contracts and laws are not indexed to inflation as frequently as one would rationally expect.
Social discourse, in formal media and more generally, reflects some confusion about real and nominal value.
Money illusion can also influence people's perceptions of outcomes. Experiments have shown that people generally perceive an approximate 2% cut in nominal income with no change in monetary value as unfair, but see a 2% rise in nominal income where there is 4% inflation as fair, despite the two being nearly equivalent in real terms. This result is, however, consistent with 'myopic loss aversion' theory.
[4]
Furthermore, the money illusion means nominal
changes in price can influence demand even if real prices have remained constant.
[5]
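The near-equivalence of the two scenarios above can be checked with the standard real-income formula, real change = (1 + nominal change) / (1 + inflation) − 1; a minimal sketch:

```python
# Check that a 2% nominal cut at 0% inflation and a 2% nominal raise at
# 4% inflation are nearly equivalent in real terms.
def real_change(nominal_change, inflation):
    return (1 + nominal_change) / (1 + inflation) - 1

print(real_change(-0.02, 0.00))  # -0.0200: a 2.00% real income cut
print(real_change(+0.02, 0.04))  # -0.0192: about a 1.92% real income cut
```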
On the money illusion
Some have suggested that money illusion implies that the negative relationship between inflation and unemployment
described by the Phillips curve might hold, contrary to recent macroeconomic theories such as the
"expectations-augmented Phillips curve".
[6]
If workers use their nominal wage as a reference point when evaluating
wage offers, firms can keep real wages relatively lower in a period of high inflation as workers accept the seemingly
high nominal wage increase. These lower real wages would allow firms to hire more workers in periods of high
inflation.
Explanations of money illusion generally describe the phenomenon in terms of heuristics. Nominal prices provide a
convenient rule of thumb for determining value and real prices are only calculated if they seem highly salient (e.g. in
periods of hyperinflation or in long term contracts).
A hypothetical example: a man has $1,000,000 in the bank, which doubles every 10 years, while his living expenses (beginning at $100,000 per 10-year period) also double every 10 years. The man will have $1,900,000 after the first decade, $3,600,000 after the second, and $6,800,000 after the third (ignoring inflation before each 10-year mark), and will thus feel safe, because each decade his net gain (interest minus living expenses) is larger than in the previous decade, even though his purchasing power is decreasing, because the interest rate merely matches the inflation rate.
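A short simulation of this example shows the nominal balance rising each decade while purchasing power (the balance deflated by the price level) falls; the variable names are illustrative:

```python
# The hypothetical saver: interest doubles money each decade, but prices
# and living expenses double too, so real purchasing power declines.
balance = 1_000_000
expenses = 100_000      # living expenses for the coming decade
price_level = 1.0       # doubles each decade, matching the interest rate

for decade in (1, 2, 3):
    balance = balance * 2 - expenses     # interest, then spending
    price_level *= 2
    expenses *= 2                        # next decade's expenses
    print(decade, balance, round(balance / price_level))
# 1 1900000 950000
# 2 3600000 900000
# 3 6800000 850000
```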
References
[1] Fisher, Irving (1928). The Money Illusion. New York: Adelphi Company.
[2] "A behavioral-economics view of poverty". The American Economic Review 94 (2): 419–423. May 2004. doi:10.1257/0002828041302019. JSTOR 3592921.
[3] Shafir, E.; Diamond, P. A.; Tversky, A. (1997). "On Money Illusion". Quarterly Journal of Economics 112 (2): 341–374. doi:10.1162/003355397555208.
[4] http://ideas.repec.org/a/tpr/qjecon/v110y1995i1p73-92.html
[5] Patinkin, 1969
[6] Romer 2006, p. 252
Further reading
Fehr, Ernst; Tyran, Jean-Robert (2001), "Does Money Illusion Matter?", American Economic Review 91 (5):
12391262, doi:10.1257/aer.91.5.1239, JSTOR2677924
Howitt, P. (1987), "money illusion", The New Palgrave: A Dictionary of Economics, 3, London: Macmillan,
pp.518519, ISBN0-333-37235-2
Weber, Bernd; Rangel, Antonio; Wibral, Matthias; Falk, Armin (2009), "The medial prefrontal cortex exhibits
money illusion", PNAS 106 (13): 50255028, doi:10.1073/pnas.0901490106, PMC2664018, PMID19307555
Akerlof, George A.; Shiller, Robert J. (2009), Animal Spirits (http:/ / press. princeton. edu/ titles/ 8967. html),
Princeton University Press, pp.4150
Thaler, Richard H.(1997) "Irving Fisher: Modern Behavioral Economist" (http:/ / faculty. chicagobooth. edu/
richard. thaler/ research/ pdf/ IrvingFisher. pdf) in The American Economic Review Vol 87, No 2, Papers and
Proceedings of the Hundred and Fourth Annual Meeting of the American Economic Association (May, 1997)
Huw Dixon (2008), New Keynesian Economics (http:/ / www. dictionaryofeconomics. com/
article?id=pde2008_N000166), New Palgrave Dictionary of Economics New Keynesian macroeconomics (http:/ /
www. cardiff. ac. uk/ carbs/ econ/ workingpapers/ papers/ E2007_3. pdf).
Moral credential
The moral credential effect is a bias that occurs when a person's track record as a good egalitarian establishes in
them an unconscious ethical certification, endorsement, or license that increases the likelihood of less egalitarian
decisions later. This effect occurs even when the audience or moral peer group is unaware of the affected person's
previously established moral credential. For example, individuals who had the opportunity to recruit a woman or
African American in one setting were more likely to say later, in a different setting, that a job would be better suited
for a man or a Caucasian.
[1]
Similar effects also appear to occur when a person observes another person from a group
they identify with making an egalitarian decision.
[2]
Group membership
It has been found that moral credentials can be obtained vicariously. That is, a person will behave as if they
themselves have moral credentials when that person observes another person from a group they identify with making
an egalitarian decision.
[3]
In research drawing on social identity theory, it was also found that group membership moderates the effectiveness of moral credentials in mitigating perceptions of prejudice. Specifically, it was observed that displays of moral credentials have more effect between people who share in-group status.
[4]
Quotes
Philosopher Friedrich Nietzsche in Human, All Too Human (1878):
Innocent corruption. In all institutions that do not feel the sharp wind of public criticism (as, for
example, in scholarly organizations and senates), an innocent corruption grows up, like a
mushroom.
[5][6]
References
[1] Monin, B. & Miller, D. T. (2001). "Moral credentials and the expression of prejudice". Journal of Personality and Social Psychology, 81(1), 33–43.
[2] Kouchaki, M. (Jul 2011). "Vicarious moral licensing: The influence of others' past moral actions on moral behavior". J Pers Soc Psychol 101 (4): 702–15. doi:10.1037/a0024552. PMID 21744973.
[3] Kouchaki, M. (Jul 2011). "Vicarious moral licensing: The influence of others' past moral actions on moral behavior". J Pers Soc Psychol 101 (4): 702–15. doi:10.1037/a0024552. PMID 21744973.
[4] Krumm, Angela J.; Corning, Alexandra F. (1 December 2008). "Who Believes Us When We Try to Conceal Our Prejudices? The Effectiveness of Moral Credentials With In-Groups Versus Out-Groups". The Journal of Social Psychology 148 (6): 689–710. doi:10.3200/SOCP.148.6.689-710.
[5] Human, All Too Human, 468
[6] Zimmern, Helen (translator) (1909). "8. A Look at the State" (http://www.wordsworthclassics.com/wordsworth/details2.aspx?isbn=9781840220834&cat=world). Human, All Too Human. London, England: Wordsworth Editions Limited. p. 210. ISBN 978-1-84022-083-4.
Negativity bias
Negativity bias is the psychological phenomenon by which humans pay more attention to and give more weight to
negative rather than positive experiences or other kinds of information.
Neurological evidence
In the brain, there are two different systems for negative and positive stimuli. The left hemisphere, which is known for articulate language, is specialized for positive experiences, whereas the right hemisphere focuses on negative experiences. Another area of the brain involved in the negativity bias is the amygdala, which devotes about two-thirds of its neurons to detecting negative experiences. Once the amygdala flags bad news, it is quickly stored in long-term memory, whereas positive experiences have to be held in awareness for more than twelve seconds for the transfer from short-term to long-term memory to take place.
[1]
We remember more after we hear disapproving or disappointing news than before; this shows how strongly the brain processes criticism. One common management tool for delivering criticism, the "criticism sandwich", works around this asymmetry: offering someone words of praise, discussing the critical issues, and then adding more words of praise.
[2]
Implicit memory registers and responds to negative events almost immediately, while it takes five to twenty seconds for positive experiences to even register in the brain.
[3]
Emotional information is processed within the limbic system, which therefore ties closely into the negativity bias. The limbic system can become overloaded with negative information and in turn take control of the brain. The neocortex is responsible for maintaining higher-level cognitive processes; a person uses the neocortex when trying to control the negative signals arising from the limbic system. Because of the connection between the limbic system and the nervous system, the body reacts strongly even when people merely talk about negative events.
[4]
Explanations
Research suggests many explanations behind the negativity bias. Listed below are several explanations ranging from
small to large instances of information integration. Each of these tries to clarify why negativity biases occur.
However, future research must be conducted in order to fully understand the causation of humans negative
mindset.
[5]
Selective attention. Research shows that people pay more attention to negative issues. Since humans can focus on only one message at a time, due to selective attention, the negative message becomes the more profound one.
Retrieval and accessibility.
[6]
Some studies found certain negativity biases to appear only over time, which demonstrates the important role memory plays in the negativity bias. Negativity biases arise during the retrieval process: people retain the impression of information rather than its features, and since negative experiences and memories are more distinct in one's mind, they are retrieved more rapidly and are therefore more easily accessible.
Definitiveness.
[7]
Humans rely heavily on an object's distinguishing features. For example, when talking about cars, people rely on the features that make one car stand out from another. When this effect is applied to the perception of people, however, it is the negative traits that stand out: because people's normal traits tend to be positive, perceivers rely heavily on negative features such as a big nose or a round belly.
The judgment process. People weigh negative information more heavily than positive information because that is how they think it should be weighed; it makes sense to people to think in negative terms.
The figure–ground hypothesis.
[8]
There are many happy people in the world, and most people expect and report high levels of personal happiness. People evaluate others in a positive way, and this positive background makes negative information stand out all the more.
Novelty and distinctiveness. Negative information is more distinctive and more novel than positive information. Its greater novelty means that it will be remembered better and recalled more easily; its greater distinctiveness means that it will be more distinguishable among different objects. If negative information loses its surprisingness or informativeness, its impact is reduced.
Credibility. Negative information is more credible than positive information. Since there is strong normative pressure to say positive things, the person who says something negative is the one more likely to seem sincere.
Interference effects. Humans have a very hard time enjoying the positive attributes of an object or event when a negative attribute clings to that same object or event. For example, if an iPhone screen is cracked, then it is a cracked iPhone and no longer a great and fabulous iPhone.
Research
Hamlin et al. studied three-month-olds and found that they process negativity just as adults do, suggesting that the negativity bias is instinctive in humans rather than a conscious decision.
[9]
John Cacioppo showed his participants pictures that he knew would arouse positive, negative, and neutral feelings. He recorded electrical activity in the brain's cerebral cortex to show the information processing taking place. The demonstration showed that participants' electrical activity was stronger toward the negative stimuli than toward the positive or neutral stimuli.
[10]
Researchers have found that the negativity bias is noticeable during the work day. Amabile studied professionals and looked at what made their day good or bad. The findings showed that when professionals made even the slightest step forward on a project, their day was good; however, a minor setback resulted in a bad day. Furthermore, Amabile found that negative setbacks were more than twice as strong as positive steps forward in their effect on individuals' happiness that day.
[2]
Researchers examined the negativity bias with respect to reward and punishment. The findings suggest that learning proceeds faster from negative reinforcement than from positive reinforcement.
[11]
Researchers analyzed language to study the negativity bias. More of the emotional words in the human lexicon are negative than positive: one study found that 62% of emotional words were negative and 32% positive, and that 74% of the English words describing personality traits are negative.
[12]
Researchers studied facial expressions in order to study the negativity bias. Participants' facial expressions were monitored as they were exposed to pleasant, neutral, and unpleasant odors. The results show that participants' negative reactions to unpleasant odors were stronger than their positive reactions to pleasant odors.
[12]
Researchers also tested negativity bias with children as the participants with respect to facial expressions.
Children perceived both negative and neutral facial expressions as negative.
[13]
Examples
Researchers have found that children and adults recall unpleasant memories better than positive ones, and can recall detailed descriptions of unpleasant behaviors more readily than of positive ones. As humans, we learn faster when we have negative reinforcement.
[11]
The negativity bias plays a key role in maintaining a healthy marriage. Couples who engage in both negative and positive interactions stay together, but the interactions should not be split 50-50: because the brain weighs negative situations and experiences more heavily than positive ones, the ratio in marriages must be about five-to-one, with couples engaging in five times as many positive experiences as negative ones.
[10]
Most people have pleasant experiences with dogs throughout their lives; however, if someone is once attacked or bitten by a dog, they will most likely be scared of dogs and rely more heavily on that one unpleasant experience than on the many pleasant ones.
[14]
The use of social media by large organizations also demonstrates the negativity bias. McDonald's used Twitter to get customers to tell their favorite stories of their experiences with the restaurant (#McFail[15]). Out of the 79,000 tweets about McDonald's, 2,000 were negative. Even though positive tweets far outnumbered negative ones, most of the headlines focused on the failure of the campaign.
[16]
Managers who refuse to give employees new opportunities because of past mistakes, or because of something they disliked about a previous project, provide a common example of the negativity bias.
[17]
Is bad stronger than good?
Roy F. Baumeister, a professor of social psychology at Florida State University, explored the negativity bias in a co-authored 2001 journal article entitled "Bad Is Stronger Than Good". In one experiment, participants gained or lost the same amount of money ($50); the findings showed that people are more upset about losing money than they are pleased about gaining it. Baumeister also found that negative events have longer-lasting effects on emotions than positive events do. We also tend to think that people who say negative things are smarter than those who say positive things, which makes us give more weight to critical reviews and insights.
[2]
The tendency for bad to be stronger than good appears in almost every aspect of human existence. For example, a bad first impression is remembered far more easily than a good one, and the person who made it will have a harder time changing it. Likewise, when someone receives feedback on a presentation or a finished job, negative feedback makes a much more profound impact on the person receiving it. These are everyday examples of negativity affecting humans more strongly than positivity, a tendency that plays into most situations a person faces throughout life.
[12]
References
[1] Hanson, Rick. "Confronting the Negativity Bias" (http://www.rickhanson.net/your-wise-brain/how-your-brain-makes-you-easily-intimidated). Retrieved 8 October 2012.
[2] Tugend, Alina. "Praise Is Fleeting, but Brickbats We Recall" (http://www.nytimes.com/2012/03/24/your-money/why-people-remember-negative-events-more-than-positive-ones.html?pagewanted=all&_r=0). Retrieved 9 October 2012.
[3] Moon, Tom. "Are We Hardwired for Unhappiness?" (http://www.tommoon.net/articles/are_we_hardwired-1.html). Retrieved October 25, 2012.
[4] Manley, Ron. "The Nervous System and Self-Regulation" (http://drronmanley.com/wisdom/the-nervous-system-and-self-regulation/). Retrieved October 25, 2012.
[5] Kanouse, David. "Explaining Negativity Biases in Evaluation and Choice Behavior: Theory and Research" (http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=6335). Retrieved October 25, 2012.
[6] http://psychology.about.com/od/cognitivepsychology/a/memory_retrival.htm
[7] http://eesenor.blogspot.com/2010/04/definitiveness.html
[8] http://www.turnyourhead.com/psych.php
[9] Hamlin, J. Kiley et al. "Three-month-olds show a negativity bias in their social evaluations", Developmental Science 13 (6), 2010, pp. 923–929. Retrieved 2012-10-02.
[10] Marano, Hara E. "Our Brain's Negative Bias" (http://www.psychologytoday.com/articles/200306/our-brains-negative-bias). Psychology Today. Sussex Publishers, LLC. Retrieved 9 October 2012.
[11] Haizlip, Julie et al. "Perspective: The Negativity Bias, Medical Education, and the Culture of Academic Medicine: Why Culture Change Is Hard" (http://journals.lww.com/academicmedicine/Fulltext/2012/09000/Perspective___The_Negativity_Bias,_Medical.19.aspx). Retrieved October 3, 2012.
[12] Bosman, Manie. "You Might Not Like it, But Bad is Stronger than Good" (http://www.strategicleadershipinstitute.net/news/you-might-not-like-it-but-bad-is-stronger-than-good/). Retrieved 9 October 2012.
[13] Tottenham, N.; Phuong, Flannery, Gabard-Durnam, & Goff (2012). "A Negativity Bias for Ambiguous Facial-Expression Valence During Childhood: Converging Evidence From Behavior and Facial Corrugator Muscle Responses". Emotion. doi:10.1037/a0029431.
[14] Moon, Tom. "Are We Hardwired for Unhappiness?" (http://www.tommoon.net/articles/are_we_hardwired-1.html). Retrieved 8 October 2012.
[15] http://www.businessesgrow.com/tag/psychology-and-social-media-2/
[16] Schaefer, Mark. "We are all standing on digital quicksand" (http://www.businessesgrow.com/tag/psychology-and-social-media-2/). Retrieved October 25, 2012.
[17] Gonzalez, Al. "Leading through Negativity" (http://www.aboutleaders.com/bid/160556/Leading-through-Negativity). Retrieved October 25, 2012.
Further reading
Dong, G. et al. (2011). "Early negativity bias occurring prior to experiencing of emotion: An ERP study" (http://psycnet.apa.org/index.cfm?fa=buy.optionToBuy&id=2011-02799-002).
Heath, Chip; Heath, Dan. Switch: How to Change Things When Change Is Hard.
"Bad is Stronger Than Good" (http://www.carlsonmba.umn.edu/Assets/71516.pdf). Article.
Sonsino, D. (2011). "A note on negativity bias and framing response asymmetry" (http://www.springerlink.com/content/f4h28734v301068u/).
External links
Theory and Research (http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=6335). Information on the theoretical aspects of the negativity bias.
Negativity Bias – description (http://www.youtube.com/watch?v=E09077HRurg). Video.
Neglect of probability
The neglect of probability, a type of cognitive bias, is the tendency to completely disregard probability when making a decision under uncertainty; it is one simple way in which people regularly violate the normative rules of decision making. Small risks are typically either neglected entirely or hugely overrated; the continuum between the extremes is ignored. The term probability neglect was coined by Cass Sunstein.
[1]
There are many related ways in which people violate the normative rules of decision making with regard to probability, including the hindsight bias, the neglect of prior base rates, and the gambler's fallacy. This bias is notably different from the preceding ones, however, because here the actor completely disregards probability when deciding, instead of using probability incorrectly as in the examples above.
Baron, Granato, Spranca, and Teubal (1993) studied the bias by asking children the following question:
Susan and Jennifer are arguing about whether they should wear seat belts when they ride in a car.
Susan says that you should. Jennifer says you shouldn't... Jennifer says that she heard of an accident
where a car fell into a lake and a woman was kept from getting out in time because of wearing her seat
belt, and another accident where a seat belt kept someone from getting out of the car in time when there
was a fire. What do you think about this?
Jonathan Baron (2000) notes that subject X responded in the following manner:
A: Well, in that case I don't think you should wear a seat belt.
Q (interviewer): How do you know when that's gonna happen?
A: Like, just hope it doesn't!
Q: So, should you or shouldn't you wear seat belts?
A: Well, tell-you-the-truth we should wear seat belts.
Q: How come?
A: Just in case of an accident. You won't get hurt as much as you will if you didn't wear a seat belt.
Q: OK, well what about these kinds of things, when people get trapped?
A: I don't think you should, in that case.
It is clear that subject X completely disregards the probability of an accident happening relative to the probability of being hurt by the seat belt in making the decision. A normative model for this decision would advise the use of expected-utility theory to decide which option would likely maximize utility. This would involve weighting the change in utility under each possible outcome by the probability of that outcome occurring, something that subject X ignores.
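To make the normative calculation concrete, here is a minimal sketch of the expected-utility comparison. All probabilities and utilities are invented for illustration, not taken from Baron's study; the point is only the probability-weighting step that subject X skips:

```python
# Expected-utility sketch for the seat-belt decision. All numbers are
# hypothetical; what matters is weighting each outcome by its probability.

p_accident = 1e-5       # assumed chance of a serious accident on a trip
p_belt_traps = 0.01     # assumed chance, given an accident, the belt traps you

u_unharmed = 0.0        # baseline utility: nothing happens
u_hurt_belted = -50.0   # injured while restrained
u_hurt_unbelted = -200.0  # injured while unrestrained
u_trapped = -300.0      # trapped by the belt (fire, submersion)

# Weight each outcome's utility by the probability of that outcome.
eu_belt = (1 - p_accident) * u_unharmed + p_accident * (
    (1 - p_belt_traps) * u_hurt_belted + p_belt_traps * u_trapped)
eu_no_belt = (1 - p_accident) * u_unharmed + p_accident * u_hurt_unbelted

print(f"EU(wear belt) = {eu_belt:.6f}")    # -0.000525
print(f"EU(no belt)   = {eu_no_belt:.6f}")  # -0.002000
# Wearing the belt maximizes expected utility: the rare "trapped" outcome
# is discounted by its tiny probability rather than treated as certain.
```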
Another subject responded to the same question:
A: If you have a long trip, you wear seat belts half way.
Q: Which is more likely?
A: That you'll go flyin' through the windshield.
Q: Doesn't that mean you should wear them all the time?
A: No, it doesn't mean that.
Q: How do you know if you're gonna have one kind of accident or the other?
A: You don't know. You just hope and pray that you don't.
Here again, the subject disregards probability in making the decision by treating each possible outcome as equally likely in his reasoning.
Baron (2000) suggests that adults may suffer from the bias as well, especially when it comes to difficult decisions
like a medical decision under uncertainty. This bias could make actors drastically violate expected-utility theory in
their decision making, especially when a decision must be made in which one possible outcome has a much lower or
higher utility but a small probability of occurring (e.g. in medical or gambling situations). In this aspect, the neglect
of probability bias is similar to the neglect of prior base rates effect.
In another example of near-total neglect of probability, Rottenstreich and Hsee (2001) found that the typical subject was willing to pay $10 to avoid a 99% chance of a painful electric shock, but only $7 to avoid a 1% chance of the same shock. They suggest that probability is more likely to be neglected when the outcomes are emotion-arousing.
References
Baron, J. (2000). Thinking and Deciding (3rd ed.). Cambridge University Press. pp. 260–261.
Rottenstreich, Y. & Hsee, C. K. (2001). "Money, kisses, and electric shocks: on the affective psychology of risk". Psychological Science, 12, 185–190.
[1] Kahneman, D. (2011). Thinking, Fast and Slow (http://www.penguin.co.uk/nf/Book/BookDisplay/0,,9780141918921,00.html), Allen Lane, p. 143 f.
Normalcy bias
The normalcy bias, or normality bias, refers to a mental state people enter when facing a disaster. It causes people to underestimate both the possibility of a disaster occurring and its possible effects. This often results in situations where people fail to adequately prepare for a disaster and, on a larger scale, in governments failing to include the populace in their disaster preparations. The assumption underlying the normalcy bias is that since a disaster has never occurred, it never will. The bias also leaves people unable to cope with a disaster once it occurs: people with a normalcy bias have difficulty reacting to something they have not experienced before. People also tend to interpret warnings in the most optimistic way possible, seizing on any ambiguities to infer a less serious situation.
[1]
Possible causes
The normalcy bias may be caused in part by the way the brain processes new data. Research suggests that even when
the brain is calm, it takes 8–10 seconds to process new information. Stress slows the process, and when the brain
cannot find an acceptable response to a situation, it fixates on a single and sometimes default solution that may or
may not be correct. An evolutionary reason for this response could be that paralysis gives an animal a better chance
of surviving an attack; predators are less likely to eat prey that isn't struggling.
[2]
Effects
The normalcy bias often results in unnecessary deaths in disaster situations. The lack of preparation for disasters
often leads to inadequate shelter, supplies, and evacuation plans. Even when all these things are in place, individuals
with a normalcy bias often refuse to leave their homes. Studies have shown that more than 70% of people check with
others before deciding to evacuate.
[2]
The normalcy bias also causes people to drastically underestimate the effects of the disaster. They assume that everything will be all right even as information from the radio, television, or neighbors gives them reason to believe there is a risk. This creates a cognitive dissonance that they then must work to eliminate. Some manage to
eliminate it by refusing to believe new warnings coming in and refusing to evacuate (maintaining the normalcy bias),
while others eliminate the dissonance by escaping the danger. The possibility that some may refuse to evacuate
causes significant problems in disaster planning.
[3]
Examples
Not limited to, but most notably: The Nazi genocide of millions of Jews. Even after knowing friends and family were
being taken against their will, the Jewish community still stayed put, and refused to believe something was "going
on." Because of the extreme nature of the situation it is understandable why most would deny it.
Little Sioux Scout camp in June 2008. Despite being in the middle of "Tornado Alley," the campground had no
tornado shelter to offer protection from a strong tornado.
[4]
New Orleans before Hurricane Katrina. Inadequate government and citizen preparation and the denial that the levees
could fail were an example of the normalcy bias, as were the thousands of people who refused to evacuate.
Prevention
The negative effects can be combated through the four stages of disaster response:
preparation, including publicly acknowledging the possibility of disaster and forming contingency plans
warning, including issuing clear, unambiguous, and frequent warnings and helping the public to understand and
believe them
impact, the stage at which the contingency plans take effect and emergency services, rescue teams, and disaster
relief teams work in tandem
aftermath, or reestablishing equilibrium after the fact by providing supplies and aid to those in need
References
[1] [1] "Finding Something to Do: the Disaster Continuity Care Model".
[2] [2] "How to Survive".
[3] "Information Technology for Advancement of Evacuation" (http:/ / www. ysk. nilim. go. jp/ kakubu/ engan/ engan/ taigai/ hapyoronbun/
07-17. pdf).
[4] "Thoughts about Tornadoes and Camping Safety after the Iowa Tragedy on June 11, 2008" (http:/ / www. flame. org/ ~cdoswell/
scout_tragedy/ scout_tragedy_2008. html).
External links
Doswell, Chuck. "Thoughts about Tornadoes and Camping Safety after the Iowa Tragedy on June 11, 2008." Flame.org, 26 July 2008. http://web.archive.org/web/20081120003313/http://www.flame.org/~cdoswell/Scout_tragedy/Scout_tragedy_2008.html
Oda, Katsuya. "Information Technology for Advancement of Evacuation." http://www.ysk.nilim.go.jp/kakubu/engan/engan/taigai/hapyoronbun/07-17.pdf
Ripley, Amanda. "How to Get Out Alive." Time, 25 Apr. 2005. http://www.time.com/time/magazine/article/0,9171,1053663,00.html
Valentine, Pamela V., and Thomas E. Smith. "Finding Something to Do: the Disaster Continuity Care Model." Brief Treatment and Crisis Intervention 2 (2002): 183–196.
Observer-expectancy effect
"Participant-observer effect" redirects here.
The observer-expectancy effect (also called the experimenter-expectancy effect, expectancy bias, observer
effect, or experimenter effect) is a form of reactivity in which a researcher's cognitive bias causes them to
unconsciously influence the participants of an experiment. It is a significant threat to a study's internal validity, and
is therefore typically controlled using a double-blind experimental design.
An example of the observer-expectancy effect is demonstrated in music backmasking, in which hidden verbal messages are said to be audible when a recording is played backwards. Some people expect to hear hidden messages when songs are reversed, and therefore hear them, while to others the same recording sounds like nothing more than random noise. Often when a song is played backwards, a listener will fail to notice the "hidden" lyrics until they are explicitly pointed out, after which they seem obvious. Other prominent examples include facilitated communication and dowsing.
External links
Skeptic's Dictionary on the Experimenter Effect
[1]
An article on expectancy effects in paranormal investigation
[2]
Another article by Rupert Sheldrake
[3]
References
[1] http://skepdic.com/experimentereffect.html
[2] http://www.williamjames.com/Science/ESP.htm
[3] http://www.sheldrake.org/experiments/expectations/
Omission bias
The omission bias is an alleged type of cognitive bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions). It is contentious whether this represents a systematic error in thinking or is supported by a substantive moral theory. For a consequentialist, judging harmful actions as worse than inaction would indeed be inconsistent, but deontological ethics may, and normally does, draw a moral distinction between doing and allowing.
[1]
Spranca, Minsk and Baron extended the omission bias to judgments of the morality of choices. In one scenario, John, a tennis player, would be facing a tough opponent the next day in a decisive match. John knows his opponent is allergic to a particular food substance. Subjects were presented with two conditions: John recommends the food containing the allergen to hurt his opponent's performance, or the opponent himself orders the allergenic food and John says nothing. A majority of people judged John's action of recommending the allergenic food to be more immoral than John's inaction of not informing the opponent of the allergenic substance.
References
[1] Frances Howard-Snyder, "Doing vs. Allowing Harm" (http://plato.stanford.edu/entries/doing-allowing/), Stanford Encyclopedia of Philosophy.
Baron, Jonathan (1988, 1994, 2000). Thinking and Deciding. Cambridge University Press.
Asch, DA; Baron, J; Hershey, JC; Kunreuther, H; Meszaros, JR; Ritov, I; Spranca, M. "Omission bias and pertussis vaccination". Medical Decision Making 1994; 14: 118–24.
Optimism bias
The optimism bias (also known as unrealistic or comparative optimism) is a bias that causes people to believe that they are less at risk of experiencing a negative event than others. Four factors can cause a person to be optimistically biased: their desired end state, their cognitive mechanisms, the information they have about themselves versus others, and their overall mood.
[1]
The optimistic bias is seen in a number of situations. For example:
people believing that they are less at risk of being a crime victim,
[2]
smokers believing that they are less likely to
contract lung cancer or disease than other smokers, first-time bungee jumpers believing that they are less at risk of an
injury than other jumpers,
[3]
or traders who think they are less exposed to losses in the markets.
[4]
Although the optimism bias occurs for both positive events (such as believing oneself to be more financially successful than others) and negative events (such as being less likely to have a drinking problem), there is more research and evidence suggesting that the bias is stronger for negative events.
[1][5]
Different consequences result from these two types of events, however: positive events often lead to feelings of well-being and self-esteem, while negative events lead to consequences involving more risk, such as engaging in risky behaviors and not taking precautionary measures for safety.
[1]
Measuring optimistic bias
The optimistic bias is typically measured through two determinants of risk: absolute risk, where individuals are
asked to estimate their likelihood of experiencing a negative event compared to their actual chance of experiencing a
negative event (comparison against self), and comparative risk, where individuals are asked to estimate the
likelihood of experiencing a negative event (their personal risk estimate) compared to others of the same age and sex
(a target risk estimate).
[5][6]
Problems can occur when trying to measure absolute risk because it is extremely
difficult to determine the actual risk statistic for a person.
[6][7]
Therefore, the optimistic bias is primarily measured in
comparative risk forms, where people compare themselves against others, through direct and indirect comparisons.
[3]
Direct comparisons ask whether an individual's own risk of experiencing an event is lower than, higher than, or equal to someone else's risk, while indirect comparisons ask individuals to provide separate estimates of their own risk of experiencing an event and of others' risk of experiencing the same event.
[6][8]
After obtaining scores, researchers are able to use the information to determine if there is a difference in the average
risk estimate of the individual compared to the average risk estimate of their peers. Generally, for negative events, the mean risk estimate of an individual appears lower than the risk estimate given for others.
[6]
This is then used to demonstrate the bias'
effect. The optimistic bias can only be defined at a group level, because at an individual level the positive assessment
could be true.
[5]
Likewise, difficulties can arise in measurement procedures, as it is difficult to determine when
someone is being optimistic, realistic, or pessimistic.
[6][8]
Research suggests that the bias comes from overestimating group risks rather than from underestimating one's own risk.
[6]
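A rough sketch of how such a group-level score might be computed follows; the rating scale and the sample data are invented for illustration:

```python
# Group-level test for comparative optimism (illustrative data only).
# Each participant rates their own risk of a negative event relative to
# the average peer of the same age and sex, on a -3..+3 scale, where
# negative values mean "my risk is below average" (a direct comparison).

from statistics import mean

ratings = [-2, -1, 0, -3, -1, 1, -2, 0, -1, -2]  # invented sample

group_mean = mean(ratings)
print(f"Mean comparative rating: {group_mean:+.2f}")  # -1.10

# Any single negative rating could be accurate, but not everyone can be
# below average: a group mean reliably below zero for a negative event
# is what demonstrates the optimistic bias.
if group_mean < 0:
    print("This group shows an optimistic bias for the negative event.")
```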
Factors of optimistic bias
The factors leading to the optimistic bias can be categorized into four different groups: desired end states of
comparative judgment, cognitive mechanisms, information about the self versus a target, and underlying affect.
[1]
These are explained in more detail below.
1. Desired end states of comparative judgment
Many explanations for the optimistic bias come from the goals people want to achieve and the outcomes they wish to see.
[1]
People tend to view their risks as lower than others' because they believe that this is what other people want to see.
These explanations include self-enhancement, self-presentation, and perceived control.
Self-enhancement
Self-enhancement suggests that optimistic predictions are satisfying and that it feels good to think that positive
events will happen.
[1]
People can control their anxiety and other negative emotions if they believe they are better off
than others.
[1]
People tend to focus on finding information that supports what they want to see happen, rather than
what will happen to them.
[1]
With regard to the optimistic bias, individuals perceive events more favorably because that is what they would like the outcome to be. This also suggests that people might lower their risk estimates relative to others in order to look better than average: they are less at risk than others and therefore better off.
[1]
Self-presentation
Studies suggest that people attempt to establish and maintain a desired personal image in social situations. People are motivated to present themselves to others in a good light, and some researchers suggest that the optimistic bias is representative of self-presentational processes: people want to appear better off than others. However, this is not through conscious effort. In a study where participants believed their driving skills would be tested either in real life or in driving simulations, people who believed they were to be tested showed less optimistic bias and were more modest about their skills than individuals who would not be tested.
[9]
Studies also suggest that individuals who
present themselves in a pessimistic and more negative light are generally less accepted by the rest of society.
[10]
This
might contribute to overly optimistic attitudes.
Personal control/perceived control
People tend to be more optimistically biased when they believe they have more control over events than
others.
[1][7][11]
For example, people are more likely to think that they will not be harmed in a car accident if they are
driving the vehicle.
[11]
Another example is that if someone believes that they have a lot of control over becoming
infected with HIV, they are more likely to view their risk of contracting the disease to be low.
[6]
Studies have
suggested that the greater perceived control someone has, the greater their optimistic bias.
[11][12]
Stemming from
this, control is a stronger factor when it comes to personal risk assessments, but not when assessing others.
[7][11]
A meta-analysis reviewing the relationship between the optimistic bias and perceived control found that a number of
moderators contribute to this relationship.
[7]
In previous research, participants from the United States generally had
higher levels of optimistic bias relating to perceived control than those of other nationalities. Students also showed
larger levels of the optimistic bias than non-students.
[7]
The format of the study also demonstrated differences in the
relationship between perceived control and the optimistic bias: direct methods of measurement suggested greater
perceived control and greater optimistic bias as compared to indirect measures of the bias.
[7]
The optimistic bias is
strongest in situations where an individual needs to rely heavily on direct action and responsibility of situations.
[7]
An opposite factor to perceived control is that of prior experience.
[6]
Prior experience is typically associated with less optimistic bias, which some studies suggest is because it either decreases the perception of personal control or makes it easier for individuals to imagine themselves at risk.
[6][12]
Prior experience suggests that events may be less controllable than previously believed.
[6]
2. Cognitive mechanisms
The optimistic bias is possibly also influenced by three cognitive mechanisms that guide judgments and
decision-making processes: the representativeness heuristic, singular target focus, and interpersonal distance.
[1]
Representativeness heuristic
The estimates of likelihood associated with the optimistic bias are based on how closely an event matches a person's
overall idea of the specific event.
[1]
Some researchers suggest that the representativeness heuristic is a reason for the
optimistic bias: individuals tend to think in stereotypical categories rather than about their actual targets when
making comparisons.
[12]
For example, when drivers are asked to think about a car accident, they are more likely to picture a bad driver than the average driver.
[1]
Individuals compare themselves with the negative elements that come to mind, rather than making an overall accurate comparison between themselves and another driver. Additionally, when individuals were asked to compare themselves with friends, they chose more vulnerable friends based on the events they were considering.
[13]
Individuals generally chose a specific friend based on whether the friend resembled a given example, rather than an average friend.
[13]
People find examples that relate directly to what they are asked, resulting in judgments driven by the representativeness heuristic.
Singular target focus
One of the difficulties of the optimistic bias is that people know more about themselves than they do about others.
While individuals know how to think about themselves as a single person, they still think of others as a generalized group, which leads to biased estimates and an inability to sufficiently understand the target or comparison group.
Likewise, when making judgments and comparisons about their risk compared to others, people generally ignore the
average person, but primarily focus on their own feelings and experiences.
[1]
Interpersonal distance
Perceived risk differences occur depending on how far or close a compared target is to an individual making a risk
estimate.
[1]
The greater the perceived distance between the self and the comparison target, the greater the perceived
difference in risk. When one brings the comparison target closer to the individual, risk estimates appear closer
together than if the comparison target was someone more distant to the participant.
[1]
There is support for perceived
social distance in determining the optimistic bias.
[14]
Comparing personal and target risk at the in-group level produces more perceived similarity than thinking about out-group comparisons, which leads to greater perceived differences.
[14]
In one study, researchers manipulated the social context of the comparison group: participants made judgements for two different comparison targets, the typical student at their university and a typical student at another university. The findings showed that people not only worked with the closer comparison first, but also gave ratings closer to their own for the closer target than for the "more different" group.
[14]
Studies have also found that people demonstrate more optimistic bias when making comparisons with a vague individual, but that the bias is reduced when the other is a familiar person, such as a friend or family member. This is because people have detailed information about the individuals closest to them but not the same information about other people.
[5]
3. Information about self versus target
Individuals know a lot more about themselves than they do about others.
[1]
Because information about others is less available, this asymmetry leads people to draw specific conclusions about their own risk while having a harder time drawing conclusions about the risks of others. The result is a difference between judgments about self-risk and judgments about others' risk, widening the gap of the optimistic bias.
[1]
Person-positivity bias
Person-positivity bias is the tendency to evaluate an object more favorably the more the object resembles an
individual human being. Generally, the more a comparison target resembles a specific person, the more familiar it
will be. However, groups of people are considered to be more abstract concepts, which leads to less favorable
judgments. With regards to the optimistic bias, when people compare themselves to an average person, whether
someone of the same sex or age, the target continues to be viewed as less human and less personified, which will
result in less favorable comparisons between the self and others.
[1]
Egocentric thinking
Egocentric thinking refer to how individuals know more of their own personal information and risk that they can use
to form judgments and make decisions. One difficulty, though, is that people have a large amount of knowledge
about themselves, but no knowledge about others. Therefore, when making decisions, people have to use other
information available to them, such as population data, in order to learn more about their comparison group.
[1]
This
can relate to an optimism bias because while people are using the available information they have about themselves,
they have more difficulty understanding correct information about others.
[1]
This self-centered thinking is seen most
commonly in adolescents and college students, who generally think more about themselves than others.
[15]
It is also possible to escape egocentric thinking. In one study, researchers had one group of participants list all the factors that influenced their chances of experiencing a variety of events, and then had a second group read the list. Those who read the list showed less optimistic bias in their own reports. It is possible that greater knowledge about others and their perceptions of their chances of risk brings the comparison group closer to the participant.
[12]
Underestimating average person's control
Also regarding egocentric thinking, it is possible that individuals underestimate the amount of control the average
person has. This is explained in two different ways:
1. People underestimate the control that others have in their lives.
[12]
2. People completely overlook that others have control over their own outcomes.
For example, many smokers believe that they are taking all necessary precautionary measures so that they won't get
lung cancer, such as smoking only once a day, or using filtered cigarettes, and believe that others are not taking the
same precautionary measures. However, it is likely that many other smokers are doing the same things.
[1]
4. Underlying affect
The last factor in the optimistic bias is underlying affect and affective experience. Research has found that people
show less optimistic bias when experiencing a negative mood, and more optimistic bias when in a positive mood.
[6]
Sad moods reflect greater memories of negative events, which lead to more negative judgments, while positive
moods promote happy memories and more positive feelings.
[1]
This suggests that overall negative moods, including
depression, result in increased personal risk estimates but less optimistic bias overall.
[6]
Anxiety also leads to less
optimistic bias, continuing to suggest that overall positive experiences and positive attitudes lead to more optimistic
bias in events.
[6]
Why do we care about the optimistic bias?
In health, the optimistic bias tends to prevent individuals from taking preventive measures for good health.
[16]
Therefore, researchers need to be aware of the optimistic bias and the ways it can prevent people from taking precautionary measures in life choices. For example, people who underestimate their comparative risk of heart disease know less about heart disease, and even after reading an article with more information are still less concerned about their risk of heart disease.
[8]
Because the optimistic bias can be a strong force in decision-making, it is important to look at how risk perception is determined and how it results in preventive behaviors. Risk perceptions are particularly important for individual behaviors such as exercise, diet, and even sunscreen use.
[17]
A large portion of risk prevention focuses on adolescents. Especially with health risk perception, adolescence is
associated with an increased frequency of risky health-related behaviors such as smoking, drug use, and unsafe sex. While adolescents are aware of the risks, this awareness does not change their behavior.
[18]
Adolescents with strong
positive optimistic bias toward risky behaviors had an overall increase in the optimistic bias with age.
[16]
However, these tests often suffer from methodological problems. Unconditional risk questions are used consistently in cross-sectional studies, which creates problems: such questions ask about the likelihood of an action occurring but do not determine whether there is an outcome, nor do they compare events that have not happened with events that have.
[17]
Concerning vaccines, the perceptions of those who have not been vaccinated are compared to the perceptions of those who have. Other problems include the failure to know a person's perception of a risk.
[17]
Knowing this information will be helpful for continued research on optimistic bias and preventative behaviors.
Attempts to alter and eliminate optimistic bias
Studies have shown that it is very difficult to eliminate the optimistic bias. Some believe that trying to reduce it will encourage people to adopt health-protective behaviors, but researchers suggest that the bias cannot be reduced and that attempts to reduce it generally leave people even more optimistically biased.
[19]
In a study of four different attempts to reduce the bias (providing lists of risk factors, having participants perceive themselves as inferior to others, asking participants to think of high-risk individuals, and having them give attributes of why they were at risk), researchers found that every attempt increased the bias rather than decreased it.
[19]
Although studies have tried to reduce the optimistic bias by reducing distance, overall the optimistic bias still remains.
[14]
Although research has suggested that it is very difficult to eliminate the bias, some factors may help close the gap of the optimistic bias between an individual and their target risk group. First, placing the comparison group closer to the individual can reduce the optimistic bias: studies found that when individuals were asked to make comparisons between themselves and close friends, there was almost no difference in the likelihood of an event occurring.
[13]
Additionally, actually experiencing an event leads to a decrease in the optimistic bias.
[6]
While this only applies to events with prior experience, knowing what was previously unknown leaves a person less optimistic that the event will not occur.
Optimism bias in policy, planning, and management
Optimism bias influences decisions and forecasts in policy, planning, and management, e.g., the costs and
completion times of planned decisions tend to be underestimated and the benefits overestimated due to optimism
bias. The term planning fallacy for this effect was first proposed by Daniel Kahneman and Amos Tversky.
[20][21]
Reference class forecasting was developed by Oxford professor Bent Flyvbjerg to reduce optimism bias and increase
forecasting accuracy by framing decisions so they take into account available distributional information about
previous, comparable outcomes.
[22]
Daniel Kahneman, winner of the Nobel Prize in economics, calls the use of reference class forecasting "the single most important piece of advice regarding how to increase accuracy in forecasting."
[23]
Pessimistic bias
Researchers have not coined a separate term for a pessimism bias, because the principles of the optimistic bias continue to be in effect in situations where individuals regard themselves as worse off than others.
[1]
Optimism may arise either from a distortion of personal estimates, representing personal optimism, or from a distortion of estimates for others, representing personal pessimism,
[1]
making a separate term "pessimistic bias" redundant.
References
[1] Shepperd, James A.; Patrick Carroll, Jodi Grace, Meredith Terry (2002). "Exploring the Causes of Comparative Optimism" (http://www.psych.ufl.edu/~shepperd/articles/PsychBelgica2002.pdf). Psychologica Belgica 42: 65–98.
[2] Chapin, John; Grace Coleman (2009). "Optimistic Bias: What you Think, What you Know, or Whom you Know?" (http://findarticles.com/p/articles/mi_6894/is_1_11/ai_n31528106/). North American Journal of Psychology 11 (1): 121–132.
[3] Weinstein, Neil D.; William M. Klein (1996). "Unrealistic Optimism: Present and Future". Journal of Social and Clinical Psychology 15 (1): 1–8. doi:10.1521/jscp.1996.15.1.1.
[4] Elder, Alexander. Trading for a Living; Psychology, Trading Tactics, Money Management. John Wiley & Sons, 1993. Intro sections "Psychology is the Key" & "The Odds are against You", and Part I "Individual Psychology", Section 5 "Fantasy versus Reality". ISBN 0-47159224-2.
[5] Gouveia, Susana O.; Valerie Clarke (2001). "Optimistic bias for negative and positive events". Health Education 101 (5): 228–234. doi:10.1108/09654280110402080.
[6] Helweg-Larsen, Marie; James A. Shepperd (2001). "Do Moderators of the Optimistic Bias Affect Personal or Target Risk Estimates? A Review of the Literature" (http://users.dickinson.edu/~helwegm/pdfversion/do_moderators_of_the_optimistic_bias.pdf). Personality and Social Psychology Review 5 (1): 74–95. doi:10.1207/S15327957PSPR0501_5.
[7] Klein, Cynthia T. F.; Marie Helweg-Larsen (2002). "Perceived Control and the Optimistic Bias: A Meta-analytic Review" (http://www2.dickinson.edu/departments/psych/helwegm/PDFVersion/Perceived_control_and_the_optimistic.pdf). Psychology and Health 17 (4): 437–446. doi:10.1080/0887044022000004920.
[8] Radcliffe, Nathan M.; William M. P. Klein (2002). "Dispositional, Unrealistic, and Comparative Optimism: Differential Relations with the Knowledge and Processing of Risk Information and Beliefs about Personal Risk". Personality and Social Psychology Bulletin 28: 836–846. doi:10.1177/0146167202289012.
[9] McKenna, F. P.; R. A. Stanier, C. Lewis (1991). "Factors underlying illusionary self-assessment of driving skill in males and females". Accident Analysis and Prevention 23: 45–52. doi:10.1016/0001-4575(91)90034-3. PMID 2021403.
[10] Helweg-Larsen, Marie; Pedram Sadeghian, Mary S. Webb (2002). "The stigma of being pessimistically biased" (http://users.dickinson.edu/~helwegm/PDFVersion/The_Stigma_of_Being_Pessimistically_Biased.pdf). Journal of Social and Clinical Psychology 21 (1): 92–107.
[11] Harris, Peter (1996). "Sufficient grounds for optimism?: The relationship between perceived controllability and optimistic bias" (http://search.proquest.com/docview/61536420/136283117CD63890645/1?accountid=10506). Journal of Social and Clinical Psychology 15 (1): 9–52.
[12] Weinstein, Neil D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology 39: 806–820. doi:10.1037/0022-3514.39.5.806.
[13] Perloff, Linda S.; Barbara K. Fetzer (1986). "Self-other judgments and perceived vulnerability to victimization". Journal of Personality and Social Psychology 50: 502–510. doi:10.1037/0022-3514.50.3.502.
[14] Harris, P.; Wendy Middleton, Richard Joiner (2000). "The typical student as an in-group member: eliminating optimistic bias by reducing social distance". European Journal of Social Psychology 30: 235–253. doi:10.1002/(SICI)1099-0992(200003/04)30:2<235::AID-EJSP990>3.0.CO;2-G.
[15] Weinstein, Neil D. (1987). "Unrealistic Optimism About Susceptibility to Health Problems: Conclusions from a Community-Wide Sample". Journal of Behavioral Medicine 10 (5): 481–500. doi:10.1007/BF00846146. PMID 3430590.
[16] Bränström, Richard; Yvonne Brandberg (2010). "Health Risk Perception, Optimistic Bias, and Personal Satisfaction" (http://www.ajhb.org/ISSUES/2010/2/02MarApr0710Branstrom.pdf). American Journal of Health Behavior 34 (2): 197–205. PMID 19814599.
[17] Brewer, Noel T.; Gretchen B. Chapman, Fredrick X. Gibbons, Meg Gerrard, Kevin D. McCaul, Neil D. Weinstein (2007). "Meta-analysis of the Relationship Between Risk Perception and Health Behavior: The Example of Vaccination" (http://www.unc.edu/~ntbrewer/pubs/2007, brewer, chpaman, gibbons, et al.pdf). Health Psychology 26 (2): 136–145. doi:10.1037/0278-6133.26.2.136.
[18] Gerrard, Meg; Frederick X. Gibbons, Alida C. Benthin, Robert M. Hessling (1996). "A Longitudinal Study of the Reciprocal Nature of Risk Behaviors and Cognitions in Adolescents: What You Do Shapes What You Think, and Vice Versa" (http://faculty.weber.edu/eamsel/Classes/Adolescent Risk taking/Lectures/4-5 - Cognitive/Gerrard et al (1996).pdf). Health Psychology 15 (5): 344–354. PMID 8891713.
[19] Weinstein, Neil D.; William M. Klein (1995). "Resistance of Personal Risk Perceptions to Debiasing Interventions". Health Psychology 14 (2): 132–140. doi:10.1037/0278-6133.14.2.132. PMID 7789348.
[20] Pezzo, Mark V.; Litman, Jordan A.; Pezzo, Stephanie P. (2006). "On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks". Personality and Individual Differences 41 (7): 1359–1371. doi:10.1016/j.paid.2006.03.029. ISSN 0191-8869.
[21] Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313–327.
[22] Flyvbjerg, B. (2008). "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice". European Planning Studies 16 (1), January, pp. 3–21. (http://www.sbs.ox.ac.uk/centres/bt/Documents/Curbing Optimism Bias and Strategic Misrepresentation.pdf)
[23] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. p. 251.
Ostrich effect
In behavioral finance, the ostrich effect is the avoidance of apparently risky financial situations by pretending they
do not exist. The name comes from the common (but false)
[1]
legend that ostriches bury their heads in the sand to
avoid danger.
Galai and Sade (2006) explain differences in returns in the fixed-income market by means of a psychological account, which they name the "ostrich effect", attributing this anomalous behavior to an aversion to receiving information about potential interim losses.
[2]
They also provide evidence that traffic to a leading financial portal in Israel is positively related to the performance of the equity market. Later research by George Loewenstein and Duane Seppi found that people in Scandinavia looked up the value of their investments 50% to 80% less often during bad markets.
[3]
References
[1] Karl Kruszelnicki, Ostrich Head in Sand (http://www.abc.net.au/science/articles/2006/11/02/1777947.htm), ABC Science: In Depth.
[2] Galai, Dan; Sade, Orly (2006). "The "Ostrich Effect" and the Relationship between the Liquidity and the Yields of Financial Assets". Journal of Business 79 (5).
[3] Zweig, Jason (September 13, 2008). "Should You Fear the Ostrich Effect?". The Wall Street Journal: p. B1.
Outcome bias
The outcome bias is an error made in evaluating the quality of a decision when the outcome of that decision is
already known.
Overview
People will often judge a past decision by its ultimate outcome rather than on the quality of the decision at the
time it was made, given what was known at that time. This is an error because no decision maker ever knows
whether or not a calculated risk will turn out for the best. The actual outcome of the decision will often be
determined by chance, with some risks working out and others not. Individuals whose judgments are influenced by
outcome bias seemingly hold decision makers responsible for events beyond their control.
Baron and Hershey (1988) presented subjects with hypothetical situations in order to test this.[1] One such example
involved a surgeon deciding whether or not to do a risky surgery on a patient. The surgery had a known probability
of success. Subjects were presented with either a good or bad outcome (in this case the patient living or dying), and
asked to rate the quality of the surgeon's pre-operation decision. Those presented with bad outcomes rated the
decision worse than those presented with good outcomes.
The reason an individual makes this mistake is that he or she incorporates presently available information
when evaluating a past decision. To avoid the influence of outcome bias, one should evaluate a decision by ignoring
information collected after the fact and focusing on what the right answer was, given what was known at the time
the decision was made.
References
[1] Baron, J. & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology, 54(4), 569-579.
Overconfidence effect
The overconfidence effect is a well-established bias in which someone's subjective confidence in their judgments is
reliably greater than their objective accuracy, especially when confidence is relatively high.[1] For example, in some
quizzes, people rate their answers as "99% certain" but are wrong 40% of the time. It has been proposed that a
metacognitive trait mediates the accuracy of confidence judgments,[2] but this trait's relationship to variations in
cognitive ability and personality remains uncertain.[1] Overconfidence is one example of a miscalibration of
subjective probabilities.
Demonstration
The most common way in which overconfidence has been studied is by asking people how confident they are of
specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy,
implying people are more sure that they are correct than they deserve to be. If human confidence had perfect
calibration, judgments with 100% confidence would be correct 100% of the time, 90% confidence correct 90% of the
time, and so on for the other levels of confidence. By contrast, the key finding is that confidence exceeds accuracy so
long as the subject is answering hard questions about an unfamiliar topic. For example, in a spelling task, subjects
were correct about 80% of the time when they were "100% certain."[3] Put another way, the error rate was 20% when
subjects expected it to be 0%. In a series where subjects made true-or-false responses to general knowledge
statements, they were overconfident at all levels. When they were 100% certain of their answer to a question, they
were wrong 20% of the time.[4]
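To make the notion of calibration concrete, here is a minimal sketch in Python of how confidence-accuracy
calibration is typically scored. The data are invented for illustration and are not taken from any study cited here;
answers are grouped by stated confidence, and each group's stated confidence is compared with its actual accuracy.

    # Minimal calibration sketch. The data are hypothetical, echoing the
    # pattern above: "100% certain" answers that are right only ~80% of the time.
    from collections import defaultdict

    def calibration_table(judgments):
        """judgments: iterable of (stated_confidence, was_correct) pairs."""
        buckets = defaultdict(list)
        for confidence, correct in judgments:
            buckets[confidence].append(correct)
        for confidence in sorted(buckets):
            outcomes = buckets[confidence]
            accuracy = sum(outcomes) / len(outcomes)
            print(f"stated {confidence:.0%} -> actual {accuracy:.0%} ({len(outcomes)} items)")

    data = [(1.0, True)] * 8 + [(1.0, False)] * 2 + [(0.9, True)] * 7 + [(0.9, False)] * 3
    calibration_table(data)
    # stated 90% -> actual 70% (10 items)
    # stated 100% -> actual 80% (10 items)

A perfectly calibrated subject would produce a table in which stated and actual percentages match at every level.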
In a confidence-intervals task, where subjects had to judge quantities such as the total egg production of the U.S. or
the total number of physicians and surgeons in the Boston Yellow Pages, they expected an error rate of 2% when
their real error rate was 46%.[5] Once subjects had been thoroughly warned about the bias, they still showed a high
degree of overconfidence.
Overprecision is the excessive confidence that one knows the truth. For reviews, see Harvey (1997) or Hoffrage
(2004).[6][7] Much of the evidence for overprecision comes from studies in which participants are asked about their
confidence that individual items are correct. This paradigm, while useful, cannot distinguish overestimation from
overprecision; they are one and the same in these item-confidence judgments. After making a series of
item-confidence judgments, if people try to estimate the number of items they got right, they do not tend to
systematically overestimate their scores; yet the average of their item-confidence judgments exceeds the count of
items they claim to have gotten right.[8] One possible explanation for this is that item-confidence judgments were
inflated by overprecision, while the count estimates do not demonstrate systematic overestimation.
Confidence intervals
The strongest evidence of overprecision comes from studies in which participants are asked to indicate how precise
their knowledge is by specifying a 90% confidence interval around estimates of specific quantities. If people were
perfectly calibrated, their 90% confidence intervals would include the correct answer 90% of the time.[5] In fact, hit
rates are often as low as 50%, suggesting people have drawn their confidence intervals too narrowly, implying that
they think their knowledge is more accurate than it actually is.
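A hit rate of this kind is straightforward to compute. The following sketch uses invented numbers rather than data
from the studies above; it scores a set of 90% confidence intervals against the true values, where a well-calibrated
judge would score about 90%.

    # Score 90% confidence intervals against the true quantities.
    # All numbers are illustrative only.
    intervals = [(50, 120), (10, 30), (200, 400), (5, 15), (1000, 2000)]
    true_values = [150, 25, 260, 40, 3000]

    hits = sum(low <= truth <= high
               for (low, high), truth in zip(intervals, true_values))
    print(f"hit rate: {hits / len(intervals):.0%}")  # 40%, far below the stated 90%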
Planning fallacy
The planning fallacy describes the tendency for people to overestimate their rate of work or to underestimate how
long it will take them to get things done.[9] It is strongest for long and complicated tasks, and disappears or reverses
for simple tasks that are quick to complete.
Illusion of control
Illusion of control describes the tendency for people to behave as if they might have some control when in fact they
have none.[10] However, evidence does not support the notion that people systematically overestimate how much
control they have; when they have a great deal of control, people tend to underestimate how much control they
have.[11]
Contrary evidence
Wishful thinking effects, in which people overestimate the likelihood of an event because of its desirability, are
relatively rare.[12] This may be in part because people engage in more defensive pessimism in advance of important
outcomes,[13] in an attempt to reduce the disappointment that follows overly optimistic predictions.[14]
Overplacement
Overplacement is the false belief that one is better than others. For a review, see Alicke and Govorun (2005).[15]
Better-than-average effects
Perhaps the most celebrated better-than-average finding is Svenson's (1981) finding that 93% of American drivers
rate themselves as better than the median.[16] The frequency with which school systems claim their students
outperform national averages has been dubbed the "Lake Wobegon effect," after Garrison Keillor's apocryphal town
in which "all the children are above average."[17] Overplacement has likewise been documented in a wide variety of
other circumstances.[18] Kruger (1999), however, showed that this effect is limited to easy tasks in which success is
common or in which people feel competent. For difficult tasks, the effect reverses itself and people believe they are
worse than others.[19]
Comparative-optimism effects
Some researchers have claimed that people think good things are more likely to happen to them than to others,
whereas bad events are less likely to happen to them than to others.[20] But others (Chambers & Windschitl, 2004;
Chambers, Windschitl, & Suls, 2003; Kruger & Burrus, 2004) have pointed out that prior work tended to examine
good outcomes that happened to be common (such as owning one's own home) and bad outcomes that happened to
be rare (such as being struck by lightning).[21][22][23] Event frequency accounts for a proportion of prior findings of
comparative optimism. People think common events (such as living past 70) are more likely to happen to them than
to others, and rare events (such as living past 100) are less likely to happen to them than to others.
Positive illusions
Taylor and Brown (1988) have argued that people cling to overly positive beliefs about themselves, illusions of
control, and beliefs in false superiority, because it helps them cope and thrive.[24] While there is some evidence that
optimistic beliefs are correlated with better life outcomes, most of the research documenting such links is vulnerable
to the alternative explanation that their forecasts are accurate. The cancer patients who are most optimistic about
their survival chances may be optimistic because they have good reason to be.
Contrary evidence
Recent work has critiqued the methodology used in older research on overplacement, calling some of the effects
documented in prior research into question.[25]
Practical implications
"Overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to
remind yourself that they may be in the grip of an illusion."
Daniel Kahneman
[26]
Overconfidence has been called the most pervasive and potentially catastrophic of all the cognitive biases to which
human beings fall victim.[27] It has been blamed for lawsuits, strikes, wars, and stock market bubbles and crashes.
Strikes, lawsuits, and wars could arise from overplacement. If plaintiffs and defendants were prone to believe that
they were more deserving, fair, and righteous than their legal opponents, that could help account for the persistence
of inefficient, enduring legal disputes.[28] If corporations and unions were prone to believe that they were stronger
and more justified than the other side, that could contribute to their willingness to endure labor strikes.[29] If nations
were prone to believe that their militaries were stronger than those of other nations, that could explain their
willingness to go to war.[30]
Overprecision could have important implications for investing behavior and stock market trading. Because
Bayesians cannot agree to disagree,[31] classical finance theory has trouble explaining why, if stock market traders
are fully rational Bayesians, there is so much trading in the stock market. Overprecision might be one answer.[32] If
market actors are too sure that their estimates of an asset's value are correct, they will be too willing to trade with
others who have different information than they do.
Oskamp (1965) tested groups of clinical psychologists and psychology students on a multiple-choice task in which
they drew conclusions from a case study.[33] Along with their answers, subjects gave a confidence rating in the form
of a percentage likelihood of being correct. This allowed confidence to be compared against accuracy. As the
subjects were given more information about the case study, their confidence increased from 33% to 53%. However,
their accuracy did not significantly improve, staying under 30%. Hence this experiment demonstrated
overconfidence which increased as the subjects had more information to base their judgment on.[33]
Even if there is no general tendency toward overconfidence, social dynamics and adverse selection could
conceivably promote it. For instance, those most likely to have the courage to start a new business are those who
most overplace their abilities relative to those of other potential entrants. And if voters find confident leaders more
credible, then contenders for leadership learn that they should express more confidence than their opponents in order
to win election.[34]
Overconfidence can be beneficial to individual self-esteem as well as giving an individual the will to succeed in their
desired goal. Just believing in oneself may give one the will to take one's endeavours further than those who do
not.[35]
Related biases
Overconfidence bias often serves to increase the effects of escalating commitment, causing decision makers to
refuse to withdraw from a losing situation, or to continue to throw good money, effort, time and other resources
after bad investments.
People often tend to ignore base rates or undervalue their effect. For example, if one is competing against
individuals who are already winners of previous competitions, one's odds of winning should be adjusted
downward considerably. People tend to fail to do so sufficiently.
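As a toy illustration of the base-rate point (all numbers invented, not drawn from any cited study), one can shrink
a flattering self-assessment toward the historical success rate of challengers who faced past winners, instead of
ignoring that base rate entirely:

    # Toy base-rate adjustment: weight a self-assessment against the
    # historical base rate rather than relying on the self-assessment alone.
    base_rate = 0.10           # assume: 10% of challengers beat a past winner
    self_estimate = 0.50       # "I'm as likely to win as not"
    weight_on_self = 0.25      # assumed diagnosticity of the self-assessment

    adjusted = weight_on_self * self_estimate + (1 - weight_on_self) * base_rate
    print(f"adjusted win probability: {adjusted:.0%}")  # 20%, not 50%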
Core self-evaluations
Very high levels of core self-evaluations (CSE), a stable personality trait composed of locus of control, neuroticism,
self-efficacy, and self-esteem,[36] may lead to the overconfidence effect. People who have high core self-evaluations
will think positively of themselves and be confident in their own abilities,[36] although extremely high levels of CSE
may cause an individual to be more confident than is warranted.
Notes
[1] Pallier, Gerry, et al. "The role of individual differences in the accuracy of confidence judgments." The Journal of General Psychology 129.3 (2002): 257+.
[2] Stankov, L. (1999). Mining on the "no man's land" between intelligence and personality. In P. L. Ackerman, P. C. Kyllonen, & R. D. Roberts (Eds.), Learning and individual differences: Process, trait, and content determinants (pp. 314-337). Washington, DC: American Psychological Association.
[3] Adams, P. A., & Adams, J. K. (1960). Confidence in the recognition and reproduction of words difficult to spell. The American Journal of
Psychology, 73(4), 544-552.
[4] Lichtenstein, Sarah; Baruch Fischhoff, Lawrence D. Phillips (1982). "Calibration of probabilities: The state of the art to 1980". In Daniel Kahneman, Paul Slovic, Amos Tversky. Judgment under uncertainty: Heuristics and biases. Cambridge University Press. pp. 306-334. ISBN 978-0-521-28414-1.
[5] Alpert, Marc; Howard Raiffa (1982). "A progress report on the training of probability assessors". In Daniel Kahneman, Paul Slovic, Amos Tversky. Judgment under uncertainty: Heuristics and biases. Cambridge University Press. pp. 294-305. ISBN 978-0-521-28414-1.
[6] Harvey, N. (1997). Confidence in judgment. Trends in Cognitive Sciences, 1(2), 78-82.
[7] Hoffrage, Ulrich (2004). "Overconfidence". In Rüdiger Pohl. Cognitive Illusions: a handbook on fallacies and biases in thinking, judgement and memory. Psychology Press. ISBN 978-1-84169-351-4.
[8] Gigerenzer, G. (1993). The bounded rationality of probabilistic mental models. In K. I. Manktelow & D. E. Over (Eds.), Rationality (pp. 127-171). London: Routledge.
[9] Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the "planning fallacy": Why people underestimate their task completion times. Journal
of Personality and Social Psychology, 67(3), 366-381.
[10] Langer, E. J. (1975). The illusion of control. Journal of Personality and Social Psychology, 32(2), 311-328.
[11] Gino, F., Sharek, Z., & Moore, D. A. (2011). Keeping the illusion of control under control: Ceilings, floors, and imperfect calibration.
Organizational Behavior & Human Decision Processes, 114, 104-114.
[12] Krizan, Z., & Windschitl, P. D. (2007). The influence of outcome desirability on optimism. Psychological Bulletin, 133(1), 95-121.
[13] Norem, J. K., & Cantor, N. (1986). Defensive pessimism: Harnessing anxiety as motivation. Journal of Personality and Social Psychology,
51(6), 1208-1217.
[14] McGraw, A. P., Mellers, B. A., & Ritov, I. (2004). The affective costs of overconfidence. Journal of Behavioral Decision Making, 17(4),
281-295.
[15] Alicke, M. D., & Govorun, O. (2005). The better-than-average effect. In M. D. Alicke, D. Dunning & J. Krueger (Eds.), The self in social
judgment (pp. 85-106). New York: Psychology Press.
[16] Svenson, O. (1981). Are we less risky and more skillful than our fellow drivers? Acta Psychologica, 47, 143-151.
[17] Cannell, J. J. (1989). How public educators cheat on standardized achievement tests: The "Lake Wobegon" report.
[18] Dunning, D. (2005). Self-insight: Roadblocks and detours on the path to knowing thyself. New York: Psychology Press.
[19] Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77(2), 221-232.
[20] Weinstein, N. D. (1980). Unrealistic optimism about future life events. Journal of Personality and Social Psychology, 39(5), 806-820.
[21] Chambers, J. R., & Windschitl, P. D. (2004). Biases in social comparative judgments: The role of nonmotivational factors in above-average
and comparative-optimism effects. Psychological Bulletin, 130(5).
[22] Chambers, J. R., Windschitl, P. D., & Suls, J. (2003). Egocentrism, event frequency, and comparative optimism: When what happens
frequently is "more likely to happen to me". Personality and Social Psychology Bulletin, 29(11), 1343-1356.
[23] Kruger, J., & Burrus, J. (2004). Egocentrism and focalism in unrealistic optimism (and pessimism). Journal of Experimental Social
Psychology, 40(3), 332-340.
[24] Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: a social psychological perspective on mental health. Psychological Bulletin,
103(2), 193-210.
[25] Harris, A. J. L., & Hahn, U. (2011). Unrealistic Optimism about Future Life Events: A cautionary note. Psychological Review, 118(1),
135-154.
[26] Kahneman, Daniel (19 October 2011). "Don't Blink! The Hazards of Confidence" (http://www.nytimes.com/2011/10/23/magazine/dont-blink-the-hazards-of-confidence.html). New York Times. Retrieved 25 October 2011.
[27] Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.
[28] Thompson, L., & Loewenstein, G. (1992). Egocentric interpretations of fairness and interpersonal conflict. Organizational Behavior and
Human Decision Processes, 51(2), 176-197.
[29] Babcock, L., & Olson, C. (1992). The causes of impasses in labor disputes. Industrial Relations, 31, 348-360.
[30] Johnson, D. D. P. (2004). Overconfidence and war: The havoc and glory of positive illusions. Cambridge, MA: Harvard University Press.
[31] Aumann, R. J. (1976). Agreeing to disagree. Annals of Statistics, 4, 1236-1239.
[32] Daniel, K. D., Hirshleifer, D. A., & Subrahmanyam, A. (1998). Investor psychology and security market under- and overreactions. Journal of Finance, 53(6), 1839-1885.
[33] Oskamp, Stuart (1965). "Overconfidence in case-study judgements". The Journal of Consulting Psychology (American Psychological Association) 2: 261-265. Reprinted in Kahneman, Daniel; Paul Slovic, Amos Tversky (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press. pp. 287-293. ISBN 978-0-521-28414-1.
[34] Radzevick, J. R., & Moore, D. A. (2011). Competing to be certain (but wrong): Social pressure and overprecision in judgment. Management
Science, 57(1), 93-106.
[35] Fowler, James, and Dominic Johnson. "On Overconfidence." Seed Magazine. Seed Magazine, January 7, 2011. Web. 22 Jul 2011. http:/ /
seedmagazine.com/ content/ article/ on_overconfidence/
[36] Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction: A core evaluations approach. Research in Organizational Behavior, 19, 151-188.
Further reading
Larrick, R. P., Burson, K. A., & Soll, J. B. (2007). Social comparison and confidence: When thinking you're better
than average predicts overconfidence (and when it does not). Organizational Behavior & Human Decision
Processes, 102(1), 76-94.
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502-517.
Baron, Jonathan (1994). Thinking and Deciding. Cambridge University Press. pp. 219-224. ISBN 0-521-43732-6.
Gilovich, Thomas; Dale Griffin, Daniel Kahneman (Eds.). (2002). Heuristics and biases: The psychology of
intuitive judgment. Cambridge, UK: Cambridge University Press. ISBN 0-521-79679-2.
Sutherland, Stuart (2007). Irrationality. Pinter & Martin. pp. 172-178. ISBN 978-1-905177-07-3.
Pareidolia
A satellite photo of a mesa in Cydonia, often called the "Face on Mars". Later imagery from other angles did not contain the illusion.
Pareidolia (pron.: /pærɪˈdoʊliə/ parr-i-DOH-lee-ə) is a psychological
phenomenon involving a vague and random stimulus (often an image or
sound) being perceived as significant. Common examples include
seeing images of animals or faces in clouds, the man in the moon or the
Moon rabbit, and hearing hidden messages on records when played in
reverse.
The word comes from the Greek words para (παρά, "beside, alongside,
instead"), in this context meaning something faulty, wrong, instead of,
and the noun eidōlon (εἴδωλον, "image, form, shape"), the diminutive of
eidos. Pareidolia is a type of apophenia, seeing patterns in random data.
Examples
Religious
There have been many instances of perceptions of religious imagery and themes, especially the faces of religious
figures, in ordinary phenomena. Many involve images of Jesus,[1] the Virgin Mary,[2] the word Allah,[3] or other
religious phenomena: in September 2007 in Singapore, for example, a callus on a tree resembled a monkey, leading
believers to pay homage to the "Monkey god" (either Sun Wukong or Hanuman) in the so-called "monkey tree
phenomenon".[4]
Publicity surrounding sightings of religious figures and other surprising images in ordinary objects has spawned a
market for such items on online auctions like eBay. One famous instance was a grilled cheese sandwich with the
Virgin Mary's face.[5]
Divination
Various European ancient divination practices involve the interpretation of shadows cast by objects. For example, in
Nordic molybdomancy, a random shape produced by pouring molten tin into cold water is interpreted by the shadow
it casts in candlelight.
Fossils
From the late 1970s through the early 1980s, Japanese researcher Chonosuke Okamura self-published a famous
series of reports titled "Original Report of the Okamura Fossil Laboratory" in which he described tiny inclusions in
polished limestone from the Silurian period (425 mya) as being preserved fossil remains of tiny humans, gorillas,
dogs, dragons, dinosaurs, and other organisms, all of them only millimeters long, leading him to claim "There have
been no changes in the bodies of mankind since the Silurian period ... except for a growth in stature from 3.5 mm to
1,700 mm."[6][7] Okamura's research earned him an Ig Nobel Prize (a parody of the Nobel Prizes) in
biodiversity.[8] See List of Ig Nobel Prize winners (1996).[9]
Projective tests
The Rorschach inkblot test uses pareidolia in an attempt to gain insight into a person's mental state. The Rorschach is
a projective test, as it intentionally elicits the thoughts or feelings of respondents which are "projected" onto the
ambiguous inkblot images. Projection in this instance is a form of "directed pareidolia" because the cards have been
deliberately designed not to resemble anything in particular.[1]
Electronic voice phenomenon
In 1971, Konstantīns Raudive wrote Breakthrough, detailing what he believed was the discovery of electronic voice
phenomenon (EVP). EVP has been described as auditory pareidolia.[1]
Backmasking
The allegations of backmasking in popular music have also been described as auditory pareidolia.[1][10]
Art
In his notebooks, Leonardo da Vinci wrote of pareidolia as a device for painters, writing: "if you look at any walls
spotted with various stains or with a mixture of different kinds of stones, if you are about to invent some scene you
will be able to see in it a resemblance to various different landscapes adorned with mountains, rivers, rocks, trees,
plains, wide valleys, and various groups of hills. You will also be able to see divers combats and figures in quick
movement, and strange expressions of faces, and outlandish costumes, and an infinite number of things which you
can then reduce into separate and well conceived forms."[11]
Explanations
Evolutionary advantage
A drawing which, despite not bearing much resemblance to a real face, most people will identify as a picture of one.
Carl Sagan hypothesized that as a survival technique, human beings are
"hard-wired" from birth to identify the human face. This allows people
to use only minimal details to recognize faces from a distance and in
poor visibility, but can also lead them to interpret random images or
patterns of light and shade as being faces.[12] The evolutionary
advantages of being able to discern friend from foe with split-second
accuracy are numerous; prehistoric (and even modern) men and women
who accidentally identify an enemy as a friend could face deadly
consequences for this mistake. This is only one among many
evolutionary pressures responsible for the development of the facial
recognition capability of modern humans.[13]
In Cosmos: A Personal Voyage, Sagan claimed that Heikegani crabs'
occasional resemblance to samurai resulted in their being spared from
capture, thus exaggerating the trait in their offspring; this hypothesis was
proposed by Julian Huxley in 1952. Such claims have been met with skepticism.[14]
A 2009 magnetoencephalography study found that objects incidentally perceived as faces evoke an early (165 ms)
activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas other
common objects do not evoke such activation. This activation is similar to a slightly earlier peak at 130 ms seen for
images of real faces. The authors suggest that face perception evoked by face-like objects is a relatively early
process, and not a late cognitive reinterpretation phenomenon.[15] An fMRI study in 2011 similarly showed that
repeated presentation of novel visual shapes that were interpreted as meaningful led to decreased fMRI responses for
real objects. These results indicate that interpretation of ambiguous stimuli depends on similar processes as those
elicited for known objects.[16]
These studies help to explain why people identify a few circles and a line as a "face" so quickly and without
hesitation. Cognitive processes are activated by the "face-like" object, which alert the observer to both the emotional
state and identity of the subject even before the conscious mind begins to process or even receive the
information. The "stick figure face," despite its simplicity, conveys mood information (in this case, disappointment
or mild unhappiness). It would be just as simple to draw a stick figure face that would be perceived (by most people)
as hostile and aggressive. This robust and subtle capability is the result of eons of natural selection favoring people
most able to quickly identify the mental state, for example, of threatening people, thus providing the individual an
opportunity to flee or attack preemptively. In other words, processing this information subcortically (and therefore
subconsciously) before it is passed on to the rest of the brain for detailed processing accelerates judgment and
decision making when alacrity is paramount.[13] This ability, though highly specialized for the processing and
recognition of human emotions, also functions to determine the demeanor of wildlife.[17]
Combined with apophenia (seeing patterns in randomness) and hierophany (a manifestation of the sacred),
pareidolia may have helped early societies organize chaos and make the world intelligible.[18][19]
Pathologies
There are a number of conditions that can cause an individual to lose his or her ability to recognize faces; stroke,
tumors, and trauma to the ventral fusiform gyrus are the most common culprits. This loss of face recognition is
known as prosopagnosia. Pareidolia can also be related to obsessive-compulsive disorder, as seen in one woman's
case.[20]
Natural
Smiley face in Galle Crater on Mars.
Human face on Pedra da Gávea near Rio de Janeiro.
Garuda (Ancient Eagle) seen from the sides of Tirumala Hills.
Apache head in rocks near Ébihens, France.
A face of Lord Venkateshwara is seen on Tirumala Hills. It appears to be sleeping.
Anthropomorphic grimacing face visible in the red shale of the gorges of Cians, France.
Bust of a woman with a hat, in profile, in the gorges of Daluis, France, better known as "La Gardienne des Gorges".
Tree with mature ivy, suggestive of a person clinging to the tree's trunk, in Scotland.
Romanian Sphinx in Bucegi Mountains.
Artificial
Pareidolia examples:
This alarm clock appears to have a sad face.
False wood with multiple pareidolia aspects.
A rusty piece of machinery that looks like the face of a beast.
A cardboard box that appears to be shocked and unhappy.
An electrical outlet (center) that appears to be happy.
References
[1] Zusne, Leonard; Warren H. Jones (1989). Anomalistic Psychology: A Study of Magical Thinking (http://books.google.com/books?isbn=0805805087). Lawrence Erlbaum Associates. pp. 77-79. ISBN 0-8058-0508-7. Retrieved 2007-04-06.
[2] NY Times: "In New Jersey, a Knot in a Tree Trunk Draws the Faithful and the Skeptical", July 23, 2012 (http://www.nytimes.com/2012/07/23/nyregion/in-a-tree-trunk-in-new-jersey-some-see-our-lady-of-guadalupe.html?hpw)
[3] Ibrahim, Yahaya (2011-01-02). "In Maiduguri, a tree with engraved name of God turns spot to a Mecca of sorts" (http://sundaytrust.com.ng/index.php?option=com_content&view=article&id=5698:in-maiduguri-a-tree-with-engraved-name-of-god-turns-spot-to-a-mecca-of-sorts&catid=17:community-news-kanem-trust&Itemid=28). Sunday Trust (Media Trust Limited, Abuja). Retrieved 2012-03-21.
[4] Ng Hui Hui (13 September 2007). "Monkey See, Monkey Do?" (http://newpaper.asia1.com.sg/printfriendly/0,4139,141806,00.html). The New Paper. pp. 12-13.
[5] "'Virgin Mary' toast fetches $28,000" (http://news.bbc.co.uk/2/hi/americas/4034787.stm). BBC News. 23 November 2004. Retrieved 2006-10-27.
[6] Spamer, E. "Chonosuke Okamura, Visionary" (http://improbable.com/airchives/paperair/volume6/v6i6/okamura-6-6.html). Philadelphia: Academy of Natural Sciences. Archived at Improbable Research (http://improbable.com/).
[7] Berenbaum, May (2009). The earwig's tail: a modern bestiary of multi-legged legends. Harvard University Press. pp. 72-73. ISBN 0-674-03540-2.
[8] Marc Abrahams (2004-03-16). "Tiny tall tales: Marc Abrahams uncovers the minute, but astonishing, evidence of our fossilised past" (http://www.guardian.co.uk/education/2004/mar/16/highereducation.research). London: The Guardian.
[9] Conner, Susan; Kitchen, Linda (2002). Science's most wanted: the top 10 book of outrageous innovators, deadly disasters, and shocking discoveries. Most Wanted Series. Brassey's. p. 93. ISBN 1-57488-481-6.
[10] Vokey, John R; J. Don Read (November 1985). "Subliminal messages: between the devil and the media". American Psychologist 40 (11): 1231-1239. doi:10.1037/0003-066X.40.11.1231. PMID 4083611.
[11] "Leonardo da Vinci's Note-Books Arranged and Rendered into English" (http://www.archive.org/stream/leonardodavincis007918mbp/leonardodavincis007918mbp_djvu.txt) (1923). Empire State Book Company.
[12] Sagan, Carl (1995). The Demon-Haunted World: Science as a Candle in the Dark. New York: Random House. ISBN 0-394-53512-X.
[13] Svoboda, Elizabeth (2007-02-13). "Facial Recognition - Brain - Faces, Faces Everywhere" (http://www.nytimes.com/2007/02/13/health/psychology/13face.html). The New York Times. Retrieved July 3, 2010.
[14] Joel W. Martin (1993). "The Samurai Crab" (http://crustacea.nhm.org/people/martin/publications/pdf/103.pdf) (PDF). Terra 31 (4): 30-34.
[15] Hadjikhani, N.; Kveraga, K.; Naik, P.; Ahlfors, S. P. (February 2009). "Early (M170) activation of face-specific cortex by face-like objects". Neuroreport 20 (4): 403-7. doi:10.1097/WNR.0b013e328325a8e1. PMC 2713437. PMID 19218867.
[16] Voss, J. L.; Federmeier, K. D.; Paller, K. (2011). "The potato chip really does look like Elvis! Neural hallmarks of conceptual processing associated with finding novel shapes subjectively meaningful" (http://cercor.oxfordjournals.org/content/early/2011/11/10/cercor.bhr315). Cerebral Cortex. doi:10.1093/cercor/bhr315. PMID 22079921.
[17] "Dog Tips Emotions in Canines and Humans" (http:/ / www. paw-rescue. org/ PAW/ PETTIPS/ DogTip_EmotionsInCaninesAndHumans.
php). Partnership for Animal Welfare. . Retrieved July 3, 2010.
[18] Bustamante Patricio, Yao Fay, Bustamante Daniela, 2010 b, The Worship to the Mountains: A Study of the Creation Myths of the Chinese
Culture (http:/ / www.rupestreweb. info/ china.html)
[19] Bustamante, Patricio; Yao, Fay; Bustamante, Daniela (2010). "Pleistocene Art: the archeological material and its anthropological meanings"
(http:/ / www. ifraoariege2010.fr/ docs/ Articles/ Bustamante_et_al-Signes. pdf) (PDF). .
[20] Fontenelle, Leonardo. "Leonardo F. Fontenelle. Pareidolias in obsessive-compulsive disorder" (http:/ / www. ingentaconnect. com/ content/
psych/ nncs/ 2008/ 00000014/ 00000005/ art00004). . Retrieved October 28, 2011.
External links
Cloud pareidolia (http://www.environmentalgraffiti.com/featured/33-creepiest-clouds-on-earth/1515) - 33 examples of meteorological pareidolia.
Religious Pareidolia (http://www.yoism.org/?q=node/129) - Extensive video and photographic collection of pareidolia.
Skepdic.com (http://skepdic.com/pareidol.html) - Skeptic's Dictionary definition of pareidolia
Lenin in my shower curtain (http://www.badastronomy.com/bad/misc/lenin.html) (Bad Astronomy)
The Stone Face: Fragments of An Earlier World (http://www.mnmuseumofthems.org/Faces/intro.html)
Feb. 13, 2007, article in The New York Times about cognitive science of face recognition (http://www.nytimes.com/2007/02/13/health/psychology/13face.html)
Famous Pareidolias (http://www.pareidolias.net)
Snopes.com (http://www.snopes.com/rumors/wtcface.asp) - Faces of 'Satan' seen in World Trade Center smoke
Pessimism bias
Pessimism bias is an effect in which people exaggerate the likelihood that negative things will happen to them. It
contrasts with optimism bias, which is a more general, systematic tendency to underestimate personal risks and
overestimate the likelihood of positive life events.[1][2] Depressed people are particularly likely to exhibit a
pessimism bias.[3][4] Surveys of smokers have found that their ratings of their risk of heart disease showed a small
but significant pessimism bias; however, the literature as a whole is inconclusive.[1]
References
[1] Sutton, S. R. (1999). How accurate are smokers' perceptions of risk? (http://www.informaworld.com/index/784101725.pdf). Health, Risk & Society.
[2] de Palma, Andre; Picard, Nathalie (2009). "Behaviour Under Uncertainty" (http://books.google.com/books?id=qlp8itjp-RcC&pg=PA423). In Kitamura, Ryuichi; Yoshii, Toshio; Yamamoto, Toshiyuki. The Expanding Sphere of Travel Behaviour Research: Selected Papers from the 11th International Conference on Travel Behaviour Research. Emerald Group Publishing. p. 423. ISBN 978-1-84855-936-3. Retrieved 6 January 2011.
[3] Sharot, Tali; Riccardi, Alison M.; Raio, Candace M.; Phelps, Elizabeth A. (2007). "Neural mechanisms mediating optimism bias". Nature 450 (7166): 102-105. doi:10.1038/nature06280. ISSN 0028-0836. PMID 17960136.
[4] Wang, P. S.; Beck, A. L.; Berglund, P. (2004). "Effects of major depression on moment-in-time work performance" (http://ajp.psychiatryonline.org/cgi/content/abstract/161/10/1885). American Journal of Psychiatry (American Psychiatric Association) 161: 1885-1891.
Planning fallacy
The planning fallacy is a tendency for people and organizations to
underestimate how long they will need to complete a task, even when they
have experience of similar tasks over-running. The term was first proposed
in a 1979 paper by Daniel Kahneman and Amos Tversky.[1][2] Since then
the effect has been found for predictions of a wide variety of tasks,
including tax form completion, school work, furniture assembly, computer
programming and origami.[1][3] The bias only affects predictions about
one's own tasks; when uninvolved observers predict task completion times,
they show a pessimistic bias, overestimating the time taken.[3][4] In 2003,
Lovallo and Kahneman proposed an expanded definition as the tendency to
underestimate the time, costs, and risks of future actions and at the same
time overestimate the benefits of the same actions. According to this
definition, the planning fallacy results in not only time overruns, but also
cost overruns and benefit shortfalls.[5]
Demonstration
In a 1994 study, 37 psychology students were asked to estimate how long it would take to finish their senior theses.
The average estimate was 33.9 days. They also estimated how long it would take "if everything went as well as it
possibly could" (averaging 27.4 days) and "if everything went as poorly as it possibly could" (averaging 48.6 days).
The average actual completion time was 55.5 days, with only about 30% of the students completing their thesis in
the amount of time they predicted.[6]
Another study asked students to estimate when they would complete their personal academic projects. Specifically,
the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their
personal projects would be done.[4]
13% of subjects finished their project by the time they had assigned a 50% probability level;
19% finished by the time assigned a 75% probability level;
45% finished by the time of their 99% probability level.
A survey of Canadian tax payers, published in 1997, found that they mailed in their tax forms about a week later than
they predicted. They had no misconceptions about their past record of getting forms mailed in, but expected that they
would get it done more quickly next time.[7] This illustrates a defining feature of the planning fallacy: people
recognize that their past predictions have been over-optimistic, while insisting that their current predictions are
realistic.[3]
Explanations
Kahneman and Tversky's original explanation for the fallacy was that planners focus on the most optimistic scenario
for the task, rather than using their full experience of how much time similar tasks require.[3] One explanation
offered by Roger Buehler and colleagues is wishful thinking; in other words, people think tasks will be finished
quickly and easily because that is what they want to be the case.[1] In a different paper, Buehler and colleagues
suggest an explanation in terms of the self-serving bias in how people interpret their past performance. By taking
credit for tasks that went well but blaming delays on outside influences, people can discount past evidence of how
long a task should take.[1] One experiment found that when people made their predictions anonymously, they did not
show the optimistic bias. This suggests that people make optimistic estimates so as to create a favorable
impression with others.[1] Some have attempted to explain the planning fallacy in terms of impression management theory.
One explanation, focalism, may account for the mental discounting of off-project risks. People formulating the plan
may eliminate factors they perceive to lie outside the specifics of the project. Additionally, they may discount
multiple improbable high-impact risks because each one is so unlikely to happen.
Planners tend to focus on the project and underestimate time for sickness, vacation, meetings, and other "overhead"
tasks. Planners also tend not to plan projects to a detail level that allows estimation of individual tasks, like placing
one brick in one wall; this enhances optimism bias and prohibits use of actual metrics, like timing the placing of an
average brick and multiplying by the number of bricks. Complex projects that lack immutable goals are also subject
to mission creep, scope creep, and featuritis. As described by Fred Brooks in The Mythical Man-Month, adding new
personnel to an already-late project incurs a variety of risks and overhead costs that tend to make it even later; this is
known as Brooks's law.
Another possible explanation is the "authorization imperative": much of project planning takes place in a context
where financial approval is needed to proceed with the project, and the planner often has a stake in getting the project
approved. This dynamic may lead to a tendency on the part of the planner to deliberately underestimate the project
effort required. It is easier to get forgiveness (for overruns) than permission (to commence the project, if a realistic
effort estimate were provided). Such deliberate underestimation has been named strategic misrepresentation.
Methods to curb the planning fallacy
Daniel Kahneman, Amos Tversky, and Bent Flyvbjerg developed reference class forecasting to eliminate or reduce
the effects of the planning fallacy in decision making.[8]
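In outline, reference class forecasting replaces the "inside view" of a single project with the distribution of outcomes
in a reference class of similar past projects. A minimal sketch, assuming a hypothetical history of overrun ratios
(actual duration divided by estimated duration) and not any dataset from the cited work, might look like this:

    # Sketch of reference class forecasting with invented overrun data.
    overruns = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 2.0]  # actual/estimated ratios

    def reference_class_forecast(inside_view_estimate, overruns, percentile=0.8):
        """Uplift an inside-view estimate by the overrun ratio observed at a
        chosen percentile of the reference class distribution."""
        ranked = sorted(overruns)
        index = min(int(percentile * len(ranked)), len(ranked) - 1)
        return inside_view_estimate * ranked[index]

    print(reference_class_forecast(30, overruns))  # a 30-day plan becomes 48.0 days

The choice of percentile reflects how much risk of overrun the decision maker is willing to accept.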
Notes
[1] Pezzo, Mark V.; Litman, Jordan A.; Pezzo, Stephanie P. (2006). "On the distinction between yuppies and hippies: Individual differences in prediction biases for planning future tasks". Personality and Individual Differences 41 (7): 1359-1371. doi:10.1016/j.paid.2006.03.029. ISSN 0191-8869.
[2] Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313-327.
[3] Buehler, Roger; Griffin, Dale, & Ross, Michael (2002). "Inside the planning fallacy: The causes and consequences of optimistic time predictions". In Thomas Gilovich, Dale Griffin, & Daniel Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment, pp. 250-270. Cambridge, UK: Cambridge University Press.
[4] Buehler, Roger; Dale Griffin, Michael Ross (1995). "It's about time: Optimistic predictions in work and love". European Review of Social Psychology (American Psychological Association) 6: 1-32. doi:10.1080/14792779343000112.
[5] Lovallo, Dan; Daniel Kahneman (July 2003). "Delusions of Success: How Optimism Undermines Executives' Decisions". Harvard Business Review: 56-63.
[6] Buehler, Roger; Dale Griffin, Michael Ross (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology (American Psychological Association) 67 (3): 366-381. doi:10.1037/0022-3514.67.3.366.
[7] Buehler, Roger; Dale Griffin, Johanna Peetz (2010). "The Planning Fallacy: Cognitive, Motivational, and Social Origins" (http://www.psych.nyu.edu/trope/Ledgerwood et al_Advances chapter.PDF#page=10). Advances in Experimental Social Psychology (Academic Press) 43: 9. Retrieved 2012-09-15.
[8] Flyvbjerg, B., 2008, "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice." (http://www.sbs.ox.ac.uk/centres/bt/Documents/Curbing Optimism Bias and Strategic Misrepresentation.pdf) European Planning Studies, vol. 16, no. 1, January, pp. 3-21.
Further reading
If you don't want to be late, enumerate: Unpacking reduces the planning fallacy (http://dx.doi.org/10.1016/j.jesp.2003.11.001) by Justin Kruger and Matt Evans
Post-purchase rationalization
Post-purchase rationalization, also known as Buyer's Stockholm Syndrome, is a cognitive bias whereby someone
who purchases an expensive product or service overlooks any faults or defects in order to justify their purchase. It is
a special case of choice-supportive bias.
Expensive purchases often involve a lot of careful research and deliberation, and many consumers will often refuse
to admit that their decision was made in poor judgement. Many purchasing decisions are made emotionally, based on
factors such as brand loyalty and advertising, and so are often rationalized retrospectively in an attempt to justify the
choice.
For example, a consumer who cannot decide between two popular video game consoles may in the end decide to
purchase the one that many of their peers also own. After purchasing it, they may find few games for their console
worth purchasing, and more for the console they did not purchase. However, they do not wish to feel they made the
wrong decision, and so will attempt to convince themselves, and their peers, that their original choice was the correct
one and that their own judgment is better than everyone else's, e.g. by using sour grapes arguments.[1]
This rationalization is based on the Principle of Commitment and the psychological desire to stay consistent to that
commitment. Some authorities would also consider this rationalization a manifestation of cognitive dissonance.
References
[1] Cohen, Joel B.; Goldberg, Marvin E. (August 1970). "The Dissonance Model in Post-Decision Product Evaluation". Journal of Marketing Research (American Marketing Association) 7 (3): 315-321. doi:10.2307/3150288. JSTOR 3150288.
Pro-innovation bias
In diffusion of innovation theory, a pro-innovation bias reflects a personal bias toward an innovation that someone
is trying to implement or diffuse among a population.[1] The bias refers to the fact that the innovation's "champion"
has such a strong bias in favor of the innovation that he or she may not see its limitations or weaknesses and
continues to promote it nonetheless.
An example may be an inventor who creates a new process or product and wants to take it to market for financial
gain. While the invention may be interesting and have promise, if the inventor is experiencing pro-innovation bias,
he or she may not heed market data (or even seek such data) suggesting that the invention may not sell.
References
[1] "Beyond the pro-innovation bias" (http:/ / www. hanken. fi/ public/ en/ beyondtheproinnovationbias). January 26, 2010. . Retrieved April 17,
2011.
Further reading
Rogers, Everett (2003). Diffusion of Innovations. Free Press. p. 512. ISBN 0-7432-2209-1.
Pseudocertainty effect
The pseudocertainty effect is a concept from prospect theory. It refers to people's tendency to perceive an outcome
as certain when in fact it is uncertain (Kahneman & Tversky, 1986).[1] It is observed in multi-stage decisions, in
which the evaluation of outcomes in a previous decision stage is discarded when choosing an option in subsequent
stages.
Example
Kahneman and Tversky (1986) illustrated the pseudocertainty effect by the following examples.
First, consider this problem:
Which of the following options do you prefer?
C. 25% chance to win $30 and 75% chance to win nothing
D. 20% chance to win $45 and 80% chance to win nothing
In this case, 42% of participants chose option C while 58% chose option D.
Now, consider this problem:
Consider the following two stage game. In the first stage, there is a 75% chance to end the game without winning
anything, and a 25% chance to move into the second stage. If you reach the second stage you have a choice between:
E. a sure win of $30
F. 80% chance to win $45 and 20% chance to win nothing
Your choice must be made before the outcome of the first stage is known.
This time, 74% of participants chose option E while only 26% chose option F.
In fact, the actual probability of winning money in option E (25% x 100% = 25%) and option F (25% x 80% = 20%)
is the same as the probability of winning money in option C (25%) and option D (20%) respectively. In the second
problem, since individuals have no choice over the first stage, they tend to discard it when evaluating the overall
probability of winning money and consider only the options in the second stage, over which they do have a choice.
This is also known as cancellation: because the available options pass through the same first-stage outcome, the
decision process ignores that stage.
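The arithmetic can be checked directly. This short sketch computes the compound probabilities of the two-stage
options and confirms that they match the single-stage options:

    # The two-stage options E and F have the same overall win probabilities
    # as the single-stage options C and D described above.
    p_stage2 = 0.25                # chance of reaching the second stage

    p_win_C = 0.25                 # 25% chance of $30
    p_win_D = 0.20                 # 20% chance of $45
    p_win_E = p_stage2 * 1.00      # sure $30, but only if stage 2 is reached
    p_win_F = p_stage2 * 0.80      # 80% chance of $45 if stage 2 is reached

    print(p_win_C, p_win_E)        # 0.25 0.25
    print(p_win_D, p_win_F)        # 0.2 0.2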
External links
Kahneman, Daniel and Tversky, Amos. "The Framing of Decisions and the Psychology of Choice". Science 211
(1981), pp. 453-458, copyright 1981 by the American Association for the Advancement of Science. [2]
References
[1] Tversky, A., & Kahneman, D. (1986). Rational Choice and the Framing of Decisions (http://www.cog.brown.edu/courses/cg195/pdf_files/fall07/Kahneman&Tversky1986.pdf). The Journal of Business, 59, S251-S278.
[2] http://www.cs.umu.se/kurser/TDBC12/HT99/Tversky.html
Reactance (psychology)
Reactance is a motivational reaction to offers, persons, rules, or regulations that threaten or eliminate specific
behavioral freedoms. Reactance occurs when a person feels that someone or something is taking away his or her
choices or limiting the range of alternatives.
Reactance can occur when someone is heavily pressured to accept a certain view or attitude. Reactance can cause the
person to adopt or strengthen a view or attitude that is contrary to what was intended, and also increases resistance to
persuasion. People using reverse psychology are playing on at least an informal awareness of reactance, attempting
to influence someone to choose the opposite of what they request.
Definition
Psychological reactance occurs in response to threats to perceived behavioral freedoms.[1][2] An example of such
behavior can be observed when an individual engages in a prohibited activity in order to deliberately taunt the
authority who prohibits it, regardless of the utility or disutility that the activity confers. An individual's freedom to
select when and how to conduct their behavior, and the level to which they are aware of the relevant freedom (and
are able to determine the behaviors necessary to satisfy that freedom), affect the generation of psychological
reactance.
It is assumed that if a person's behavioral freedom is threatened or reduced, they become motivationally aroused. The
fear of losing further freedoms can spark this arousal and motivate them to re-establish the threatened freedom.
Because this motivational state is a result of the perceived reduction of one's freedom of action, it is considered a
counterforce, and thus is called "psychological reactance".
There are four important elements to reactance theory: perceived freedom, threat to freedom, reactance, and
restoration of freedom. Freedom is not an abstract consideration, but rather a feeling associated with real behaviors,
including actions, emotions, and attitudes.
Reactance also explains denial as it is encountered in addiction counselling. According to William R. Miller,[3]
"Research demonstrates that a counselor can drive resistance (denial) levels up and down dramatically according to
his or her personal counseling style". Use of a "respectful, reflective approach", described in motivational
interviewing and applied as motivation enhancement therapy, rather than argumentation, accusations of "being
in denial", and direct confrontations, leads to the motivation to change and avoids the resistance and denial, or
reactance, elicited by strong direct confrontation.[4] For a complete review of how confrontation became popular in
addiction treatment, see Miller, W. R. & White, W.[5]
Theory
Reactance theory assumes there are "free behaviors" individuals perceive and can take part in at any given moment.
For a behavior to be free, the individual must have the relevant physical and psychological abilities to partake in it,
and must know they can engage in it at the moment, or in the near future.
"Behavior" includes any imaginable act. More specifically, behaviors may be explained as "what one does (or
doesn't do)", "how one does something", or "when one does something". It is not always clear, to an observer, or the
individuals themselves, if they hold a particular freedom to engage in a given behavior. When a person has such a
free behavior they are likely to experience reactance whenever that behavior is restricted, eliminated, or threatened
with elimination.
There are several rules associated with free behaviors and reactance:
1. When certain free behaviors are threatened or removed, the more important a free behavior is to a certain
individual, the greater the magnitude of the reactance.
  a. The level of reactance has a direct relationship to the importance of the eliminated or threatened behavioral
  freedom, in relationship to the importance of other freedoms at the time.
2. With a given set of free behaviors, the greater the proportion threatened or eliminated, the greater will be the total
level of reactance.
3. When an important free behavior has been threatened with elimination, the greater the threat, the greater will be
the level of reactance.
  a. When there is a loss of a single free behavior, there may be by implication a related threat of removal of other
  free behaviors now or in the future.
  b. A free behavior may be threatened or eliminated by virtue of the elimination (or threat of elimination) of
  another free behavior; therefore a free behavior may also be threatened by the elimination of (or threat to)
  another person's free behavior.
Other core concepts of the theory are justification and legitimacy. A possible effect of justification is a limitation of
the threat to a specific behavior or set of behaviors. For example, if Mr. Doe states that he is interfering with Mrs.
Smith's expectations because of an emergency, this keeps Mrs. Smith from imagining that Mr. Doe will interfere on
future occasions as well. Likewise, legitimacy may point to a set of behaviors threatened, since there will be a general
assumption that an illegitimate interference with a person's freedom is less likely to occur. With legitimacy there is
an additional implication that a person's freedom is equivocal.
Effects of reactance
In the phenomenology of reactance there is no assumption that a person will be aware of reactance. When a person
becomes aware of reactance, they will feel a higher level of self-direction in relationship to their own behavior. In
other words, they will feel that if they are able to do what they want, then they do not have to do what they do not
want. In this case when the freedom is in question, that person alone is the director of their own behavior.
When considering the direct re-establishment of freedom, the greater the magnitude of reactance, the more the
individual will try to re-establish the freedom that has been lost or threatened. When a freedom is threatened by
social pressure, reactance will lead a person to resist that pressure. Also, when there are restraints against a direct
re-establishment of freedom, there can be attempts at re-establishment by implication whenever possible.
Freedom may also be re-established by social implication: when an individual has lost a free behavior because of a
social threat, the participation in a free-like behavior by another person similar to himself will allow him to
re-establish his own freedom.
In summary the definition of psychological reactance is a motivational state that is aimed at re-establishment of a
threatened or eliminated freedom. A short explanation of the concept is that the level of reactance has a direct
relationship between the importance of a freedom which is eliminated or threatened, and a proportion of free
behaviors eliminated or threatened.
Empirical evidence
A number of studies have looked at psychological reactance, providing empirical evidence for the behaviour; some
key studies are discussed below.
Brehm's 1981 study Psychological reactance and the attractiveness of unobtainable objects: sex differences in
children's responses to an elimination of freedom examined the differences in sex and age in a child's view of the
attractiveness of obtained and unobtainable objects. The study reviewed how well children respond in these
situations and determined if the children being observed thought the "grass was greener on the other side". It also
determined how well the child made peace with the world if they devalued what they could not have. This work
concluded that when a child cannot have what they want, they experience emotional consequences of not getting
it.[6]
This study duplicated the results of a previous study by Hammock and J. Brehm (1966). The male subjects wanted what they could not obtain; the female subjects, however, did not conform to the theory of reactance: although their freedom to choose was taken away, this had no overall effect on them.
Silvia's 2005 study Deflecting reactance: the role of similarity in increasing compliance and reducing resistance concluded that one way to increase the exercise of a threatened freedom is to censor it, or to direct a threatening message at the activity: a "boomerang effect" occurs, in which people choose the forbidden alternatives. This study also shows that social influence has better results when it does not threaten one's core freedoms. Two concepts revealed in this study are that a communicator may be able to increase the positive force towards compliance by increasing their credibility, and that simultaneously increasing the positive communication force and decreasing the negative communication force should increase compliance.[7]
Miller et al., concluded in their 2006 study, Identifying principal risk factors for the initiation of adolescent smoking
behaviors: the significance of psychological reactance, that psychological reactance is an important indicator in
adolescent smoking initiation. Peer intimacy, peer individuation, and intergenerational individuation are strong
predictors of psychological reactance. The overall results of the study indicate that children think that they are
capable of making their own decisions, although they are not aware of their own limitations. This is an indicator that
adolescents will experience reactance to authoritative control, especially the proscriptions and prescriptions of adult
behaviors that they view as hedonically relevant.
[8]
Measurement of reactance
Dillard & Shen, in their 2005 paper On the nature of reactance and its role in persuasive health communication, provided evidence that psychological reactance can be measured,[9] contrary to the opinion of Jack Brehm, who developed the theory. In their work they measured the impact of psychological reactance with two parallel studies: one advocating flossing and the other urging students to limit their alcohol intake.
They formed several conclusions about reactance. First, reactance is mostly cognitive, which allows it to be measured by self-report techniques. Also, in support of previous research, they conclude that reactance is in part related to an anger response. This is consistent with Brehm's description that during the reactance experience one tends to have hostile or aggressive feelings, often aimed more at the source of a threatening message than at the message itself. Finally, within reactance, cognition and affect are intertwined; Dillard and Shen suggest they are so intertwined that their effects on persuasion cannot be distinguished from each other.
Dillard and Shen's research indicates reactance can effectively be studied using established self-report methods.
Furthermore, it provided a better understanding of reactance theory and its relationship to persuasive health
communication.
Miller et al. conducted their 2007 study Psychological reactance and promotional health messages: the effects of
controlling language, lexical concreteness, and the restoration of freedom at the University of Oklahoma, with the
primary goal being to measure the effects of controlling language in promotional health messages. Their research
revisited the notion of restoring freedom by examining the use of a short postscripted message tagged on the end of a
promotional health appeal. Results of the study indicated that more concrete messages generate greater attention than
less concrete (more abstract) messages. Also, the source of concrete messages can be seen as more credible than the
source of abstract messages. They concluded that the use of more concrete, low-controlling language, and the
restoration of freedom through inclusion of a choice-emphasizing postscript, may offer the best solution to reducing
ambiguity and reactance created by overtly persuasive health appeals.
[10]
References
[1] Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.
[2] Brehm, S. S., & Brehm, J. W. (1981). Psychological Reactance: A Theory of Freedom and Control. Academic Press.
[3] Miller, W. R. (2000) Motivational Enhancement Therapy: Description of Counseling Approach. in Boren, J. J. Onken, L. S., & Carroll, K. M.
(Eds.) Approaches to Drug Abuse Counseling, National Institute on Drug Abuse, 2000, pp. 89-93.
[4] Miller, W.R. & Rollnick, S. Motivational Interviewing: Preparing People to Change Addictive Behavior. NY: Guilford Press, 1991.
[5] Miller, W. R., & White, W., (2007) Confrontation in Addiction Treatment Counselor Magazine October 4, 2007
[6] Brehm, Sharon S. (1981). Psychological reactance and the attractiveness of unobtainable objects: Sex differences in children's responses to an elimination of freedom. Sex Roles, Volume 7, Number 9, 937–949.
[7] Silvia, P. J. (2005). Deflecting reactance: The role of similarity in increasing compliance and reducing resistance. Basic and Applied Social Psychology, 27, 277–284.
[8] Miller, C. H., Burgoon, M., Grandpre, J., & Alvaro, E. (2006). Identifying principal risk factors for the initiation of adolescent smoking
behaviors: The significance of psychological reactance. Health Communication 19, 241-252.
[9] Dillard, J., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72,
144-168.
[10] Miller, C. H., Lane, L. T., Deatrick, L. M., Young, A. M., & Potts, K. A. (2007). Psychological reactance and promotional health messages:
The effects of controlling language, lexical concreteness, and the restoration of freedom. Human Communication Research, 33, 219-240.
Baron, R. A., et al. (2006). Social psychology, Pearson
Reactive devaluation
Reactive devaluation is a cognitive bias that occurs when a proposal is devalued if it appears to originate from an
antagonist. The bias was proposed by Lee Ross and Constance Stillinger.
In an initial experiment conducted in 1991, Stillinger and co-authors asked pedestrians whether they would support a
drastic bilateral nuclear arms reduction program. If they were told the proposal came from President Ronald Reagan,
90 percent said it would be favorable or even-handed to the United States; if they were told the proposal came from a
group of unspecified policy analysts, 80 percent thought it was favorable or even-handed; but if respondents were told it came from Mikhail Gorbachev, only 44 percent thought it was favorable or neutral to the United States.
[1]
In another experiment, during a contemporaneous controversy that led Stanford University to divest of its South African assets in protest of the apartheid regime, students at Stanford were asked to evaluate the university's divestment plan both before and after it was announced publicly. Proposals, including the one eventually adopted, were rated more highly while they were still hypothetical.[1]
In another study, experimenters showed Israeli participants a peace proposal that had actually been made by Israel. If participants were told the proposal came from a Palestinian source, they rated it lower than if they were told (correctly) that the identical proposal came from the Israeli government. If participants who identified as "hawkish" were told it came from the "dovish" Israeli government, they believed it was relatively bad for their people and good for the other side; participants who identified as "doves" did not.[2]
Reactive devaluation could be caused by loss aversion or attitude polarization,[3] or naïve realism.[4]
References
[1] Ross, Lee; Constance Stillinger (1991). "Barriers to conflict resolution". Negotiation Journal 8: 389–404.
[2] Maoz, Ifat; Andrew Ward, Michael Katz & Lee Ross (2002). "Reactive Devaluation of an "Israeli" vs. "Palestinian" Peace Proposal". Journal of Conflict Resolution 46 (4): 515–546.
[3] Ross, Lee (1995). "Reactive Devaluation in Negotiation and Conflict Resolution". In Kenneth Arrow, Robert Mnookin, Lee Ross, Amos Tversky, Robert B. Wilson (Eds.). Barriers to Conflict Resolution. New York: W. W. Norton & Co.
[4] Ross, L., & Ward, A. (1996). Naive realism in everyday life: Implications for social conflict and misunderstanding. In T. Brown, E. S. Reed & E. Turiel (Eds.), Values and knowledge (pp. 103–135). Hillsdale, NJ: Erlbaum.
Serial position effect
[Figure: Graph showing the serial position effect. The vertical axis shows the percentage of words recalled; the horizontal axis shows their position in the sequence.]
The serial position effect, a term coined by Hermann
Ebbinghaus through studies he performed on himself,
refers to the finding that recall accuracy varies as a
function of an item's position within a study list.
[1]
When asked to recall a list of items in any order (free
recall), people tend to begin recall with the end of the
list, recalling those items best (the recency effect).
Among earlier list items, the first few items are recalled
more frequently than the middle items (the primacy
effect).
[2][3]
One suggested reason for the primacy effect is that the
initial items presented are most effectively stored in
long-term memory because of the greater amount of
processing devoted to them. (The first list item can be rehearsed by itself; the second must be rehearsed along with
the first, the third along with the first and second, and so on.) The primacy effect is reduced when items are
presented quickly and is enhanced when presented slowly (factors that reduce and enhance processing of each item
and thus permanent storage). Longer presentation lists have been found to reduce the primacy effect.
[4]
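This rehearsal account lends itself to a small worked example. The following Python sketch is a toy illustration of my own (not from the cited studies), assuming every item presented so far receives one rehearsal per new presentation; rehearsal counts then fall off with serial position, mirroring the primacy advantage:

    # Toy rehearsal model: on each presentation, every item seen so far
    # (including the new one) is rehearsed once.
    def rehearsal_counts(n_items):
        # The item at position i (1-indexed) is rehearsed on presentations
        # i through n_items, i.e. n_items - i + 1 times.
        return [n_items - i + 1 for i in range(1, n_items + 1)]

    print(rehearsal_counts(8))  # [8, 7, 6, 5, 4, 3, 2, 1]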
One theorized reason for the recency effect is that these items are still present in working memory when recall is
solicited. Items that benefit from neither (the middle items) are recalled most poorly. An additional explanation for
the recency effect is related to temporal context: if tested immediately after rehearsal, the current temporal context
can serve as a retrieval cue, which would predict more recent items to have a higher likelihood of recall than items
that were studied in a different temporal context (earlier in the list).
[5]
The recency effect is reduced when an
interfering task is given. Intervening tasks involve working memory, as the distractor activity, if exceeding 15 to 30
seconds in duration, can cancel out the recency effect.
[6]
Additionally, if recall comes immediately after study, the recency effect is consistent regardless of the length of the studied list[4] or presentation rate.[7]
Amnesiacs with poor ability to form permanent long-term memories do not show a primacy effect, but do show a
recency effect if recall comes immediately after study.
[8]
Patients with Alzheimer's Disease exhibit a reduced
primacy effect but do not produce a recency effect in recall.
[9]
Primacy effect
The primacy effect, in psychology and sociology, is a cognitive bias that results in a subject recalling information presented first better than information presented later on. For example, a subject who reads a sufficiently
long list of words is more likely to remember words toward the beginning than words in the middle.
Many researchers have tried to explain this phenomenon through free recall tests. In some experiments in the late 20th century, it was noted that participants who knew that they were going to be tested on a list presented to them would rehearse items: as items were presented, the participants would repeat them to themselves, and as new items were presented, they would continue to rehearse the earlier items along with the newer ones. It was demonstrated that the primacy effect had a greater influence on recall when there was more time between the presentation of items, so that participants had a greater chance to rehearse the earlier (prime) items.[10][11][12]
Overt rehearsal is a technique meant to test participants' rehearsal patterns. In an experiment using this technique, participants were asked to recite aloud the items that came to mind. In this way, the experimenter could see that participants repeated earlier items more than items in the middle of the list, rehearsing them more frequently and later recalling the prime items better than the middle items.[13]
In another experiment, by Brodie and Murdock, the recency effect was found to be partially responsible for the primacy effect.[14] In their experiment, they also used the overt-rehearsal technique and found that, in addition to rehearsing earlier items more than later items, participants rehearsed earlier items again later in the list. In this way, earlier items were closer to the test period by way of rehearsal, so their advantage could be partially explained by the recency effect.
Recency effect
Two traditional classes of theories explain the recency effect.
Dual Store Models
These models postulate that later study list items are retrieved from a highly accessible short-term buffer, i.e. the short-term store (STS) in human memory. This allows recently studied items to have an advantage over those studied earlier, as earlier study items have to be retrieved with greater effort from one's long-term memory store (LTS).
An important prediction of such models is that the presentation of a distractor, for example solving arithmetic problems for 10–30 seconds, during the retention period (the time between list presentation and test) attenuates the recency effect. Since the STS has limited capacity, the distractor displaces later study list items from the STS, so that at test these items can only be retrieved from the LTS and have lost their earlier advantage of being more easily retrieved from the short-term buffer. As such, dual-store models successfully account for both the recency effect in immediate recall tasks and the attenuation of this effect in the delayed free recall task.
A major problem with this model, however, is that it cannot predict the long-term recency effect observed in delayed
recall, when a distractor intervenes between each study item during the interstimulus interval (continuous distractor
task).
[15]
Since the distractor is still present after the last study item, it should displace the study item from STS such
that the recency effect is attenuated. The existence of this long-term recency effect thus raises the possibility that
immediate and long-term recency effects share a common mechanism.
[16]
Single Store Models
According to single-store theories, a single mechanism is responsible for serial position effects.
A first type of model is based on relative temporal distinctiveness, in which the time lag between test and the study
of each list item determines the relative competitiveness of an item's memory trace at retrieval.
[17][18]
In this model,
end-of-list items are thought to be more distinct, and hence more easily retrieved.
Another type of model is based on contextual variability, which postulates that retrieval of items from memory is cued not only by one's mental representation of the study item itself, but also by that of the study context.
[19][20]
Since
context varies and increasingly changes with time, on an immediate free-recall test, when memory items compete for
retrieval, more recently studied items will have more similar encoding contexts to the test context, and are more
likely to be recalled.
Outside of immediate free recall, these models are also able to predict the presence of the recency effect (or lack
thereof) in delayed free recall and continual-distractor free recall conditions. Under delayed recall conditions, the
state of test context would have drifted away with an increasing retention interval, leading to an attenuated recency
effect. Under continual distractor recall conditions, while the increased interpresentation intervals reduce the similarities between the given study contexts and the test context, the relative similarities amongst items remain unchanged. As long as the recall process is competitive, recent items will win out, so a recency effect is observed.
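As a rough sketch of the contextual-variability idea (a deliberate simplification of mine; the cited models are considerably richer), one can let context drift over time and score each item by the similarity between its encoding context and the test context:

    import math

    # Assumed similarity between an item's encoding context and the test
    # context, decaying exponentially as context drifts over time.
    def cue_strength(study_time, test_time, drift_rate=0.1):
        return math.exp(-drift_rate * (test_time - study_time))

    # Items studied at t = 0..9, immediate test at t = 10: the most recently
    # studied items have the most test-like contexts, hence a recency advantage.
    for t in range(10):
        print(t, round(cue_strength(t, test_time=10), 3))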
Ratio Rule
Overall, an important empirical observation regarding the recency effect is that it is not the absolute duration of
retention intervals (RI, the time between end of study and test period) or of inter-presentation intervals (IPI, the time
between different study items) that matters. Rather, the amount of recency is determined by the ratio of RI to IPI (the
ratio rule). As a result, as long as this ratio is fixed, recency will be observed regardless of the absolute values of
intervals, so that recency can be observed at all time scales, a phenomenon known as time scale invariance. This
contradicts dual-store models, which assume that recency depends on the size of STS, and the rule governing the
displacement of items in the STS.
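The time-scale invariance implied by the ratio rule can be demonstrated with a minimal computation. This is a hypothetical sketch, assuming a simple temporal-distinctiveness rule in which retrieval strength is inversely proportional to the time elapsed since study:

    # Relative retrieval strength of each list item at test, under
    # strength ~ 1 / (time elapsed since that item was studied).
    def relative_distinctiveness(ipi, ri, n_items):
        lags = [ri + ipi * (n_items - 1 - i) for i in range(n_items)]
        strengths = [1.0 / lag for lag in lags]
        total = sum(strengths)
        return [round(s / total, 3) for s in strengths]

    # Same RI/IPI ratio at two absolute time scales:
    print(relative_distinctiveness(ipi=1, ri=2, n_items=5))
    print(relative_distinctiveness(ipi=10, ri=20, n_items=5))

Scaling both intervals by the same factor leaves every relative strength unchanged, matching the ratio rule's claim that recency depends only on the RI/IPI ratio.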
Explanations of these findings either account for the recency effect through a single common mechanism, or posit a different type of model with two distinct mechanisms for immediate and long-term recency effects. One such explanation is provided by Davelaar et al. (2005),[21] who argue that there are dissociations between immediate and long-term recency phenomena that cannot be explained by a single-component memory model, and who argue for the existence of an STS that explains immediate recency and a second mechanism, based on contextual drift, that explains long-term recency.
Related effects
In 1977, William Crano outlined a study to follow up on previous conclusions about the nature of order effects, in particular those of primacy versus recency, which were said to be unambiguous and opposed in their predictions. The specifics tested by Crano were:[22]
Change of meaning hypothesis
"adjectives presented first on a stimulus list established a set, or expectation, through which the meanings of
the later descriptors were modified in an attempt to maintain consistency in the mind of the receiver."
Inconsistency discounting
"later descriptions on the stimulus list were discounted if inconsistent with earlier trait adjectives."
Attention decrement hypothesis
"earlier adjectives would wield considerably more influence than the later ones, and a primacy effect in the
typical impression formation task would be expected to occur ... even when the stimulus list contains traits of a
high degree of consistency."
The continuity effect or lag-recency effect predicts that having made a successful recall, the next recall is likely to
be a neighboring item in serial position during the study period. The difference between the two items' serial position
is referred to as serial position lag. Another factor, called the conditional-response probability, represents the
likelihood that a recall at a certain serial position lag will be made. A graph of serial position lag versus conditional-response probability reveals that the next item recalled tends to minimize absolute lag, and that the immediately following item is more likely to be recalled than the immediately preceding one.
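The conditional-response probability can be estimated directly from recall protocols. The helper below is a hypothetical illustration (the function name and data are mine, not from the cited literature): for each recall transition it tallies the lag actually taken against every lag that was still available at that step:

    from collections import Counter

    def lag_crp(recall_orders, list_length):
        made = Counter()      # transitions actually made, keyed by lag
        possible = Counter()  # transitions available at each step, keyed by lag
        for seq in recall_orders:
            recalled = set()
            for prev, nxt in zip(seq, seq[1:]):
                recalled.add(prev)
                for cand in range(1, list_length + 1):
                    if cand != prev and cand not in recalled:
                        possible[cand - prev] += 1
                made[nxt - prev] += 1
        return {lag: made[lag] / possible[lag] for lag in sorted(possible)}

    # Two recall sequences (study positions, in recall order) from a 5-item list:
    print(lag_crp([[5, 4, 1, 2], [3, 4, 5, 1]], list_length=5))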
Notes
[1] Ebbinghaus, Hermann (1913). On memory: A contribution to experimental psychology. New York: Teachers College.
[2] Deese and Kaufman (1957) Serial effects in recall of unorganized and sequentially organized verbal material, J Exp Psychol. 1957
Sep;54(3):180-7
[3] Murdock, B.B., Jr. (1962) The Serial Position Effect of Free Recall, Journal of Experimental Psychology, 64, 482-488.
[4] Murdock, Bennet (1962). "Serial Position Effect of Free Recall". Journal of Experimental Psychology 64 (2): 482–488.
[5] Howard, Marc W.; Michael J. Kahana (2001). "A Distributed Representation of Temporal Context". Journal of Mathematical Psychology.
doi:10.1006/jmps.2001.1388.
[6] Bjork, Robert A.; William B. Whitten (1974). "Recency-Sensitive Retrieval Processes in Long-Term Free Recall". Cognitive Psychology 6: 173–189.
[7] Murdock, Bennet; Janet Metcalf (1978). "Controlled Rehearsal in Single-Trial Free Recall". Journal of Verbal Learning and Verbal Behavior 17: 309–324.
[8] Carlesimo, Giovanni; G.A. Marfia, A. Loasses, and C. Caltagirone (1996). "Recency effect in anterograde amnesia: Evidence for distinct memory stores underlying enhanced retrieval of terminal items in immediate and delayed recall paradigms". Neuropsychologia 34 (3): 177–184.
[9] Bayley, Peter J.; David P. Salmon, Mark W. Bondi, Barbara K. Bui, John Olichney, Dean C. Delis, Ronald G. Thomas, and Leon J. Thal (March 2000). "Comparison of the serial position effect in very mild Alzheimer's disease, mild Alzheimer's disease, and amnesia associated with electroconvulsive therapy". Journal of the International Neuropsychological Society 6 (3): 290–298. doi:10.1017/S1355617700633040.
[10] Glenberg, A.M.; M.M. Bradley, J.A. Stevenson, T.A. Kraus, M.J. Tkachuk, A.L. Gretz (1980). "A two-process account of long-term serial position effects". Journal of Experimental Psychology: Human Learning and Memory (6): 355–369.
[11] Marshall, P.H.; P.R. Werder (1972). "The effects of the elimination of rehearsal on primacy and recency". Journal of Verbal Learning and Verbal Behavior (11): 649–653.
[12] Rundus, D. "Maintenance rehearsal and long-term recency". Memory and Cognition (8(3)): 226–230.
[13] Rundus, D (1971). "An analysis of rehearsal processes in free recall". Journal of Experimental Psychology (89): 63–77.
[14] Brodie, D.A.; B.B. Murdock. "Effects of presentation time on nominal and functional serial position curves in free recall". Journal of Verbal Learning and Verbal Behavior (16): 185–200.
[15] Bjork & Whitten (1974). Recency sensitive retrieval processes in long-term free recall, Cognitive Psychology, 6, 173-189.
[16] Greene, R. L. (1986). Sources of recency effects in free recall. Psychological Bulletin, 99(12), 221–228.
[17] Bjork & Whitten (1974). Recency sensitive retrieval processes in long-term free recall, Cognitive Psychology, 6, 173-189.
[18] Neath, I., & Knoedler, A. J. (1994). Distinctiveness and serial position effects in recognition and sentence processing. Journal of Memory
and Language, 33, 776-795
[19] Howard, M. W., & Kahana, M. (1999). Contextual variability and serial position effects in free recall. Journal of Experimental Psychology:
Learning, Memory and Cognition, 24(4), 923-941.
[20] Howard, M. W., & Kahana, M. J. (2002). A distributed representation of temporal context. Journal of Mathematical Psychology, 46(3),
269-299.
[21] Davelaar, E. K., Goshen-Gottstein, Y., Ashkenazi, A., Haarmann, H. J., & Usher, M. (2005). The demise of short-term memory revisited:
Empirical and computational investigations of recency effects. Psychological Review, 112, 3-42.
[22] Kohler, Christine. "Order Effects Theory: Primacy versus Recency" (http://www.ciadvertising.org/sa/spring_04/adv382j/christine/primacy.html). Center for Interactive Advertising, The University of Texas at Austin. Retrieved 2007-11-04.
References
Frensch, P.A. (1994). Composition during serial learning: a serial position effect. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 20(2), 423-443.
Healy, A.F., Havas, D.A., & Parkour, J.T. (2000). Comparing serial position effects in semantic and episodic
memory using reconstruction of order tasks. Journal of Memory and Language, 42, 147-167.
Glanzer, M. & Cunitz, A. R. (1966). Two storage mechanisms in Free Recall. Journal of Verbal Learning and Verbal Behaviour, 5, 351–360.
Kahana, M. J., Howard, M. W., & Polyn, S. M. (2008). Associative Retrieval Processes in Episodic Memory. Psychology, Paper 3.
Howard, M. W. & Kahana, M. (1999). Contextual Variability and Serial Position Effects in Free Recall. Journal
of Experimental Psychology: Learning, Memory & Cognition, 25(4), 923-941.
Further reading
Luchins, Abraham S. (1959) Primacy-recency in impression formation
Liebermann, David A. Learning and memory: An integrative approach. Belmont, CA: Thomson/Wadsworth,
2004, ISBN 978-0-534-61974-9.
Recency illusion
The recency illusion is the belief or impression that something is of recent origin when it is in fact long-established.
The term was invented by Arnold Zwicky, a linguist at Stanford University who was primarily interested in
examples involving words, meanings, phrases, and grammatical constructions.
[1]
However, use of the term is not
restricted to linguistic phenomena: Zwicky has defined it simply as "the belief that things you have noticed only
recently are in fact recent".
[2]
Linguistic items prone to the recency illusion include:
"Singular they" - the use of they to reference a singular antecedent, as in someone said they liked the play.
Although this usage is often cited as a modern invention, it is found in Jane Austen and Shakespeare.
[3]
The grammatically incorrect phrase between you and I, a hypercorrection today which could also be found
occasionally in Early Modern English.
The intensifier really as in it was a really wonderful experience, and the moderating adverb pretty as in it was a
pretty exciting experience. Many people have the impression that these usages are somewhat slang-like, and have
developed relatively recently. In fact, they go back to at least the 18th century, and are commonly found in the
works and letters of such writers as Benjamin Franklin.
"Aks" as a production of African-American English only. Use of "aks" in place of "ask" dates back to the 1600s
and Middle English, though typically in this context spelled "ax."
[4]
According to Zwicky, the illusion is caused by selective attention.
[2]
References
[1] Intensive and Quotative ALL: something old, something new, John R. Rickford, Thomas Wasow, Arnold Zwicky, Isabelle Buchstaller, American Speech 2007 82(1): 3-31; Duke University Press (what Arnold Zwicky (2005) has dubbed the "recency illusion," whereby people think that linguistic features they've only recently noticed are in fact new).
[2] Language Log: Just between Dr. Language and I (http://itre.cis.upenn.edu/~myl/languagelog/archives/002386.html)
[3] Shakespeare, The Comedy of Errors, Act IV, Scene 3 (1594): "There's not a man I meet but doth salute me / As if I were their well-acquainted friend"
[4] Lippi-Green, Rosina. English with an Accent: Language, Ideology, and Discrimination in the United States. London: Routledge, 1997. Print.
External links
New Scientist article (http://www.newscientist.com/channel/being-human/mg19626302.300-the-word-recency-illusion.html) (subscription only; hard copy at New Scientist, 17 November 2007 p. 60)
Restraint bias
Restraint bias is the tendency for people to overestimate their ability to control impulsive behavior. An inflated
self-control belief may lead to greater exposure to temptation, and increased impulsiveness. Therefore, the restraint
bias has bearing on addiction. For example, someone might experiment with drugs, simply because they believe they
can resist any potential addiction.
[1]
An individual's temptation, and their inability to control it, can come from several different visceral impulses, which include hunger, sexual arousal, and fatigue. These impulses provide information about the current state and the behavior needed to keep the body satisfied.
[1]
Empathy Gap Effect: The empathy gap effect refers to individuals having trouble appreciating the power that impulse states have over their behavior. There is a cold-to-hot empathy gap: when people are in a cold state, such as not experiencing hunger, they tend to underestimate the influence of those impulses in a hot state. The underestimation of visceral impulses can be attributed to restricted memory for the visceral experience, meaning the individual can recall the impulsive state but cannot recreate its sensation.
[1]
Impulse Control and Attention: Studies have shown that when people believe they have a stronger sense of self-control over situations in their environment, they exercise greater impulse control. Individuals also tend to overestimate their capacity for self-control when they are told that they have a high capacity for self-restraint.
[1]
The more someone is told that they have a high capacity for self-restraint, the more they believe it and the higher the level of impulse control they display. Attention also matters: the less attention an individual pays to something, the less control they have over what they are doing, while focusing attention on oneself can lead to successful self-control, which can be helpful in many aspects of life. Self-control involves conflict between competing pressures, which can arise from situational or internal prompts in the environment; some cues push the individual to act on or engage in a behavior, while others act to prevent the individual from taking action.
[2]
References
[1] Nordgren LF, van Harreveld F, van der Pligt J (2009). "The restraint bias: how the illusion of self-restraint promotes impulsive behavior." (http://www.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=pubmed&tool=sumsearch.org/cite&retmode=ref&cmd=prlinks&id=19883487). Psychol Sci 20 (12): 1523–8. doi:10.1111/j.1467-9280.2009.02468.x. PMID 19883487.
[2] Mann, T., & Ward, A. (2007). Attention, Self-Control, and Health Behaviors. Current Directions in Psychological Science, 280-283.
Rhyme-as-reason effect
The rhyme-as-reason effect is a cognitive bias whereby a saying or aphorism is judged as more accurate or
truthful when it is rewritten to rhyme.
In experiments, subjects judged variations of sayings that did and did not rhyme, and tended to evaluate those that rhymed as more truthful (controlling for meaning). For example, the statement "What sobriety conceals, alcohol reveals" was judged to be more accurate than the non-rhyming variant "What sobriety conceals, alcohol unmasks", which was shown to different participants.[1]
The effect could be caused by the Keats heuristic, according to which a statement's truth is evaluated according to
aesthetic qualities;
[2]
or the fluency heuristic, according to which things may be preferred due to their ease of cognitive processing.
[3]
For an example of the persuasive quality of the rhyme-as-reason effect, see "if it doesn't fit, you must acquit," the
signature phrase used by Johnnie Cochran to gain acquittal for O.J. Simpson in Simpson's murder trial.
References
[1] McGlone, M. S.; J. Tofighbakhsh (2000). "Birds of a feather flock conjointly (?): rhyme as reason in aphorisms.". Psychological Science 11
(5): 424–428.
[2] McGlone, M. S.; J. Tofighbakhsh (1999). "The Keats heuristic: Rhyme as reason in aphorism interpretation". Poetics 26 (4): 235–244.
[3] Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Risk compensation
[Figure: Booth's rule #2: "The safer skydiving gear becomes, the more chances skydivers will take, in order to keep the fatality rate constant"]
Risk compensation (also Peltzman
effect, risk homeostasis) is an
observed effect in ethology whereby
people tend to adjust their behavior in
response to perceived level of risk,
behaving less cautiously where they
feel more protected and more
cautiously where they feel a higher
level of risk. The theory emerged out
of road safety research after it was
observed that many interventions failed
to achieve the expected level of
benefits but has since found application
in many other fields.
Notable examples include observations of increased levels of risky behaviour by road users following the introduction of compulsory seatbelts and bicycle helmets, and of motorists driving faster and following more closely behind the vehicle in front following the introduction of antilock brakes. It has also been suggested that free condom distribution programs often fail to reduce HIV prevalence as predicted, due to an increase in risky sexual behavior, and that "the safer skydiving gear becomes, the more chances skydivers will take, in order to keep the fatality rate constant". This balancing behaviour does not mean an intervention does not work
and the effect may be less than, equal to, or greater than the true efficacy of the intervention. It is likely to be least
when an intervention is imperceptible and greatest when an intervention is intrusive or conspicuous.
Shared space is a relatively new approach to the design of roads in which the level of uncertainty for drivers and other road users is deliberately increased by removing traditional demarcations between vehicle traffic and other road users, such as railings and traffic signals; it has been observed to result in lower vehicle speeds and fewer road casualties. In Sweden,
following the change from driving on the left to driving on the right there was a 40% drop in crashes, which was
linked to the increased apparent risk. The crash rate returned to its former level after people became familiar with the
new arrangement.
Moral hazard is a related effect where a decision-maker benefits from the positive effects of a decision, with others
suffering the related negative effects.
Examples
Road transport
Anti-lock brakes
There are at least three studies which show that drivers' response to antilock brakes is to drive faster, follow closer
and brake later, accounting for the failure of ABS to result in any measurable improvement in road safety. The
studies were performed in Canada, Denmark and Germany.
[1][2][3]
A study led by Fred Mannering, a professor of
civil engineering at Purdue University, supports risk compensation, terming it the "offset hypothesis".
[4]
Bicycle helmets
The issue of risk compensation has been a central topic in the heated debate concerning the effectiveness of bicycle helmet legislation. A study from March 2007, first published in Accident Analysis & Prevention and reported in Scientific American, found that drivers drove an average of 8.5 cm closer, and came within 1 meter 23% more often, when a cyclist was wearing a helmet. Statements made in the report included: "The closer a driver is to the cyclist, the greater chance of a collision", "Drivers passed closer to the rider the further out into the road he was", and "The bicyclist's apparel affects the amount of clearance the overtaking motorist gives the bicyclist". This research thus implies risk compensation, not among cyclists but among fellow road users.[5]
Seat belts
A 2007 study based on data from the Fatality Analysis Reporting System (FARS) of the National Highway Traffic
Safety Administration concluded that between 1985 and 2002 there were "significant reductions in fatality rates for
occupants and motorcyclists after the implementation of belt use laws", and that "seatbelt use rate is significantly
related to lower fatality rates for the total, pedestrian, and all non-occupant models even when controlling for the
presence of other state traffic safety policies and a variety of demographic factors."
[6]
A 1994 study of people who both wore and habitually did not wear seatbelts concluded that drivers drove faster and less carefully when belted.[7]
Earlier research carried out by John Adams in 1981 had suggested that there was no correlation between the passing
of seat belt legislation and the total reductions in injuries or fatalities based on comparisons between states with and
without seat belt laws. He also suggested that some injuries were being displaced from car drivers to pedestrians and
other road users.
[8]
This paper was published at a time when Britain was considering a seat belt law, so the
Department of Transport commissioned a report into the issue in which the author, Isles, agreed with Adams'
conclusions.
[9]
The Isles Report was never published officially but a copy was leaked to the Press some years
later.
[10]
The law was duly passed and subsequent investigation showed some reduction in fatalities, the cause of
which could not be conclusively stated, due to the simultaneous introduction of evidential breath testing.
[11]
Shared space
Shared space is an urban design approach which seeks to minimise demarcations between vehicle traffic and pedestrians, often by removing features such as curbs, road surface markings, traffic signs and regulations. Typically used on narrower streets within the urban core and as part of living streets within residential areas, the approach has also been applied to busier roads, including Exhibition Road in Kensington, London.
Schemes are often motivated by a desire to reduce the dominance of vehicles, vehicle speeds and road casualty rates. First proposed in 1991, the term is now strongly associated with the work of Hans Monderman, who suggested that by creating a greater sense of uncertainty and making it unclear who had right of way, drivers would reduce their speed and everyone would reduce their level of risk compensation. The approach is frequently opposed by organisations representing the interests of blind, partially sighted and deaf people, who often express a strong preference for the clear separation of pedestrian and vehicular traffic.
Speed limits
There is strong evidence that reducing speed limits normally reduces crash, injury and fatality rates.[12] For example, a 2003 review of changes to speed limits in a number of jurisdictions showed that in most cases where speed limits had been decreased, the number of crashes and fatalities decreased, and that where speed limits had been increased, the number of crashes and fatalities increased.[12]
A 1994 study by Jeremy Jackson and Roger Blackman, using a driving simulator, reported that increased speed limits
and a reduction of speeding fines had significantly increased driving speed but resulted in no change in the accident
frequency. It also showed that increased accident cost caused large and significant reductions in accident frequency
but no change in speed choice. The abstract states that the results suggest that regulation of specific risky behaviors
such as speed choice may have little influence on accident rates.
[13]
Sport
Ski helmets
In relation to the use of ski helmets, Dr. Jasper Shealy, a professor from Rochester Institute of Technology who has been studying skiing and snowboarding injuries for more than 30 years, said: "There is no evidence they reduce fatalities", and that "We are up to 40 percent usage but there has been no change in fatalities in a 10-year period."
[14][15]
There is evidence that helmeted skiers tend to go faster.
[16]
Skydiving
Booth's rule #2, coined by skydiving pioneer Bill Booth, states that "The safer skydiving gear becomes, the more
chances skydivers will take, in order to keep the fatality rate constant". Even though skydiving equipment has made
huge leaps forward in terms of reliability in the past two decades, and safety devices such as AADs have been
introduced, the fatality rate has stayed roughly constant since the early 1980s.
[17]
This can largely be attributed to an
increase in the popularity of high performance canopies, which fly much faster than traditional parachutes. High
speed manoeuvres close to the ground have increased the number of landing fatalities in recent years,
[18]
even
though these jumpers have perfectly functioning parachutes over their heads.
Safety equipment in children
Experimental studies have suggested that children who wear protective equipment are likely to take more risks.
[19]
Health
Risky sexual behavior and HIV/AIDS
Evidence on risk compensation associated with HIV prevention interventions is mixed. Harvard researcher Edward
C. Green argued that the risk compensation phenomenon could explain the failure of condom distribution programs
to reverse HIV prevalence, providing a detailed explanation of his views in an op-ed article for The Washington
Post
[20]
and an extended interview with the BBC.
[21]
A 2007 article in the Lancet suggested that "condoms seem to
foster disinhibition, in which people engage in risky sex either with condoms or with the intention of using
condoms".
[22][23]
Another report compared risk behaviour of men based on whether they were circumcised.
[24]
Peltzman effect
The Peltzman effect is the hypothesized tendency of people to react to a safety regulation by increasing other risky
behavior, offsetting some or all of the benefit of the regulation. It is named after Sam Peltzman, a professor of
Economics at the University of Chicago Booth School of Business. When the offsetting risky behavior encouraged
by the safety regulation has negative externalities, the Peltzman effect can result in redistributing risk to innocent
bystanders who would behave in a risk-averse manner even without the regulation. For example, if some
risk-tolerant drivers who would not otherwise wear a seat belt respond to a seat belt law by driving less safely, there
would be more total collisions. Overall injuries and fatalities may still decrease due to greater seat belt use, but
drivers who would wear seat belts regardless would see their overall risk increase. Similarly, safety regulations for
automobiles may put pedestrians or bicyclists in more danger by encouraging risky behavior in drivers without
offering additional protection for pedestrians and cyclists.
The Peltzman effect has been used to explain Smeed's Law, an empirical claim that traffic fatality rates increase with
the number of vehicle registrations per capita, and differing safety standards have no effect. Recent empirical studies
have rejected Smeed's Law, which is inconsistent with the observation of declining fatality rates in many countries,
along with the associated theory of risk homeostasis.[25] Roy Baumeister has suggested that the use of helmets in
football and gloves in boxing lead to examples of the Peltzman effect.
[26]
Risk homeostasis
Gerald J. S. Wilde, a professor emeritus of psychology at Queen's University, Kingston, Ontario, Canada,
noted that when Sweden changed from driving on the left to driving on the right in 1967, this was followed by a
marked reduction in the traffic fatality rate for 18 months after which the trend returned to its previous values. It was
suggested that drivers had responded to increased perceived danger by taking more care, only to revert to previous
habits as they became accustomed to the new regime. This hypothesis is elucidated in Wilde's book.
[27]
The
hypothesis of risk homeostasis holds that everyone has his or her own fixed level of acceptable risk. When the level
of risk in one part of the individual's life changes, there will be a corresponding rise or fall in risk elsewhere to bring
the overall risk back to that individual's equilibrium. Wilde argues that the same is true of larger human systems, e.g.
a population of drivers.
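A toy model (my own illustration, not Wilde's formal theory) makes the homeostasis claim concrete: if a person holds total perceived risk at a fixed target, any drop in environmental risk is offset by added behavioral risk:

    # Behavioral risk an agent adds so that total risk returns to its target.
    def behavioral_risk(target_risk, environmental_risk):
        return max(0.0, target_risk - environmental_risk)

    target = 1.0
    for env in (0.8, 0.5, 0.2):  # successively stronger safety interventions
        b = behavioral_risk(target, env)
        print(f"environmental={env:.1f}  behavioral={b:.1f}  total={env + b:.1f}")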
For example, in a Munich study, part of a fleet of taxicabs were equipped with anti-lock brakes (ABS), while the
remainder had conventional brake systems. In other respects, the two types of cars were identical. The crash rates,
studied over 3 years, were a little higher for the cabs with ABS, and Wilde concluded that drivers of ABS-equipped
cabs took more risks, assuming that ABS would take care of them; non-ABS drivers were said to drive more
carefully, since they could not rely on ABS in a dangerous situation. There is much more to this study, as shown in the following reference:
[28]
Likewise, it has been found that drivers behave less carefully around bicyclists wearing
helmets than around unhelmeted riders.
[29]
The idea of risk homeostasis has garnered criticism.
[30]
Some critics say that risk homeostasis theory is contradicted
by car crash fatality rates. These rates have fallen after the introduction of seat belt laws.
[31][32][33][34]
References
[1] Grant and Smiley, "Driver response to antilock brakes: a demonstration on behavioural adaptation" from Proceedings, Canadian Multidisciplinary Road Safety Conference VIII, June 14–16, Saskatchewan 1993.
[2] Sagberg, Fosser, and Saetermo, "An investigation of behavioural adaptation to airbags and antilock brakes among taxi drivers" Accident Analysis and Prevention #29 pp 293–302, 1997.
[3] Aschenbrenner and Biehl, "Improved safety through improved technical measures? empirical studies regarding risk compensation processes in relation to anti-lock braking systems". In Trimpop and Wilde, Challenges to Accident Prevention: The issue of risk compensation behaviour (Groningen, NL, Styx Publications, 1994).
[4] Venere, Emil. (2006-09-27). Study: Airbags, antilock brakes not likely to reduce accidents, injuries (http://news.uns.purdue.edu/html4ever/2006/060927ManneringOffset.html). Purdue University News Service.
[5] "Strange but True: Helmets Attract Cars to Cyclists" (http://www.sciam.com/article.cfm?chanID=sa029&articleID=778EF0AB-E7F2-99DF-3594A60E4D9A76B2). Scientific American.
[6] Houston, David J., and Lilliard E. Richardson. "Risk Compensation or Risk Reduction? Seatbelts, State Laws, and Traffic Fatalities." Social Science Quarterly (Blackwell Publishing Limited) 88.4 (2007): 913–936. Business Source Complete. EBSCO. Web. 9 June 2011.
[7] Janssen, W. (1994). "Seat belt wearing and driving behaviour: An instrumented-vehicle study Apr; Vol 26(2)" (http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=8198694&dopt=Abstract). Accident Analysis and Prevention. pp. 249–261.
[8] "The efficacy of [[seatbelt (http:/ / www. geog.ucl.ac.uk/ ~jadams/ PDFs/ SAE seatbelts. pdf)] legislation: A comparative study of road
accident fatality statistics from 18 countries"]. Dept of Geography, University College, London. 1981. .
[9] Adams, John (4 January 2007). "Seat belt legislation and the Isles Report" (http://www.john-adams.co.uk/2007/01/04/seat-belt-legislation-and-the-isles-report/). Risk in a Hypermobile World. Retrieved 1 August 2012.
[10] Isles, J. E. (April 1981). "The implications of European Statistics" (http://john-adams.co.uk/wp-content/uploads/2007/01/isles report.pdf). Department for Transport. Retrieved 1 August 2012.
[11] Adams (1995), Risk
[12] British Columbia Ministry of Transportation (2003). "Review and Analysis of Posted Speed Limits and Speed Limit Setting Practices in British Columbia" (http://www.th.gov.bc.ca/publications/eng_publications/speed_review/Speed_Review_Report.pdf). p. 26 (tables 10 and 11). Retrieved 2009-09-17.
[13] Jackson JSH, Blackman R (1994). A driving-simulator test of Wilde's risk homeostasis theory. Journal of Applied Psychology.
[14] http://www.rit.edu/news/utilities/pdf/2008/2008_03_04_Buffalo_News_use_head_on_slopes_Shealy.pdf Use your head on the ski slopes. Don't just rely on your helmet. By Fletcher Doyle, News Sports Reporter. Buffalo News. Updated: 03/04/08 9:19 AM.
[15] Do Helmets Reduce Fatalities or Merely Alter the Patterns of Death? Journal of ASTM International Volume 5, Issue 10 (November 2008). ISSN: 1546-962X. Shealy, Jasper E. Professor Emeritus, Rochester Institute of Technology, NY. Johnson, Robert J. Professor, University of Vermont College of Medicine, VT. Ettlinger, Carl F. President, Vermont Safety Research, VT. doi:10.1520/JAI101504. http://www.astm.org/DIGITAL_LIBRARY/JOURNALS/JAI/PAGES/1043.htm
[16] How Fast Do Winter Sports Participants Travel on Alpine Slopes? Shealy, JE. Rochester Institute of Technology, Rochester, NY, USA. Ettlinger, CF. Vermont Safety Research, Underhill Center, VT, USA. Johnson, RJ. McClure Musculoskeletal Research Center, University of Vermont College of Medicine, Burlington, VT, USA. Journal of ASTM International Volume 2, Issue 7 (July/August 2005). "The average speed for helmet users of 45.8 km/h (28.4 mph) was significantly higher than those not using a helmet at 41.0 km/h (25.4 mph)." doi:10.1520/JAI12092
[17] "US Skydiving Fatalities History" (http:/ / web. archive. org/ web/ 20030211051448/ http:/ / www. skydivenet. com/ fatalities/
fatalities_history. html). .
[18] "http:/ / mypages. iit.edu/ ~kallend/ skydive/ fatalities.gif" (http:/ / www. iit. edu/ ~kallend/ skydive/ fatalities. gif). .
[19] Understanding children's injury-risk behavior: Wearing safety gear can lead to increased risk taking. Morrongiello BA, Walpole B, and
Lasenby J. Accident Analysis & Prevention Volume 39, Issue 3, May 2007, Pages 618623
[20] Green, Edward C. (2009-03-29). "The Pope May Be Right" (http://www.washingtonpost.com/wp-dyn/content/article/2009/03/27/AR2009032702825.html). The Washington Post.
[21] "The pope was right about condoms, says Harvard HIV expert" (http://www.bbc.co.uk/blogs/ni/2009/03/aids_expert_who_defended_the_p.html). Sunday Sequence. BBC Radio Ulster. 2009-03-29.
[22] Shelton, James D (2007-12-01). "Ten myths and one truth about generalised HIV epidemics" (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(07)61755-3/fulltext?_eventId=login). The Lancet 370 (9602): 1809–1811. doi:10.1016/S0140-6736(07)61755-3.
[23] Gray, Ronald (http://www.jhsph.edu/faculty/directory/profile/928/Gray/Ronald); Kigozi, Godfrey; Serwadda, David; Makumbi, Frederick; Watya, Stephen; Nalugoda, Fred; Kiwanuka, Noah; Moulton, Lawrence H; et al. (2007-02-01). "Male circumcision for HIV prevention in men in Rakai, Uganda: a randomised trial" (http://www.thelancet.com/journals/lancet/article/PIIS0140673607603134/abstract). The Lancet 369 (9562): 657–666. doi:10.1016/S0140-6736(07)60313-4.
[24] Wilson, Nicholas, Wentao Xiong, and Christine Mattson (2011). "Is Sex Like Driving? Risk Compensation Associated with Randomized Male Circumcision in Kisumu, Kenya" (http://web.williams.edu/Economics/wp/Wilson_Circumcision.pdf). Williams College Economics Department Working Paper Series.
[25] http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6924.1986.tb00196.x/abstract
[26] http://www.econtalk.org/archives/2011/11/baumeister_on_g.html
[27] Wilde, Gerald J.S. (2001). Target Risk 2: A New Psychology of Safety and Health (http://psyc.queensu.ca/target/index.html). ISBN 0-9699124-3-9.
[28] Munich taxicab experiment discussion (http://psyc.queensu.ca/target/chapter07.html)
[29] Drivers leave less margin when overtaking helmeted cyclists (http://www.educatedguesswork.org/movabletype/archives/2006/09/risk_homeostasi_1.html)
[30] O'Neill B, Williams A (June 1998). "Risk homeostasis hypothesis: a rebuttal". Inj. Prev. 4 (2): 92–3. doi:10.1136/ip.4.2.92. PMC 1730350. PMID 9666359.
[31] D. C. Andreassen (1985). "Linking deaths with vehicles and population". Traffic Engineering and Control 26 (11): 547–549.
[32] J. Broughton (1988). "Predictive models of road accident fatalities". Traffic Engineering and Control 29 (5): 296–300.
[33] S. Oppe (1991). "The development of traffic and traffic safety in six developed countries". Accident Analysis and Prevention 23 (5): 401–412. doi:10.1016/0001-4575(91)90059-E. PMID 1741895.
[34] J. R. M. Ameen and J. A. Naji (2001). "Causal models for road accident fatalities in Yemen". Accident Analysis and Prevention 33 (4): 547–561. doi:10.1016/S0001-4575(00)00069-5. PMID 11426685.
Other sources
Adams, John (1995). Risk. Routledge. ISBN 1-85728-068-7.
Wilde, Gerald J.S. (1994). Target Risk (http://psyc.queensu.ca/target/). PDE Publications. ISBN 0-9699124-0-4. Retrieved 2006-04-26.
External links
'Naked' streets are safer, say Tories (http://www.timesonline.co.uk/tol/news/article1295120.ece) The Times
Sam Peltzman on IDEAS (http://ideas.repec.org/f/ppe234.html) at RePEc
Sam Peltzman podcast (http://www.econtalk.org/archives/2006/11/peltzman_on_reg.html) Interview at EconTalk
"Regulation and the Wealth of Nations" (http://pcpe.libinst.cz/nppe/3_2/nppe3_2_3.pdf) (New Perspectives on Political Economy. Volume 3, Number 2, 2007, pp. 185–204)
"Scrap the Traffic Lights" (http://www.foxnews.com/opinion/2010/08/03/john-stossel-private-sector-government-business-economy-traffic-accidents/) John Stossel shows some concrete examples.
"Regulation and the Natural Progress of Opulence" (http://faculty.chicagobooth.edu/sam.peltzman/teaching/aei brookings0904.pdf) (PDF), a lecture by Peltzman at the American Enterprise Institute in 2004
Selective perception
Selective perception is the process by which individuals perceive what they want to in media messages and disregard the rest. It is a broad term for the tendency, exhibited by all people, to "see things" based on their particular frame of reference. Selective perception may refer to any number of cognitive biases in psychology related to the way expectations affect perception. Human judgment and decision making are distorted by an array of cognitive, perceptual and motivational biases, and people tend not to recognise their own bias, though they easily recognise (and even overestimate) the operation of bias in the judgment of others.[1] One reason this may occur is that people are simply bombarded with too many stimuli every day to pay equal attention to everything; they therefore pick and choose according to their own needs.[2]
To understand when and why a particular region of a scene is selected, studies have observed and described the eye movements of individuals as they perform specific tasks. Here vision is an active process that integrates scene properties with specific, goal-oriented oculomotor behaviour.[3]
Several other studies have shown that students who were told they were consuming alcoholic beverages (which in
fact were non-alcoholic) perceived themselves as being "drunk", exhibited fewer physiological symptoms of social
stress, and drove a simulated car similarly to other subjects who had actually consumed alcohol. The result is
somewhat similar to the placebo effect.
In one classic study on this subject related to the hostile media effect (which is itself an example of selective
perception), viewers watched a filmstrip of a particularly violent Princeton-Dartmouth American football game.
Princeton viewers reported seeing nearly twice as many rule infractions committed by the Dartmouth team as did Dartmouth viewers. One Dartmouth alumnus did not see any infractions committed by the Dartmouth side and
erroneously assumed he had been sent only part of the film, sending word requesting the rest.
[4]
Selective perception is also an issue for advertisers, as consumers may engage with some ads and not others based on
their pre-existing beliefs about the brand.
Seymour Smith, a prominent advertising researcher, found evidence for selective perception in advertising research
in the early 1960s, and he defined it to be "a procedure by which people let in, or screen out, advertising material
they have an opportunity to see or hear. They do so because of their attitudes, beliefs, usage preferences and habits,
conditioning, etc."
[5]
People who like, buy, or are considering buying a brand are more likely to notice advertising
than are those who are neutral toward the brand. This fact has repercussions within the field of advertising research
because any post-advertising analysis that examines the differences in attitudes or buying behavior among those
aware versus those unaware of advertising is flawed unless pre-existing differences are controlled for. Advertising
research methods that utilize a longitudinal design are arguably better equipped to control for selective perception.
Selective perceptions are of two types:
Low level – Perceptual vigilance
High level – Perceptual defense
References
[1] Emily Pronin, "Perception and misperception of bias in human judgment (http://web.missouri.edu/~segerti/capstone/Biasinjudgement.pdf)," Trends in Cognitive Sciences, Volume 11, Issue 1, January 2007, pp. 37-43.
[2] http://lilt.ilstu.edu/rrpope/rrpopepwd/articles/perception3.html
[3] Canosa, R.L. (2009). Real-world vision: selective perception and task. ACM Trans. Appl. Percept., 6, 2, Article 11, 34 pages.
[4] Hastorf, A.H. & Cantril, H. (1954). They saw a game: A case study. Journal of Abnormal and Social Psychology, 49, 129-134.
[5] Nowak, Theodore and Smith, Seymour. "Advertising Works – And Advertising Research Does Too." Presentation to ESOMAR. Spain: 1970s.
Further reading
Selective Perception in Stock Investing (http://www.investingator.org/selective-perception.html)
Semmelweis reflex
The Semmelweis reflex or "Semmelweis effect" is a metaphor for the reflex-like tendency to reject new evidence or
new knowledge because it contradicts established norms, beliefs or paradigms.
The term derives from Ignaz Semmelweis, who discovered that childbed fever mortality rates fell ten-fold when doctors washed their hands (we would now say disinfected them) with a chlorine solution between contact with infected and non-infected patients. His hand-washing suggestions were rejected by his contemporaries (see Contemporary reaction to Ignaz Semmelweis).
While there is some uncertainty regarding the origin and generally accepted use of the expression, the term "Semmelweis reflex" has been documented at least as far back as the author Robert Anton Wilson.[1] In the book The Game of Life, Timothy Leary provided the following polemical definition of the Semmelweis reflex: "Mob behavior found among primates and larval hominids on undeveloped planets, in which a discovery of important scientific fact is punished". The expression has found its way into philosophy and religious studies as "unmitigated human skepticism concerning causality".
References
[1] Wilson, Robert Anton (1991). The Game of Life. New Falcon Publications. ISBN 1561840505.
Selection bias
Selection bias is a statistical bias in which there is an error in choosing the individuals or groups to take part in a scientific study.[1] It is sometimes referred to as the selection effect. The term "selection bias" most often refers to the distortion of a statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, then certain conclusions drawn may be wrong.
Types
There are many types of possible selection bias, including:
Sampling bias
Sampling bias is systematic error due to a non-random sample of a population,[2] causing some members of the population to be less likely to be included than others, resulting in a biased sample: a statistical sample in which the members of the population are not equally balanced or objectively represented.[3] It is mostly classified as a subtype of selection bias,[4] sometimes specifically termed sample selection bias,[5][6] but some classify it as a separate type of bias.[7]
A distinction, albeit not universally accepted, of sampling bias is that it undermines the external validity of a test (the
ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal
validity for differences or similarities found in the sample at hand. In this sense, errors occurring in the process of
gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias.
Examples of sampling bias include self-selection, pre-screening of trial participants, discounting trial subjects/tests that did not run to completion, and migration bias, in which subjects who have recently moved into or out of the study area are excluded.
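The self-selection case lends itself to a quick simulation. The sketch below (all numbers invented: a true mean of 30 minutes of daily exercise and a response probability that rises with activity) shows how a non-random sample shifts the estimated mean upward; it illustrates the mechanism only and is not drawn from the sources cited here.

import random

random.seed(1)
population = [random.gauss(30, 10) for _ in range(100_000)]  # minutes/day

# Self-selection: the more active someone is, the likelier they respond.
sample = [x for x in population if random.random() < min(1.0, max(0.0, x / 60))]

print(sum(population) / len(population))  # ~30, the true mean
print(sum(sample) / len(sample))          # noticeably higher: a biased sample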
Time interval
Early termination of a trial at a time when its results support a desired conclusion.
A trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to
be reached by the variable with the largest variance, even if all variables have a similar mean.
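A small simulation makes the early-termination problem concrete. This sketch (the sample sizes, peeking interval and 1.96 threshold are illustrative choices, not taken from any study above) runs trials with zero true effect, "peeks" after every ten subjects per arm, and stops as soon as the interim difference looks significant; far more than the nominal 5% of such trials end with a spurious "effect".

import random, statistics

random.seed(2)

def peeking_trial(max_n=500, peek_every=10):
    treat, ctrl = [], []
    while len(treat) < max_n:
        for _ in range(peek_every):
            treat.append(random.gauss(0, 1))  # true effect is zero
            ctrl.append(random.gauss(0, 1))
        diff = statistics.mean(treat) - statistics.mean(ctrl)
        se = (statistics.variance(treat) / len(treat)
              + statistics.variance(ctrl) / len(ctrl)) ** 0.5
        if abs(diff) > 1.96 * se:
            return True   # terminated early on an "extreme" interim result
    return False

print(sum(peeking_trial() for _ in range(200)) / 200)  # well above 0.05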
Exposure
Susceptibility bias
Clinical susceptibility bias, when one disease predisposes to a second disease, and the treatment for the first disease erroneously appears to predispose to the second disease. For example, postmenopausal syndrome gives a higher likelihood of also developing endometrial cancer, so estrogens given for the postmenopausal syndrome may receive more than their actual share of the blame for causing endometrial cancer.[8]
Protopathic bias, when a treatment for the first symptoms of a disease or other outcome appears to cause the outcome. It is a potential bias when there is a lag time between the first symptoms and the start of treatment before the actual diagnosis.[8] It can be mitigated by lagging, that is, the exclusion of exposures that occurred in a certain time period before diagnosis (see the sketch after this list).[9]
Indication bias, a potential mix-up between cause and effect when exposure is dependent on indication, e.g. a treatment is given to people at high risk of acquiring a disease, potentially causing a preponderance of treated people among those acquiring the disease. This may create the erroneous appearance of the treatment being a cause of the disease.[10]
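The "lagging" mitigation mentioned under protopathic bias above amounts to a simple filter on exposure dates. A minimal sketch, with a hypothetical one-year lag window and invented dates:

from datetime import date, timedelta

LAG = timedelta(days=365)  # illustrative one-year lag window

def lagged_exposures(exposures, diagnosis_date, lag=LAG):
    # Keep only exposures recorded before the lag window preceding diagnosis;
    # later ones may have been treatments for the disease's first symptoms.
    cutoff = diagnosis_date - lag
    return [e for e in exposures if e < cutoff]

exposures = [date(2005, 3, 1), date(2009, 11, 20), date(2010, 6, 5)]
diagnosis = date(2010, 9, 1)
print(lagged_exposures(exposures, diagnosis))  # only the 2005 exposure survives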
Data
Partitioning data with knowledge of the contents of the partitions, and then analyzing them with tests designed for
blindly chosen partitions.
Rejection of "bad" data on arbitrary grounds, instead of according to previously stated or generally agreed criteria.
Rejection of "outliers" on statistical grounds that fail to take into account important information that could be
derived from "wild" observations.
[11]
Studies
Selection of which studies to include in a meta-analysis (see also combinatorial meta-analysis).
Performing repeated experiments and reporting only the most favorable results, perhaps relabelling lab records of
other experiments as "calibration tests", "instrumentation errors" or "preliminary surveys".
Presenting the most significant result of a data dredge as if it were a single experiment (which is logically the
same as the previous item, but is seen as much less dishonest).
Attrition
Attrition bias is a kind of selection bias caused by attrition (loss of participants),[12] discounting trial subjects/tests that did not run to completion. It includes dropout, nonresponse (lower response rate), withdrawal and protocol deviators. It gives biased results where it is unequal in regard to exposure and/or outcome. For example, in a test of a dieting program, the researcher may simply reject everyone who drops out of the trial, but most of those who drop out are those for whom it was not working. Differential loss of subjects in the intervention and comparison groups may change the characteristics of these groups and outcomes irrespective of the studied intervention.[12]
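The dieting example can be simulated directly. In this sketch the 2 kg true mean, the dropout rule and every other number are invented; the point is only that when dropout is likelier for those with poor results, a completers-only analysis overstates the average effect.

import random, statistics

random.seed(3)
true_losses = [random.gauss(2.0, 3.0) for _ in range(1000)]  # kg lost per subject

# Dropout is likelier the worse the result: losing nothing -> ~70% dropout.
completers = [x for x in true_losses if random.random() > 0.7 - 0.1 * x]

print(statistics.mean(true_losses))  # ~2.0 kg, the full-cohort answer
print(statistics.mean(completers))   # larger: completers-only is biased upward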
Observer selection
Data is filtered not only by study design and measurement, but by the necessary precondition that there has to be someone doing the study. In situations where the existence of the observer or the study is correlated with the data, observation selection effects occur, and anthropic reasoning is required.[13]
An example is the past impact event record of Earth: if large impacts cause mass extinctions and ecological disruptions precluding the evolution of intelligent observers for long periods, no one will observe any evidence of large impacts in the recent past (since they would have prevented intelligent observers from evolving). Hence there is a potential bias in the impact record of Earth.[14] Astronomical existential risks might similarly be underestimated due to selection bias, and an anthropic correction has to be introduced.[15]
Avoidance
In the general case, selection biases cannot be overcome with statistical analysis of existing data alone, though Heckman correction may be used in special cases. An informal assessment of the degree of selection bias can be made by examining correlations between exogenous (background) variables and a treatment indicator. However, in regression models, it is the correlation between unobserved determinants of the outcome and unobserved determinants of selection into the sample which biases estimates, and this correlation between unobservables cannot be directly assessed by the observed determinants of treatment.[16]
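A toy model illustrates why the correlation between unobservables is the crux. In the sketch below (all parameters invented), one unobserved trait raises both a wage and the probability of being observed working, the classic setting for the Heckman correction; the naive sample mean is visibly biased.

import random, statistics

random.seed(4)
observed_wages = []
for _ in range(100_000):
    u = random.gauss(0, 1)                  # unobserved trait (e.g., motivation)
    wage = 20 + 2 * u + random.gauss(0, 1)  # true population mean is 20
    works = (u + random.gauss(0, 1)) > 0    # selection also depends on u
    if works:
        observed_wages.append(wage)         # we only see wages of workers

print(statistics.mean(observed_wages))  # about 21, above the true mean of 20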
Related issues
Selection bias is closely related to:
publication bias or reporting bias, the distortion produced in community perception or meta-analyses by not
publishing uninteresting (usually negative) results, or results which go against the experimenter's prejudices, a
sponsor's interests, or community expectations.
confirmation bias, the distortion produced by experiments that are designed to seek confirmatory evidence instead
of trying to disprove the hypothesis.
exclusion bias, which results from applying different criteria to cases and controls with regard to participation eligibility for a study, or from different variables serving as the basis for exclusion.
Notes
[1] Dictionary of Cancer Terms – selection bias (http://www.cancer.gov/Templates/db_alpha.aspx?CdrID=44087) Retrieved on September 23, 2009.
[2] Medical Dictionary – 'Sampling Bias' (http://www.medilexicon.com/medicaldictionary.php?t=10087) Retrieved on September 23, 2009.
[3] TheFreeDictionary – biased sample (http://medical-dictionary.thefreedictionary.com/Sample+bias) Retrieved on 2009-09-23. Site in turn cites: Mosby's Medical Dictionary, 8th edition.
[4] Dictionary of Cancer Terms – Selection Bias (http://medical.webends.com/kw/Selection Bias) Retrieved on September 23, 2009.
[5] The effects of sample selection bias on racial differences in child abuse reporting (http://www.ncbi.nlm.nih.gov/pubmed/9504213) Ards S, Chung C, Myers SL Jr. Child Abuse Negl. 1999 Dec;23(12):1209; author reply 1211-5. PMID 9504213.
[6] Sample Selection Bias Correction Theory (http://www.cs.nyu.edu/~mohri/postscript/bias.pdf) Corinna Cortes, Mehryar Mohri, Michael Riley, and Afshin Rostamizadeh. New York University.
[7] Page 262 in: Behavioral Science. Board Review Series. (http://books.google.com/books?id=f0IDHvLiWqUC&printsec=frontcover&source=gbs_navlinks_s#v=onepage&q=&f=false) By Barbara Fadem. ISBN 0-7817-8257-0, ISBN 978-0-7817-8257-9. 216 pages.
[8] Feinstein AR, Horwitz RI (November 1978). "A critique of the statistical evidence associating estrogens with endometrial cancer". Cancer Res. 38 (11 Pt 2): 4001-5. PMID 698947.
[9] Tamim H, Monfared AA, LeLorier J (March 2007). "Application of lag-time into exposure definitions to control for protopathic bias". Pharmacoepidemiol Drug Saf 16 (3): 250-8. doi:10.1002/pds.1360. PMID 17245804.
[10] Matthew R. Weir (2005). Hypertension (Key Diseases) (Acp Key Diseases Series). Philadelphia, Pa: American College of Physicians. p. 159. ISBN 1-930513-58-5.
[11] Kruskal, W. (1960) Some notes on wild observations, Technometrics. (http://www.tufts.edu/~gdallal/out.htm)
[12] Jüni P, Egger M. Empirical evidence of attrition bias in clinical trials. Int J Epidemiol. 2005 Feb;34(1):87-8.
[13] Nick Bostrom, Anthropic Bias: Observation selection effects in science and philosophy. Routledge, New York 2002.
[14] Milan M. Ćirković, Anders Sandberg, and Nick Bostrom. Anthropic Shadow: Observation Selection Effects and Human Extinction Risks (http://www.nickbostrom.com/papers/anthropicshadow.pdf). Risk Analysis, Vol. 30, No. 10, 2010.
[15] Max Tegmark and Nick Bostrom, How unlikely is a doomsday catastrophe? (http://arxiv.org/abs/astro-ph/0512204) Nature, Vol. 438 (2005): 75. arXiv:astro-ph/0512204.
[16] Heckman, J. (1979) Sample selection bias as a specification error. Econometrica, 47, 153-61.
Social comparison bias
Social comparison bias can be defined as feelings of dislike and competitiveness toward someone who is seen as physically or mentally better than oneself.
Introduction
Many people base their moods and feelings on how well they are doing compared to other people in their environment, and social comparison bias occurs regularly in everyday life. It can be defined as feelings of dislike and competitiveness toward someone who is seen as physically or mentally better than oneself.[1] This can be compared to social comparison, which is believed to be central to achievement motivation, feelings of injustice, depression, jealousy and people's willingness to remain in relationships or jobs.[2][3] People compete for the best grades, the best jobs and the best houses, and in many situations social comparison bias is fairly self-explanatory: for example, a person who shops at low-end department stores may be overcome with feelings of resentment, anger and envy toward a peer who shops at designer stores. This form of the bias involves wealth and social status. Some people make social comparisons but are largely unaware of them.[4] In most cases, people try to compare themselves to those in their peer group or with whom they are similar.[5]
Research
There are many studies on social comparison and its effects on mental health. One study examined the relationship between depression and social comparison.[6] Thwaites and Dagnan, in "Moderating variables in the relationship between social comparison and depression", investigated the relationship between social comparison and depression within an evolutionary framework. Their hypothesis was that depression was an outcome of the social comparisons that people carried out. The study investigated the moderating effects on social comparison of the importance of comparison dimensions to the person, and of the perceived importance of the dimensions to other people. To measure depression in their participants, the researchers used a self-esteem test, the Self Attributes Questionnaire created by Pelham and Swann in 1989. The test consisted of 10-point Likert scale ratings on 10 individual social comparison dimensions (e.g. intelligence, social skills, sense of humor). Questions were added to explore beliefs regarding the importance of social comparison dimensions. Data were collected from a combined clinical and non-clinical sample of 174 people.[6] Based on the data collected, they concluded that social comparison does have a relationship with depression: people who engaged in more social comparison had higher levels of depression than people who rarely compared themselves to others.
Cognitive effects
One major symptom that can occur with social comparison bias is depression. Depression is typically diagnosed during a clinical encounter using the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV), criteria. Symptoms include depressed mood, hopelessness, and sleep difficulties, including both hypersomnia and insomnia.[6] Clinical depression can be caused by many factors in a person's life, and it is the mental illness most commonly associated with social comparison bias.[7] Biologically, depression has been linked to a decrease in the size of the hippocampus and to lowered levels of serotonin circulating through the brain.[8] Another negative symptom associated with social comparison bias is suicidal ideation, which can be defined as persistent thoughts about suicide and suicide attempts; suicide itself is the taking of one's own life.[5] Suicidal ideation can occur with social comparison bias because people who compare themselves to others seen as better than themselves become mentally discouraged, believing that they cannot perform or look a certain way, and this causes low self-esteem. Low self-esteem is
one of the main factors in suicidal ideation.[1]
Social comparison bias in the media
Mainstream media is also a major contributor to social comparison.[9] Advertisements in magazines, commercials and billboards constantly portray to the public what beauty is supposed to look like, and when young people and adults see them, they compare themselves to the images around them.[10] People who do not look a certain way or weigh a certain amount may be put down for it, which can cause low self-esteem and the onset of depression because they do not fit the prevailing mold of beauty.[9] People are criticized when they do not look like the models in magazines or on TV, and comparing oneself to people in the media can have negative effects, causing mental anxiety, stress, negative body image and even eating disorders.[11] Low self-esteem and negative self-image contribute to tragic outcomes including suicide and self-harm, and social comparison to others on TV or in magazines can cause people to lose confidence in themselves and stress over trying to be perfect and to be what society expects. In an experiment that studied women's body image after they compared themselves to different types of models, body image was significantly more negative after viewing thin media images than after viewing images of either average-size or plus-size models.[11] Through social comparison, media is one of the leading causes of poor body image among youth and adults.[12]
Social comparison bias through social media
With social media a main source of news and breaking stories, people can connect with others from all over the world and learn in new ways.[11] It has become easier to see people's private lives on a public network: social networks such as Facebook make viewing someone's daily life as simple as sending a friend request. Exposed to everyone else's lives, people begin to compare themselves with their Facebook friends, and it is easy to log in, see someone boast about their success or new belongings, and feel bad about oneself. In recent studies, researchers have linked Facebook use with depression in the social media generation.[11] Users may develop low self-esteem from seeing their friends online appear more exciting and more popular, and this social comparison bias can lead people to think of their own lives as less fulfilling than they would like. They see pictures and status updates about promotions, new jobs, vacations, new relationships, fun outings, or those who can afford nice things; this can cognitively affect a person's self-esteem and cause depression,[13] and they may start to feel bad about their appearance and their life in general. Social media does have an impact on the amount of social comparison people engage in.[14]
Social comparison bias in the classroom
Social comparisons are also very important in the school system. Depending on their grade level, students can be very competitive about the grades they receive compared to their peers. Social comparisons not only influence students' self-concepts but can also improve their performance.[15] The comparison process leads to a lower self-concept when the class performance level is high and to a higher self-concept when the class performance level is low.[15] Therefore, two students with equal performance in a domain may develop different self-concepts when they belong to classes with different performance levels.[10] Social comparisons are important and valid predictors of students' self-evaluations and achievement behavior. Students may feel jealousy or competitiveness over grades and over getting into better colleges and universities than their peers, but social comparison can also motivate students to do well because they want to keep up with their peers.
Conclusion
Social comparison bias can occur in people's everyday lives, whether on social networking sites, in the media, in society regarding wealth and social status, or in the school system. It can harm mental health by increasing the risks of depression, suicidal ideation and other mental disorders.[16] Social comparison is everywhere in this generation: people compare themselves to each other to raise their self-esteem or to try to better themselves. Because social comparison is so pervasive, it can lead to social comparison bias and cause negative effects in a person's life. The research discussed above supports the hypothesis that depression is related to the social comparisons that people carry out.
References
[1] Garcia, Song & Tesser 2010, pp. 97-101
[2] Buunk & Gibbons 1997
[3] Suls & Wheeler 2000
[4] Smith & Leach 2004, pp. 297-308
[5] Taylor & Lobel 1989, pp. 569-575
[6] Thwaites & Dagnan 2004, pp. 309-323
[7] Pyszczynski & Greenberg 1987, pp. 122-138
[8] Nolen-Hoeksema & Morrow 1993, pp. 561-570
[9] Richins 1991, pp. 73-81
[10] Wood, V. J. 1989
[11] Kaplan & Haenlein 2010, pp. 59-68
[12] Menon, Kyung & Agrawal 2008, pp. 39-52
[13] Pappas 2012
[14] Kendler & Karkowski-Shuman 1997, pp. 539-547
[15] Möller 2006
[16] Kendler 1995, p. 59
Sources
Burson, A.; Larrick, R.; Soll, J. (2005). "Social Comparison and Confidence: When Thinking You're Better than Average Predicts Overconfidence". Ross School of Business (Paper No. 1016).
Garcia, S.; Song, H.; Tesser, A. (2010). "Tainted recommendations: The social comparison bias". Organizational Behavior and Human Decision Processes 113 (2): 97-101.
Huguet, P.; Dumas, F. (2001). "Social comparison choices in the classroom: further evidence for students' upward comparison tendency and its beneficial impact on performance". European Journal of Social Psychology: 557-578.
Kaplan, A.; Haenlein, M. (2010). "Users of the world, unite! The challenges and opportunities of Social Media". Business Horizons 53: 59-68.
Kendler, K. S. (1995). "Major depression and the environment: a psychiatric genetic perspective". Pharmacopsychiatry 31: 59.
Kendler, K. S.; Karkowski-Shuman, L. (1997). "Stressful life events and genetic liability to major depression: genetic control of exposure to the environment?". Psychol Med 27: 539-547.
Menon, G.; Kyung, E.; Agrawal, N. (2008). "Biases in social comparisons: Optimism or pessimism?". Organizational Behavior and Human Decision Processes 108 (2009): 39-52.
Möller, J.; Köller, O. (2001). "Dimensional comparisons: An experimental approach to the internal/external frame of reference model". Journal of Educational Psychology 93: 826-835.
Monteil, J.; Huguet, P. (1993). "The Influence of Social Comparison Situations on Individual Task Performance: Experimental Illustrations". International Journal of Psychology 28 (5): 627-643.
Nolen-Hoeksema, S.; Morrow, J. (1993). "Effects of rumination and distraction on naturally occurring depressed mood". Cognitive Emotion 7: 561-570.
Pappas, S. (2012). "Facebook With Care: Social Networking Site Can Hurt Self-Esteem". Live Science Journal.
Pyszczynski, T.; Greenberg, J. (1987). "Self-regulatory perseveration and the depressive self-focusing style: a self-awareness theory of reactive depression". Psychol Bull 102: 122-138.
Richins, M. (1991). "Social Comparison and the Idealized Images of Advertising". Journal of Consumer Research 18 (1): 73-81.
Smith, H.; Leach, C. (2004). "Group membership and everyday social comparison experiences". Eur J Soc Psychol 34 (3): 297-308.
Taylor, S.; Lobel, M. (1989). "Social Comparison Activity Under Threat: Downward Evaluation and Upward Contacts". Psychological Review 96 (4): 569-575.
Thwaites, R.; Dagnan, D. (2004). "Moderating variables in the relationship between social comparison and depression: An evolutionary perspective". Psychology and Psychotherapy (Theo, Res, Pra) 77: 309-323.
Social desirability bias
Social desirability bias is the tendency of respondents to answer questions in a manner that will be viewed
favorably by others. It can take the form of over-reporting "good" behavior or under-reporting "bad" or undesirable
behavior. The tendency poses a serious problem with conducting research with self-reports, especially
questionnaires. This bias interferes with the interpretation of average tendencies as well as individual differences.
Topics where SDR is of special concern are self-reports of abilities, personality, sexual behavior, and drug use.
When confronted with the question "How often do you masturbate?", for example, respondents may be pressured by
the societal taboo against masturbation, and either under-report the frequency or avoid answering the question.
Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severe underestimates.
When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the
fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents
may feel pressured to deny any drug use or rationalize it, e.g., "I only smoke marijuana when my friends are around."
The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions
for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either
case, the mean reports from both groups are likely to be distorted by social desirability bias.
Other topics that are sensitive to social desirability bias:
Personal income and earnings, often inflated when low and deflated when high.
Feelings of low self-worth and/or powerlessness, often denied.
Excretory functions, often approached uncomfortably, if discussed at all.
Compliance with medicinal dosing schedules, often inflated.
Religion, often either avoided or uncomfortably approached.
Patriotism, either inflated or, if denied, denied with fear of others' judgement.
Bigotry and intolerance, often denied, even if it exists within the responder.
Intellectual achievements, often inflated.
Physical appearance, either inflated or deflated.
Acts of real or imagined physical violence, often denied.
Indicators of charity or "benevolence," often inflated.
Illegal acts, often denied.
Individual differences
The fact that people differ in their tendency to engage in socially desirable responding (SDR) is a special concern to
those measuring individual differences with self-reports. Individual differences in SDR make it difficult to
distinguish those people with good traits who are responding factually from those distorting their answers in a
positive direction.
When socially desirable responding (SDR) cannot be eliminated, researchers may resort to evaluating the tendency and then controlling for it. A separate measure of SDR must be administered together with the primary measure (test or interview) aimed at the subject matter of the research. The key assumption is that respondents who answer in a socially desirable manner on that scale are also responding desirably to all self-reports throughout the study.
In some cases the entire questionnaire package from high-scoring respondents may simply be discarded. Alternatively, respondents' answers on the primary questionnaires may be statistically adjusted commensurate with their SDR tendencies; for example, this adjustment is performed automatically in the standard scoring of MMPI scales.
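One simple version of such an adjustment can be sketched in a few lines: regress the primary self-report score on the SDR scale score and keep the residuals, the part of the answers that SDR cannot account for. The data below are invented, and real scoring procedures (such as the MMPI's) are considerably more elaborate.

# Residualize a primary self-report score on a social-desirability score
# (ordinary least squares with one predictor; toy, invented data).
def residualize(primary, sdr):
    n = len(primary)
    mx = sum(sdr) / n
    my = sum(primary) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(sdr, primary)) / \
           sum((x - mx) ** 2 for x in sdr)
    return [y - (my + beta * (x - mx)) for x, y in zip(sdr, primary)]

primary = [7, 9, 4, 8, 6]  # e.g., self-reported conscientiousness (invented)
sdr     = [3, 5, 1, 4, 2]  # e.g., Marlowe-Crowne scores (invented)
print(residualize(primary, sdr))  # answers with the SDR component removed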
The major concern with SDR scales is that they confound style with content. After all, people actually differ in the
degree to which they possess desirable traits (e.g., nuns versus criminals). Consequently, measures of social
desirability confound true differences with social-desirability bias.
Standard measures
Until recently, the most commonly used measure of socially desirable responding was the Marlowe-Crowne Social Desirability Scale.[1] The original version comprised 33 True-False items. A shortened version, the Strahan-Gerbasi, comprises only 10 items, but some, such as Thompson and Phua, have raised questions regarding the reliability of this measure.[2][3]
In 1991, Delroy Paulhus published the Balanced Inventory of Desirable Responding (BIDR): a questionnaire designed to measure two forms of SDR.[4] This 40-item instrument provides separate subscales for "impression management", the tendency to give inflated self-descriptions to an audience, and self-deceptive enhancement, the tendency to give honest but inflated self-descriptions. The commercial version of the BIDR is called the "Paulhus Deception Scales (PDS)".[5]
Non-English measures
Scales designed to tap response styles are available in all major languages, including Italian[6] and German.[7] Another measure has been used in surveys and opinion polls carried out by interviewing people face-to-face or over the telephone.[8]
Other response styles
'Extreme response bias' (ERB) takes the form of an exaggerated preference for extreme responses, e.g. for '1' or '7' on 7-point scales. Its converse, 'moderacy bias', entails a preference for middle-range (or midpoint) responses (e.g. 3-5 on 7-point scales). 'Acquiescence' is the tendency to prefer higher ratings over lower ratings, whatever the content of the question.
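These styles can be given crude numerical indices. The following sketch uses illustrative operationalizations (share of endpoint responses, share of mid-range responses, and the mean rating); none of these is a standardized scale.

# Toy response-style indices for 7-point Likert ratings.
def response_styles(ratings, lo=1, hi=7):
    n = len(ratings)
    erb = sum(r in (lo, hi) for r in ratings) / n      # extreme response bias
    moderacy = sum(3 <= r <= 5 for r in ratings) / n   # midrange preference
    acquiescence = sum(ratings) / n                    # overall agreement level
    return erb, moderacy, acquiescence

print(response_styles([7, 7, 1, 6, 7, 2, 7, 1]))  # high ERB, low moderacy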
Anonymity and confidentiality
When the subjects' details are not required, as in sample investigations and screenings, anonymous administration is preferable, as the person does not feel directly and personally involved in the answers he or she is going to give. Anonymous self-administration provides neutrality, detachment and reassurance. An even better result is obtained by returning the questionnaires by mail or ballot box, so as to further guarantee anonymity and make it impossible to identify the subjects who filled in the questionnaires.
Neutralized administration
SDR tends to be reduced by wording questions in a neutral fashion, or by using forced-choice questions in which the two options have been equated for desirability.
Another approach is to administer tests through a computer (self-administration software).[9] A computer, even compared to the most competent interviewer, provides a higher sense of neutrality: it does not appear to be judging.
Behavioral measurement
The most recent approach, the Over-claiming Technique, assesses the tendency to claim knowledge about non-existent items. More complex methods to promote honest answers include the Randomized Response and Unmatched Count techniques, as well as the Bogus Pipeline technique; a simulation of the first of these follows.
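To make the logic of randomized response concrete, here is a minimal sketch of the forced-"yes" coin variant; the 20% true prevalence and the sample size are invented for the simulation, and real deployments differ in design and estimator details.

import random

# Forced-"yes" randomized response: each respondent privately flips a fair
# coin; heads -> answer "yes" regardless of the truth, tails -> answer
# truthfully. No single "yes" is incriminating, yet prevalence is estimable.
random.seed(5)
TRUE_PREVALENCE = 0.20  # assumed only for this simulation

answers = []
for _ in range(100_000):
    truth = random.random() < TRUE_PREVALENCE
    heads = random.random() < 0.5
    answers.append(True if heads else truth)

p_yes = sum(answers) / len(answers)
estimate = 2 * p_yes - 1  # since P(yes) = 0.5 + 0.5 * prevalence
print(round(estimate, 3))  # close to 0.20, without exposing any respondent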
References
[1] Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24, 349-354.
[2] http://www.springerlink.com/content/g5771006303277ww/fulltext.pdf
[3] Thompson, E. R. & Phua, F. T. T. (2005). Reliability among senior managers of the Marlowe-Crowne short-form social desirability scale (http://www.springerlink.com/content/g5771006303277ww/fulltext.pdf), Journal of Business and Psychology, 19, 541-554.
[4] Paulhus, D.L. (1991). Measurement and control of response biases. In J.P. Robinson et al. (Eds.), Measures of personality and social psychological attitudes. San Diego: Academic Press.
[5] Paulhus, D.L. (1998). Paulhus Deception Scales (PDS) is published by Multi-Health Systems of Toronto.
[6] Roccato, M. (2003). Desiderabilità Sociale e Acquiescenza. Alcune Trappole delle Inchieste e dei Sondaggi. LED Edizioni Universitarie, Torino. ISBN 88-7916-216-0.
[7] Stoeber, J. (2001). The social desirability scale-17 (SD-17). European Journal of Psychological Assessment, 17, 222-232.
[8] Corbetta, P. (2003). La ricerca sociale: metodologia e tecniche. Vol. I-IV. Il Mulino, Bologna.
[9] McBurney, D.H. (1994). Research Methods. Brooks/Cole, Pacific Grove, California.
Status quo bias
Status quo bias is a cognitive bias: an irrational preference for the current state of affairs. The current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss. Status quo bias should be distinguished from a rational preference for the status quo ante, as when the current state of affairs is objectively superior to the available alternatives, or when imperfect information is a significant problem. A large body of evidence, however, shows that an irrational preference for the status quo, a true status quo bias, frequently affects human decision-making.
Status quo bias interacts with other non-rational cognitive processes such as loss aversion, existence bias, endowment effect, longevity, mere exposure, and regret avoidance. Experimental evidence for the detection of status quo bias comes from the use of the reversal test. A vast number of experimental and field examples exist; behavior in regard to retirement plans, health, and ethical choices shows evidence of the status quo bias.
Examples
Kahneman, Thaler, and Knetsch created experiments that could produce this effect reliably.[1] Samuelson and Zeckhauser (1988) demonstrated status quo bias using a questionnaire in which subjects faced a series of decision problems, which were alternately framed to be with and without a pre-existing status quo position. Subjects tended to remain with the status quo when such a position was offered to them.[2]
Hypothetical choice tasks: Subjects were given a hypothetical choice task in the following "neutral" version, in which no status quo was defined: "You are a serious reader of the financial pages but until recently you have had few funds to invest. That is when you inherited a large sum of money from your great-uncle. You are considering different portfolios. Your choices are to invest in: a moderate-risk company, a high-risk company, treasury bills, municipal bonds." Other subjects were presented with the same problem but with one of the options designated as the status quo. In this case, the opening passage continued: "A significant portion of this portfolio is invested in a moderate risk company . . . (The tax and broker commission consequences of any changes are insignificant.)" The result was that an alternative became much more popular when it was designated as the status quo.[3]
Electric power consumers: California electric power consumers were asked about their preferences regarding trade-offs between service reliability and rates. The respondents fell into two groups, one with much more reliable service than the other. Each group was asked to state a preference among six combinations of reliability and rates, with one of the combinations designated as the status quo. A strong bias toward the status quo was observed. Of those in the high-reliability group, 60.2 percent chose the status quo, whereas a mere 5.7 percent chose the low-reliability option that the other group had been experiencing, despite its lower rates. Similarly, of those in the low-reliability group, 58.3 percent chose their low-reliability status quo, and only 5.8 percent chose the high-reliability option.[4]
The US states of New Jersey and Pennsylvania inadvertently ran a real-life experiment providing evidence of status
quo bias in the early 1990s. As part of tort law reform programs, citizens were offered two options for their
automotive insurance: an expensive option giving them full right to sue and a less expensive option with restricted
rights to sue. In New Jersey the cheaper option was the default and most citizens selected it. Only a minority chose
the cheaper option in Pennsylvania, where the more expensive option was the default. Similar effects have been
shown for contributions to retirement plans, choice of internet privacy policies and the decision to become an organ
donor.
Explanations
Status quo bias has been attributed to a combination of loss aversion and the endowment effect, two ideas relevant to prospect theory. An individual weighs the potential losses of switching from the status quo more heavily than the potential gains; this is because the prospect theory value function is steeper in the loss domain.[2] As a result, the individual will prefer not to switch at all. However, the status quo bias is maintained even in the absence of gain/loss framing: for example, when subjects were asked to choose the colour of their new car, they tended towards one colour arbitrarily framed as the status quo.[2] Loss aversion, therefore, cannot wholly explain the status quo bias,[5] with other potential causes including regret avoidance,[5] transaction costs[6] and psychological commitment.[2]
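The asymmetry can be made concrete with prospect theory's value function. In the sketch below, the curvature and loss-aversion parameters are Tversky and Kahneman's (1992) median estimates, used purely for illustration; a change that trades an equal-sized gain for an equal-sized loss comes out negative, so the status quo is retained.

# Prospect-theory value function: v(x) = x^alpha for gains and
# -lambda * (-x)^beta for losses (alpha = beta = 0.88, lambda = 2.25).
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

# A switch that gains 50 on one attribute and loses 50 on another:
print(value(50) + value(-50))  # about -39: negative, so keep the status quo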
Rational Routes to Status Quo Maintenance
A preference for the status quo can also be rational if there are cognitive or informational limitations.
Informational limitations
Decision outcomes are rarely certain, nor is the utility they may bring. Because some errors are more costly than others (Haselton & Nettle, 2006),[7] sticking with what worked in the past is a safe option, as long as previous decisions are 'good enough'.[8]
Cognitive limitations
Choice is often difficult,[9] and decision makers may prefer to do nothing[10] and/or to maintain their current course of action[11] because it is easier. Status quo alternatives often require less mental effort to maintain (Eidelman & Crandall, 2009).
Irrational Routes to the Status Quo Bias
The irrational maintenance of the status quo bias links and confounds many cognitive biases.
Existence bias
Assumptions of longevity and goodness are part of the status quo bias. People treat existence as a prima facie case for goodness and aesthetic value, and longevity increases this preference.[12] The status quo bias affects people's preferences; people report preferences for what they are likely rather than unlikely to receive. People simply assume, with little reason or deliberation, the goodness of existing states.[12]
Longevity is a corollary of the existence bias: if existence is good, longer existence should be better. This thinking resembles quasi-evolutionary notions of survival of the fittest, and also the augmentation principle in attribution theory.[13]
Inertia is another reason used to explain a bias towards the status quo. Another explanation is fear of regret at making a wrong decision, e.g. choosing a partner while thinking there could be someone better out there.[14]
Mere exposure
Mere exposure is an explanation for the status quo bias. Existing states are encountered more frequently than non-existent states, and because of this they are perceived as more true and evaluated more favourably. One way to increase liking for something is repeated exposure over time.[15]
Loss aversion
Loss aversion also leads to greater regret for action than for inaction;[16] more regret is experienced when a decision changes the status quo than when it maintains it.[17] Together these forces provide an advantage for the status quo; people are motivated to do nothing or to maintain current or previous decisions.[11] Change is avoided, and decision makers stick with what has been done in the past.
Changes from the status quo will typically involve both gains and losses, with the change having good overall consequences if the gains outweigh the losses. A tendency to overemphasize the avoidance of losses will thus
favor retaining the status quo, resulting in a status quo bias. Even though choosing the status quo may entail forfeiting certain positive consequences, when these are represented as forfeited "gains" they are psychologically given less weight than the "losses" that would be incurred if the status quo were changed.[18]
Omission bias
Omission bias may account for some of the findings previously ascribed to status quo bias. Omission bias is diagnosed when a decision maker prefers a harmful outcome that results from an omission to a less harmful outcome that results from an action (Ilana Ritov and Jonathan Baron, "Status-Quo and Omission Biases," Journal of Risk and Uncertainty 5 [1992]: 49-61).
Detection
The Reversal Test: When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias. The rationale of the Reversal Test is: if a continuous parameter admits of a wide range of possible values, only a tiny subset of which can be local optima, then it is prima facie implausible that the actual value of that parameter should just happen to be at one of these rare local optima.[18]
Neural Activity
A study found that erroneous status quo rejections have a greater neural impact than erroneous status quo acceptances. This asymmetry in the genesis of regret might drive the status quo bias on subsequent decisions.[19]
A study was done using a visual detection task in which subjects tended to favor the default when making difficult, but not easy, decisions. This bias was suboptimal in that more errors were made when the default was accepted. A selective increase in subthalamic nucleus (STN) activity was found when the status quo was rejected in the face of heightened decision difficulty. Analysis of effective connectivity showed that the inferior frontal cortex, a region more active for difficult decisions, exerted an enhanced modulatory influence on the STN during switches away from the status quo.[20]
Research by UCL scientists examining the neural pathways involved in status quo bias in the human brain found that the more difficult the decision we face, the more likely we are not to act. The study, published in Proceedings of the National Academy of Sciences (PNAS), looked at the decision-making of participants taking part in a tennis 'line judgement' game while their brains were scanned using functional MRI (fMRI). The 16 study participants were asked to look at a cross between two tramlines on a screen while holding down a 'default' key. They then saw a ball land in the court and had to decide whether it was in or out. On each trial, the computer signalled which was the current default option, 'in' or 'out'. The participants continued to hold down the key to accept the default and had to release it and change to another key to reject the default. The results showed a consistent bias towards the default, which led to errors. As the task became more difficult, the bias became even more pronounced. The fMRI scans showed that a region of the brain known as the subthalamic nucleus (STN) was more active in the cases when the default was rejected. Also, greater flow of information was seen from a separate region sensitive to difficulty (the prefrontal cortex) to the STN. This indicates that the STN plays a key role in overcoming status quo bias when the decision is difficult.[20]
Behavioral Economics and the Default position
Against this background, two behavioral economists devised an opt-out plan to help employees of a particular company build their retirement savings. In an opt-out plan, the employees are automatically enrolled unless they explicitly ask to be excluded. They found evidence for status quo bias and other associated effects. They also noted that changing the default alternatives has, in some instances, been shown to have dramatic effects on people's choices.[21]
Conflict
Status quo educational bias can be both a barrier to political progress and a threat to the state's legitimacy. MacMullen argues that the values of stability, compliance, and patriotism underpin important reasons for status quo bias that appeal not to the substantive merits of existing institutions but merely to the fact that those institutions are the status quo.[22]
Relevant fields
The status quo bias is seen in important real-life decisions; it has been found to be prominent in data on selections of health care plans and retirement programs.[2]
Politics
Preference for the status quo represents a core component of conservative ideology; because preference for the status quo is one significant element of conservative ideology, the bias in its favor plays a role under certain conditions in promoting political conservatism.[12]
Ethics
Status quo bias may be responsible for much of the opposition to human enhancement in general and to genetic cognitive enhancement in particular.[18]
Education
Education can (sometimes unintentionally) encourage children's belief in the substantive merits of a particular existing law or political institution, where the effect does not derive from an improvement in their ability or critical thinking about that law or institution. However, this biasing effect is not automatically illegitimate or counterproductive: a balance between social inculcation and openness needs to be maintained.[23]
In elementary classrooms, read-aloud sessions that exclude ethnically diverse materials create a bias in favor of the status quo that is harmful to children's education.[24]
Health
An experiment was conducted to determine whether status quo bias (bias toward current medication even when better alternatives are offered) exists in a stated-choice study among asthma patients who take prescription combination maintenance medications. The results of this study indicate that the status quo bias may exist in stated-choice studies, especially with medications that patients have to take daily, such as asthma maintenance medications. Stated-choice practitioners should include a current medication in choice surveys to control for this bias.[25]
Retirement plans
An example of the status quo bias affecting retirement plans is a study that examined the U.S. equity mutual fund industry. It found that people maintained the plan they already had, even if it was no longer the optimal choice.[26]
References
[1] Kahneman, D.; Knetsch, J. L.; Thaler, R. H. (1991). "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias". Journal of Economic Perspectives 5 (1): 193-206.
[2] Samuelson, W.; Zeckhauser, R. (1988). "Status quo bias in decision making". Journal of Risk and Uncertainty 1: 7-59.
[3] Samuelson, William; Richard Zeckhauser (1988). "Status Quo Bias in Decision Making". Journal of Risk and Uncertainty: 7-59.
[4] Hartman, Raymond S.; Chi-Keung Woo (1991). "Consumer Rationality and the Status Quo". Quarterly Journal of Economics 106: 141-162.
[5] Korobkin, R. (1997). "The status quo bias and contract default rules". Cornell Law Review 83: 608-687.
[6] Tversky, A.; Kahneman, D. (1991). "Loss aversion in riskless choice: a reference-dependent model". The Quarterly Journal of Economics 106 (4): 1039-1061.
[7] Haselton; Nettle (2006). Personality and Social Psychology Review 10 (1): 47-66.
[8] Simon, H.A. (March 1956). "Rational Choice and the Structure of the Environment". Psychological Review 63 (2): 129-138. doi:10.1037/h0042769.
[9] Iyengar, Sheena; Lepper, Mark R. (December 2000). "When choice is demotivating: Can one desire too much of a good thing?". Journal of Personality and Social Psychology 79 (6): 995-1006.
[10] Baron, Jonathan; Ilana Ritov (2004). "Omission bias, individual differences, and normality". Organizational Behavior and Human Decision Processes 94: 74-85.
[11] Samuelson, William; Zeckhauser (1988). "Status Quo Bias in Decision Making". Boston University.
[12] Eidelman, Scott; Christian S. Crandall (March 2012). "Bias in Favour of the Status Quo". Social and Personality Psychology Compass 6 (3): 270-281.
[13] Kelley, H.H. (1972). "Attribution in Social Interaction". General Learning Press.
[14] Venkatesh, B. "Benefitting from status quo bias".
[15] Bornstein, R.F. (1989). "Exposure and affect: Overview and meta-analysis of research". Psychological Bulletin 106: 265-289.
[16] Kahneman, Slovic & Tversky (1982). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.
[17] Inman, J.J.; Zeelenberg (2002). "Regret in repeat versus switch decisions: The attenuating role of decision justifiability". Journal of Consumer Research 29: 116-128.
[18] Bostrom, Nick; Toby Ord (July 2006). "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics". Ethics 116.
[19] Nicolle; Fleming, Bach, Driver & Dolan (March 2011). "A Regret-Induced Status Quo Bias". The Journal of Neuroscience 31 (9): 3320-3327.
[20] Fleming; Thomas; Dolan (February 2010). "Overcoming Status-Quo in the Brain". MIT.
[21] Thaler, Richard H.; Shlomo Benartzi (2004). "Save More Tomorrow: Using Behavioural Economics to Increase Employee Saving". Journal of Political Economy 112: 164-187.
[22] MacMullen (July 2011). "On Status-Quo Bias in Civic Education". Journal of Politics 73 (3): 872-886.
[23] MacMullen (July 2011). "On Status Quo Bias in Civic Education". Journal of Politics 73 (3): 872-886.
[24] Gonzalez-Jensen, Margarita; Sadler, Norma (April 1997). "Behind Closed Doors: Status Quo Bias in Read Aloud Selections". Equity & Excellence in Education 30 (1): 27-31.
[25] Hauber, Mohamed; Meddis, Johnson, Wagner (2008). "Status Quo Bias in Stated Choice Studies: Is it Real?". Health Values: 567-568.
[26] Kempf, Alexandre; Stefan Ruenzi (2006). "Status Quo Bias and the Number of Alternatives: An Empirical Illustration from the Mutual Fund Industry". Journal of Behavioral Finance 7 (4): 204-213.
Further reading
Barry, W. J. (2012). "Challenging the Status Quo Meaning of Educational Quality: Introducing Transformational Quality (TQ) Theory". Educational Journal of Living Theories 4: 129.
Johnson, E. J.; Hershey, J.; Meszaros, J.; Kunreuther, H. (1993). "Framing, Probability Distortions, and Insurance Decisions". Journal of Risk and Uncertainty 7: 35-51.
Seiler, Michael J.; Traub, Vicky; Harrison (2008). "Familiarity Bias and the Status Quo Alternative". Journal of Housing Research 17: 139-154.
Mandler, Michael (June 2004). Welfare economics with status quo bias: a policy paralysis problem and cure. Royal Holloway College, University of London.
Wittman, Donald (2007). "Is Status Quo Bias Consistent With Downward-Sloping Demand?". Economic Inquiry 46: 243-288.
Kim, H.W. and A. Kankanhalli (2008). Investigating User Resistance to Information Systems Implementation: A Status Quo Bias Perspective. MIS Quarterly.
Stereotype
Police officers buying donuts and coffee, an example of perceived stereotypical behavior in North America.
A stereotype is a thought that may be adopted[1] about specific types of individuals or certain ways of doing things, but that belief may or may not accurately reflect reality.[2][3] However, this is only a fundamental psychological definition of a stereotype.[3] Within and across different psychology disciplines, there are different concepts and theories of stereotyping that provide their own expanded definitions. Some of these definitions share commonalities, though each one may also harbor unique aspects that may complement or contradict the others.
Etymology
The term stereotype derives from the Greek words στερεός (stereos), "firm, solid"[4] and τύπος (typos), "impression,"[5] hence "solid impression".
The term comes from the printing trade and was first adopted in 1798 by Firmin Didot to describe a printing plate that duplicated any typography. The duplicate printing plate, or the stereotype, is used for printing instead of the original.
The first reference to "stereotype" in its modern use in English, outside of printing, was in 1850, as a noun meaning "image perpetuated without change."[6] But it was not until 1922 that "stereotype" was first used in the modern psychological sense, by American journalist Walter Lippmann in his work Public Opinion.[7]
Relationship with other types of intergroup attitudes
Stereotypes, prejudice and discrimination are understood as related but different concepts.[8][9][10][11] Stereotypes are regarded as the most cognitive component, prejudice as the affective component, and discrimination as the behavioral component of prejudicial reactions.[8][9] In this tripartite view of intergroup attitudes, stereotypes reflect expectations and beliefs about the characteristics of members of groups perceived as different from one's own, prejudice represents the emotional response, and discrimination refers to actions.[8][9]
Although related, the three concepts can exist independently of each other.[12][9] According to Daniel Katz and Kenneth Braly, stereotyping leads to racial prejudice when people emotionally react to the name of a group, ascribe characteristics to members of that group, and then evaluate those characteristics.[10]
Possible prejudicial effects of stereotypes[3] are:
Justification of ill-founded prejudices or ignorance
Unwillingness to rethink one's attitudes and behavior towards stereotyped groups
Preventing some people of stereotyped groups from entering or succeeding in activities or fields[13]
Content
Stereotype content model, adapted from Fiske et al. (2002): Four types of stereotypes resulting from combinations of perceived warmth and competence.
Stereotype content refers to the attributes that people think characterize a group. Studies of stereotype content examine what people think of others, rather than the reasons and mechanisms involved in stereotyping.[14]
Early theories of stereotype content proposed by social psychologists like Gordon Allport assumed that stereotypes of outgroups reflected uniform antipathy.[15][16] Katz and Braly, for instance, argued in their classic 1933 study that ethnic stereotypes were uniformly negative.[14]
By contrast, a newer model of stereotype content theorizes that stereotypes are frequently ambivalent and vary along two dimensions: warmth and competence. Warmth and competence are respectively predicted by lack of competition and by status; groups that do not compete with the ingroup for the same resources (e.g., college space) are perceived as warm, while high-status (e.g., economically or educationally successful) groups are considered competent. The groups within each of the four combinations of high and low levels of warmth and competence elicit distinct emotions.[17] The model explains the phenomenon that some outgroups are admired but disliked while others are liked but disrespected. It was empirically tested on a variety of national and international samples and was found to reliably predict stereotype content.[15][18]
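The two-dimensional structure of the model can be summarized compactly. The emotion labels in this sketch are the ones commonly associated with the four quadrants in this literature; they go slightly beyond the passage above, which names only the dimensions.

# Warmth x competence quadrants of the stereotype content model,
# with the emotions each combination is commonly said to elicit.
SCM = {
    ("high warmth", "high competence"): "admiration",
    ("high warmth", "low competence"):  "pity",
    ("low warmth",  "high competence"): "envy",
    ("low warmth",  "low competence"):  "contempt",
}
for (warmth, competence), emotion in SCM.items():
    print(f"{warmth:12} x {competence:16} -> {emotion}")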
Functions
Early studies suggested that stereotypes were only used by rigid, repressed, and authoritarian people. This idea has been overturned by more recent studies, which suggest that stereotypes are commonplace. Stereotypes are now said to be collective group beliefs, meaning that people who belong to the same social group share the same set of stereotypes.[12]
Relationship between cognitive and social functions
Stereotyping can serve cognitive functions on an interpersonal level, and social functions on an intergroup level.[12][3] For stereotyping to function on an intergroup level (see social identity approaches: social identity theory and self-categorization theory), an individual must see themselves as part of a group, and being part of that group must also be salient for the individual.[12]
Craig McGarty, Russell Spears, and Vincent Y. Yzerbyt (2002) argued that the cognitive functions of stereotyping are best understood in relation to its social functions, and vice versa.[19]
Cognitive functions
Stereotypes can help make sense of the world. They are a form of categorization that helps to simplify and systematize information, so that the information is easier to identify, recall, predict, and react to.[12] Stereotypes are categories of objects or people. Between stereotypes, objects or people are as different from each other as possible.[1] Within stereotypes, objects or people are as similar to each other as possible.[1]
As to why people find it easier to understand categorized information, Gordon Allport suggested possible answers in his 1954 publication:[20] First, people can consult the category of something for ways to respond to that thing. Second, things are more specific when they are in a category than when they are not, because categorization accentuates properties that are shared by all members of a group. Third, people can readily describe things in a category because, fourth and related, things in the same category have distinct characteristics. Finally, people can take for granted the characteristics of a particular category because the category itself may be an arbitrary grouping.
Moreover, stereotypes function as time- and energy-savers that allow people to act more efficiently.[1] David Hamilton's 1981 publication gave rise to the view that stereotypes are people's biased perceptions of their social contexts.[1] In this view, people use stereotypes as shortcuts to make sense of their social contexts, and this makes the task of understanding one's world less cognitively demanding.[1]
Social functions: social categorization
When stereotypes are used to explain social events, to justify the activities of one's own group (ingroup) to another group (outgroup), or to differentiate the ingroup as positively distinct from outgroups, the overarching purpose of stereotyping is to put the collective self (one's ingroup membership) in a positive light.[21]
Explanation purposes
Stereotypes can be used to explain social events.[12][21] Henri Tajfel[12] gave the following example: some people found that the anti-Semitic contents of The Protocols of the Elders of Zion only made sense if Jews had certain characteristics. According to Tajfel,[12] Jews were stereotyped as being evil, as yearning for world domination, and so on, because these stereotypes could explain the anti-Semitic "facts" as presented in The Protocols of the Elders of Zion.
Justification purposes
People create stereotypes of an outgroup to justify the actions that their ingroup has committed, or plans to commit, towards that outgroup.[12][20][21] For example, according to Tajfel,[12] Europeans stereotyped Turkish, Indian, and Chinese people as being incapable of achieving financial advances without European help. This stereotype was used to justify European colonialism in Turkey, India, and China.
Intergroup differentiation
An underlying assumption is that people want their ingroup to have a positive image relative to outgroups, and so people want to differentiate their ingroup from relevant outgroups in a desirable way.[12] If an outgroup does not affect the ingroup's image, then from an image-preservation point of view there is no point in the ingroup being positively distinct from that outgroup.[12]
People can actively create certain images for relevant outgroups by stereotyping. People do so when they see that their ingroup is no longer as clearly and/or as positively differentiated from relevant outgroups, and they want to restore the intergroup differentiation to a state that favours the ingroup.[12][21]
Social functions: self categorisation
People will change their stereotypes of their ingroups and outgroups to suit the context they are in.[21][3] People are likely to self-stereotype their ingroup as homogeneous in an intergroup context, and less likely to do so in an intragroup context, where the need to emphasise their group membership is not as great.[21] Stereotypes can emphasise a person's group membership in two steps: first, stereotypes emphasise the person's similarities to ingroup members on relevant dimensions, and also the person's differences from outgroup members on relevant dimensions.[21] Second, the more the stereotypes emphasise within-group similarities and between-group differences, the more salient the person's social identity becomes, and the more depersonalised that person will be.[21] A depersonalised person abandons his or her individual differences and embraces the stereotypes associated with his or her relevant group membership.[21]
Social functions: social influence and consensus
Stereotypes are an indicator of ingroup consensus.[21] When there is intragroup disagreement over stereotypes of the ingroup and/or outgroups, ingroup members take collective action to prevent other ingroup members from diverging from each other.[21]
John C. Turner proposed in 1987[21] that if ingroup members disagree on an outgroup stereotype, one of three possible collective actions will follow: First, ingroup members may negotiate with each other and conclude that they have different outgroup stereotypes because they are stereotyping different subgroups of an outgroup (e.g., Russian gymnasts versus Russian boxers). Second, ingroup members may negotiate with each other but conclude that they are disagreeing because of categorical differences amongst themselves. Accordingly, in this context it is better to categorise ingroup members under different categories (e.g., Democrats versus Republicans) than under a shared category (e.g., Americans). Finally, ingroup members may influence each other to arrive at a common outgroup stereotype.
Formation
Different disciplines give different accounts of how stereotypes develop: psychologists may focus on an individual's experience with groups, patterns of communication about those groups, and intergroup conflict. Sociologists, by contrast, may focus on the relations among different groups in a social structure, suggesting that stereotypes are the result of conflict, poor parenting, and inadequate mental and emotional development.
Illusory correlation
Research has shown that stereotypes can develop based on a cognitive mechanism known as illusory correlation, an erroneous inference about the relationship between two events.[1][22][23] If two statistically infrequent events co-occur, observers overestimate the frequency of their co-occurrence. The underlying reason is that rare, infrequent events are distinctive and salient and, when paired, become even more so. The heightened salience results in more attention and more effective encoding, which strengthens the belief that the events are correlated.[24][25]
In the intergroup context, illusory correlations lead people to misattribute rare behaviors or traits to minority group members at higher rates than to majority groups, even when both display the same proportion of the behaviors or traits. Black people, for instance, are a minority group in the United States, and interaction with black people is a relatively infrequent event for an average white American. Similarly, undesirable behavior (e.g., crime) is statistically less frequent than desirable behavior. Since both events, "blackness" and "undesirable behavior", are distinctive in the sense that they are infrequent, the combination of the two leads observers to overestimate the rate of co-occurrence.[24] Similarly, in workplaces where women are underrepresented and negative behaviors such as errors occur less frequently than positive behaviors, women become more strongly associated with mistakes than men.[26]
In a landmark study, David Hamilton and Richard Gifford (1976) examined the role of illusory correlation in stereotype formation. Subjects were instructed to read descriptions of behaviors performed by members of groups A and B. Negative behaviors outnumbered positive actions, and group B was smaller than group A, making negative behaviors and membership in group B relatively infrequent and distinctive. Participants were then asked who had performed a given set of actions: a person of group A or of group B. Results showed that subjects overestimated the frequency with which the two distinctive events, membership in group B and negative behavior, co-occurred, and evaluated group B more negatively. This was despite the fact that the proportion of positive to negative behaviors was equivalent for both groups and that there was no actual correlation between group membership and behaviors.[24] Although Hamilton and Gifford found a similar effect for positive behaviors when they were the infrequent events, a meta-analytic review of studies showed that illusory correlation effects are stronger when the infrequent, distinctive information is negative.[22]
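The salience-and-encoding mechanism described above can be sketched in a few lines of code. The following is a toy simulation, not Hamilton and Gifford's procedure: the event frequencies, encoding probabilities, and sample size are all illustrative assumptions. It shows how a memory advantage for doubly distinctive pairings alone can make two statistically independent events appear correlated:

```python
# Toy model: group and behavior are independent, but doubly rare events
# (minority group + rare behavior) are encoded into memory more reliably.
import random

random.seed(1)

P_GROUP_B = 1 / 3        # the minority group is rarer
P_NEGATIVE = 1 / 3       # negative behaviors are rarer
ENCODE_BASE = 0.5        # baseline chance an observed event is remembered
ENCODE_DISTINCT = 0.9    # boosted encoding for doubly rare events

memory = []
for _ in range(100_000):
    group = "B" if random.random() < P_GROUP_B else "A"
    behav = "neg" if random.random() < P_NEGATIVE else "pos"
    p_encode = ENCODE_DISTINCT if (group == "B" and behav == "neg") else ENCODE_BASE
    if random.random() < p_encode:
        memory.append((group, behav))

def recalled_negative_rate(g):
    events = [b for grp, b in memory if grp == g]
    return sum(b == "neg" for b in events) / len(events)

# True P(negative) is 1/3 for both groups, yet the remembered rates differ:
print(f"group A: {recalled_negative_rate('A'):.2f}")  # about 0.33
print(f"group B: {recalled_negative_rate('B'):.2f}")  # noticeably higher
```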
Hamilton and Gifford's distinctiveness-based explanation of stereotype formation was subsequently extended.[25] A 1994 study by McConnell, Sherman, and Hamilton found that people formed stereotypes based on information that was not distinctive at the time of presentation but was considered distinctive at the time of judgement.[27] Once a person judges non-distinctive information in memory to be distinctive, that information is re-encoded and re-represented as if it had been distinctive when it was first processed.[27]
Common environment
One explanation for why stereotypes are shared is that they are the result of a common environment that stimulates people to react in the same way.[1]
The problem with the common-environment explanation is that it does not explain how shared stereotypes can occur without direct stimuli.[1] Research since the 1930s has suggested that people are highly similar to each other in how they describe different racial and national groups, even though they may have no personal experience with the groups they are describing.[28]
Socialisation and upbringing
Another explanation is that people are socialised to adopt the same stereotypes.[1] Some psychologists believe that although stereotypes can be absorbed at any age, they are usually acquired in early childhood under the influence of parents, teachers, peers, and the media.
If stereotypes are defined by social values, then stereotypes will change only as social values change.[1] The suggestion that stereotype content depends on social values reflects Walter Lippmann's argument in his 1922 publication that stereotypes are rigid because they cannot be changed at people's will.[10]
Studies emerging since the 1940s refuted the suggestion that stereotype contents cannot be changed at people's will. Those studies suggested that one group's stereotype of another group will become more or less positive depending on whether their intergroup relationship has improved or degraded.[10][29][30] Intergroup events (e.g., World War Two, the Persian Gulf conflict) often changed intergroup relationships. For example, after WWII, Black American students held a more negative stereotype of people from countries that were America's WWII enemies.[10] If there are no changes to an intergroup relationship, then relevant stereotypes will not change.[11]
Intergroup relations
According to a third explanation, shared stereotypes are caused neither by the coincidence of common stimuli nor by socialisation. Rather, stereotypes are shared because group members are motivated to behave in certain ways, and stereotypes reflect those behaviours.[1] On this explanation, it is important to note that stereotypes are the consequence, not the cause, of intergroup relations. It assumes that when it is important for people to acknowledge both their ingroup and outgroup, they will aim to emphasise their difference from outgroup members and their similarity to ingroup members.[1]
Activation
An initial study of stereotype activation was conducted by Patricia Devine in 1989. She suggested that stereotypes
are automatically activated in the presence of a member (or some symbolic equivalent) of a stereotyped group and
that the unintentional activation of the stereotype is equally strong for high- and low-prejudice persons. To test her
hypothesis, Devine used a priming paradigm: words related to the cultural stereotype of blacks were presented
rapidly in subjects' parafoveal visual field (i.e., out of their direct line of vision) so that the participants could not
consciously identify the primes. Some participants were presented with a high proportion (80%) of words related to
the racial stereotype, and others with a lower proportion (20%). Then, during an ostensibly unrelated
impression-formation task, subjects read a paragraph describing a race-unspecified target person's ambiguously
hostile behaviors and rated the target person on several trait scales. Ambiguously hostile behaviors were examined
because pretesting had revealed that hostility was an important component of the cultural stereotype of blacks.
Results showed that subjects who received the high proportion of racial primes rated the target person in the story as
significantly more hostile than participants who were presented with the lower proportion of ethnic primes. This
effect held true for both high- and low-prejudice subjects (as measured by the Modern Racism Scale). Thus, the
racial stereotype was activated even for low-prejudice individuals who did not personally endorse it.[31][32][33]
Subsequent research challenged Devine's findings.[34] Lepore and Brown (1997), in particular, noted that the primes used by Devine were both neutral category labels (e.g., "Blacks") and stereotypic attributes (e.g., "lazy"). They argued that if only the neutral category labels were primed, people high and low in prejudice would respond differently. In a design similar to Devine's, Lepore and Brown primed the category of African-Americans using labels such as "blacks" and "West Indians" and then assessed the differential activation of the associated stereotype in the subsequent impression-formation task. They found that high-prejudice participants increased their ratings of the target person on the negative stereotypic dimensions and decreased them on the positive dimension, whereas low-prejudice subjects tended in the opposite direction. The results suggest that the level of prejudice and stereotype endorsement affects people's judgements when the category, and not the stereotype per se, is primed.[35]
Accuracy
(Image: A magazine feature from Beauty Parade, March 1952, stereotyping women drivers; it features Bettie Page as the model.)
Stereotypes can be efficient shortcuts and sense-making tools. They can, however, keep people from processing new or unexpected information about each individual, thus biasing the impression-formation process.[1] Early researchers believed that stereotypes were inaccurate representations of reality.[28] A series of pioneering studies in the 1930s found no empirical support for widely held racial stereotypes.[10] By the mid-1950s, Gordon Allport wrote that "it is possible for a stereotype to grow in defiance of all evidence".[20]
Research on the role of illusory correlations in the formation of stereotypes suggests that stereotypes can develop because of incorrect inferences about the relationship between two events (e.g., membership in a social group and bad or good attributes). This means that at least some stereotypes are inaccurate.[24][27][22]
There is also empirical social science research showing that stereotypes are often accurate.[36] Jussim et al. reviewed four studies concerning racial stereotypes and seven studies examining gender stereotypes about demographic characteristics, academic achievement, personality, and behavior. Based on these, the authors argued that some aspects of ethnic and gender stereotypes are accurate, while stereotypes concerning political affiliation and nationality are much less accurate.[37] A study by Terracciano et al. also found that stereotypic beliefs about nationality do not reflect the actual personality traits of people from different cultures.[38]
Effects
Attributional ambiguity
Attributional ambiguity refers to the uncertainty that members of stereotyped groups experience in interpreting the causes of others' behavior toward them. Stereotyped individuals who receive negative feedback can attribute it either to personal shortcomings, such as lack of ability or poor effort, or to the evaluator's stereotypes and prejudice toward their social group. Alternatively, positive feedback can either be attributed to personal merit or discounted as a form of sympathy or pity.[39][40][41]
Crocker et al. (1991) showed that when black participants were evaluated by a white person who was aware of their race, black subjects mistrusted the feedback, attributing negative feedback to the evaluator's stereotypes and positive feedback to the evaluator's desire to appear unbiased. When the black participants' race was unknown to the evaluator, they were more accepting of the feedback.[42]
Attributional ambiguity has been shown to affect a person's self-esteem. When they receive positive evaluations, stereotyped individuals are uncertain whether they really deserved their success and, consequently, find it difficult to take credit for their achievements. In the case of negative feedback, ambiguity has been shown to have a protective effect on self-esteem, as it allows people to assign blame to external causes. Some studies, however, have found that this effect holds only when stereotyped individuals can be absolutely certain that their negative outcomes are due to the evaluator's prejudice. If any room for uncertainty remains, stereotyped individuals tend to blame themselves.[40]
Attributional ambiguity can also make it difficult to assess one's skills, because performance-related evaluations are mistrusted or discounted. Moreover, it can lead to the belief that one's efforts are not directly linked to outcomes, thereby depressing one's motivation to succeed.[39]
Stereotype threat
(Image: The effect of stereotype threat (ST) on math test scores for girls and boys. Data from Osborne (2007).[43])
Stereotype threat occurs when people are aware of a negative stereotype about their social group and experience anxiety or concern that they might confirm the stereotype.[44] Stereotype threat has been shown to undermine performance in a variety of domains.[45][46]
Claude M. Steele and Joshua Aronson conducted the first experiments showing that stereotype threat can depress intellectual performance on standardized tests. In one study, they found that black college students performed worse than white students on a verbal test when the task was framed as a measure of intelligence. When it was not presented in that manner, the performance gap narrowed. Subsequent experiments showed that framing the test as diagnostic of intellectual ability made black students more aware of negative stereotypes about their group, which in turn impaired their performance.[47]
Stereotype threat effects have been demonstrated for an array of social groups in many different arenas, including not only academics but also sports,[48] chess,[49] and business.[50]
Self-fulfilling prophecy
Stereotypes lead people to expect certain actions from members of social groups. These stereotype-based expectations may lead to self-fulfilling prophecies, in which one's inaccurate expectations about a person's behavior, through social interaction, prompt that person to act in stereotype-consistent ways, thus confirming one's erroneous expectations and validating the stereotype.[51][52]
Word, Zanna, and Cooper (1974) demonstrated the effects of stereotypes in the context of a job interview. White participants interviewed black and white subjects who, prior to the experiments, had been trained to act in a standardized manner. Analysis of the videotaped interviews showed that black job applicants were treated differently: they received shorter amounts of interview time and less eye contact, and interviewers made more speech errors (e.g., stutters, sentence incompletions, incoherent sounds) and physically distanced themselves from black applicants. In a second experiment, trained interviewers were instructed to treat applicants, all of whom were white, as the white or black applicants had been treated in the first experiment. As a result, applicants treated like the black applicants of the first experiment behaved in a more nervous manner and received more negative performance ratings than interviewees receiving the treatment previously afforded to whites.[53]
A 1977 study by Snyder, Tanke, and Berscheid found a similar pattern in social interactions between men and women. Male undergraduate students were asked to talk on the phone to female undergraduates whom they believed to be physically attractive or unattractive. The conversations were taped, and analysis showed that men who thought they were talking to an attractive woman communicated in a more positive and friendlier manner than men who believed they were talking to an unattractive woman. This altered the women's behavior: female subjects who, unknown to them, were perceived as physically attractive behaved in a friendly, likeable, and sociable manner in comparison with subjects who were regarded as unattractive.[54]
Discrimination
Because stereotypes simplify and justify social reality, they have potentially powerful effects on how people perceive and treat one another.[55] As a result, stereotypes can lead to discrimination in labor markets and other domains.[56] For example, Tilcsik (2011) found that employers who seek job applicants with stereotypically male heterosexual traits are particularly likely to discriminate against gay men, suggesting that discrimination on the basis of sexual orientation is partly rooted in specific stereotypes and that these stereotypes loom large in many labor markets.[13] Agerström and Rooth (2011) showed that automatic obesity stereotypes captured by the Implicit Association Test can predict real hiring discrimination against the obese.[57] Similarly, experiments suggest that gender stereotypes play an important role in judgments that affect hiring decisions.[58][59]
Self-stereotyping
Stereotypes can affect self-evaluations and lead to self-stereotyping.[60][3] For instance, Correll (2001, 2004) found that specific stereotypes (e.g., the stereotype that women have lower mathematical ability) affect women's and men's evaluations of their own abilities (e.g., in math and science), such that men assess their own task ability higher than women performing at the same level.[61][62] Similarly, a study by Sinclair et al. (2006) showed that Asian American women rated their math ability more favorably when their ethnicity and the relevant stereotype that Asian Americans excel in math were made salient. In contrast, they rated their math ability less favorably when their gender and the corresponding stereotype of women's inferior math skills were made salient. Sinclair et al. found, however, that the effect of stereotypes on self-evaluations is mediated by the degree to which close people in someone's life endorse these stereotypes. People's self-stereotyping can increase or decrease depending on whether close others view them in a stereotype-consistent or inconsistent manner.[63]
Stereotyping can also play a central role in depression when people hold negative self-stereotypes about themselves, according to Cox, Abramson, Devine, and Hollon (2012).[3] This depression that is caused by prejudice (i.e., "deprejudice") can be related to a group membership (e.g., Me-Gay-Bad) or not (e.g., Me-Bad). If someone holds prejudicial beliefs about a stigmatized group and then becomes a member of that group, they may internalize their prejudice and develop depression. People may also show prejudice internalization through self-stereotyping because of negative childhood experiences such as verbal and physical abuse.
Role in art and culture
(Image: American political cartoon titled "The Usual Irish Way of Doing Things", depicting a drunken Irishman lighting a powder keg and swinging a bottle. Published in Harper's Weekly, 1871.)
Stereotypes are common in various cultural media, where they take the form of dramatic stock characters. Such characters are found in the works of playwrights Bertolt Brecht, Dario Fo, and Jacques Lecoq, who characterize their actors as stereotypes for theatrical effect; in commedia dell'arte this is similarly common. The instantly recognizable nature of stereotypes means that they are effective in advertising and situation comedy. These stereotypes change over time, and in modern times only a few of the stereotyped characters shown in John Bunyan's The Pilgrim's Progress would be recognizable.
Media stereotypes of women first emerged in the early 20th century. Various stereotypic depictions or "types" of women appeared in magazines, including Victorian ideals of femininity, the New Woman, the Gibson Girl, the Femme fatale, and the Flapper.[64] More recently, artists such as Anne Taintor and Matthew Weiner (the producer of Mad Men) have used vintage images or ideas to insert their own commentary on the stereotypes of specific eras. Weiner's character Peggy Olson continually battles gender stereotypes throughout the series, excelling in a workplace dominated by men.
Some contemporary studies indicate that racial, ethnic, and cultural stereotypes are still widespread in Hollywood blockbuster movies.[65] Portrayals of Latin Americans in film and print media are restricted to a narrow set of characters: Latin Americans are largely depicted as sexualized figures such as the Latino macho or the Latina vixen, as gang members, as (illegal) immigrants, or as entertainers. By comparison, they are rarely portrayed as working professionals, business leaders, or politicians.[66]
In literature and art, stereotypes are clichéd or predictable characters or situations. Throughout history, storytellers have drawn from stereotypical characters and situations in order to connect the audience with new tales immediately. Sometimes such stereotypes can be sophisticated, such as Shakespeare's Shylock in The Merchant of Venice. Arguably, a stereotype that becomes complex and sophisticated ceases to be a stereotype per se by its unique characterization. Thus, while Shylock remains politically unstable in being a stereotypical Jew, the subject of prejudicial derision in Shakespeare's era, his many other detailed features raise him above a simple stereotype and into a unique character worthy of modern performance. Simply because one feature of a character can be categorized as typical does not make the entire character a stereotype.
Despite their proximity in etymological roots, cliché and stereotype are not used synonymously in cultural spheres. For example, a cliché is a high criticism in narratology, where genre and categorization automatically associate a story with its recognizable group. Labeling a situation or character in a story as typical suggests it is fitting for its genre or category, whereas declaring that a storyteller has relied on cliché is to pejoratively observe a simplicity and lack of originality in the tale. To criticize Ian Fleming for a stereotypically unlikely escape for James Bond would be understood by the reader or listener, but it would be more appropriately criticized as a cliché, in that it is overused and reproduced. Narrative genre relies heavily on typical features to remain recognizable and generate meaning in the reader/viewer.
References
[1] McGarty, Craig; Yzerbyt, Vincent Y.; Spears, Russell (2002). "Social, cultural and cognitive factors in stereotype formation" (http://catdir.loc.gov/catdir/samples/cam033/2002073438.pdf). Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 1–15. ISBN 978-0-521-80047-1.
[2] Judd, Charles M.; Park, Bernadette (1993). "Definition and assessment of accuracy in social stereotypes". Psychological Review 100 (1): 109–128. doi:10.1037/0033-295X.100.1.109.
[3] Cox, William T. L.; Abramson, Lyn Y.; Devine, Patricia G.; Hollon, Steven D. (2012). "Stereotypes, Prejudice, and Depression: The Integrated Perspective" (http://www.archpsychological.com/blog/wp-content/uploads/2012/09/deprejudice-txng-dep-n-prejudice-w-tx-for-other.pdf). Perspectives on Psychological Science 7 (5): 427–449. doi:10.1177/1745691612455204.
[4] στερεός (http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0057:entry=stereo/s), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library.
[5] τύπος (http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.04.0057:entry=tu/pos), Henry George Liddell, Robert Scott, A Greek-English Lexicon, on Perseus Digital Library.
[6] Online Etymology Dictionary (http://www.etymonline.com/index.php?term=stereotype)
[7] Kleg, Milton (1993). Hate Prejudice and Racism (http://books.google.com/books?id=yKrrSa7WqNwC&pg=PA135). Albany: State University of New York Press. pp. 135–137. ISBN 978-0-585-05491-9.
[8] Fiske, Susan T. (1998). "Stereotyping, Prejudice, and Discrimination" (http://books.google.com/books?id=w27pSuHLnLYC&pg=PA357). In Gilbert, Daniel T.; Fiske, Susan T.; Lindzey, Gardner. The Handbook of Social Psychology. Volume Two (4th ed.). Boston, Mass.: McGraw-Hill. p. 357. ISBN 978-0-19-521376-8.
[9] Denmark, Florence L. (2010). "Prejudice and Discrimination" (http://books.google.com/books?id=hhGdag3Wf-YC&pg=PA1276). In Weiner, Irving B.; Craighead, W. Edward. The Corsini Encyclopedia of Psychology. Volume Three (4th ed.). Hoboken, N.J.: John Wiley. p. 1277. ISBN 978-0-470-47921-6.
[10] Katz, Daniel; Braly, Kenneth W. (1935). "Racial prejudice and racial stereotypes". The Journal of Abnormal and Social Psychology (American Psychological Association) 30 (2): 175–193. doi:10.1037/h0059800.
[11] Oakes, P. J.; Haslam, S. A.; Turner, J. C. (1994). Stereotyping and social reality. Oxford: Blackwell.
[12] Tajfel, Henri (1981). "Social stereotypes and social groups". In Turner, John C.; Giles, Howard. Intergroup behaviour. Oxford: Blackwell. pp. 144–167. ISBN 978-0-631-11711-7.
[13] Tilcsik, András (2011). "Pride and Prejudice: Employment Discrimination against Openly Gay Men in the United States". American Journal of Sociology 117 (2): 586–626. doi:10.1086/661653.
[14] Operario, Don; Fiske, Susan T. (2003). "Stereotypes: Content, Structures, Processes, and Context" (http://books.google.com/books?id=Wfx55Z-Dw10C&pg=PA22). In Brown, Rupert; Gaertner, Samuel L. Blackwell Handbook of Social Psychology: Intergroup Processes. Malden, MA: Blackwell. pp. 22–44. ISBN 978-1-4051-0654-2.
[15] Fiske, Susan T.; Cuddy, Amy J. C.; Glick, Peter; Xu, Jun (2002). "A Model of (Often Mixed) Stereotype Content: Competence and Warmth Respectively Follow From Perceived Status and Competition" (http://www.cos.gatech.edu/facultyres/Diversity_Studies/Fiske_StereotypeContent.pdf). Journal of Personality and Social Psychology (American Psychological Association) 82 (6): 878–902. doi:10.1037//0022-3514.82.6.878.
[16] Cuddy, Amy J. C.; Fiske, Susan T. (2002). "Doddering But Dear: Process, Content, and Function in Stereotyping of Older Persons" (http://books.google.com/books?id=UvxEoFQ0LYwC&pg=PA7). In Nelson, Todd D. Ageism: Stereotyping and Prejudice against Older Persons. Cambridge, Mass.: MIT Press. pp. 7–8. ISBN 978-0-262-14077-5.
[17] Dovidio, John F.; Gaertner, Samuel L. (2010). "Intergroup Bias" (http://books.google.com/books?id=Pye5IkCFgRYC&pg=PA1085&lpg=PA1084). In Fiske, Susan T.; Gilbert, Daniel T.; Lindzey, Gardner. Handbook of Social Psychology. Volume Two (5th ed.). Hoboken, N.J.: John Wiley. p. 1085. ISBN 978-0-470-13747-5.
[18] Cuddy, Amy J. C.; et al. (2009). "Stereotype content model across cultures: Towards universal similarities and some differences" (http://www.people.hbs.edu/acuddy/2009, cuddy et al., BJSP.pdf). British Journal of Social Psychology (British Psychological Society) 48 (1): 1–33. doi:10.1348/014466608X314935.
[19] McGarty, Craig; Spears, Russell; Yzerbyt, Vincent Y. (2002). "Conclusion: stereotypes are selective, variable and contested explanations". Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 186–199. ISBN 978-0-521-80047-1.
[20] Allport, Gordon W. (1954). The Nature of Prejudice. Cambridge, MA: Addison-Wesley. p. 189. ISBN 978-0-201-00175-4.
[21] Haslam, S. A.; Turner, J. C.; Oakes, P. J.; Reynolds, K. J.; Doosje, B. (2002). "From personal pictures in the head to collective tools in the world: how shared stereotypes allow groups to represent and change social reality". In C. McGarty, V. Y. Yzerbyt, & R. Spears (Eds.), Stereotypes as explanations: The formation of meaningful beliefs about social groups (pp. 157–185). Cambridge: Cambridge University Press.
[22] Mullen, Brian; Johnson, Craig (1990). "Distinctiveness-based illusory correlations and stereotyping: A meta-analytic integration". British Journal of Social Psychology (Wiley-Blackwell on behalf of the British Psychological Society) 29 (1): 11–28. doi:10.1111/j.2044-8309.1990.tb00883.x.
[23] Meiser, Thorsten (2006). "Contingency Learning and Biased Group Impressions" (http://books.google.com/books?id=RMZL_2H8A4kC&pg=PA183). In Fiedler, Klaus; Juslin, Peter. Information Sampling and Adaptive Cognition. Cambridge: Cambridge University Press. pp. 183–209. ISBN 978-0-521-83159-8.
[24] Hamilton, David L.; Gifford, Robert K. (1976). "Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments". Journal of Experimental Social Psychology (Elsevier) 12 (4): 392–407. doi:10.1016/S0022-1031(76)80006-6.
[25] Berndsen, Mariëtte; Spears, Russell; van der Pligt, Joop; McGarty, Craig (2002). "Illusory correlation and stereotype formation: making sense of group differences and cognitive biases" (http://books.google.fr/books?id=dkn8dceHRg8C&pg=PA90). In McGarty, Craig; Yzerbyt, Vincent Y.; Spears, Russell. Stereotypes as explanations: The formation of meaningful beliefs about social groups. Cambridge: Cambridge University Press. pp. 90–110. ISBN 978-0-521-80047-1.
[26] Moskowitz, Gordon B. (2005). Social Cognition: Understanding Self and Others (http://books.google.com/books?id=_-NLW8Ynvp8C&pg=PA182). New York: Guilford Press. p. 182. ISBN 978-1-59385-085-2.
[27] McConnell, Allen R.; Sherman, Steven J.; Hamilton, David L. (1994). "Illusory correlation in the perception of groups: an extension of the distinctiveness-based account" (http://allenmcconnell.net/pdfs/edbe-JPSP-1994.pdf). Journal of Personality and Social Psychology 67 (3): 414–429. doi:10.1037/0022-3514.67.3.414.
[28] Katz, Daniel; Braly, Kenneth (1933). "Racial stereotypes of one hundred college students". The Journal of Abnormal and Social Psychology 28 (3): 280–290. doi:10.1037/h0074049.
[29] Meenes, Max (1943). "A Comparison of Racial Stereotypes of 1935 and 1942". Journal of Social Psychology 17 (2): 327–336. doi:10.1080/00224545.1943.9712287.
[30] Haslam, S. Alexander; Turner, John C.; Oakes, Penelope J.; McGarty, Craig; Hayes, Brett K. (1992). "Context-dependent variation in social stereotyping 1: The effects of intergroup relations as mediated by social change and frame of reference". European Journal of Social Psychology 22 (1): 3–20. doi:10.1002/ejsp.2420220104.
[31] Devine, Patricia G. (1989). "Stereotypes and Prejudice: Their Automatic and Controlled Components" (http://faculty.washington.edu/donnaw/Devine 1989.pdf). Journal of Personality and Social Psychology 56 (1): 5–18. doi:10.1037/0022-3514.56.1.5.
[32] Devine, Patricia G.; Monteith, Margo J. (1999). "Automaticity and Control in Stereotyping" (http://books.google.com/books?id=5X_auIBx99EC&pg=PA341). In Chaiken, Shelly; Trope, Yaacov. Dual-Process Theories in Social Psychology. New York: Guilford Press. pp. 341–342. ISBN 978-1-57230-421-5.
[33] Bargh, John A. (1994). "The Four Horsemen of Automaticity: Awareness, Intention, Efficiency, Control in Social Cognition" (http://books.google.com/books?id=5ncW0DyNqVwC&pg=PA21). In Wyer, Robert S.; Srull, Thomas K. Handbook of Social Cognition. Volume Two (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum. p. 21. ISBN 978-0-8058-1056-1.
[34] Brown, Rupert (2010). Prejudice: Its Social Psychology (http://books.google.com/books?id=PygYKbRoZjcC&pg=PA88) (2nd ed.). Oxford: Wiley-Blackwell. p. 88. ISBN 978-1-4051-1306-9.
[35] Lepore, Lorella; Brown, Rupert (1997). "Category and Stereotype Activation: Is Prejudice Inevitable?" (http://www.atkinson.yorku.ca/~jsteele/PDF/Optional Readings/Lepore_Brown_JPSP_1997.pdf). Journal of Personality and Social Psychology 72 (2): 275–287. doi:10.1037/0022-3514.72.2.275.
[36] Lee, Yueh-Ting; Jussim, Lee J.; McCauley, Clark R., eds. (September 1995). Stereotype Accuracy: Toward Appreciating Group Differences. American Psychological Association. ISBN 978-1-55798-307-7.
[37] Jussim, Lee; Cain, Thomas R.; Crawford, Jarret T.; Harber, Kent; Cohen, Florette (2009). "The unbearable accuracy of stereotypes". In Nelson, Todd D. Handbook of prejudice, stereotyping, and discrimination. New York: Psychology Press. pp. 199–227. ISBN 978-0-8058-5952-2.
[38] Terracciano, A.; Abdel-Khalek, A. M.; Adám, N.; Adamovová, L.; Ahn, C. K.; Ahn, H. N.; Alansari, B. M.; Alcalay, L.; et al. (2005). "National Character Does Not Reflect Mean Personality Trait Levels in 49 Cultures" (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2775052/). Science 310 (5745): 96–100. doi:10.1126/science.1117199. PMC 2775052. PMID 16210536.
[39] Zemore, Sarah E.; Fiske, Susan T.; Kim, Hyun-Jeong (2000). "Gender Stereotypes and the Dynamics of Social Interaction" (http://books.google.com/books?id=yJ43_5tJGycC&pg=PA229). In Eckes, Thomas; Trautner, Hanns Martin. The Developmental Social Psychology of Gender. Mahwah, NJ: Lawrence Erlbaum Associates. pp. 229–230. ISBN 978-0-585-30065-8.
[40] Crocker, Jennifer; Major, Brenda; Steele, Claude (1998). "Social Stigma" (http://books.google.com/books?id=w27pSuHLnLYC&pg=PA519). In Gilbert, Daniel T.; Fiske, Susan T.; Lindzey, Gardner. The Handbook of Social Psychology. Volume Two (4th ed.). Oxford: Oxford University Press. pp. 519–521. ISBN 978-0-19-521376-8.
[41] Whitley, Bernard E.; Kite, Mary E. (2010). The Psychology of Prejudice and Discrimination (http://books.google.fr/books?id=mXSJEjl4uZYC&pg=PA428) (2nd ed.). Belmont, CA: Wadsworth Cengage Learning. pp. 428–435. ISBN 978-0-495-59964-7.
[42] Crocker, Jennifer; Voelkl, Kristin; Testa, Maria; Major, Brenda (1991). "Social stigma: The affective consequences of attributional ambiguity". Journal of Personality and Social Psychology 60 (2): 218–228. doi:10.1037/0022-3514.60.2.218.
[43] Osborne, Jason W. (2007). "Linking Stereotype Threat and Anxiety". Educational Psychology 27 (1): 135–154. doi:10.1080/01443410601069929.
[44] Quinn, Diane M.; Kallen, Rachel W.; Spencer, Steven J. (2010). "Stereotype Threat". In Dovidio, John F.; et al. The SAGE Handbook of Prejudice, Stereotyping and Discrimination. Thousand Oaks, CA: SAGE Publications. pp. 379–394. ISBN 978-1-4129-3453-4.
[45] Inzlicht, Michael; Tullett, Alexa M.; Gutsell, Jennifer N. (2012). "Stereotype Threat Spillover: The Short- and Long-Term Effects of Coping with Threats to Social Identity" (http://books.google.com/books?id=o1JBcAv3f14C&pg=PA108). In Inzlicht, Michael; Schmader, Toni. Stereotype Threat: Theory, Process, and Application. New York, NY: Oxford University Press. p. 108. ISBN 978-0-19-973244-9.
[46] Aronson, Joshua; Steele, Claude M. (2005). "Chapter 24: Stereotypes and the Fragility of Academic Competence, Motivation, and Self-Concept" (http://books.google.com/books?id=B14TMHRtYBcC&pg=PA436). In Elliot, Andrew J.; Dweck, Carol S. Handbook of Competence and Motivation. New York: Guilford Press. pp. 436, 443. ISBN 978-1-59385-123-1.
[47] Steele, Claude M.; Aronson, Joshua (November 1995). "Stereotype threat and the intellectual test performance of African Americans" (http://users.nber.org/~sewp/events/2005.01.14/Bios+Links/Good-rec2-Steele_&_Aronson_95.pdf). Journal of Personality and Social Psychology 69 (5): 797–811. doi:10.1037/0022-3514.69.5.797. PMID 7473032.
[48] Stone, Jeff; Lynch, Christian I.; Sjomeling, Mike; Darley, John M. (1999). "Stereotype threat effects on Black and White athletic performance". Journal of Personality and Social Psychology 77 (6): 1213–1227. doi:10.1037/0022-3514.77.6.1213.
[49] Maass, Anne; D'Ettole, Claudio; Cadinu, Mara (2008). "Checkmate? The role of gender stereotypes in the ultimate intellectual sport" (http://clarksvillechessclub.org/pdf files/The role of gender stereotypes in chess.pdf). European Journal of Social Psychology 38 (2): 231–245. doi:10.1002/ejsp.440.
[50] Gupta, V. K.; Bhawe, N. M. (2007). "The Influence of Proactive Personality and Stereotype Threat on Women's Entrepreneurial Intentions". Journal of Leadership & Organizational Studies 13 (4): 73–85. doi:10.1177/10717919070130040901.
[51] Kassin, Saul M.; Fein, Steven; Markus, Hazel Rose (2011). Social Psychology (http://books.google.com/books?id=3aCdjhGxDjgC&pg=PA172) (8th ed.). Belmont, CA: Wadsworth, Cengage Learning. p. 172. ISBN 978-0-495-81240-1.
[52] Brown, Rupert (2010). Prejudice: Its Social Psychology (http://books.google.com/books?id=PygYKbRoZjcC&pg=PA94) (2nd ed.). Oxford: Wiley-Blackwell. pp. 94–97. ISBN 978-1-4051-1306-9.
[53] Word, Carl O.; Zanna, Mark P.; Cooper, Joel (1974). "The nonverbal mediation of self-fulfilling prophecies in interracial interaction". Journal of Experimental Social Psychology (Elsevier) 10 (2): 109–120. doi:10.1016/0022-1031(74)90059-6.
[54] Snyder, Mark; Tanke, Elizabeth D.; Berscheid, Ellen (1977). "Social perception and interpersonal behavior: On the self-fulfilling nature of social stereotypes" (http://jefferson.library.millersville.edu/reserve/COMM301_Paul_SocialPerception.pdf). Journal of Personality and Social Psychology 35 (9): 656–666. doi:10.1037/0022-3514.35.9.656.
[55] Banaji, Mahzarin R. (2002). "The Social Psychology of Stereotypes". In Smelser, Neil; Baltes, Paul. International Encyclopedia of the Social and Behavioral Sciences. New York: Pergamon. pp. 15100–15104. doi:10.1016/B0-08-043076-7/01754-X. ISBN 978-0-08-043076-8.
[56] Fiske, Susan T.; Lee, Tiane L. (2008). "Stereotypes and prejudice create workplace discrimination" (http://books.google.fr/books?hl=en&lr=&id=8edJmBsyRHwC&oi=fnd&pg=PA13). In Brief, Arthur P. Diversity at Work. New York: Cambridge University Press. pp. 13–52. ISBN 978-0-521-86030-7.
[57] Agerström, Jens; Rooth, Dan-Olof (2011). "The role of automatic obesity stereotypes in real hiring discrimination". Journal of Applied Psychology 96 (4): 790–805. doi:10.1037/a0021594. PMID 21280934.
[58] Davison, Heather K.; Burke, Michael J. (2000). "Sex Discrimination in Simulated Employment Contexts: A Meta-analytic Investigation". Journal of Vocational Behavior 56 (2): 225–248. doi:10.1006/jvbe.1999.1711.
[59] Rudman, Laurie A.; Glick, Peter (2001). "Prescriptive Gender Stereotypes and Backlash toward Agentic Women" (https://wesfiles.wesleyan.edu/courses/PSYC-309-clwilkins/Week3/Rudman.Glick.2001.pdf). Journal of Social Issues 57 (4): 743–762. doi:10.1111/0022-4537.00239.
[60] Sinclair, Stacey; Huntsinger, Jeff (2006). "The Interpersonal Basis of Self-Stereotyping" (http://books.google.com/books?id=7WtgXfECza8C&pg=PA239). In Levin, Shana; Van Laar, Colette. Stigma and Group Inequality: Social Psychological Perspectives. Claremont Symposium on Applied Social Psychology. Mahwah, NJ: Lawrence Erlbaum Associates. p. 239. ISBN 978-0-8058-4415-3.
[61] Correll, Shelley J. (2001). "Gender and the career choice process: The role of biased self-assessments" (http://www.chaire-crsng-inal.fsg.ulaval.ca/fileadmin/docs/documents/Article/Gender_and_career_choice_process_2001.pdf). American Journal of Sociology 106 (6): 1691–1730. doi:10.1086/321299.
[62] Correll, Shelley J. (2004). "Constraints into Preferences: Gender, Status, and Emerging Career Aspirations" (http://people.uncw.edu/maumem/soc500/Correll2004.pdf). American Sociological Review 69 (1): 93–113. doi:10.1177/000312240406900106.
[63] Sinclair, Stacey; Hardin, Curtis D.; Lowery, Brian S. (2006). "Self-Stereotyping in the Context of Multiple Social Identities" (http://psych.princeton.edu/psychology/research/sinclair/pubs/self stereo and multiple identities.PDF). Journal of Personality and Social Psychology (American Psychological Association) 90 (4): 529–542. doi:10.1037/0022-3514.90.4.529.
[64] Kitch, Carolyn L. (2001). The Girl on the Magazine Cover: The Origins of Visual Stereotypes in American Mass Media. Chapel Hill, NC: University of North Carolina Press. pp. 1–16. ISBN 978-0-8078-2653-9.
[65] van Ginneken, Jaap (2007). Screening Difference: How Hollywood's Blockbuster Films Imagine Race, Ethnicity, and Culture (http://books.google.fr/books?id=kd8WqdD7qIUC&printsec=frontcover). Lanham: Rowman & Littlefield. ISBN 978-0-7425-5583-9.
[66] Román, Ediberto (2000). "Who Exactly Is Living La Vida Loca: The Legal and Political Consequences of Latino-Latina Ethnic and Racial Stereotypes in Film and Other Media". Journal of Gender, Race & Justice 4 (1): 37–68.
Further reading
Hilton, James L.; von Hippel, William (1996). "Stereotypes". Annual Review of Psychology 47 (1): 237–271. doi:10.1146/annurev.psych.47.1.237.
Ewen, Stuart; Ewen, Elizabeth (2006). Typecasting: On the Arts and Sciences of Human Inequality. New York: Seven Stories Press.
Stereotype & Society (http://www.stereotypeandsociety.typepad.com), a regularly updated and archived resource.
Regenberg, Nina (2007). "Are Blonds Really Dumb?" (http://beta.in-mind.org/issue-3/are-blonds-really-dumb). In-Mind (3).
"Are Stereotypes True?" (http://beta.in-mind.org/node/126)
Shih, Margaret; Pittinsky, Todd L.; Ambady, Nalini. "Stereotype Susceptibility: Identity Salience and Shifts in Quantitative Performance" (http://www.blackwell-synergy.com/doi/abs/10.1111/.00111). Research on the effects of "positive" and negative stereotypes on encouraging or discouraging performance.
Turner, Chris (2004). Planet Simpson: How a Cartoon Masterpiece Documented an Era and Defined a Generation. Toronto: Random House Canada.
Crawford, M.; Unger, R. (2004). Women and Gender: A Feminist Psychology. New York: McGraw-Hill. pp. 45–49.
Spitzer, B. L.; Henderson, K. A.; Zavian, M. T. (1999). "Gender differences in population versus media body sizes: A comparison over four decades". Sex Roles, 40, 545–565.
External links
Interview (http://www.overfiftyandoutofwork.com/experts/susan-fiske-mike-north/) with social psychologists Susan Fiske and Mike North about the stereotyping of older people.
"How gender stereotypes influence emerging career aspirations" (http://www.youtube.com/watch?v=jwviTwO8M8Q), a lecture by Stanford University sociologist Shelley Correll, October 21, 2010.
Social Psychology Network (http://www.understandingprejudice.org/apa/english/page11.htm) on stereotyping.
Stereotypes (http://mediasmarts.ca/backgrounder/stereotypes-teaching-backgrounder), Media Smarts, Canada's Centre for Digital and Media Literacy.
Age- and health-based stereotyping (http://www.ahealthcareer.co.uk/age-stereotypes-health-sector.html).
Subadditivity effect
The subadditivity effect is the tendency to judge the probability of the whole to be less than the sum of the probabilities of its parts.[1]
For instance, subjects in one experiment judged the probability of death from cancer in the United States to be 18%, the probability of death from heart attack to be 22%, and the probability of death from "other natural causes" to be 33%. Other participants judged the probability of death from any natural cause to be 58%. Natural causes comprise precisely cancer, heart attack, and "other natural causes"; however, the sum of the latter three probabilities was 73%, not 58%. According to Tversky and Koehler (1994), this kind of result is observed consistently.[2]
The same mechanisms may underlie an effect of familiarity on probability judgment: more familiar events are more available (the availability heuristic), and we find it easier to think of reasons why such events will and will not happen. In an experiment carried out by Fox and Levav (2000),[3] students at Duke University were asked which of two events was more likely to occur. One was "Duke men's basketball defeats UNC men's basketball at Duke's Cameron Indoor Stadium in January 1999"; the other was "Duke men's fencing defeats UNC men's fencing at Duke's Cameron Card Gym in January 1999." Since Duke students are much more familiar with basketball than with fencing, the researchers expected the basketball event to seem more likely, and indeed 75% of the students thought the basketball victory was more likely. Other students answered exactly the same questions but with Duke and UNC switched around; 44% of these students said that a UNC victory in basketball was more likely than a UNC victory in fencing. Because only one such basketball game would be played, the two judgments concern complementary outcomes and should sum to 100%; instead, 75% plus 44% gives 119%. In this experiment, familiarity with basketball led subjects to think of the basketball event as more likely than the fencing event, regardless of which basketball event was described.
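The arithmetic behind both findings can be checked directly; the snippet below simply restates the numbers reported above.

```python
# Tversky & Koehler: exhaustive parts of "death from a natural cause"
parts = {"cancer": 18, "heart attack": 22, "other natural causes": 33}
print(sum(parts.values()))   # 73 -- but the whole was judged at only 58

# Fox & Levav: the same basketball game, judged from each side
print(75 + 44)               # 119 -- complementary events exceed 100%
```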
Explanations
In a 2012 article in Psychological Bulletin, it is suggested that the subadditivity effect can be explained by an information-theoretic generative mechanism that assumes a noisy conversion of objective evidence (observation) into subjective estimates (judgment).[4] This explanation differs from support theory, proposed as an explanation by Tversky and Koehler,[2] which requires additional assumptions. Since mental noise is a sufficient explanation that is much simpler and more straightforward than any explanation involving heuristics or behavior, Occam's razor would argue in its favor as the underlying generative mechanism (it is the hypothesis that makes the fewest assumptions).[4]
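As a toy illustration of the noise idea (a deliberately simplified stand-in, not Hilbert's actual model), the sketch below adds unbiased noise to true probabilities and clips the result to the [0, 1] bounds. The clipping inflates small probabilities and deflates large ones, reproducing the parts-exceed-whole pattern of the mortality example; the ±0.4 noise width is an arbitrary assumption.

```python
# Toy model: subjective estimate = true probability + symmetric noise,
# clipped to the valid probability range [0, 1].
import random

random.seed(0)

def judged(p, trials=200_000, width=0.4):
    """Mean subjective estimate of a true probability p under clipped noise."""
    return sum(min(1.0, max(0.0, p + random.uniform(-width, width)))
               for _ in range(trials)) / trials

parts = [0.18, 0.22, 0.33]   # cancer, heart attack, other natural causes
whole = sum(parts)           # 0.73

print(f"judged parts sum: {sum(judged(p) for p in parts):.2f}")  # > 0.73
print(f"judged whole:     {judged(whole):.2f}")                  # < 0.73
```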
References
[1] Baron, J. (in preparation). Thinking and Deciding, 4th edition. New York: Cambridge University Press.
[2] Tversky, A.; Koehler, D. J. (1994). "Support theory: A nonextensional representation of subjective probability". Psychological Review, 101, 547–567.
[3] Fox, C. R.; Levav, J. (2000). "Familiarity bias and belief reversal in relative likelihood judgments". Organizational Behavior and Human Decision Processes, 82, 268–292.
[4] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin, 138(2), 211–237. Free access to the study: martinhilbert.net/HilbertPsychBull.pdf
Subjective validation
Subjective validation, sometimes called the personal validation effect, is a cognitive bias by which a person will consider a statement or another piece of information to be correct if it has any personal meaning or significance to them.[1] In other words, a person whose opinion is affected by subjective validation will perceive two unrelated events (i.e., a coincidence) to be related because their personal belief demands that they be related. Closely related to the Forer effect, subjective validation is an important element in cold reading. It is considered to be the main reason behind most reports of paranormal phenomena.[2] According to Bob Carroll, psychologist Ray Hyman is considered to be the foremost expert on subjective validation and cold reading.[3]
References
[1] Forer, B. R. (1949). "The Fallacy of Personal Validation: A Classroom Demonstration of Gullibility". Journal of Abnormal Psychology, 44, 118–121.
[2] Cline, Austin. "Flaws in Reasoning and Arguments: Subjective Validation, Seeing Patterns & Connections That Aren't Really There" (http://atheism.about.com/od/logicalflawsinreasoning/a/subjective.htm), About.com, September 10, 2007. Accessed January 10, 2008.
[3] Carroll, Bob. "Hope in Small Doses" (http://www.skepticality.com/hope-in-small-doses/). Skepticality. Retrieved 8/17/2012.
External links
The Skeptic's Dictionary entry on subjective validation (http://skepdic.com/subjectivevalidation.html)
Survivorship bias
Survivorship bias is the logical error of concentrating on the people or things that "survived" some process and inadvertently overlooking those that did not because of their lack of visibility. This can lead to false conclusions in several different ways. The survivors may literally be people, as in a medical study, or they may be companies, research subjects, or applicants for a job, or anything else that must make it past some selection process to be considered further.
Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes in a group have some special property, rather than being merely lucky. For example, if three of the five students with the best college grades went to the same high school, one might conclude that the high school must offer an excellent education. This could be true, but the question cannot be answered without looking at the grades of all the other students from that high school, not just the ones who "survived" the top-five selection process.
Survivorship bias is a type of selection bias.
In finance
In finance, survivorship bias is the tendency for failed companies to be excluded from performance studies because they no longer exist. It often causes the results of studies to skew higher, because only companies successful enough to survive until the end of the period are included.
For example, a mutual fund company's selection of funds today will include only those that are successful now. Many losing funds are closed and merged into other funds to hide poor performance. In theory, 90% of extant funds could truthfully claim to have performance in the first quartile of their peers, if the peer group includes funds that have closed. A simulation of this effect is sketched below.
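A minimal sketch of the mutual fund example, using an assumed return distribution and an assumed 30% closure rate rather than real fund data:

```python
# Simulate a cohort of funds, close the worst performers, and compare the
# average return of the survivors with that of the full cohort.
import random

random.seed(42)

N_FUNDS = 1_000
returns = [random.gauss(0.05, 0.10) for _ in range(N_FUNDS)]  # annual returns

# Closure is correlated with poor performance: drop the bottom 30%.
cutoff = sorted(returns)[int(0.30 * N_FUNDS)]
survivors = [r for r in returns if r >= cutoff]

print(f"mean return, all funds: {sum(returns) / len(returns):.2%}")      # ~5%
print(f"mean return, survivors: {sum(survivors) / len(survivors):.2%}")  # ~10%
# The gap between the two averages is the survivorship bias: a study that
# sees only the surviving funds overstates the performance of the cohort.
```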
In 1996, Elton, Gruber, and Blake showed that survivorship bias is larger in the small-fund sector than in large mutual funds (presumably because small funds have a high probability of folding).[1] They estimate the size of the bias across the U.S. mutual fund industry as 0.9% per annum, where the bias is defined and measured as:
"Bias is defined as average α for surviving funds minus average α for all funds"
(where α is the risk-adjusted return over the S&P 500; this is the standard measure of mutual fund out-performance).
Additionally, in quantitative backtesting of market performance or other characteristics, survivorship bias arises from using the current index membership rather than the actual constituent changes over time. Consider a backtest to 1990 to find the average performance (total return) of S&P 500 members that paid dividends within the previous year. Using only the current 500 members and creating a historical equity line of the total return of the companies that met the criteria would add survivorship bias to the results: S&P maintains an index of healthy companies, removing companies that no longer meet its criteria as representatives of the large-cap U.S. stock market. Companies that had healthy growth on their way to inclusion in the S&P 500 would be counted as if they were in the index during that growth period, when they were not. Conversely, another company that was losing market capitalization and was destined for the S&P 600 Small-cap Index would later be removed and so would not be counted in the results. Using the actual membership of the index, applying entry and exit dates to measure returns only during each company's period of inclusion, allows for a bias-free output, as in the sketch below.
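A sketch of such a point-in-time membership lookup; the table layout and tickers here are hypothetical, not S&P's actual data format:

```python
# Resolve index constituents from dated membership intervals instead of
# from today's member list, so a backtest only sees companies while they
# were actually in the index.
from datetime import date

# (ticker, date_added, date_removed); None means still a member today.
membership = [
    ("AAA", date(1985, 3, 1), None),
    ("BBB", date(1992, 6, 1), date(2001, 9, 1)),  # later dropped from the index
    ("CCC", date(2005, 1, 1), None),              # not yet a member in 1995
]

def constituents_as_of(day):
    """Index members on a given day, honoring entry and exit dates."""
    return {t for t, added, removed in membership
            if added <= day and (removed is None or day < removed)}

print(sorted(constituents_as_of(date(1995, 1, 1))))  # ['AAA', 'BBB']
print(sorted(constituents_as_of(date(2023, 1, 1))))  # ['AAA', 'CCC']
```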
As a general experimental flaw
Survivorship bias (or survivor bias) is a statistical artifact in applications outside finance, where studies of a remaining population are fallaciously compared with the historic average even though the survivors have unusual properties. Mostly, the unusual property in question is a track record of success (like the successful funds above).
For example, the parapsychology researcher Joseph Banks Rhine believed he had identified, from hundreds of potential subjects, the few individuals who had powers of ESP. His calculations were based on the improbability of these few subjects guessing the Zener cards shown to a partner by chance.
A major criticism of his calculations was the possibility of unconscious survivor bias in subject selection. He was accused of failing to take into account the large effective size of his sample (all the people he did not choose as "strong telepaths" because they failed at an earlier testing stage). Had he done this, he might have seen that, from such a large sample, one or two individuals would probably achieve the track record of success he had found purely by chance. A simulation of this kind of selection is sketched below.
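This selection effect is easy to reproduce. The sketch below assumes independent guesses with a one-in-five hit rate (a simplification of the actual 25-card Zener deck, which is dealt without replacement) and an arbitrary cutoff for labeling someone a "strong telepath"; both numbers are illustrative assumptions.

```python
# Simulate many subjects guessing Zener cards purely at random, then see how
# many compile an improbable-looking track record by chance alone.
import random

random.seed(7)

N_SUBJECTS = 1_000
N_TRIALS = 25        # one run through a 25-card Zener deck
P_CHANCE = 0.2       # five equally likely symbols

def run(n_trials):
    """Number of correct guesses by a subject with no ESP."""
    return sum(random.random() < P_CHANCE for _ in range(n_trials))

scores = [run(N_TRIALS) for _ in range(N_SUBJECTS)]
# Chance expectation is 5 hits; treat 10+ (double chance) as "telepathic".
stars = sum(s >= 10 for s in scores)
print(f"best score: {max(scores)} / {N_TRIALS}")
print(f"subjects scoring 10 or more: {stars}")  # typically 1-2% of subjects
```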
Writing about the Rhine case, Martin Gardner explained that he did not think the experimenters had made such obvious mistakes out of statistical naïveté, but as a result of subtly disregarding some poor subjects. He said that, without trickery of any kind, there would always be some people who had improbable success if a large enough sample were taken. To illustrate this, he speculated about what would happen if one hundred professors of psychology read Rhine's work and decided to make their own tests; he said that survivor bias would winnow out the typical failed experiments but encourage the lucky successes to continue testing. He thought that the common null hypothesis (of no result) would not be reported, but:
"Eventually, one experimenter remains whose subject has made high scores for six or seven successive sessions. Neither experimenter nor subject is aware of the other ninety-nine projects, and so both have a strong delusion that ESP is operating."
He concludes:
"The experimenter writes an enthusiastic paper, sends it to Rhine who publishes it in his magazine, and the readers are greatly impressed."
If enough scientists study a phenomenon, some will find statistically significant results by chance, and these are the
experiments submitted for publication. Additionally, papers showing positive results may be more appealing to editors.[2] This problem is known as positive results bias, a type of publication bias. To combat this, some editors now call for the submission of 'negative' scientific findings, where "nothing happened."
Survivorship bias is one of the issues discussed in the provocative 2005 paper "Why Most Published Research Findings Are False."[2]
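Gardner's hundred-professors scenario is simple enough to simulate. The toy Python sketch below applies an invented stopping rule (an experimenter abandons the project after the first run at or below the chance score of 5) and prints how many "successful" projects survive each session; exact counts vary with the random seed, but the winnowing pattern is the point.

import random

random.seed(7)

def zener_run() -> int:
    # One 25-card Zener run; the chance expectation is 5 hits.
    return sum(random.random() < 0.2 for _ in range(25))

# Invented stopping rule: an experimenter keeps testing only while the
# subject scores above chance; otherwise the project is quietly dropped.
active = 100  # Gardner's one hundred professors
for session in range(1, 8):
    active = sum(1 for _ in range(active) if zener_run() > 5)
    print(f"after session {session}: {active} 'successful' projects remain")
    if active == 0:
        break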
In business law
Survivorship bias can raise truth-in-advertising problems when the success rate advertised for a product or service is measured over a population whose makeup differs from that of the target audience at which the advertising is aimed. These problems become
especially significant when a) the advertisement either fails to disclose the existence of relevant differences between
the two populations or describes them in insufficient detail; b) these differences result from the company's deliberate
"pre-screening" of prospective customers to ensure that only customers with traits increasing their likelihood of
success are allowed to purchase the product or service, especially when the company's selection procedures and/or
evaluation standards are kept secret; and c) the company offering the product or service charges a fee, especially one
that is non-refundable and/or not disclosed in the advertisement, for the privilege of attempting to become a
customer.
For example, the advertisements of the online dating service eHarmony.com pass this test because, although they satisfy the first two prongs, they do not satisfy the third: the ads claim a success rate significantly higher than that of competing services while generally not disclosing that the rate is calculated with respect to a subset of users who possess traits that increase their likelihood of finding and maintaining relationships and lack traits that pose obstacles to doing so (a); the company deliberately selects for these traits by administering a lengthy pre-screening process designed to reject prospective customers who lack the former traits or possess the latter ones (b); but the company does not charge a fee for administration of its pre-screening test, so its prospective customers face no "downside risk" other than the time and effort involved in completing the pre-screening process (negating c).[3]
(Similarly, many investors believe that chance is the main reason that most successful fund managers have the track
records they do.)
References
[1] Elton, Gruber, & Blake (1996). "Survivorship Bias and Mutual Fund Performance". The Review of Financial Studies 9 (4) (http://rfs.oupjournals.org/cgi/reprint/9/4/1097). In this paper the researchers eliminate survivorship bias by following the returns on all funds extant at the end of 1976. They show that other researchers have drawn spurious conclusions by failing to include the bias in regressions on fund performance.
[2] Ioannidis JPA (2005). "Why Most Published Research Findings Are False". PLoS Med 2 (8): e124 (http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124).
[3] http://www.washingtonpost.com/wp-dyn/content/article/2007/05/12/AR2007051201350.html
Texas sharpshooter fallacy
The Texas sharpshooter fallacy is an informal fallacy in which pieces of information that have no relationship to one another are called out for their similarities, and that similarity is used for claiming the existence of a pattern.[1] This fallacy is the philosophical/rhetorical application of the multiple comparisons problem (in statistics) and apophenia (in cognitive psychology). It is related to the clustering illusion, which refers to the tendency in human cognition to interpret patterns in randomness where none actually exist.
The name comes from a joke about a Texan who fires some shots at the side of a barn, then paints a target centered on the biggest cluster of hits and claims to be a sharpshooter.[2][3]
Structure
The Texas sharpshooter fallacy often arises when a person has a large amount of data at their disposal but focuses only on a small subset of that data. Random chance may give all the elements in that subset some kind of common property (or pair of common properties, when arguing for correlation). If the person fails to account for the likelihood of finding some subset in the large data with some common property strictly by chance alone, that person is likely committing a Texas sharpshooter fallacy.
To illustrate, if we pay attention to a cluster of cancer cases in a certain sub-population and then draw our "circle" around the smallest area that includes this cluster, this sample will appear to be suffering an unusually high rate of cancer; but if we included the rest of the population, the incidence would regress to the average.[4]
The fallacy is characterized by a lack of a specific hypothesis prior to the gathering of data, or the formulation of a hypothesis only after data have already been gathered and examined.[5] Thus, it typically does not apply if one had an ex ante, or prior, expectation of the particular relationship in question before examining the data. For example, one might, prior to examining the information, have in mind a specific physical mechanism implying the particular relationship. One could then use the information to give support to, or cast doubt on, the presence of that mechanism. Alternatively, if additional information can be generated using the same process as the original information, one can use the original information to construct a hypothesis, and then test the hypothesis on the new data. (See hypothesis testing.) What one cannot do is use the same information to construct and test the same hypothesis (see hypotheses suggested by the data); to do so would be to commit the Texas sharpshooter fallacy.
Examples
A Swedish study in 1992 tried to determine whether or not power lines caused some kind of poor health effects. The researchers surveyed everyone living within 300 meters of high-voltage power lines over a 25-year period and looked for statistically significant increases in rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those that lived closest to the power lines, and it spurred calls to action by the Swedish government. The problem with the conclusion, however, was that the number of potential ailments, i.e. over 800, was so large that it created a high probability that at least one ailment would exhibit a statistically significant difference just by chance alone. Subsequent studies failed to show any link between power lines and childhood leukemia, whether in causation or even in correlation.[6]
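The multiple-comparisons arithmetic that undermines the study is short enough to show directly, assuming for simplicity that the 800 tests are independent and each is run at the conventional 5% significance level:

# With 800 independent tests at the 5% level (a simplifying assumption),
# spurious "findings" are all but guaranteed.
alpha, tests = 0.05, 800

p_at_least_one = 1 - (1 - alpha) ** tests  # effectively 1.0
expected_false_positives = alpha * tests   # about 40

print(f"P(at least one chance finding): {p_at_least_one}")
print(f"expected chance findings:       {expected_false_positives:.0f}")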
Attempts to find cryptograms in the works of William Shakespeare, which tended to report results only for those
passages of Shakespeare for which the proposed decoding algorithm produced an intelligible result. This could be
explained as an example of the fallacy because passages which do not match the algorithm have not been
accounted for.
Attempts to find cryptograms in the Bible and in the Quran Code.
This fallacy is often found in modern-day interpretations of the quatrains of Nostradamus. Nostradamus' quatrains are often liberally translated from the original (archaic) French, stripped of their historical context, and then applied to support the conclusion that Nostradamus predicted a given modern-day event after the event actually occurred. For instance, the Nostradamus lines that supposedly predicted 9/11 were taken from three separate and unrelated passages, and a fictional line was added.[7]
References
[1] Bennett, Bo (2010). Logically Fallacious: The Ultimate Collection of Over 300 Logical Fallacies (http://books.google.com/books?id=WFvhN9lSm5gC&pg=PR7&dq="texas+sharpshooter+fallacy"&hl=en&sa=X&ei=yORuT5mRNYrgiAKxqYSzBQ&ved=0CEQQ6AEwAjgK#v=onepage&q="texas sharpshooter fallacy"&f=false). ebookit.com. ISBN 1456607375. Retrieved 2012-03-25. "Description: ignoring the difference while focusing on the similarities, thus coming to an inaccurate conclusion."
[2] Atul Gawande (1999-02-08). "The cancer-cluster myth" (http://www.crab.rutgers.edu/~mbravo/cluster.pdf). The New Yorker. Retrieved 2009-10-10.
[3] Carroll, Robert Todd (2003). The Skeptic's Dictionary: a collection of strange beliefs, amusing deceptions, and dangerous delusions (http://books.google.com/books?id=6FPqDFx40vYC&lpg=PA375&vq=texas sharpshooter&dq="texas sharpshooter fallacy" logical fallacies&pg=PA375#v=snippet&q=texas sharpshooter&f=false). John Wiley & Sons. p. 375. ISBN 0471272426. Retrieved 2012-03-25. "The term refers to the story of the Texan who shoots holes in the side of a barn and then draws a bull's-eye around the bullet holes."
[4] "Cancer Clusters" (http://www.ncri.ie/cancerinfo/clusters.shtml). NCR. 2010. Retrieved 2012-03-25. "The Texas sharpshooter shoots at the side of a barn and then draws a bull's-eye around the bullet holes. In the same way, we might notice a number of cancer cases, then draw our population base around the smallest area possible, neglecting to remember that the cancer cases actually came from a much larger population."
[5] Thompson, William C. (July 18, 2009). "Painting the target around the matching profile: the Texas sharpshooter fallacy in forensic DNA interpretation" (http://lpr.oxfordjournals.org/content/8/3/257.full.pdf+html). Law, Probability, & Risk 8 (3): 257–258. doi:10.1093/lpr/mgp013. Retrieved 2012-03-25. "This article demonstrates how post hoc target shifting occurs and how it can distort the frequency and likelihood ratio statistics used to characterize DNA matches, making matches appear more probative than they actually are."
[6] "FRONTLINE: previous reports: transcripts: currents of fear" (http://www.pbs.org/wgbh/pages/frontline/programs/transcripts/1319.html). PBS. 1995-06-13. Retrieved 2012-07-03.
[7] "Nostradamus Predicted 9/11?" (http://www.snopes.com/rumors/nostradamus.asp). snopes.com. Retrieved 2012-07-03.
External links
Fallacy Files entry (http://www.fallacyfiles.org/texsharp.html)
Time-saving bias
The time-saving bias describes people's tendency to misestimate the time that could be saved (or lost) when increasing (or decreasing) speed. In general, people underestimate the time that could be saved when increasing from a relatively low speed (e.g., 25 mph or 40 km/h) and overestimate the time that could be saved when increasing from a relatively high speed (e.g., 55 mph or 90 km/h). People also underestimate the time that could be lost when decreasing from a low speed and overestimate the time that could be lost when decreasing from a high speed.
Examples
In one study, participants were asked to judge which of two road improvement plans would be more efficient in reducing mean journey time. Respondents preferred a plan that would increase the mean speed from 70 to 110 km/h over a plan that would increase the mean speed from 30 to 40 km/h, although the latter actually saves more time (Svenson, 2008, Experiment 1).
In another study, drivers were asked to indicate how much time they felt could be saved when increasing from either a low (30 mph) or high (60 mph) speed (Fuller et al., 2009). For example, participants were asked the following question: "You are driving along an open road. How much time do you feel you would gain if you drove for 10 miles at 40 mph instead of 30 mph?" (Fuller et al., 2009, p. 14). Another question had a higher starting speed (60 mph), and two other questions asked about losing time when decreasing speed (from either 30 or 60 mph).
Results supported the predictions of the time-saving bias as participants underestimated the time saved when
increasing from a low speed and overestimated the time saved when increasing from a relatively high speed. In
addition, participants also misestimated the time lost when decreasing speed: they generally underestimated the time
lost when decreasing from a low speed and overestimated the time lost when decreasing from a relatively high speed
(Fuller et al., 2009).
Explanation
The physical formula for calculating the time gained when increasing speed is:
(1) t = cD(1/V1 − 1/V2),
where c is a constant used to transform between units of measurement, t is the time gained, D is the distance traveled, and V1 and V2 are the original and increased speeds, respectively. This formula shows that the relationship between increasing speed and journey time is curvilinear: a similar speed increase results in more time saved
when increasing from a low speed compared to a higher speed. For example, when increasing from 20 to 30mph the
time required to complete 10 miles decreases from 30 to 20 minutes, saving 10 minutes. However, the same speed
increase of 10mph would result in less time saved if the initial speed is higher (e.g., only 2 minutes saved when
increasing from 50mph to 60mph). Changing the distance of the journey from 10 miles to a longer or shorter
distance will increase or decrease these time savings, but will not affect the relationship between speed and time
savings.
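The worked example is easy to verify with a few lines of Python; with distance in miles, speed in mph, and c = 60, formula (1) yields minutes:

def minutes_saved(distance_miles: float, v1: float, v2: float) -> float:
    # Formula (1): t = cD(1/V1 - 1/V2), with c = 60 converting hours to minutes.
    return 60 * distance_miles * (1 / v1 - 1 / v2)

# The same 10 mph increase saves five times as much at the lower speed:
print(minutes_saved(10, 20, 30))  # 10.0 minutes (30 min down to 20 min)
print(minutes_saved(10, 50, 60))  # 2.0 minutes (12 min down to 10 min)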
Svenson (2008) suggested that people's judgments of time savings actually follow a proportion heuristic, by which people judge the time saved as the proportion of the speed increase relative to the initial speed. Another study suggested that people might follow a simpler difference heuristic, by which they judge the time saved based solely on the difference between the initial and higher speed (Peer, 2010b, Study 3). It seems that people falsely believe that journey time decreases somewhat linearly as driving speed increases, irrespective of the initial speed, causing the time-saving bias. Although it is still unclear which heuristic dominates when people estimate time savings, it is evident that almost no one follows the curvilinear relationship above.
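Of the two proposed heuristics, the difference heuristic is the one the text pins down exactly, so it is the one rendered below; it predicts identical savings for any two speed increases of the same magnitude, which the actual curvilinear formula contradicts:

# The difference heuristic judges savings from the raw speed difference
# alone, so it cannot distinguish 20->30 mph from 50->60 mph; the actual
# savings over 10 miles differ fivefold.
for v1, v2 in [(20, 30), (50, 60)]:
    actual = 60 * 10 * (1 / v1 - 1 / v2)  # minutes saved over 10 miles
    print(f"{v1}->{v2} mph: speed difference {v2 - v1} mph, "
          f"actual saving {actual:.0f} min")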
Consequences in driving
Drivers who underestimated the time saved when increasing from a low speed, or overestimated the time lost when decreasing from a high speed, overestimated the speed required to arrive at a specific time and chose unduly high speeds, sometimes even exceeding the stated speed limit (Peer, 2010a). Similarly, drivers who overestimated the time saved when increasing from a high speed underestimated the speed required to arrive on time and chose lower speeds (Peer, 2011).
Consequences in other domains
The time-saving bias is not limited to driving. The same faulty estimations emerge when people are asked to estimate savings in patients' waiting time when adding more physicians to a health care center (Svenson, 2008, Experiment 2) or when estimating an increase in the productivity of a manufacturing line by adding more workers (Svenson, 2011).
References
1. Fuller, R., Gormley, M., Stradling, S., Broughton, P., Kinnear, N., O'Dolan, C., & Hannigan, B. (2009). Impact of speed change on estimated journey time: Failure of drivers to appreciate relevance of initial speed. Accident Analysis and Prevention, 41, 10–14.
2. Peer, E. (2011). The time-saving bias, speed choices and driving behavior. Transportation Research Part F: Traffic Psychology and Behaviour, 14, 543–554.
3. Peer, E. (2010a). Speeding and the time-saving bias: How drivers' estimations of time saved when increasing speed affects their choice of speed. Accident Analysis and Prevention, 42, 1978–1982.
4. Peer, E. (2010b). Exploring the time-saving bias: How drivers misestimate time saved when increasing speed. Judgment and Decision Making, 5(7), 477–488.
5. Svenson, O. (2008). Decisions among time saving options: When intuition is strong and wrong. Acta Psychologica, 127, 501–509.
6. Svenson, O. (2009). Driving speed changes and subjective estimates of time savings, accident risks and braking. Applied Cognitive Psychology, 23, 543–560.
7. Svenson, O. (2011). Biased decisions concerning productivity increase options. Journal of Economic Psychology, 32(3), 440–445.
External links
The MPG Illusion (http://www.mpgillusion.com/)
Well travelled road effect
The well travelled road effect is a cognitive bias in which travellers will estimate the time taken to traverse routes differently depending on their familiarity with the route. Frequently travelled routes are assessed as taking a shorter time than unfamiliar routes.[1][2] This effect creates errors when estimating the most efficient route to an unfamiliar destination, when one candidate route includes a familiar route, whilst the other candidate route includes no familiar routes. The effect is most salient when subjects are driving, but is still detectable for pedestrians and users of public transport. Much like the Stroop task,[3][4] it is hypothesised that drivers use less cognitive effort when traversing familiar routes and therefore underestimate the time taken to traverse the familiar route.[5] The effect has been observed for centuries but was first studied scientifically in the 1980s and '90s, following from earlier fallacy work undertaken by Daniel Kahneman and Amos Tversky.[6] The well travelled road effect has been hypothesised as a reason that self-reported experience curve effects are overestimated (see Experience curve effects).
References
[1] Allan, L. G. (1979). The perception of time. Perception & Psychophysics, 26, 340–354.
[2] Zakay, D., & Block, R. A. (2004). Prospective and retrospective duration judgments: an executive-control perspective. Acta Neurobiologiae Experimentalis, 64, 319–328.
[3] http://www.snre.umich.edu/eplab/demos/st0/stroopdesc.html
[4] Zakay, D., & Fallach, E. (1984). Immediate and remote time estimation – a comparison. Acta Psychologica, 57, 69–81.
[5] Rubia, K., & Smith, A. (2004). The neural correlates of cognitive time management: a review. Acta Neurobiologica, 64, 329–340.
[6] Jackson, W., & Jucker, J. (1981). An empirical study of travel time variability and travel choice behavior. Transportation Science, 16, 460–475.
Zero-risk bias
Zero-risk bias is a tendency to prefer the complete elimination of a risk even when alternative options produce a greater reduction in risk overall. This effect on decision making has been observed in surveys presenting hypothetical scenarios, and certain real-world policies have been interpreted as being influenced by it.
Baron, Gowda, and Kunreuther identified a zero-risk bias in responses to a questionnaire about a hypothetical cleanup scenario involving two hazardous sites X and Y, with X causing 8 cases of cancer annually and Y causing 4 cases annually. The respondents ranked three cleanup approaches: two options each reduced the number of cancer cases by 6, while the third reduced the number by 5 and completely eliminated the cases at site Y. Although the latter option featured the worst reduction overall, 42% of the respondents ranked it better than at least one of the other options. This conclusion resembled one from an earlier economics study that found people were willing to pay high costs to completely eliminate a risk.[1][2]
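A toy rendering of the scenario makes the totals explicit. The per-site splits for the first two options below are invented (the study reports only the totals); only the third option zeroes out a site:

# Sites X and Y cause 8 and 4 cancer cases per year, respectively.
baseline = {"X": 8, "Y": 4}
options = {
    "A": {"X": 6, "Y": 0},  # 6 cases prevented (split invented)
    "B": {"X": 3, "Y": 3},  # 6 cases prevented (split invented)
    "C": {"X": 1, "Y": 4},  # 5 prevented, but site Y fully eliminated
}

for name, cut in options.items():
    prevented = sum(cut.values())
    zeroed = [site for site in baseline if cut[site] == baseline[site]]
    note = f"eliminates site {', '.join(zeroed)}" if zeroed else "no site eliminated"
    print(f"option {name}: {prevented} cases prevented per year; {note}")

# Option C prevents the fewest cases, yet 42% of respondents ranked it
# above at least one of the strictly better options -- the zero-risk bias.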
Multiple real-world policies have been said to be affected by this bias. In American federal policy, the Delaney clause outlawing cancer-causing additives from foods (regardless of actual risk) and the desire for perfect cleanup of Superfund sites have been alleged to be overly focused on complete elimination. Furthermore, the effort needed to implement zero-risk laws grew as technological advances enabled the detection of smaller quantities of hazardous substances; limited resources were increasingly devoted to low-risk issues.[3]
Other biases might underlie the zero-risk bias. One is a tendency to think in terms of proportions rather than differences: a greater reduction in the proportion of deaths is valued more highly than a greater reduction in the actual number of deaths. The zero-risk bias could then be seen as the extreme end of a broad bias about quantities as applied to risk. Framing effects can enhance the bias (for example, by emphasizing a large proportion in a small set) or can be used to mitigate it (by emphasizing total quantities).[4]
References
[1] Baron, Jonathan; Gowda, Rajeev; Kunreuther, Howard (1993). "Attitudes toward managing hazardous waste: What should be cleaned up and who should pay for it?" (http://www.sas.upenn.edu/~baron/papers.htm/gowda.html). Risk Analysis 13: 183–192.
[2] Viscusi, W. K.; Magat, W. A.; Huber, J. (1987). "An investigation of the rationality of consumer valuation of multiple health risks". Rand Journal of Economics 18: 465–479.
[3] Kunreuther, Howard (1991). "Managing hazardous waste: past, present and future" (http://opim.wharton.upenn.edu/risk/downloads/archive/arch156.pdf). Risk Analysis 11: 19–26.
[4] Baron, Jonathan (2003). "Value analysis of political behavior - self-interested : moralistic :: altruistic : moral" (http://www.sas.upenn.edu/~baron/papers.htm/ratsymp.html). University of Pennsylvania Law Review 151: 1135–1167.
Actor–observer asymmetry
Actor–observer asymmetry (also actor–observer bias) describes the errors that one makes when forming attributions about behavior (Jones & Nisbett, 1971). When people judge their own behavior, and they are the actor, they are more likely to attribute their actions to the particular situation than to a generalization about their personality. Yet when people attribute the behavior of another person, acting as the observer, they are more likely to attribute this behavior to the person's overall disposition than to situational factors. This frequent error shows the bias that people hold in their evaluations of behavior (Miller & Norman, 1975). People are more likely to see their own behavior as affected by the situation they are in, or by the sequence of occurrences that have happened to them throughout their day; but they see other people's actions as solely a product of their overall personality, and do not afford them the chance to explain their behavior as the result of a situational effect.
This term falls under "attribution" or "attribution theory". The specific hypothesis of an actor-observer asymmetry in attribution (explanations of behavior) was originally proposed by Jones and Nisbett (1971), when they claimed that "actors tend to attribute the causes of their behavior to stimuli inherent in the situation, while observers tend to attribute behavior to stable dispositions of the actor" (p. 93). Supported by initial evidence, the hypothesis was long held as firmly established, describing a robust and pervasive phenomenon of social cognition.
However, a meta-analysis of all the published tests of the hypothesis between 1971 and 2004 (Malle, 2006) yielded a stunning finding: there was no actor-observer asymmetry of the sort Jones and Nisbett (1971) had proposed. Malle (2006) interpreted this result not so much as proof that actors and observers explain behavior exactly the same way, but as evidence that the original hypothesis was fundamentally flawed in the way it framed people's explanations of behavior, namely as attributions to either stable dispositions or to the situation. Against the background of a different theory of explanation, Malle, Knobe, and Nelson (2007) tested an alternative set of three actor-observer asymmetries and found consistent support for all of them. Thus, the actor-observer asymmetry does not exist in one theoretical formulation (traditional attribution theory) but does exist in the new alternative theoretical formulation. Malle (2011) argues that this favors the alternative formulation, but current textbooks have not yet fully addressed this theoretical challenge.
Considerations of actor-observer differences can be found in other disciplines as well, such as philosophy (e.g.,
privileged access, incorrigibility), management studies, artificial intelligence, semiotics, anthropology, and political
science (see Malle, Knobe, & Nelson, 2007, for relevant references).
Background and initial formulation
The background to this hypothesis was social psychology's increasing interest in the 1960s in the cognitive
mechanisms by which people make sense of their own and other people's behavior. This interest was instigated by
Fritz Heider's (1958) book, The Psychology of Interpersonal Relations, and the research in its wake has become
known as "attribution research" or "attribution theory."
The specific hypothesis of an "actor–observer asymmetry" was first proposed by social psychologists Jones and Nisbett in 1971. Jones and Nisbett hypothesized that these two roles produce asymmetric explanations: "Actors tend to attribute the causes of their behavior to stimuli inherent in the situation, while observers tend to attribute behavior to stable dispositions of the actor" (Jones & Nisbett, 1971, p. 93). According to this hypothesis, a student who studies hard for an exam is likely to explain her own (the "actor's") intensive studying by referring to the upcoming difficult exam, whereas other people (the "observers") are likely to explain her studying by referring to her dispositions, such as being hardworking or ambitious.
Early evidence and reception
Soon after the publication of the actor-observer hypothesis, numerous research studies tested its validity, most notably the first such test by Nisbett, Caputo, Legant, and Marecek (1973). The authors found initial evidence for the hypothesis, and so did Storms (1973), who also examined one possible explanation of the hypothesis: that actors explain their behaviors by reference to the situation because they attend to the situation (not to their own behaviors), whereas observers explain the actor's behavior by reference to the actor's dispositions because they attend to the actor's behavior (not to the situation). Based largely on this initial supporting evidence, confidence in the hypothesis became uniformly high. The asymmetry was described as "robust and quite general",[1] "firmly established" (Watson, 1982, p. 698), and an entrenched part of scientific psychology.[2] Likewise, evidence for the asymmetry was considered to be "plentiful"[3] and "pervasive".[4]
Recent evidence
Over 100 studies have been published since 1971 in which the hypothesis was put to further tests (often in the
context of testing another hypothesis about causal attributions). Malle (2006) examined this entire literature in a
meta-analysis, which is a robust way of identifying consistent patterns of evidence regarding a given hypothesis
across a broad set of studies. The result of this analysis was stunning: across 170 individual tests, the asymmetry
practically did not exist. (The average effect sizes, computed in several accepted ways, ranged from d = -0.016 to d =
0.095; corrected for publication bias, the average effect size was 0.) Under circumscribed conditions, it could
sometimes be found, but under other conditions, the opposite was found. The conclusion was that the widely held
assumption of an actor-observer asymmetry in attribution was false.
Theoretical reformulation
The result of the meta-analysis implied that, across the board, actors and observers explain behaviors the same way.
But all the tests of the classic hypothesis presupposed that people explain behavior by referring to "dispositional" vs.
"situational" causes. This assumption turned out to be incorrect for the class of behavioral events that people explain
most frequently in real life (Malle & Knobe, 1997): intentional behaviors (e.g., buying a new car, making a mean
comment). People explain unintentional behaviors in ways that the traditional disposition-situation framework can
capture, but they explain intentional behaviors by using very different concepts (Buss, 1978; Heider, 1958). A recent empirical theory of how people explain behavior was proposed and tested by Malle (1999, 2004), centering on the postulate that intentional behaviors are typically explained by reasons: the mental states (typically beliefs and desires) in light of which, and on the grounds of which, the agent decided to act (a postulate long discussed in the philosophy of action). But people who explain intentional behavior have several choices to make, and the theory identifies the psychological antecedents and consequences of these choices: (a) giving either reason explanations or "causal history of reason (CHR) explanations" (which refer to background factors such as culture, personality, or context, that is, causal factors that brought about the agent's reasons but were not themselves reasons to act); (b) giving either desire reasons or belief reasons; and (c) linguistically marking a belief reason with its mental state verb (e.g., "She thought that..."; "He assumes that..."). Empirical studies have so far supported this theoretical framework (for a review see Malle, 2011).
Within this framework, the actor-observer asymmetry was then reformulated as in fact consisting of three
asymmetries: that actors offer more reason explanations (relative to CHR explanations) than observers do; that actors
offer more belief reasons (relative to desire reasons) than observers do; and that actors use fewer belief reason
markers than observers do (Malle, 1999). Malle, Knobe, and Nelson (2007) tested these asymmetries across 9 studies
and found consistent support for them. In the same studies they also tested the classic person/disposition vs. situation
hypothesis and consistently found no support for it.
Thus, people do seem to explain their own actions differently from how they explain other people's actions. But
these differences do not lie in a predominance of using "dispositional" vs. "situational" causes. Only when people's
explanations are separated into theoretically meaningful distinctions (e.g., reasons vs. causal history of reason
explanations) do the differences emerge.
Implications
The choices of different explanations for intentional behavior (reasons, belief reasons, etc.) indicate particular
psychological functions. Reasons, for example, appear to reflect (among other things) psychological closeness.
People increase reason explanations (relative to CHR explanations) when they explain their own rather than another
person's behavior (Malle et al., 2007), when they portray another person in a positive light (Malle et al., 2007), and
when they explain behaviors of nonhuman agents for whom they have ownership and affection (e.g., a pet fish;
Kiesler, Lee, & Kramer, 2006). Conversely, people use fewer reasons and more CHR explanations when explaining
behaviors of collectives or aggregate groups (O'Laughlin & Malle, 2002). Actor-observer asymmetries can therefore
be seen as part of a broader continuum of psychological distance people have to various kinds of minds (their own,
others', groups', animals' etc.).
Related but distinct concepts
Actor-observer "bias"
Instead of speaking of a hypothesis of an actor-observer asymmetry, some textbooks and research articles speak of an "actor-observer bias" (within the framework of dispositional vs. situational causes). The term "bias" is typically used to imply that one of the explainers (either the actor or the observer) is biased or incorrect in their explanations. But which one (the actor or the observer) is supposed to be incorrect is not clear from the literature. On the one hand, Ross's (1977) hypothesis of a fundamental attribution error suggests that observers are incorrect, because they show a general tendency to overemphasize dispositional explanations and underemphasize situational ones. On the other hand, Nisbett and Wilson (1977) argued that actors don't really know the true causes of their actions and often merely invent plausible explanations. Jones and Nisbett (1971) themselves did not commit to calling the hypothesized actor-observer asymmetry a bias or an error. Similarly, recent theoretical positions consider asymmetries not a bias but rather the result of multiple cognitive and motivational differences that fundamentally exist between actors and observers (Malle et al., 2007; Robins et al., 1996).
Self-serving bias
The actor-observer asymmetry is often confused with the hypothesis of a self-serving bias in attribution: the claim that people choose explanations in a strategic way so as to make themselves appear in a more positive light. The important difference between the two hypotheses is that the assumed actor-observer asymmetry is expected to hold for all events and behaviors (whether positive or negative) and requires a specific comparison between actor explanations and observer explanations. The self-serving bias, by contrast, is often formulated as a complete reversal in actors' and observers' explanation tendencies as a function of positive vs. negative events. In traditional attribution terms, this means that for positive events (e.g., getting an A on an exam), actors will select explanations that refer to their own dispositions (e.g., "I am smart"), whereas observers will select explanations that refer to the actor's situation (e.g., "The test was easy"); however, for negative events (e.g., receiving an F on the exam), actors will select explanations that refer to the situation (e.g., "The test was impossibly hard"), whereas observers will select explanations that refer to the actor's dispositions (e.g., "She is not smart enough").
References
[1] Jones, 1976, p. 304
[2] Robins, Spranca, & Mendelsohn, 1996, p. 376
[3] Fiske & Taylor, 1991, p. 73
[4] Aronson, 2002, p. 168
Aronson, E. (2002). The social animal (8th ed.). New York, NY: Wiley.
Buss, A. R. (1978). Causes and reasons in attribution theory: A conceptual critique. Journal of Personality and Social Psychology, 36, 1311–1321.
Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: McGraw-Hill.
Heider, F. (1958). The psychology of interpersonal relations. New York: Wiley.
Jones, E. E., & Nisbett, R. E. (1971). The actor and the observer: Divergent perceptions of the causes of behavior. New York: General Learning Press.
Jones, E. E. (1976). How do people perceive the causes of behavior? American Scientist, 64, 300–305.
Kiesler, S., Lee, S. L., & Kramer, A. D. I. (2006). Relationship effects in psychological explanations of nonhuman behavior. Anthrozoos, 19, 335–352.
Malle, B. F. (1999). How people explain behavior: A new theoretical framework. Personality and Social Psychology Review, 3, 23–48.
Malle, B. F. (2004). How the mind explains behavior: Folk explanations, meaning, and social interaction. Cambridge, MA: MIT Press.
Malle, B. F. (2006). The actor-observer asymmetry in causal attribution: A (surprising) meta-analysis. Psychological Bulletin, 132, 895–919.
Malle, B. F. (2011). Time to give up the dogmas of attribution: An alternative theory of behavior explanation. In J. M. Olson & M. P. Zanna (Eds.), Advances in Experimental Social Psychology (Vol. 44, pp. 297–352). Burlington: Academic Press.
Malle, B. F., & Knobe, J. (1997). Which behaviors do people explain? A basic actor-observer asymmetry. Journal of Personality and Social Psychology, 72, 288–304.
Malle, B. F., Knobe, J., & Nelson, S. (2007). Actor-observer asymmetries in explanations of behavior: New answers to an old question. Journal of Personality and Social Psychology, 93, 491–514.
Nisbett, R. E., Caputo, C., Legant, P., & Marecek, J. (1973). Behavior as seen by the actor and as seen by the observer. Journal of Personality and Social Psychology, 27, 154–164.
Robins, R. W., Spranca, M. D., & Mendelsohn, G. A. (1996). The actor–observer effect revisited: Effects of individual differences and repeated social interactions on actor and observer attributions. Journal of Personality and Social Psychology, 71, 375–389.
Ross, L. D. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. Advances in Experimental Social Psychology, 10, 173–220.
Storms, M. D. (1973). Videotape and the attribution process: Reversing actors' and observers' points of view. Journal of Personality and Social Psychology, 27, 165–175.
Watson, D. (1982). The actor and the observer: How are their perceptions of causality divergent? Psychological Bulletin, 92, 682–700.
Defensive attribution hypothesis
The defensive attribution hypothesis (or defensive attribution bias) is a social-psychological term from the attributional approach referring to a set of beliefs held by an individual with the function of defending the individual from concern that they will be the cause or victim of a mishap. Commonly, defensive attributions are made when individuals witness or learn of a mishap happening to another person. In these situations, attributions of responsibility to the victim or harm-doer for the mishap will depend upon the severity of the outcomes of the mishap and the level of personal and situational similarity between the individual and the victim. More responsibility will be attributed to the harm-doer as the outcome becomes more severe and as personal or situational similarity decreases.
Holding the victim or harm-doer more responsible allows the individual to believe that the mishap was controllable and thus that they can prevent suffering the same mishap in the future.[1] Decreasing attributions of responsibility as similarity increases allows the individual to proactively lay the groundwork to protect their own self-esteem: if they were to suffer the mishap themselves, they could see themselves as not blameworthy.[2] The use of defensive attributions is considered a bias because individuals change their beliefs about a situation based upon the motivation to protect their self-esteem rather than upon the characteristics of the situation.[2]:112
Foundational studies: Walster (1966 & 1967) and Shaver (1970)
The basis of the defensive attribution bias was developed in studies conducted by Elaine Walster (Hatfield) and Kelly Shaver. Walster (1966) hypothesized that as the consequences of an accident become more severe, so does the likelihood of an individual assigning blame to the harm-doer, and presented experimental evidence to support this hypothesis. Walster assumes that when consequences are mild, it is easy to feel sympathy for a victim or harm-doer and not blame them; but as the severity of the consequences increases, it becomes more unpleasant to believe that such a misfortune could happen to anyone, and attributing responsibility helps an individual manage this emotional reaction.[1]
Shaver (1970) recognized that Walster had identified an important concept but did not fully apply it to her own study. Walster (1966) stated that the defensive attribution bias would occur in response to concerns that the accident could befall the perceiver. Thus, the similarity of the perceiver to the victim, in terms of situational or personality similarity, is required for the defensive attribution bias to be activated. Shaver (1970) manipulated the severity of consequences and the personal similarity of the target person described in his experiments to his research participants and found support for the defensive attribution bias: as personal similarity increased, attributions of responsibility decreased.
Later research: confusion and clarity
The foundational research of Walster (1966) and Shaver (1970) was not as clear-cut as presented above. In a follow-up study, Walster (1967) was unable to replicate her findings in two separate experiments. Shaver (1970) found a small negative relationship between the severity of the consequences and the responsibility attributed to the harm-doer.
Clarity came in 1981, when Burger[3] published a meta-analysis of 22 published studies on the defensive attribution hypothesis. First, he concluded that there is evidence for Walster's hypothesized positive relationship between severity and attributions of responsibility; the evidence suggests, however, that this relationship, while positive as Walster predicted, is moderate to weak in strength. Second, he concluded that there is strong evidence to support Shaver's hypothesized negative relationship between similarity and responsibility.
Applied uses
The defensive attribution hypothesis has found many applied uses, especially in regard to blame attributions in cases of sexual assault.
Researchers examining how individuals make blame attributions to victims (women who are rape victims) and harm-doers (rapists) in sexual assault situations have consistently found that male research participants blamed rapists less than female research participants did, and that male research participants blamed rape victims more than female research participants did.[4] These findings support Shaver's similarity-responsibility hypothesis: male participants, who are personally similar to (male) rapists, blame rapists less than female participants, who are dissimilar to rapists. On the other hand, female participants, who are personally similar to (female) rape victims, blame the victims less than male participants.
References
[1] Walster, E. (1966). "Assignment of responsibility for an accident". Journal of Personality and Social Psychology 3 (1): 73–79.
[2] Shaver, K. G. (1970). "Defensive attribution: Effects of severity and relevance on the responsibility assigned for an accident". Journal of Personality and Social Psychology 14 (2): 101–113.
[3] Burger, J. M. (1981). "Motivational biases in the attribution of responsibility for an accident: A meta-analysis of the defensive-attribution hypothesis". Psychological Bulletin 90 (3): 496–512.
[4] Grubb, A.; Harrower, J. (2008). "Attribution of blame in cases of rape: An analysis of participant gender, type of rape and perceived similarity to the victim". Aggression and Violent Behavior 13: 396–405.
Dunning–Kruger effect
The Dunning–Kruger effect is a cognitive bias in which unskilled individuals suffer from illusory superiority, mistakenly rating their ability much higher than average. This bias is attributed to a metacognitive inability of the unskilled to recognize their mistakes.[1]
Actual competence may weaken self-confidence, as competent individuals may falsely assume that others have an equivalent understanding. David Dunning and Justin Kruger conclude, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others".[2]
Historical references
Although the Dunning–Kruger effect was put forward in 1999, Dunning and Kruger have quoted Charles Darwin ("Ignorance more frequently begets confidence than does knowledge")[3] and Bertrand Russell ("One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision")[4] as authors who recognised the phenomenon earlier. Geraint Fuller, commenting on the paper, notes that Shakespeare expressed a similar sentiment in As You Like It: "The fool doth think he is wise, but the wise man knows himself to be a fool" (V.i).[5]
Hypothesis
The hypothesized phenomenon was tested in a series of experiments performed by Justin Kruger and David Dunning, both of Cornell University.[2][6] Kruger and Dunning noted earlier studies suggesting that ignorance of standards of performance is behind a great deal of incompetence. This pattern was seen in studies of skills as diverse as reading comprehension, operating a motor vehicle, and playing chess or tennis.
Kruger and Dunning proposed that, for a given skill, incompetent people will:
1. tend to overestimate their own level of skill;
2. fail to recognize genuine skill in others;
3. fail to recognize the extremity of their inadequacy;
4. recognize and acknowledge their own previous lack of skill, if they are exposed to training for that skill.
Dunning has since drawn an analogy ("the anosognosia of everyday life")[1][7] to a condition in which a person who suffers a physical disability because of brain injury seems unaware of or denies the existence of the disability, even for dramatic impairments such as blindness or paralysis.
Supporting studies
Kruger and Dunning set out to test these hypotheses on Cornell undergraduates in various psychology courses. In a
series of studies, they examined the subjects' self-assessment of logical reasoning skills, grammatical skills, and
humor. After being shown their test scores, the subjects were again asked to estimate their own rank, whereupon the
competent group accurately estimated their rank, while the incompetent group still overestimated their own rank. As
Dunning and Kruger noted,
Across four studies, the authors found that participants scoring in the bottom quartile on tests of humor,
grammar, and logic grossly overestimated their test performance and ability. Although test scores put them in
the 12th percentile, they estimated themselves to be in the 62nd.
Meanwhile, people with true ability tended to underestimate their relative competence. Roughly, participants who
found tasks to be relatively easy erroneously assumed, to some extent, that the tasks must also be easy for others.
A follow-up study, reported in the same paper, suggests that grossly incompetent students improved their ability to estimate their rank after minimal tutoring in the skills they had previously lacked, regardless of the negligible improvement in actual skills.
In 2003, Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's views of themselves when influenced by external cues. Participants in the study (Cornell University undergraduates) were given tests of their knowledge of geography, some intended to positively affect their self-views, some intended to affect them negatively. They were then asked to rate their performance, and those given the positive tests reported significantly better performance than those given the negative.[8]
Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how sensitive they were.[9] Other research has suggested that the effect is not so obvious and may be due to noise and bias levels.[10]
Dunning, Kruger, and coauthors' 2008 paper on this subject comes to qualitatively similar conclusions to their original work, after making some attempt to test alternative explanations. They conclude that the root cause is that, in contrast to high performers, "poor performers do not learn from feedback suggesting a need to improve."[4]
Studies on the Dunning–Kruger effect tend to focus on American test subjects. A study on some East Asian subjects suggested that something like the opposite of the Dunning–Kruger effect may operate on self-assessment and motivation to improve.[11]
Awards
Dunning and Kruger were awarded the 2000 Ig Nobel Prize in Psychology for their paper, "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments".[12]
References
[1] Morris, Errol (2010-06-20). "The Anosognosic's Dilemma: Something's Wrong but You'll Never Know What It Is (Part 1)" (http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/). New York Times. Retrieved 2011-03-07.
[2] Kruger, Justin; Dunning, David (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology 77 (6): 1121–1134. doi:10.1037/0022-3514.77.6.1121. PMID 10626367. CiteSeerX: 10.1.1.64.2655 (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.2655).
[3] Charles Darwin (1871). The Descent of Man (http://en.wikiquote.org/wiki/Charles_Darwin#The_Descent_of_Man_.281871.29). Introduction, p. 4. Retrieved 2008-07-18.
[4] Ehrlinger, Joyce; Johnson, Kerri; Banner, Matthew; Dunning, David; Kruger, Justin (2008). "Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent" (PDF). Organizational Behavior and Human Decision Processes 105 (1): 98–121. doi:10.1016/j.obhdp.2007.05.002. PMC 2702783. PMID 19568317.
[5] Fuller, Geraint (2011). "Ignorant of ignorance?" (http://pn.bmj.com/content/11/6/365.short). Practical Neurology 11 (6): 365. doi:10.1136/practneurol-2011-000117. PMID 22100949.
[6] Dunning, David; Johnson, Kerri; Ehrlinger, Joyce; Kruger, Justin (2003). "Why people fail to recognize their own incompetence" (http://psy.mq.edu.au/vision/~peterw/corella/237/incompetence.pdf) (PDF). Current Directions in Psychological Science 12 (3): 83–87. doi:10.1111/1467-8721.01235. Retrieved 29 December 2012.
[7] Dunning, David (2005). Self-Insight: Roadblocks and Detours on the Path to Knowing Thyself (Essays in Social Psychology). Psychology Press. pp. 14–15. ISBN 1-84169-074-0.
[8] Ehrlinger, Joyce; Dunning, David (January 2003). "How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance". Journal of Personality and Social Psychology (American Psychological Association) 84 (1): 5–17. doi:10.1037/0022-3514.84.1.5. PMID 12518967.
[9] Ames, Daniel R.; Kammrath, Lara K. (September 2004). "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-Estimated Ability" (http://www.columbia.edu/~da358/.../ames_kammrath_mindreading.pdf) (PDF). Journal of Nonverbal Behavior 28 (3): 187–209. doi:10.1023/B:JONB.0000039649.20015.0e. Retrieved 29 December 2012.
[10] Burson, K.; Larrick, R.; Klayman, J. (2006). "Skilled or unskilled, but still unaware of it: how perceptions of difficulty drive miscalibration in relative comparisons". Journal of Personality and Social Psychology 90 (1): 60–77. doi:10.1037/0022-3514.90.1.60. PMID 16448310. hdl:2027.42/39168.
[11] DeAngelis, Tori (February 2003). "Why we overestimate our competence" (http://www.apa.org/monitor/feb03/overestimate.aspx). Monitor on Psychology (American Psychological Association) 34 (2): 60. Retrieved 2011-03-07.
[12] "Ig Nobel Past Winners" (http://improbable.com/ig/ig-pastwinners.html#ig2000). Retrieved 2011-03-07.
Egocentric bias
Egocentric bias is the inclination to overstate changes between the present and the past in order to make ourselves look better than we actually are.[1]
Besides claiming credit for positive outcomes, which might simply be self-serving bias, people exhibiting egocentric bias also cite themselves as overly responsible for negative outcomes of group behavior (however, this last attribute would seem to be lacking in megalomania).
This may be because people's own actions are more immediately accessible to them than others' actions are. This is an example of what is called the availability heuristic.
Egocentric bias in estimates of consensus could be interpreted to support and/or justify one's feeling that one's own behavioral choices are appropriate, normal, or correct.[2]
Michael Ross and Fiore Sicoly first identified this cognitive bias.
One study found that egocentric bias influences perceived fairness. Subjects felt that overpayment to themselves was more fair than overpayment to others; by contrast, they felt that underpayment to themselves was less fair than underpayment to others. Greenberg's studies showed that this egocentrism was eliminated when the subjects were put in a self-aware state, which his study induced by placing a mirror in front of the subjects. When a person is not self-aware, they perceive that something can be fair to them but not necessarily fair to others, so fairness is biased and in the eye of the beholder. When a person is self-aware, there is a uniform standard of fairness and there is no bias. When made self-aware, subjects rated overpayment and underpayment to both themselves and to others as equally unfair. It is believed that these results were obtained because self-awareness elevated subjects' concerns about perceived fairness in payment, thereby overriding egocentric tendencies.[3]
Egocentric bias has influenced ethical judgements to the point where people believe not only that self-interested outcomes are preferable but also that they are the morally sound way to proceed.
Example
A well-known example of egocentric bias is a study by Ross, Greene and House in 1977. Students were asked to walk around a campus wearing a sandwich board bearing the word "repent". Those who agreed to do so (50%) estimated that most of their peers would also agree (average estimate 63.5%). Vice versa, those who refused thought that most people would refuse as well.[4]
Another study of egocentric bias took place in Japan. Subjects were asked to write down fair or unfair behaviors that they themselves or others did. When writing about fair behavior, they tended to start with the word "I" rather than "others". Likewise, they began unfair behaviors with "others" rather than "I".[5]
False-consensus effect
Considered to be a facet of egocentric bias, the false-consensus effect contributes to people believing that their thoughts, actions, and opinions are much more common than they are in reality. They think that they are more normal and typical than others would consider them.[2]
Results from a study comparing the perceptual distortion and motivational explanations of egocentric bias in estimates of consensus showed that an egocentric bias in estimates of consensus was more likely a result of perceptual distortion than of motivational strategies.[2]
References
Epley, N., & Caruso, E. M. (2004). Egocentric ethics. Social Justice Research, 17, 171–185. OCLC 363254336.
Ross, M., & Sicoly, F. (1979). Egocentric biases in availability and attribution. Journal of Personality and Social Psychology, 37, 322–336. OCLC 4646238323.
Footnotes
[1] Schacter, Daniel L. (2011). Psychology (2nd ed.). New York, NY: Worth Publishers. ISBN 1429237198.
[2] Mullen, Brian (1983-10-01). "Egocentric bias in estimates of consensus". Journal of Social Psychology 121 (1): 31–38. doi:10.1080/00224545.1983.9924463.
[3] Greenberg, Jerald (1983). "Overcoming Egocentric Bias in Perceived Fairness Through Self-Awareness". Social Psychology Quarterly 46 (2): 152–156. OCLC 483814059.
[4] Wallin, A. (2011). "Is egocentric bias evidence for simulation theory?". Synthese 178 (3): 503–514.
[5] Tanaka, K. (1993). "Egocentric bias in perceived fairness: is it observed in Japan?". Social Justice Research 6 (3): 273–285. doi:10.1007/BF01054462.
Extrinsic incentives bias
The extrinsic incentives bias is an attributional bias according to which people attribute relatively more weight to "extrinsic incentives" (such as monetary reward) than to "intrinsic incentives" (such as learning a new skill) when judging the motives of others rather than their own. It is related to, but distinct from, the self-serving bias, and it is a counter-example to the fundamental attribution error: according to the extrinsic incentives bias, others are presumed to have situational motivations while oneself is seen as having dispositional motivations, the opposite of what the fundamental attribution error would predict. The term was first proposed by Chip Heath, citing earlier research by others in management science.[1]
In the simplest experiment Heath reported, MBA students were asked to rank the expected job motivations of
Citibank customer service representatives. Their average ratings were as follows:
1. Amount of pay
2. Having job security
3. Quality of fringe benefits
4. Amount of praise from your supervisor
5. Doing something that makes you feel good about yourself
6. Developing skills and abilities
7. Accomplishing something worthwhile
8. Learning new things
Actual customer service representatives rank ordered their own motivations as follows:
1. Developing skills and abilities
2. Accomplishing something worthwhile
3. Learning new things
4. Quality of fringe benefits
5. Having job security
6. Doing something that makes you feel good about yourself
7. Amount of pay
8. Amount of praise from your supervisor
The order of the predicted and actual reported motivations was nearly reversed; in particular, pay was ranked first
when predicting others' motivations but near last when respondents ranked their own. Similar effects were observed
when MBA students rated managers and their classmates.
[1]
Debiasing
Heath suggests debiasing by inferring others' motivations in the same way one infers one's own.
[1]
References
[1] Heath, Chip (1999). "On the Social Psychology of Agency Relationships: Lay Theories of Motivation Overemphasize Extrinsic Incentives" (http://faculty-gsb.stanford.edu/heath/documents/social psych of agency.pdf). Organizational Behavior and Human Decision Processes 78 (1): 25–62.
Halo effect
The halo effect or halo error is a cognitive bias in which our judgments of a person's character are influenced by
our overall impression of him or her. It can be found in a range of situations, from the courtroom to the classroom,
and in everyday interactions. The halo effect was given its name by psychologist Edward Thorndike, and since then
several researchers have studied it in relation to attractiveness and its bearing on the judicial and educational
systems.
History
Edward Thorndike, known for his contributions to educational psychology, coined the term "halo effect" and was the
first to support it with empirical research.
[1]
He gave the phenomenon its name in his 1920 article "A Constant Error in Psychological Ratings". In a 1915 study he
had noted that estimates of different traits in the same person were very highly and evenly correlated. In the 1920
article, Thorndike set out to replicate that study in hopes of pinning down the bias he thought was present in these
ratings.
Supporting evidence
Thorndike's first study of the halo effect was published in 1920. The study included two commanding officers who
were asked to evaluate their soldiers in terms of physical qualities (neatness, voice, physique, bearing, and energy),
intellect, leadership skills, and personal qualities (including dependability, loyalty, responsibility, selflessness, and
cooperation). Thorndike's goal was to see how the ratings of one characteristic affected other characteristics.
Thorndike's experiment showed that the correlations in the commanding officers' responses were too high. In his
review he stated, "The correlations were too high and too even. For example, for the three raters next studied the
average correlation for physique with intelligence is .31; for physique with leadership, .39; and for physique with
character, .28."
[1]
The rating of one particular quality of a soldier tended to set a trend in the rest of his ratings: if a soldier
displayed one "negative" attribute to the commanding officer, that impression carried over into the rest of that
soldier's results.
Thorndike concluded that this excessive correlation was a halo error: the officers relied mainly on a general
impression of each soldier, which shaped their ratings of his individual characteristics.
Role of attractiveness
The halo effect is not limited to individual personality traits; a person's overall attractiveness has also been
found to produce a halo effect.
On personality and happiness
Dion, Berscheid and Walster (1972) conducted a study on the relationship between attractiveness and the halo effect.
[2]
Sixty students from the University of Minnesota took part in the experiment, half of them male and half female. Each
subject was given three different photos to examine: one of an attractive individual, one of an individual of average
attractiveness, and one of an unattractive individual.
The participants judged the photos' subjects along 27 different personality traits (including altruism, conventionality,
self-assertiveness, stability, emotionality, trustworthiness, extraversion, kindness, and sexual promiscuity).
Participants were then asked to predict the overall happiness the photos' subjects would feel for the rest of their lives,
including marital happiness (least likely to get divorced), parental happiness (most likely to be a good parent), social
and professional happiness (most likely to experience life fulfillment), and overall happiness. Finally, participants
were asked if the subjects would hold a job of high status, medium status, or low status.
Results showed that participants overwhelmingly believed the more attractive subjects to have more socially
desirable personality traits than either the averagely attractive or unattractive subjects. Participants also believed that
the attractive individuals would lead happier lives in general, have happier marriages, be better parents, and have
more career success than the unattractive or averagely attractive individuals. Also, results showed that attractive
people were believed to be more likely to hold secure, prestigious jobs compared to unattractive individuals.
[3]
Academics and intelligence
Landy and Sigall's 1974 study demonstrated the halo effect in judgments of intelligence and competence on
academic tasks. Sixty male undergraduate students rated the quality of written essays, which included both well-written
and poorly written samples. One third of the participants were shown a photo of an attractive female as the author,
another third a photo of an unattractive female as the author, and the last third no photo.
Participants gave significantly better writing evaluations to the more attractive author. On a scale of 1 to 9, with 1
being the poorest, the well-written essay by the attractive author received an average of 6.7 while the unattractive
author received a 5.9 (with 6.6 as the control). The gap was larger for the poor essay: the attractive author received an
average of 5.2, the control 4.7, and the unattractive author 2.7. These results suggest that people are generally more
willing to give physically attractive people the benefit of the doubt when performance is below standard, whereas
unattractive people are less likely to receive this favored treatment.
[4]
In Moore, Filippou, and Perrett's 2011 study, the researchers sought to determine whether residual cues to intelligence
and personality existed in male and female faces. They attempted to control for the attractiveness halo effect, but
failed: faces manipulated to look higher in perceived intelligence were also rated as more attractive. The faces high
in perceived intelligence were likewise rated highly on perceived friendliness and sense of humor.
[5]
Effects on jurors
Multiple studies have found the halo effect operating within juries. Research shows that attractive individuals receive
more lenient sentences and are less likely to be found guilty than unattractive individuals. Efran (1974) found that
subjects were more generous when handing out sentences to attractive individuals than to unattractive individuals,
even when exactly the same crime had been committed. One reason this occurs is that people with a high level of
attractiveness are seen as more likely to have a brighter future in society due to the socially desirable traits they are
believed to possess.
[6]
Monahan (1941) studied social workers who were accustomed to interacting with people from many different
backgrounds, and found that the majority of them found it very difficult to believe that beautiful people could be
guilty of a crime.
[7]
The relation of the crime itself to attractiveness is also subject to the halo effect.
[8]
A study presented two
hypothetical crimes: a burglary and a swindle. The burglary involved a woman illegally obtaining a key and stealing
$2,200; the swindle involved a woman manipulating a man to invest $2,000 in a fabricated business. The results
showed that when the offense was not related to attractiveness (in this case, the burglary), the unattractive defendant
was punished more severely than the attractive one. However, when the offense was related to attractiveness (the
swindle), the attractive defendant was punished more severely than the unattractive one. Participants may have
believed the attractive person more likely to manipulate someone using their looks.
Halo effect in education
Abikoff and colleagues found that the halo effect is also present in the classroom. In their study, both regular and
special education elementary school teachers watched videotapes of what they believed to be children in regular
4th-grade classrooms. In reality, the children were actors depicting behaviors seen in attention deficit hyperactivity
disorder (ADHD), oppositional defiant disorder (ODD), or standard behavior. The teachers were asked to rate the
frequency of hyperactive behaviors observed in the children. Teachers rated hyperactive behaviors accurately for
children with ADHD; however, hyperactivity and other ADHD-associated behaviors were rated much higher for the
children showing ODD-like behaviors, demonstrating a halo effect for children with oppositional defiant disorder.
[9]
Foster and Ysseldyke (1976) also found the halo effect present in teachers' evaluations of children. Regular and
special education elementary school teachers watched videos of a normal child whom they were told was either
emotionally disturbed, learning disabled, mentally retarded, or "normal". The teachers then completed referral forms
based on the child's behavior. The results showed that teachers held negative expectancies toward emotionally
disturbed children, maintaining these expectancies even when presented with normal behavior. In addition, the
"mentally retarded" label produced a greater degree of negative bias than the "emotionally disturbed" or "learning
disabled" labels.
[10]
Criticisms and limitations
Some researchers allege that the halo effect is not as pervasive as once believed. Kaplan's 1978 study yielded much
the same results as other studies focusing on the halo effect: attractive individuals were rated higher in qualities
such as creativity, intelligence, and sensitivity than unattractive individuals. In addition to these results, however,
Kaplan found that women were influenced by the attractiveness halo effect only when presented with members of
the opposite sex. When presented with an attractive member of the same sex, women actually tended to rate the
individual lower on socially desirable qualities.
[11]
Criticisms have also pointed out that jealousy of an attractive individual could be a major factor in evaluation of that
person. A study by Dermer and Thiel has shown this to be more prevalent among females than males, with females
describing physically attractive women as having socially undesirable traits.
[12]
Halo effect and NGOs
The term "halo effect" has been applied to human rights organizations that have used their status to move away from
their stated goals. Political scientist Gerald Steinberg has claimed that non-governmental organizations (NGOs) take
advantage of the "halo effect" and are "given the status of impartial moral watchdogs" by governments and the
media.
[13][14]
Devil effect
The devil effect, also known as the reverse halo effect, occurs when people allow an undesirable trait to influence
their evaluation of other traits, as in Nisbett and Wilson's study of likeable versus unlikeable lecturers.
[15]
The devil
effect can work outside the scope of personality traits and is expressed by both children and adults.
[16]
The Guardian wrote of the devil effect in relation to Hugo Chávez: "Some leaders can become so demonised that it's
impossible to assess their achievements and failures in a balanced way."
[17]
References
[1] Thorndike, E.L. (1920). "A constant error in psychological ratings". Journal of Applied Psychology 4 (1): 25–29. doi:10.1037/h0071663.
[2] Dion, K.; Berscheid, E.; Walster, E. (December 1972). "What is beautiful is good". Journal of Personality and Social Psychology 24 (3): 285–290. doi:10.1037/h0033731. PMID 4655540.
[3] Dion, Karen; Berscheid, Ellen; Walster, Elaine. "What is Beautiful is Good". Journal of Personality and Social Psychology 24 (3): 285–290.
[4] Landy, D.; Sigall, H. "Task Evaluation as a Function of the Performers' Physical Attractiveness". Journal of Personality and Social Psychology 29 (3): 299–304.
[5] Moore, F. R.; Filippou, D.; Perrett, D. (2011). "Intelligence and Attractiveness in the Face: Beyond the Attractiveness Halo Effect". Journal of Evolutionary Psychology 9 (3): 205–217.
[6] Efran, M. G. "The Effect of Physical Appearance on the Judgment of Guilt, Interpersonal Attraction, and Severity of Recommended Punishment in a Simulated Jury Task". Journal of Research in Personality 8: 45–54.
[7] Monahan, F. (1941). Women in Crime. New York: Washburn.
[8] Ostrove, Nancy; Sigall, Harold (1975). "Beautiful but Dangerous: Effects of Offender Attractiveness and Nature of the Crime on Juridic Judgment" (http://dtrebouxclasses.pbworks.com/w/file/fetch/50029614/sigall and ostrove.pdf). Journal of Personality and Social Psychology 31 (3): 410–414.
[9] Abikoff, H.; Courtney, M.; Pelham, W.E.; Koplewicz, H.S. "Teachers' Ratings of Disruptive Behaviors: The Influence of Halo Effects". Journal of Abnormal Child Psychology 21 (5): 519–533.
[10] Foster, Glen; Ysseldyke, James (1976). "Expectancy and Halo Effects as a Result of Artificially Induced Teacher Bias". Contemporary Educational Psychology 1 (1): 37–45.
[11] Kaplan, Robert M. (1978). "Is Beauty Talent? Sex Interaction in the Attractiveness Halo Effect". Sex Roles 4 (2): 195–204.
[12] Dermer, M.; Thiel, D.L. (1975). "When beauty may fail" (https://pantherfile.uwm.edu/dermer/public/vita/dermer_beauty.pdf). Journal of Personality and Social Psychology 31 (6): 1168–1176.
[13] Jeffray, Nathan (24 June 2010). "Interview: Gerald Steinberg" (http://www.thejc.com/news/uk-news/33415/interview-gerald-steinberg). The Jewish Chronicle.
[14] Balanson, Naftali (8 October 2008). "The 'halo effect' shields NGOs from media scrutiny" (http://www.jpost.com/Opinion/Op-EdContributors/Article.aspx?id=110648). The Jerusalem Post.
[15] Nisbett, Richard E.; Wilson, Timothy D. (1977). "The halo effect: Evidence for unconscious alteration of judgments". Journal of Personality and Social Psychology (American Psychological Association) 35 (4): 250–256. doi:10.1037/0022-3514.35.4.250. ISSN 1939-1315.
[16] Koenig, Melissa A.; Jaswal, Vikram K. (1 September 2011). "Characterizing Children's Expectations About Expertise and Incompetence: Halo or Pitchfork Effects?". Child Development 82 (5): 1634–1647. doi:10.1111/j.1467-8624.2011.01618.x.
[17] Glennie, Jonathan (3 May 2011). "Hugo Chávez's reverse-halo effect" (http://www.guardian.co.uk/global-development/poverty-matters/2011/may/03/hugo-chavez-reverso-halo-effect). The Guardian.
Halo effect
258
Further reading
Sutherland, Stuart (2007). Irrationality (Reprinted ed.). London: Pinter & Martin. ISBN 978-1-905177-07-3.
Dean, Jeremy (2007). "The Halo Effect: When Your Own Mind is a Mystery" (http://www.spring.org.uk/2007/10/halo-effect-when-your-own-mind-is.php). PsyBlog.
Rosenzweig, Phil (2007). The Halo Effect: ... and the Eight Other Business Delusions That Deceive Managers (1st Free Press trade pbk. ed.). New York: Free Press. ISBN 978-0-7432-9125-5.
Steinberg, Gerald M. (30 December 2009). "Human Rights NGOs Need a Monitor" (http://forward.com/articles/122209/human-rights-ngos-need-a-monitor/). The Jewish Daily Forward.
Chandra, Ramesh (2004). Social Development in India. Delhi, India: Isha. ISBN 81-8205-024-3.
Illusion of asymmetric insight
The illusion of asymmetric insight is a cognitive bias whereby people perceive their knowledge of others to surpass
other people's knowledge of themselves. This bias seems to be due to the conviction that observed behaviors are
more revealing of others than self, while private thoughts and feelings are more revealing of the self.
[1]
One study found that people seem to believe that they know themselves better than their peers know themselves, and
that their social group knows and understands other social groups better than other social groups know them.
[1]
References
[1] Pronin, E.; Kruger, J.; Savitsky, K.; Ross, L. (October 2001). "You don't know me, but I know you: the illusion of asymmetric insight" (http://content.apa.org/journals/psp/81/4/639). Journal of Personality and Social Psychology 81 (4): 639–656. doi:10.1037/0022-3514.81.4.639. PMID 11642351.
External links
http://youarenotsosmart.com/2011/08/21/the-illusion-of-asymmetric-insight/
Illusion of external agency
The illusion of external agency is a set of attributional biases consisting of illusions of influence, insight and
benevolence, proposed by Daniel Gilbert, Timothy D. Wilson, Ryan Brown and Elizabeth Pinel.
[1][2]
In a series of experiments, experimenters induced participants to rationalize a choice or experience (called the
"optimizing" condition) after which they were more likely to make certain attributions of an external agent, as
follows:
Illusion of influence: Subjects who had been induced to rationalize liking for a teammate were more likely to
attribute this liking to the influence of "subliminal messages" that the experimenters claimed to have used to steer
them toward the best outcome. In this experiment the experimenters were presumed to have "insight" into the
problem and "benevolence" towards participants.
Illusion of insight: Subjects listened to a song chosen for them by a "SmartRadio" that they were told was
benevolent and effective. Some subjects were informed of and rated the song before listening; these subjects rated
the song more highly and were more likely to continue using the device, attributing their liking to its "insight".
Illusion of benevolence: Subjects were given a gift; some rated it before receiving it and some afterwards. Those in
the afterwards condition rated it more highly (an endowment effect). All participants were told that another (unseen)
participant had chosen the gift as the best one for them based on a questionnaire; those in the afterwards condition
were more likely to believe that their liking was due to the benevolence of the gift-giver.
Gilbert et al. argued that "participants confused their own optimization of subjective reality with an external agents'
optimizing of objective reality. Simply speaking, participants mistook 'the magic in here' for 'the magic out there.'"
[1]
References
[1] Gilbert, Daniel T.; Brown, Ryan P.; Pinel, Elizabeth C.; Wilson, Timothy D. (2000). "The Illusion of External Agency" (http://wjh-www.harvard.edu/~dtg/Gilbert et al (External Agency).PDF). Journal of Personality and Social Psychology 79 (5): 690–700.
[2] Gilbert, Daniel (2005). "The vagaries of religious experience" (http://www.edge.org/3rd_culture/gilbert05/gilbert05_index.html). Edge.org.
Illusion of transparency
The illusion of transparency is a tendency for people to overestimate the degree to which their personal mental
state is known by others. Another manifestation of the illusion of transparency (sometimes called the observer's
illusion of transparency) is a tendency for people to overestimate how well they understand others' personal mental
states. This cognitive bias is similar to the illusion of asymmetric insight.
Experimental support
Psychologist Elizabeth Newton created a simple test that she regarded as an illustration of the phenomenon. She
would tap out a well-known song, such as "Happy Birthday" or the national anthem, with her finger and have the test
subject guess the song. People usually estimated that the song would be guessed correctly in about 50 percent of the
tests, but only 3 percent picked the correct song. The tapper can hear every note and the lyrics in his or her head;
however, the observer, with no access to what the tapper is thinking, hears only a rhythmic tapping.
[1]
Public speaking and stage fright
The illusion of transparency is especially prominent in public speakers. It may be increased by the spotlight effect.
The speaker has an exaggerated sense of how obvious his or her nervousness about a speech is to the audience.
Studies have shown that when the audience is surveyed, the speaker's emotions were not nearly so evident to the
crowd as the speaker perceived them to be.
[2]
Initial anxiety in a public speaking situation can cause stress that,
because of the illusion of transparency, the speaker may feel is evident to the listeners. This mistaken perception can
cause the speaker to compensate, which he or she then feels is even more obvious to the crowd, and the stress
increases in a feedback loop. Awareness of the limits of others' perceptions of one's mental state can help break the
cycle and reduce speech anxiety.
[3]
Studies on public speaking and the illusion of transparency
Kenneth Savitsky and Thomas Gilovich performed two experiments on public speaking anxiety in relation to the
illusion of transparency. The first focused on the speaker's perception of his or her anxiety levels versus an observer's
perception of the speaker's anxiety levels. The results were as expected: the speaker judged himself or herself more
harshly than the observer did.
[3]
In their second study, Savitsky and Gilovich focused on the connection between the illusion of transparency and the
exacerbation of speech anxiety. Participants in this study were divided into three groups: control, reassured, and
informed. All were given a topic and had five minutes to prepare a speech in front of a crowd, after which they rated
themselves on anxiety, speech quality, and appearance, and observers also rated them on anxiety levels and speech
quality. The control group were given no other advance instructions. The reassured and informed groups were both
told in advance that it is normal to feel anxiety about giving a speech. The reassured group were told that research
indicates they should not worry about this. The informed group were told about the illusion of transparency and that
research indicates that emotions are usually not as evident to others as people believe they are. The informed group
rated themselves higher in every respect and were also rated higher by the observers. The informed group,
understanding that the audience would not be able to perceive their nervousness, had less stress and their speech
tended to be better.
[3]
The bystander effect
Thomas Gilovich, Victoria Husted Medvec, and Kenneth Savitsky believe that this phenomenon is partially the
reason for the bystander effect. They found that concern or alarm were not as apparent to observers as the individual
experiencing them thought, and that people believed they would be able to read others' expressions better than they
actually could.
[4]
When confronted with a potential emergency, people typically play it cool, adopt a look of nonchalance, and
monitor the reactions of others to determine if a crisis is really at hand. No one wants to overreact, after all, if
it might not be a true emergency. However, because each individual holds back, looks nonchalant, and
monitors the reactions of others, sometimes everyone concludes (perhaps erroneously) that the situation is not
an emergency and hence does not require intervention.
Thomas Gilovich, Victoria Husted Medvec, and Kenneth Savitsky, Journal of Personality and Social
Psychology, Vol. 75, No. 2
Further reading
Kenneth Savitsky and Thomas Gilovich published findings
[5]
Thomas Gilovich, Victoria Husted Medvec, and Kenneth Savitsky published findings
[6]
References
Footnotes
[1] McRaney, David (14 July 2010). "The Illusion of Transparency" (http://youarenotsosmart.com/2010/07/14/the-illusion-of-transparency/). Retrieved 20 July 2011.
[2] Savitsky, Kenneth; Gilovich, Thomas (25 March 2003). "The illusion of transparency and the alleviation of speech anxiety" (http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf). Journal of Experimental Social Psychology 39. Retrieved 20 July 2011.
[3] Savitsky, Kenneth; Gilovich, Thomas (25 March 2003). "The illusion of transparency and the alleviation of speech anxiety" (http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf). Journal of Experimental Social Psychology 39. Retrieved 20 July 2011.
[4] Gilovich, Thomas; Savitsky, Kenneth; Medvec, Victoria Husted (1998). "The Illusion of Transparency: Biased Assessments of Others' Ability to Read One's Emotional States". Journal of Personality and Social Psychology 75 (2): 332–346.
[5] http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf
[6] http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Sav.Medvec.pdf
Bibliography
"The three illusions on interpersonal perception: Effects of relationship intimacy on two types of illusion of transparency and the illusion of asymmetric insight" (PDF) (http://psywww.human.metro-u.ac.jp/Personal/miamia/Take_Numa05spsp.pdf)
K. Savitsky, T. Gilovich, 2003. "The illusion of transparency and the alleviation of speech anxiety." Journal of Experimental Social Psychology. http://www.psych.cornell.edu/sec/pubPeople/tdg1/Savitsky&Gilovich.03.pdf
T. Gilovich, V. Medvec, K. Savitsky, 1998. "The Illusion of Transparency: Biased Assessments of Others' Ability to Read One's Emotional States." Journal of Personality and Social Psychology. http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Sav.Medvec.pdf
D. McRaney, 2010. "The Illusion of Transparency." http://youarenotsosmart.com/2010/07/14/the-illusion-of-transparency/
O. Burkeman, 2011. "The illusion of transparency: Why your feelings aren't really written all over your face." http://www.oliverburkeman.com/2011/01/the-illusion-of-transparency-why-your-feelings-arent-really-written-all-over-your-face/
Illusory superiority
Illusory superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to
underestimate their negative qualities, relative to others. This is evident in a variety of areas including intelligence,
performance on tasks or tests, and the possession of desirable characteristics or personality traits. It is one of many
positive illusions relating to the self, and is a phenomenon studied in social psychology.
Illusory superiority is often referred to as the above-average effect. Other terms include superiority bias, leniency
error, sense of relative superiority, the primus inter pares effect,
[1]
and the Lake Wobegon effect (named after
Garrison Keillor's fictional town where "all the children are above average"). The phrase "illusory superiority" was
first used by Van Yperen and Buunk in 1991.
[1]
Effects in different situations
Illusory superiority has been found in individuals' comparisons of themselves with others in a wide variety of
different aspects of life, including performance in academic circumstances (such as class performance, exams and
overall intelligence), in working environments (for example in job performance), and in social settings (for example
in estimating one's popularity, or the extent to which one possesses desirable personality traits, such as honesty or
confidence), as well as everyday abilities requiring particular skill.
[1]
For illusory superiority to be demonstrated by social comparison, two logical hurdles have to be overcome. One is
the ambiguity of the word "average". It is logically possible for nearly all of the set to be above the mean if the
distribution of abilities is highly skewed. For example, the mean number of human legs is slightly lower than two,
because a small number of people have only one leg or none. Hence experiments usually compare subjects to the
median of the peer group, since by definition it is impossible for a majority to exceed the median.
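This hurdle is easy to check numerically. The following minimal Python sketch (using made-up numbers that echo the legs example) shows that nearly everyone in a highly skewed distribution can sit above the mean, while by definition no majority can exceed the median:

```python
from statistics import mean, median

# Hypothetical skewed distribution: 999 people with two legs, one with none.
legs = [2] * 999 + [0]
m, med = mean(legs), median(legs)

print(m)    # 1.998 -- the mean is dragged slightly below two
print(med)  # 2.0
print(sum(x > m for x in legs) / len(legs))    # 0.999: 99.9% are above the mean
print(sum(x > med for x in legs) / len(legs))  # 0.0: no one exceeds the median
```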
A further problem in inferring inconsistency is that subjects might interpret the question in different ways, so it is
logically possible that a majority of them are, for example, more generous than the rest of the group, each on their
own understanding of generosity.
[2]
This interpretation is confirmed by experiments which varied the amount of
interpretive freedom subjects were given. Even when subjects evaluate themselves on a specific, well-defined
attribute, illusory superiority remains.
[3]
Cognitive ability
IQ
One of the main effects of illusory superiority in IQ is the Downing effect. This describes the tendency of people
with a below average IQ to overestimate their IQ, and of people with an above average IQ to underestimate their IQ.
The propensity to predictably misjudge one's own IQ was first noted by C. L. Downing who conducted the first
cross-cultural studies on perceived 'intelligence'. His studies also showed that the ability to accurately estimate
others' IQ was proportional to one's own IQ: the lower an individual's IQ, the less capable they are of appreciating
and accurately appraising others' IQ. Therefore, individuals with a lower IQ are more likely to rate
themselves as having a higher IQ than those around them. Conversely, people with a higher IQ, while better at
appraising others' IQ overall, are still likely to rate people of similar IQ as themselves as having higher IQs.
The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist
Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their
intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin.
[4][5]
Memory
Illusory superiority has been found in studies comparing memory self-reports, such as Schmidt, Berg & Deelman's
research on older adults. This study involved participants aged between 46 and 89 comparing their own memory to
that of peers of the same age group, to that of 25-year-olds, and to their own memory at age 25. The participants
exhibited illusory superiority when comparing themselves to both peers and younger adults; however, the
researchers asserted that these judgments were only slightly related to age.
[6]
Cognitive tasks
In Kruger and Dunning's experiments participants were given specific tasks (such as solving logic problems,
analyzing grammar questions, and determining whether or not jokes were funny), and were asked to evaluate their
performance on these tasks relative to the rest of the group, enabling a direct comparison of their actual and
perceived performance.
[7]
Results were divided into four groups depending on actual performance and it was found that all four groups
evaluated their performance as above average, meaning that the lowest-scoring group (the bottom 25%) showed a
very large illusory superiority bias. The researchers attributed this to the fact that the individuals who were worst at
performing the tasks were also worst at recognizing skill in those tasks. This was supported by the fact that, given
training, the worst subjects improved their estimate of their rank as well as getting better at the tasks.
[7]
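The shape of this finding can be illustrated with a toy simulation; the sketch below is not Kruger and Dunning's data or model, just an assumed self-estimation rule in which ratings are anchored near "above average" and only weakly track actual skill. Grouping simulated participants into quartiles by actual score reproduces the qualitative pattern, with the bottom quartile showing by far the largest gap:

```python
import random

random.seed(0)
n = 100
# Hypothetical actual skill scores.
actual = [random.gauss(50, 20) for _ in range(n)]
# Assumed rule: self-estimates anchor on the 65th percentile, weakly tracking skill.
perceived = [min(99.0, max(1.0, 0.3 * a + 0.7 * 65)) for a in actual]

# Convert actual scores to percentile ranks within the sample.
order = sorted(range(n), key=lambda i: actual[i])
pct = [0.0] * n
for rank, i in enumerate(order):
    pct[i] = 100 * rank / (n - 1)

# Compare actual and perceived percentile, quartile by quartile.
for q in range(4):
    quartile = order[q * 25:(q + 1) * 25]
    mean_actual = sum(pct[i] for i in quartile) / 25
    mean_perceived = sum(perceived[i] for i in quartile) / 25
    print(f"Quartile {q + 1}: actual {mean_actual:5.1f}, perceived {mean_perceived:5.1f}")
```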
The paper, titled "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to
Inflated Self-Assessments," won a 2000 Ig Nobel Prize.
[8]
In 2003 Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's
views of themselves influenced by external cues. Participants in the study (Cornell University undergraduates) were
given tests of their knowledge of geography, some intended to positively affect their self-views, some intended to
affect them negatively. They were then asked to rate their performance, and those given the positive tests reported
significantly better performance than those given the negative.
[9]
Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how
sensitive they were.
[10]
Work by Katherine Burson, Richard Larrick, and Joshua Klayman has suggested that the effect is not so obvious
and may be due to noise and bias levels.
[11]
Dunning, Kruger, and coauthors' latest paper on this subject comes to qualitatively similar conclusions after making
some attempt to test alternative explanations.
[12]
Academic ability and job performance
In a survey of faculty at the University of Nebraska, 68% rated themselves in the top 25% for teaching ability.
[13]
In a similar survey, 87% of MBA students at Stanford University rated their academic performance as above the
median.
[14]
Findings of illusory superiority in research have also explained phenomena such as the large amount of stock market
trading (as each trader thinks they are the best, and most likely to succeed),
[15]
and the number of lawsuits that go to
trial (because, due to illusory superiority, many lawyers have an inflated belief that they will win a case).
[16]
Self, friends and peers
One of the first studies that found the effect of illusory superiority was carried out in 1976 by the College Board in
the USA.
[17]
A survey was attached to the SAT exams (taken by approximately one million students per year),
asking the students to rate themselves relative to the median of the sample (rather than the average peer) on a number
of vague positive characteristics. In ratings of leadership ability, 70% of the students put themselves above the
median. In ability to get on well with others, 85% put themselves above the median, and 25% rated themselves in the
top 1%.
More recent research
[18]
has found illusory superiority in a social context, with participants comparing themselves to
friends and other peers on positive characteristics (such as punctuality and sensitivity) and negative characteristics
(such as naivety or inconsistency). This study found that participants rated themselves more favorably than their
friends, but rated their friends more favorably than other peers. These findings were, however, affected by several
moderating factors.
Research by Perloff and Fetzer,
[19]
Brown,
[20]
and Tajfel and Turner
[21]
also found similar effects of participants
rating friends higher than other peers. Tajfel and Turner attributed this to an "ingroup bias" and suggested that this
was motivated by the individual's desire for a "positive social identity".
Popularity
In Zuckerman and Jost's study, participants were given detailed questionnaires about their friendships and asked to
assess their own popularity. By using social network analysis, they were able to show that the participants generally
had exaggerated perceptions of their own popularity, particularly in comparison to their own friends.
[22]
Relationship happiness
Researchers have also found the effects of illusory superiority in studies into relationship satisfaction. For example,
one study found that participants perceived their own relationships as better than others' relationships on average, but
thought that the majority of people were happy with their relationships. Also, this study found evidence that the
higher the participants rated their own relationship happiness, the more superior they believed their relationship was.
The illusory superiority exhibited by the participants in this study also served to increase their own relationship
satisfaction: among men, satisfaction was especially related to the perception that one's own relationship was
superior and to the assumption that few others were unhappy in their relationships, whereas women's satisfaction
was especially related to the assumption that most others were happy with their relationships.
[23]
Health
Illusory superiority effects have been found in a self-report study of health behaviors (Hoorens & Harris, 1998). The
study involved asking participants to estimate how often they, and their peers, carried out healthy and unhealthy
behaviors. Participants reported that they carried out healthy behaviors more often than the average peer, and
unhealthy behaviors less often, as would be expected given the effect of illusory superiority. These findings held
both for self-reports of past behavior and for expected future behaviors.
[24]
Driving ability
Svenson (1981) surveyed 161 students in Sweden and the United States, asking them to compare their driving safety
and skill to the other people in the experiment. For driving skill, 93% of the US sample and 69% of the Swedish
sample put themselves in the top 50% (above the median). For safety, 88% of the US group and 77% of the Swedish
sample put themselves in the top 50%.
[25]
McCormick, Walkey and Green (1986) found similar results in their study, asking 178 participants to evaluate their
position on eight different dimensions relating to driving skill (examples include the "dangerous-safe" dimension and
the "considerate-inconsiderate" dimension). Only a small minority rated themselves as below average (the midpoint
of the dimension scale) at any point, and when all eight dimensions were considered together it was found that
almost 80% of participants had evaluated themselves as being above the average driver.
[26]
Immunity to bias
Subjects describe themselves in positive terms compared to other people, and this includes describing themselves as
less susceptible to bias than other people. This effect is called the bias blind spot and has been demonstrated
independently.
Cultural differences
A vast majority of the literature on self-esteem originates from studies on participants in the United States. However,
research that only investigates the effects in one specific population is severely limited as this may not be a true
representation of human psychology as a whole. As a result, more recent research has focused on investigating
quantities and qualities of self-esteem around the globe. The findings of such studies suggest that illusory superiority
varies between cultures.
Self-esteem
While a great deal of evidence suggests that we compare ourselves favorably to others on a wide variety of traits, the
links to self-esteem are uncertain. The theory that those with high self-esteem maintain this high level by rating
themselves over and above others does carry some evidence: it has been reported that non-depressed subjects rate
their control over positive outcomes higher than a peer's, despite identical levels of performance between the two
individuals.
[27]
Furthermore, it has been found that non-depressed students will also actively rate peers below themselves, as
opposed to rating themselves higher; students were able to recall a great deal more negative personality traits about
others than about themselves.
[28]
The data suggest that those with a positive self-view are more likely to display the above-average effect than those
with a negative self-appraisal. Similarly, those with low self-esteem appear to engage in far less illusory
superiority, showing more realism in their self-ratings.
These results run counter to a basic humanistic principle within psychology. In particular, Carl Rogers, a pioneer of
humanistic psychology, claimed that those with low self-esteem would be far more likely to attempt to belittle
others, with the aim of strengthening their fragile self-view. Rogers hypothesized that those with high self-esteem,
on the other hand, would have no need to put others down and would therefore be unlikely to exhibit illusory
superiority.
These studies, however, made no distinction between people with legitimate and illegitimate high self-esteem, and
other studies have found that an absence of positive illusions may coexist with high self-esteem
[29]
and that self-determined individuals, with personalities oriented towards growth and learning, are less
prone to these illusions.
[30]
Thus it may be that while illusory superiority is associated with illegitimate high
self-esteem, people with legitimate high self-esteem do not exhibit it.
Relation to mental health
Psychology has traditionally assumed that generally accurate self-perceptions are essential to good mental health.
[2]
This was challenged by a 1988 paper by Taylor and Brown, who argued that mentally healthy individuals typically
manifest three cognitive illusions, namely illusory superiority, illusion of control and optimism bias.
[2]
This idea
rapidly became very influential, with some authorities concluding that it would be therapeutic to deliberately induce
these biases.
[31]
Since then, further research has both undermined that conclusion and offered new evidence
associating illusory superiority with negative effects on the individual.
[2]
One line of argument was that in the Taylor and Brown paper, the classification of people as mentally healthy or
unhealthy was based on self-reports rather than objective criteria.
[31]
Hence it was not surprising that people prone to
self-enhancement would exaggerate how well-adjusted they are. One study claimed that "mentally normal" groups
were contaminated by defensive deniers who are the most subject to positive illusions.
[31]
A longitudinal study found
that self-enhancement biases were associated with poor social skills and psychological maladjustment.
[2]
In a
separate experiment where videotaped conversations between men and women were rated by independent observers,
self-enhancing individuals were more likely to show socially problematic behaviors such as hostility or irritability.
[2]
A 2007 study found that self-enhancement biases were associated with psychological benefits (such as subjective
well-being) but also inter- and intra-personal costs (such as anti-social behavior).
[32]
Neuroimaging
The degree to which people view themselves as more desirable than the average person is linked to reduced
activation in their orbitofrontal cortex and dorsal anterior cingulate cortex. This has been suggested to relate to the
role of these areas in processing "cognitive control".
[33]
Explanations
Noisy mental information processing
A recent study in Psychological Bulletin suggests that illusory superiority (as well as other biases) can be explained
by a simple information-theoretic generative mechanism that assumes a noisy conversion of objective evidence
(observation) into subjective estimates (judgment).
[34]
The study suggests that the underlying cognitive mechanism
is essentially similar to the noisy mixing of memories that can cause the conservatism bias or overconfidence: after
our own performance, we readjust our estimates of our own performance more than we readjust our estimates of
others' performances. This implies that our estimates of others' scores are even more conservative (more influenced
by the prior expectation) than our estimates of our own performance (more influenced by the new evidence received
after taking the test). The difference in the conservatism of the two estimates (a conservative estimate of our own
performance and an even more conservative estimate of others' performance) is enough to create illusory
superiority. Since mental noise is a sufficient explanation that is much simpler and more straightforward than any
other explanation involving heuristics, behavior, or social interaction,
[17]
Occam's razor would argue in its
favor as the underlying generative mechanism (it is the hypothesis that makes the fewest assumptions).
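A toy simulation makes this argument concrete. The weights below are invented for illustration, not the parameters fitted in the cited paper: both self- and peer-estimates mix the same prior expectation with unbiased noisy evidence, but the self-estimate weights new evidence more heavily, and an average "I scored higher than my peer" gap emerges even though the model contains no self-serving motive:

```python
import random

random.seed(1)
prior = 50.0                  # shared prior expectation about any test-taker
w_self, w_other = 0.8, 0.3    # assumed weights on new evidence (illustrative)

def estimate(true_score, weight, noise=10.0):
    # Noisy, unbiased observation mixed with the conservative prior.
    evidence = true_score + random.gauss(0, noise)
    return weight * evidence + (1 - weight) * prior

diffs = []
for _ in range(10000):
    me, peer = random.gauss(60, 15), random.gauss(60, 15)  # identical distributions
    diffs.append(estimate(me, w_self) - estimate(peer, w_other))

print(sum(diffs) / len(diffs))  # about +5: illusory superiority from noise alone
```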
Selective recruitment
This is the idea that when making a comparison with a peer an individual will select their own strengths and the
other's weaknesses in order that they appear better on the whole. This theory was first tested by Weinstein (1980);
however, this was in an experiment relating to optimistic bias, rather than the better-than-average effect. The study
involved participants rating certain behaviors as likely to increase or decrease the chance of a series of life events
happening to them. It was found that individuals showed less optimistic bias when they were allowed to see others'
answers.
[35]
Perloff and Fetzer (1986) suggested that when comparing themselves to an average peer on a particular ability or
characteristic an individual would choose a comparison target (the peer being compared) that scored less well on that
ability or characteristic, in order that the individual would appear to be better than average. To test this theory Perloff
and Fetzer asked participants to compare themselves to specific comparison targets (a close friend), and found that
illusory superiority decreased when specific targets were given, rather than vague constructs such as the "average
peer". However these results are not completely reliable and could be affected by the fact that individuals like their
close friends more than an "average peer" and may as a result rate their friend as being higher than average, therefore
the friend would not be an objective comparison target.
[19]
Egocentrism
The second explanation for how the better-than-average effect works is egocentrism. This is the idea that an
individual places greater importance and significance on their own abilities, characteristics and behaviors than those
of others. Egocentrism is therefore a less overtly self-serving bias. According to egocentrism, individuals will
overestimate themselves in relation to others because they believe that they have an advantage that others do not
have, as an individual considering their own performance and another's performance will consider their performance
to be better, even when they are in fact equal. Kruger (1999) found support for the egocentrism explanation in his
research involving participant ratings of their ability on easy and difficult tasks. It was found that individuals were
consistent in their ratings of themselves as above the median in the tasks classified as "easy" and below the median
in the tasks classified as "difficult", regardless of their actual ability. In this experiment the better-than-average effect
was observed when it was suggested to participants that they would be successful, but also a worse-than-average
effect was found when it was suggested that participants would be unsuccessful.
[36]
Focalism
The third explanation for the better-than-average effect is focalism, the idea that greater significance is placed on the
object that is the focus of attention. Most studies of the better-than-average effect place greater focus on the self
when asking participants to make comparisons (the question will often be phrased with the self being presented
before the comparison target e.g. "compare yourself to the average person..."). According to focalism this means
that the individual will place greater significance on their own ability or characteristic than that of the comparison
target. This also means that in theory if, in an experiment on the better-than-average effect, the questions were
phrased so that the self and other were switched (e.g. "compare the average peer to yourself") the better-than-average
effect should be lessened.
[37]
Research into focalism has focused primarily on optimistic bias rather than the better-than-average effect. However,
two studies found a decreased effect of optimistic bias when participants were asked to compare an average peer to
themselves, rather than themselves to an average peer.
[38][39]
Windschitl, Kruger & Simms (2003) have conducted research into focalism, focusing specifically on the
better-than-average effect, and found that asking participants to estimate their ability and likelihood of success in a
task produced results of decreased estimations when they were asked about others' chances of success rather than
their own.
[40]
"Self versus aggregate" comparisons
This idea, put forward by Giladi and Klar, suggests that when making comparisons any single member of a group
will be evaluated to rank above that group's statistical mean performance level or the median performance level of its
members. Research has found this effect in many different areas of human performance and has even generalized it
beyond individuals' attempts to draw comparisons involving themselves.
[41]
Findings of this research therefore
suggest that rather than individuals evaluating themselves as above average in a self-serving manner, the
better-than-average effect is actually due to a general tendency to evaluate any single person or object as better than
average.
Better-than-average heuristic
Alicke and Govorun proposed the idea that, rather than individuals consciously reviewing and thinking about their
own abilities, behaviors and characteristics and comparing them to those of others, it is likely that people instead
have what they describe as an "automatic tendency to assimilate positively-evaluated social objects toward ideal trait
conceptions".
[17]
For example, if an individual evaluated themselves as honest, they would be likely to then
exaggerate their characteristic towards their perceived ideal position on a scale of honesty. Importantly, Alicke has
noted that this ideal position is not always the top of the scale, for example, in the case of honesty, someone who is
always brutally honest may be regarded as rude. Instead, the ideal is a balance perceived differently by different
individuals.
Non-social explanations
The better-than-average effect may not have wholly social origins: judgements about inanimate objects suffer similar
distortions.
[41]
Moderating factors
While illusory superiority has been found to be somewhat self-serving, this does not mean that it will predictably
occur: it is not constant. Instead the strength of the effect is moderated by many factors, the main examples of which
have been summarized by Alicke and Govorun (2005).
[17]
Interpretability/ambiguity of trait
This is a phenomenon that Alicke and Govorun have described as "the nature of the judgement dimension" and refers
to how subjective (abstract) or objective (concrete) the ability or characteristic being evaluated is.
[17]
Research by
Sedikides & Strube (1997) has found that people are more self-serving (the effect of illusory superiority is stronger)
when the event in question is more open to interpretation,
[42]
for example social constructs such as popularity and
attractiveness are more interpretable than characteristics such as intelligence and physical ability.
[43]
This has been
partly attributed also to the need for a believable self-view.
[44]
The idea that ambiguity moderates illusory superiority has empirical research support from a study involving two
conditions: in one, participants were given criteria for assessing a trait as ambiguous or unambiguous, and in the
other participants were free to assess the traits according to their own criteria. It was found that the effect of illusory
superiority was greater in the condition where participants were free to assess the traits.
[45]
The effects of illusory superiority have also been found to be strongest when people rate themselves on abilities at
which they are totally incompetent. These subjects have the greatest disparity between their actual performance (at
the low end of the distribution) and their self-rating (placing themselves above average). This Dunning–Kruger
effect is interpreted as a lack of metacognitive ability to recognize their own incompetence.
[7]
Method of comparison
The method used in research into illusory superiority has been found to affect the strength of the effect observed.
Most studies of illusory superiority involve a comparison between an individual and an average peer, for which
there are two methods: direct comparison and indirect comparison. A direct comparison, which is more commonly
used, involves the participant rating themselves and the average peer on the same scale, from "below average" to
"above average"
[46]
and results in participants being far more self-serving.
[47]
Researchers have suggested that this occurs due to the closer comparison between the individual and the average
peer; however, use of this method means that it is impossible to know whether a participant has overestimated
themselves, underestimated the average peer, or both.
The indirect method of comparison involves participants rating themselves and the average peer on separate scales;
the illusory superiority effect is found by subtracting the average-peer score from the individual's own score (a
higher difference indicating a greater effect). While the indirect comparison method is used less often, it is more
informative about whether participants have overestimated themselves or underestimated the average peer, and can
therefore provide more information about the nature of illusory superiority.
[46]
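A small worked example (with made-up ratings) contrasts the two measures and shows why the indirect method is more informative:

```python
# Direct method: one relative rating, e.g. -3 ("well below average") to +3.
direct_rating = 2                    # positive alone signals illusory superiority

# Indirect method: separate absolute ratings of self and "average peer".
self_rating, peer_rating = 8, 6      # hypothetical 1-10 ratings
superiority_score = self_rating - peer_rating   # +2 -> the size of the effect

# Unlike the direct rating, the two absolute scores reveal where the bias
# lives: an inflated self-rating, a deflated peer rating, or both.
print(direct_rating, superiority_score)
```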
Comparison target
The nature of the comparison target is one of the most fundamental moderating factors of the effect of illusory
superiority, and there are two main issues relating to the comparison target that need to be considered.
First, research into illusory superiority is distinct in terms of the comparison target because an individual compares
themselves with a hypothetical average peer rather than a tangible person. Alicke et al. (1995) found that the effect
of illusory superiority was still present but was significantly reduced when participants compared themselves with
real people (also participants in the experiment, who were seated in the same room), as opposed to when participants
compared themselves with an average peer. This suggests that research into illusory superiority may itself be biasing
results and finding a greater effect than would actually occur in real life.
[46]
Further research into the differences between comparison targets involved four conditions where participants were at
varying proximity to an interview with the comparison target: watching live in the same room; watching on tape;
reading a written transcript; or making self-other comparisons with an average peer. When the participant was
further removed from the interview situation (in the tape-observation and transcript conditions), the effect of
illusory superiority was greater. The researchers asserted that these findings suggest that the effect of illusory
superiority is reduced by two main factors: individuation of the target and live contact with the target.
Second, Alicke et al.'s (1995) studies investigated whether the negative connotations of the word "average" may have
an effect on the extent to which individuals exhibit illusory superiority, namely whether the use of the word
"average" increases illusory superiority. Participants were asked to evaluate themselves, the average peer and a
person whom they had sat next to in the previous experiment, on various dimensions. Participants placed themselves
highest, followed by the real person, followed by the average peer; however, the average peer was consistently
placed above the mean point on the scale, suggesting that the word "average" did not have a negative
effect on the participant's view of the average peer.
[46]
Controllability
An important moderating factor of the effect of illusory superiority is the extent to which an individual believes they
are able to control and change their position on the dimension concerned. According to Alicke & Govorun positive
characteristics that an individual believes are within their control are more self-serving, and negative characteristics
that are seen as uncontrollable are less detrimental to self-enhancement.
[17]
This theory was supported by Alicke's
(1985) research, which found that individuals rated themselves as higher than an average peer on positive
controllable traits and lower than an average peer on negative uncontrollable traits. The idea, suggested by these
findings, that individuals believe that they are responsible for their success and some other factor is responsible for
their failure is known as the self-serving bias.
Individual differences of judge
Personality characteristics vary widely between people and have been found to moderate the effects of illusory superiority; one of the main examples is self-esteem. Brown (1986) found that in self-evaluations of positive characteristics, participants with higher self-esteem showed a greater illusory superiority bias than participants with lower self-esteem.[48] Similar findings come from a study by Suls, Lemos and Stewart (2002), who additionally found that participants pre-classified as having high self-esteem interpreted ambiguous traits in a self-serving way, whereas participants pre-classified as having low self-esteem did not.[18]
Worse-than-average effect
In contrast to what is commonly believed, research has found that better-than-average effects are not universal. In fact, much recent research has found the opposite effect in many tasks, especially more difficult ones.[49]
Notes
[1] Hoorens, Vera (1993). "Self-enhancement and Superiority Biases in Social Comparison". European Review of Social Psychology (Psychology Press) 4 (1): 113–139. doi:10.1080/14792779343000040.
[2] Colvin, C. Randall; Jack Block, David C. Funder (1995). "Overly Positive Self-Evaluations and Personality: Negative Implications for Mental Health". Journal of Personality and Social Psychology (American Psychological Association) 68 (6): 1152–1162. doi:10.1037/0022-3514.68.6.1152. PMID 7608859.
[3] Dunning, David; Judith A. Meyerowitz, Amy D. Holzberg (1989). "Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in self-serving assessments of ability". Journal of Personality and Social Psychology (American Psychological Association) 57 (6): 1082–1090. doi:10.1037/0022-3514.57.6.1082. ISSN 1939-1315.
[4] Davidson, J. E. & C. L. Downing, "Contemporary Models of Intelligence", in Handbook of Intelligence, 2000.
[5] International Journal of Selection and Assessment, Vol. 13, No. 1, pp. 11–24, March 2005.
[6] Schmidt, I.W.; I.J. Berg, B.G. Deelman (1999). "Illusory superiority in self-reported memory of older adults". Aging, Neuropsychology, and Cognition (Neuropsychology, Development and Cognition) 6 (4): 288–301. doi:10.1076/1382-5585(199912)06:04;1-B;FT288.
[7] Kruger, Justin; David Dunning (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments". Journal of Personality and Social Psychology 77 (6): 1121–1134. doi:10.1037/0022-3514.77.6.1121. PMID 10626367.
[8] "The 2000 Ig Nobel Prize Winners" (http://www.improb.com/ig/ig-pastwinners.html#ig2000). Improbable Research. Retrieved 2008-05-27.
[9] Joyce Ehrlinger; David Dunning (January 2003). "How Chronic Self-Views Influence (and Potentially Mislead) Estimates of Performance". Journal of Personality and Social Psychology (American Psychological Association) 84 (1): 5–17. doi:10.1037/0022-3514.84.1.5. PMID 12518967.
[10] Daniel R. Ames; Lara K. Kammrath (September 2004). "Mind-Reading and Metacognition: Narcissism, not Actual Competence, Predicts Self-Estimated Ability". Journal of Nonverbal Behavior (Springer Netherlands) 28 (3): 187–209. doi:10.1023/B:JONB.0000039649.20015.0e.
[11] Katherine A. Burson; Richard P. Larrick; Joshua Klayman (2006). "Skilled or Unskilled, but Still Unaware of It: How Perceptions of Difficulty Drive Miscalibration in Relative Comparisons" (http://faculty.fuqua.duke.edu/~larrick/bio/Files/2006 Burson Larrick Klayman JPSP.pdf). Journal of Personality and Social Psychology 90 (1): 60–77. doi:10.1037/0022-3514.90.1.60. PMID 16448310.
[12] Ehrlinger, Joyce; Johnson, Kerri; Banner, Matthew; Dunning, David; Kruger, Justin (2008). "Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent" (http://www.psy.fsu.edu/~ehrlinger/Self_&_Social_Judgment/Ehrlinger_et_al_2008.pdf) (PDF). Organizational Behavior and Human Decision Processes (105): 98–121.
[13] Cross, P. (1977). "Not can but will college teachers be improved?". New Directions for Higher Education 17: 1–15.
[14] "It's Academic." 2000. Stanford GSB Reporter, April 24, pp.145. via Zuckerman, Ezra W.; John T. Jost (2001). "What Makes You Think
You're So Popular? Self Evaluation Maintenance and the Subjective Side of the "Friendship Paradox"" (http:/ / www. psych. nyu. edu/ jost/
Zuckerman & Jost (2001) What Makes You Think You're So Popular1. pdf). Social Psychology Quarterly (American Sociological
Association) 64 (3): 207223. doi:10.2307/3090112. JSTOR3090112. . Retrieved 2009-08-29.
[15] Odean, T. (1998). "Volume, volatility, price, and profit when all traders are above average". Journal of Finance 53 (6): 1887–1934. doi:10.1111/0022-1082.00078.
[16] Neale, M.A., & Bazerman, M.H. (1985). The effects of framing and negotiator overconfidence on bargaining behaviours and outcomes. Academy of Management Journal, 28(1), 34–49.
[17] Alicke, Mark D.; Olesya Govorun (2005). "The Better-Than-Average Effect". In Mark D. Alicke, David A. Dunning, Joachim I. Krueger. The Self in Social Judgment. Studies in Self and Identity. Psychology Press. pp. 85–106. ISBN 978-1-84169-418-4. OCLC 58054791.
[18] Suls, J.; K. Lemos, H.L. Stewart (2002). "Self-esteem, construal, and comparisons with the self, friends and peers". Journal of Personality and Social Psychology (American Psychological Association) 82 (2): 252–261. doi:10.1037/0022-3514.82.2.252. PMID 11831414.
[19] Perloff, L.S.; B.K. Fetzer (1986). "Self-other judgments and perceived vulnerability to victimization". Journal of Personality and Social Psychology (American Psychological Association) 50 (3): 502–510. doi:10.1037/0022-3514.50.3.502.
[20] Brown, J.D. (1986). "Evaluations of self and others: Self-enhancement biases in social judgments". Social Cognition 4 (4): 353–376. doi:10.1521/soco.1986.4.4.353.
[21] Tajfel, H.; J.C. Turner. "The social identity theory of intergroup behaviour". In S. Worchel & W.G. Austin. Psychology of Intergroup Relations (2nd ed.). pp. 7–24. ISBN 0-12-682550-5.
[22] Zuckerman, Ezra W.; John T. Jost (2001). "What Makes You Think You're So Popular? Self Evaluation Maintenance and the Subjective Side of the "Friendship Paradox"" (http://www.psych.nyu.edu/jost/Zuckerman & Jost (2001) What Makes You Think You're So Popular1.pdf). Social Psychology Quarterly (American Sociological Association) 64 (3): 207–223. doi:10.2307/3090112. JSTOR 3090112. Retrieved 2009-08-29.
[23] Buunk, B.P. (2001). "Perceived superiority of one's own relationship and perceived prevalence of happy and unhappy relationships". British Journal of Social Psychology 40 (4): 565–574. doi:10.1348/014466601164984.
[24] Hoorens, V.; P. Harris (1998). "Distortions in reports of health behaviours: The time span effect and illusory superiority". Psychology and Health 13 (3): 451–466. doi:10.1080/08870449808407303.
[25] Svenson, O. (February 1981). "Are we all less risky and more skillful than our fellow drivers?". Acta Psychologica 47 (2): 143–148. doi:10.1016/0001-6918(81)90005-6.
[26] McCormick, Iain A.; Frank H. Walkey, Dianne E. Green (June 1986). "Comparative perceptions of driver ability: A confirmation and expansion". Accident Analysis & Prevention 18 (3): 205–208. doi:10.1016/0001-4575(86)90004-7.
[27] Martin, D.J.; Abramson, L.Y.; Alloy, L.B. (1984). "Illusion of control for self and others in depressed and non-depressed college students". Journal of Personality and Social Psychology 46: 126–136.
[28] Kuiper, N.A.; Macdonald, M.R. (1982). "Self and other perception in mild depressives". Social Cognition 1 (3): 223–239. doi:10.1521/soco.1982.1.3.223.
[29] Compton, William C. (1992). "Are positive illusions necessary for self-esteem: a research note". Personality and Individual Differences 13 (12): 1343–1344. doi:10.1016/0191-8869(92)90177-Q.
[30] Knee, C.R., & Zuckerman, M. (1998). A nondefensive personality: Autonomy and control as moderators of defensive coping and self-handicapping. Journal of Research in Personality, 32(2), 115–130. doi:10.1006/jrpe.1997.2207.
[31] Shedler, Jonathan; Martin Mayman, Melvin Manis (1993). "The Illusion of Mental Health". American Psychologist (American Psychological Association) 48 (11): 1117–1131. doi:10.1037/0003-066X.48.11.1117. PMID 8259825.
[32] Sedikides, Constantine; Robert S. Horton, Aiden P. Gregg (2007). "The Why's the Limit: Curtailing Self-Enhancement With Explanatory Introspection". Journal of Personality (Wiley Periodicals) 75 (4): 783–824. doi:10.1111/j.1467-6494.2007.00457.x. ISSN 0022-3506. PMID 17576359.
[33] Beer, J.S.; Hughes, B.L. (2010). "Neural systems of social comparison and the "above-average" effect". Neuroimage 49 (3): 2671–2679. doi:10.1016/j.neuroimage.2009.10.075. PMID 19883771.
[34] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin 138 (2): 211–237. Free access to the study at martinhilbert.net/HilbertPsychBull.pdf.
[35] Weinstein, N.D. (1980). "Unrealistic optimism about future life events". Journal of Personality and Social Psychology (American Psychological Association) 39 (5): 806–820. doi:10.1037/0022-3514.39.5.806.
[36] Kruger, J. (1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology 77 (2): 221–232. doi:10.1037/0022-3514.77.2.221. PMID 10474208.
[37] Schkade, D.A.; D. Kahneman (1998). "Does living in California make people happy? A focusing illusion in judgments of life satisfaction". Psychological Science 9 (5): 340–346. doi:10.1111/1467-9280.00066.
[38] Otten, W.; J. Van der Pligt (1996). "Context effects in the measurement of comparative optimism in probability judgments". Journal of Social and Clinical Psychology 15: 80–101.
[39] Eiser, J.R.; S. Pahl, Y.R.A. Prins (2001). "Optimism, pessimism, and the direction of self-other comparisons". Journal of Experimental Social Psychology 37: 77–84. doi:10.1006/jesp.2000.1438.
[40] Windschitl, P.D.; J. Kruger, E.N. Sims (2003). "The influence of egocentrism and focalism on people's optimism in competition: When what affects us equally affects me more". Journal of Personality and Social Psychology (American Psychological Association) 85 (3): 389–408. doi:10.1037/0022-3514.85.3.389. PMID 14498778.
[41] E.E. Giladi & Y. Klar (2002). "When standards are wide of the mark: Nonselective superiority and inferiority biases in comparative judgments of objects and concepts". Journal of Experimental Psychology: General 131 (4): 538–551. doi:10.1037/0096-3445.131.4.538.
[42] Sedikides, C., & Strube, M.J. (1997). Self-evaluation: To thine own self be good, to thine own self be sure, to thine own self be true, and to thine own self be better. In M.P. Zanna (Ed.), Advances in Experimental Social Psychology (Vol. 29, pp. 209–269). New York: Academic Press.
[43] Reeder, G.D.; Brewer, M.B. (1979). "A schematic model of dispositional attribution in interpersonal perception". Psychological Review 86: 61–79. doi:10.1037/0033-295X.86.1.61.
[44] Swann, W.B., Rentfrow, P.J., & Guinn, J.S. (2003). Self-verification: The search for coherence. In M.R. Leary & J.P. Tangney (Eds.), Handbook of Self and Identity (pp. 367–383). New York: Guilford Press.
[45] Dunning, D.; Meyerowitz, J.A.; Holzberg, A.D. (1989). "Ambiguity and self-evaluation: The role of idiosyncratic trait definitions in self-serving assessments of ability". Journal of Personality and Social Psychology 57 (6): 1082–1090. doi:10.1037/0022-3514.57.6.1082.
[46] Alicke, M.D.; Klotz, M.L.; Breitenbecher, D.L.; Yurak, T.J.; Vredenburg, D.S. (1995). "Personal contact, individuation, and the better-than-average effect". Journal of Personality and Social Psychology 68 (5): 804–825. doi:10.1037/0022-3514.68.5.804.
[47] Otten, W.; Van der Pligt, J. (1996). "Context effects in the measurement of comparative optimism in probability judgments". Journal of Social and Clinical Psychology 15: 80–101.
[48] Brown, J.D. (1986). "Evaluations of self and others: Self-enhancement biases in social judgments". Social Cognition 4 (4): 353–376. doi:10.1521/soco.1986.4.4.353.
[49] Moore, D.A. (2007). "Not so above average after all: When people believe they are worse than average and its implications for theories of bias in social comparison". Organizational Behavior and Human Decision Processes 102 (1): 42–58. doi:10.1016/j.obhdp.2006.09.005.
References
Alicke, Mark D.; David A. Dunning, Joachim I. Kruger (2005). The Self in Social Judgment. Psychology Press. pp. 85–106. ISBN 978-1-84169-418-4. (Especially chapters 4 and 5.)
Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77, 221–232.
Matlin, Margaret W. (2004). "Pollyanna Principle". In Rüdiger Pohl. Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Psychology Press. ISBN 978-1-84169-351-4.
Myers, David G. (1980). The Inflated Self: Human Illusions and the Biblical Call to Hope. New York: Seabury Press. ISBN 978-0-8164-0459-9.
Sedikides, Constantine; Aiden P. Gregg (2003). "Portraits of the Self". In Sage Handbook of Social Psychology.
Further reading
Dunning, David; Kerri Johnson, Joyce Ehrlinger and Justin Kruger (2003). "Why people fail to recognize their own incompetence" (http://www3.interscience.wiley.com/journal/118890796/abstract?CRETRY=1&SRETRY=0). Current Directions in Psychological Science 12 (3): 83–87. doi:10.1111/1467-8721.01235.
E.E. Giladi & Y. Klar (2002). "When standards are wide of the mark: Nonselective superiority and inferiority biases in comparative judgments of objects and concepts". Journal of Experimental Psychology: General 131 (4): 538–551. doi:10.1037/0096-3445.131.4.538.
In-group favoritism
In-group favoritism, sometimes known as in-group–out-group bias, in-group bias, or intergroup bias, refers to a pattern of favoring members of one's in-group over out-group members. This can be expressed in the evaluation of others, in the allocation of resources, and in many other ways.[1]
This interaction has been researched by many psychologists and linked to many theories related to group conflict and prejudice. The phenomenon is primarily viewed from a social psychology standpoint rather than a personality psychology perspective. Two prominent theoretical approaches to the phenomenon of in-group favoritism are realistic conflict theory and social identity theory. Realistic conflict theory proposes that intergroup competition, and sometimes intergroup conflict, arises when two groups have opposing claims to scarce resources. In contrast, social identity theory posits a psychological drive for positively distinct social identities as the general root cause of in-group-favoring behavior.
Origins of the research tradition
In 1906, the sociologist William Sumner posited that humans are a species that joins together in groups by its very nature. However, he also maintained that, beyond this, humans have an innate tendency to favor their own group over others, saying, "Each group nourishes its own pride and vanity, boasts itself superior, exalts its own divinities, and looks with contempt on outsiders" (p. 13).[2] This is seen on the group level with in-group–out-group bias; when experienced in larger groups such as tribes, ethnic groups, or nations, it is referred to as ethnocentrism.
Explanations
Competition
Competition between groups for resources has been suggested as a cause of negative prejudices and stereotypes about the out-group, a view known as realistic conflict theory, or realistic group conflict.[3] The Robbers Cave Experiment is commonly used to exemplify this perspective. In this experiment, 22 eleven-year-old boys with similar backgrounds were studied in a mock summer-camp situation; the boys were divided into two groups of eleven and were studied on their in-group/out-group behavior in several different situations. The research revealed startling evidence that, regardless of the two groups' similarity, group members will behave viciously toward the out-group when competing for limited resources.[4] The in-group–out-group bias could readily be seen in the boys' behavior toward each other: they underestimated the performance of the other group and overestimated the performance of their own group. Moreover, "the pro-ingroup tendency went hand in hand with the anti-outgroup tendency" (p. 423).[5]
Self-esteem
It has been argued that one of the key determinants of group biases is the need to improve self-esteem: individuals will find a reason, no matter how insignificant, to prove to themselves why their group is superior. This phenomenon was pioneered and studied most extensively by Henri Tajfel, a British social psychologist who looked at the psychological roots of in-group/out-group bias. To study this in the laboratory, Tajfel and colleagues created what are now known as minimal groups (see minimal group paradigm), which occur when complete strangers are formed into groups using the most trivial criteria imaginable. In Tajfel's studies, participants were split into groups by flipping a coin, and each group was then told to appreciate a certain style of painting that none of the participants were familiar with when the experiment began. What Tajfel and his colleagues discovered was that, even though (a) participants did not know each other, (b) their groups were completely meaningless, and (c) none of the participants knew which style they actually liked better, participants almost always liked the members of their own group better and rated them as more likely to have pleasant personalities. By having a more positive impression of individuals in the in-group, individuals are able to boost their own self-esteem as members of that group.[1]
Robert Cialdini and his research team looked at the number of university T-shirts worn on college campuses following either a win or a loss at the football game. Not surprisingly, the Monday after a win there were more T-shirts worn, on average, than following a loss.[1][6]
In another set of studies, conducted in the 1980s by Jennifer Crocker and colleagues, self-esteem was studied using minimal group processes; it was shown that individuals with high self-esteem who suffer a threat to the self-concept exhibit greater in-group biases than people with low self-esteem who suffer a threat to the self-concept.[7] While some studies have supported this notion of a negative correlation between self-esteem and in-group bias,[8] other researchers have found that individuals with low self-esteem show more prejudice toward both in-group and out-group members.[7] Some studies have even shown that high-self-esteem groups display greater prejudice than lower-self-esteem groups.[9] This research may suggest that there is an alternative explanation and additional reasoning behind the relationship between self-esteem and in-group/out-group biases. Alternatively, it is possible that researchers have used the wrong sort of self-esteem measures to test the link between self-esteem and in-group bias (global personal self-esteem rather than specific social self-esteem).[10]
In-group favoritism versus out-group negativity
Social psychologists have long made the distinction between in-group favoritism and out-group negativity, where out-group negativity is the act of punishing or placing burdens upon the out-group.[11] Indeed, a significant body of research exists that attempts to identify the relationship between in-group favoritism and out-group negativity, as well as the conditions that lead to out-group negativity.[12][13][14] For example, Struch and Schwartz found support for the predictions of belief congruence theory.[15] Belief congruence theory concerns the degree of similarity in beliefs, attitudes, and values perceived to exist between individuals, and states that dissimilarity increases negative orientations toward others. Applied to racial discrimination, belief congruence theory holds that it is the perceived dissimilarity of beliefs, more than race itself, that drives racial discrimination.
References
[1] Aronson, E., Wilson, T. D., & Akert, R. (2010). Social Psychology (7th ed.). Upper Saddle River: Prentice Hall.
[2] Sumner, William (1906).
[3] Whitley, B.E., & Kite, M.E. (2010). The Psychology of Prejudice and Discrimination. Belmont, CA: Wadsworth.
[4] Muzafer Sherif, O. J. Harvey, B. Jack White, William R. Hood, Carolyn W. Sherif (1954/1961). "Intergroup Conflict and Cooperation: The Robbers Cave Experiment".
[5] Forsythe (2009).
[6] Cialdini, R., Borden, R., Thorne, A., Walker, M., Freeman, S., & Sloan, L. (1976). Basking in reflected glory: Three (football) field studies. Journal of Personality and Social Psychology, 34, 366–375.
[7] Crocker, J., Thompson, L., McGraw, K., & Ingerman, C. (1987). Downward comparison, prejudice, and evaluations of others: Effects of self-esteem and threat. Journal of Personality and Social Psychology, 52, 907–916.
[8] Abrams, D., & Hogg, M. (1988). Comments on the motivational status of self-esteem in social identity and intergroup discrimination. European Journal of Social Psychology, 18, 317–344.
[9] Sachdev, I., & Bourhis, R. (1987). Status differentials and intergroup behavior. European Journal of Social Psychology, 17, 277–293.
[10] Rubin, M., & Hewstone, M. (1998). Social identity theory's self-esteem hypothesis: A review and some suggestions for clarification. Personality and Social Psychology Review, 2, 40–62. [View] (http://dx.doi.org/10.1207/s15327957pspr0201_3)
[11] Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The Social Psychology of Intergroup Relations (pp. 33–47). Monterey, CA: Brooks/Cole.
[12] Bourhis, R. Y.; Gagnon, A. (2001). "Social Orientations in the Minimal Group Paradigm". In R. Brown & S. L. Gaertner (Eds.), Blackwell Handbook of Social Psychology: Intergroup Processes 3 (1): 133–152.
[13] Mummendey, A.; Otten, S. (2001). "Aversive Discrimination". In R. Brown & S. L. Gaertner (Eds.), Blackwell Handbook of Social Psychology: Intergroup Processes 3 (1): 112–132.
[14] Turner, J. C.; Reynolds, K. H. (2001). "The Social Identity Perspective in Intergroup Relations: Theories, Themes, and Controversies". In R. Brown & S. L. Gaertner (Eds.), Blackwell Handbook of Social Psychology: Intergroup Processes 3 (1): 133–152.
[15] Struch, Naomi; Shalom Schwartz (1989). "Intergroup aggression: Its predictors and distinctness from in-group bias". Journal of Personality and Social Psychology 56 (3): 364–373.
Naïve cynicism
Naïve cynicism is a cognitive bias that occurs when people expect more egocentric bias in others than is actually the case. The term was proposed by Justin Kruger and Thomas Gilovich.
In one series of experiments, groups including married couples, video game players, darts players, and debaters were asked how often they were responsible for good or bad events relative to a partner. Participants apportioned responsibility to themselves evenly for good and bad events, but expected their partner to claim more responsibility for good events than for bad events (that is, to show more egocentric bias) than the partner actually did.
Naïve cynicism may complement naïve realism and the bias blind spot.
References
Kruger, J., & Gilovich, T. (1999). "Naive cynicism" in everyday theories of responsibility assessment: On biased assumptions of bias. Journal of Personality and Social Psychology, 76, 743–753.
Worse-than-average effect
The worse-than-average effect or below-average effect is the human tendency to underestimate one's achievements and capabilities in relation to others.[1]
It is the opposite of the usually pervasive better-than-average effect (in contexts where the two are compared) or of the overconfidence effect (in other situations). It has been proposed more recently to explain reversals of that effect, where people instead underestimate their own desirable traits.
This effect seems to occur when chances of success are perceived to be extremely rare. Traits that people tend to underestimate include juggling ability, the ability to ride a unicycle, the odds of living past 100, and the odds of finding a U.S. twenty-dollar bill on the ground in the next two weeks.
Some have attempted to explain this cognitive bias in terms of the regression fallacy or of self-handicapping. A 2012 article in Psychological Bulletin suggests that the worse-than-average effect (as well as other cognitive biases) can be explained by a simple information-theoretic generative mechanism that assumes a noisy conversion of objective evidence (observation) into subjective estimates (judgment).[2]
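The noisy-conversion idea can be illustrated with a toy Monte Carlo sketch in Python. This is only a sketch of the general "differential regression" intuition behind such noise-based accounts, not a reimplementation of Hilbert's published model: the score distribution, the noise levels, and the shrinkage weight w below are all assumptions chosen for illustration. The premise is that judges observe their own (poor) performance on a hard task fairly accurately, while their cue about the average peer is noisier and is therefore shrunk more strongly toward the midpoint of the rating scale.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000   # simulated judges
MID = 50      # midpoint of a 0-100 rating scale

# Hard task: everyone's true score clusters well below the midpoint.
true_self = np.clip(rng.normal(30, 10, N), 0, 100)

# Self-estimate: own performance is observed with little noise.
est_self = np.clip(true_self + rng.normal(0, 5, N), 0, 100)

# Estimate of the average peer: the cue is much noisier, and the judge
# shrinks it toward the scale midpoint (a crude stand-in for regression
# toward an uninformative prior; the weight 0.5 is an arbitrary choice
# for this sketch).
w = 0.5
noisy_cue = np.clip(true_self.mean() + rng.normal(0, 15, N), 0, 100)
est_other = w * MID + (1 - w) * noisy_cue

print(f"mean self-estimate:  {est_self.mean():.1f}")   # roughly 30
print(f"mean other-estimate: {est_other.mean():.1f}")  # roughly 40

# Every judge is drawn from the same distribution, yet on average they
# place themselves below the "average peer": a below-average effect
# produced purely by asymmetric noise.

On an easy task (true scores clustering above the midpoint), the same asymmetry reverses the gap and yields a better-than-average pattern, consistent with the observation above that the direction of the bias tracks task difficulty.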
References
[1] Kruger, J. (1999). "Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology 77 (2): 221–232.
[2] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making" (http://psycnet.apa.org/psycinfo/2011-27261-001). Psychological Bulletin 138 (2): 211–237. Free access to the study at martinhilbert.net/HilbertPsychBull.pdf.
Google effect
The Google effect is the tendency to forget information that can be easily found using internet search engines such
as Google, instead of remembering it.
The phenomenon was described and named by Betsy Sparrow (Columbia), Jenny Liu (Wisconsin) and Daniel M. Wegner (Harvard) in July 2011.[1][2]
Having easy access to the Internet, the study showed, makes people less likely to remember certain details they believe will be accessible online. People can still remember, because they will remember what they cannot find online, and they remember how to find what they need on the Internet.[3] Sparrow said this made the Internet a type of transactive memory.[2] One result of this phenomenon is dependence on the Internet; if an online connection is lost, the researchers said, it is similar to losing a friend.
The study included four experiments conducted with students at Columbia and Harvard.[3] In part one, subjects had to answer trivia questions and then name the colors of words, some of which related to searching on the Internet. In part two, the subjects read statements related to the trivia questions and had to remember what they read; they had an easier time with the statements they believed they could find online. In part three, the subjects had to remember the details of the statements based on whether they believed the information could be found somewhere, could be found in a specific place, or could not be found; they remembered most easily the information they believed had been deleted. In the final part, the subjects believed the statements would be stored in folders, and they had an easier time remembering the folder names than the statements themselves. One conclusion: people can remember information if they do not know where to find it, and they can remember how to find what they need if they cannot remember the information itself.[2]
Sparrow said, "We're not thoughtless empty-headed people who don't have memories anymore. But we are becoming particularly adept at remembering where to go find things. And that's kind of amazing."[3]
The theory has one major drawback, pointed out by several sources: there is no reason why the effect would not also have been common before the days of internet-available information, given that books of information and libraries were readily available. The response, of course, is that the internet makes information almost instantaneously available, in a wide variety of locations (even more so with the advent of smartphones), in a way that has never been possible with books, which are not always readily available.
References
[1] Betsy Sparrow, et al. (July 15, 2011). "Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips". Science 333 (6043): 776–778. doi:10.1126/science.1207745.
[2] "Study Finds That Memory Works Differently in the Age of Google" (http://news.columbia.edu/research/2490). Columbia University. 2011-07-14. Retrieved 2011-08-04.
[3] Krieger, Lisa M. (2011-07-16). "Google changing what we remember" (http://www.charlotteobserver.com/2011/07/16/2457835/google-changing-what-we-remember.html). San Jose Mercury News. Retrieved 2011-08-04.
External links
Link to the Science study (http://scim.ag/B-Sparrow)
Link to video of Betsy Sparrow discussing her research (http://news.columbia.edu/research/2490)
CommonsDelinker, Doczilla, Edit650, Editor Emeritus, Fnielsen, Funandtrvl, Geoffjw1978, Gobonobo, HatchMcGatch, Inhumandecency, JorisvS, Kenton Scott, Lockley, Magioladitis,
Mandarax, Mattisse, Mogism, Moonriddengirl, PonyToast, Recury, Rjwilmsi, Selket, Ups46694, Wolfdog, YK Times, 10 anonymous edits
Neglect of probability Source: http://en.wikipedia.org/w/index.php?oldid=525872719 Contributors: Aaron Kauppi, Arno Matthias, Can't sleep, clown will eat me, Danman3459, Exeunt,
Groyolo, Jon.baron, Loma66, Primalmoon, The Anome, 3 anonymous edits
Normalcy bias Source: http://en.wikipedia.org/w/index.php?oldid=526783504 Contributors: 4RugbyRd, Aaron Kauppi, Crystallina, Goodwin-Brent, Gunnanmon, Hibana, J04n,
LaFolleCycliste, Lova Falk, Malcolma, Meclee, Mheenan, Michaeltaft, Niteowlneils, Piotrus, Remedia8, Rjwilmsi, SchreiberBike, Snorre, StvnLunsford, Teratornis, Woohookitty, X736e65616b,
Zstauber, 35 anonymous edits
Observer-expectancy effect Source: http://en.wikipedia.org/w/index.php?oldid=508874056 Contributors: 2over0, Aaron Kauppi, Akeron, Alienlifeformz, Andycjp, AstroHurricane001,
BD2412, Brandon5485, BullRangifer, Chealer, Circeus, Darrenhusted, DavidWBrooks, Dino, Doctor Dodge, Draconiszeta, Ed Poor, Elabro, Emc2, Grutness, Herd of Swine, JH-man, Jokl,
Karada, Kermit2, Koavf, Ksyrie, LMackinnon, Lectonar, Lindsay658, Lova Falk, Mattisse, Maury Markowitz, Mikael Häggström, MrX, Neutrality, Notoldyet, Oligomous, One Salient Oversight,
Paulduffill, Plastikspork, Res.being, Richard001, RichardF, Roberto Almeida, Rul3rOfW1k1p3d1a, Salsb, Steve carlson, Themightyquill, Unreal, WLU, Wikid77, 29 anonymous edits
Omission bias Source: http://en.wikipedia.org/w/index.php?oldid=517058838 Contributors: Aaron Kauppi, Echo927, Harro5, Headwes, Jarble, John Cross, Nutcracker, Rami R, Sergei
Peysakhov, The Anome, 10 anonymous edits
Optimism bias Source: http://en.wikipedia.org/w/index.php?oldid=521546678 Contributors: 4RugbyRd, Aaron Kauppi, Allion, Cheapskate08, Circeus, Cutler, Cybercobra, Ehheh, Elektrik
Shoos, Gsaup, J04n, Jim Sukwutput, JonPoley, Joseph Solis in Australia, Kwhitten, Leibelk, Lova Falk, MartinPoulter, MathewTownsend, Mdd, Mileworth, Nclean, Northamerica1000, Pilgaard,
Pm master, Pnm, ProjectSavior, Psinu, Quisquillian, Redhanker, Renesis, Rjwilmsi, Seglea, Simon Kilpin, Smcg8374, SocialNeuro, Sonderbro, Ssscienccce, Sun Creator, The Thing That Should
Not Be, Thumperward, Tisane, Tom Morris, Trigger hurt, Twinsday, Wcby205, 60 anonymous edits
Ostrich effect Source: http://en.wikipedia.org/w/index.php?oldid=529398643 Contributors: 72Dino, Andycjp, Biscuittin, Cretog8, Cybercobra, Fabrictramp, Father Goose, KConWiki,
MartinPoulter, Mikhailovich, Nem0sum, Onyu2008, Pan Dan, Rackabello, Radagast83, Rjwilmsi, Solomonfromfinland, Speednat, Taak, 7 anonymous edits
Outcome bias Source: http://en.wikipedia.org/w/index.php?oldid=415004403 Contributors: Aaron Kauppi, Absentis, Comrade Graham, Engineeringsimon, Hanxu9, JHunterJ, JubalHarshaw,
Loodog, Merehap, Mrwojo, Pseudomonas, Shnookle72, The Anome, Wykypydya, 3 anonymous edits
Overconfidence effect Source: http://en.wikipedia.org/w/index.php?oldid=529132146 Contributors: A crazy cranium, Aaron Kauppi, Ancheta Wis, Argumzio, Arno Matthias, Ashertg,
Bender235, CIreland, ChrisKnott, Cretog8, CyberSkull, DarkAdonis255, DigitalNinja, Donandrewmoore, Ewlyahoocom, Funguscheese, Geoffrey Pruitt, Giraffedata, Gmukskr, Iaoth,
Isaacdealey, Jacobisq, JorisvS, Karoch, KillerChihuahua, Lamro, LilHelpa, Lova Falk, MartinPoulter, Martinlc, Mattisse, Muboshgu, Penbat, Pilcrow, RJBurkhart, RichardF, Sam Spade,
SheeEttin, Simon1223hk, Smalljim, Theconfidenceman, Timrichardson, Tomeasy, Wikiliki, WissensDürster, 36 anonymous edits
Pareidolia Source: http://en.wikipedia.org/w/index.php?oldid=530675940 Contributors: 2001:470:816F:0:14B7:43DC:739D:CF92, 2004-12-29T22:45Z,
2A01:E35:2EF7:6150:B413:5AF9:660:D975, 7&6=thirteen, 83d40m, A2Z, AV3000, Aaron Kauppi, Adamantios, Agamemnon2, Agota, Aiken drum, Al Lemos, Alan Liefting, Ankimai, Arno
Matthias, Arrivisto, Asenine, AstroHurricane001, Audacity, Augustosfaces, Axeman89, BD2412, Befuddled steve, Beland, Beyondsolipsist, Binksternet, Bluefist, Bostwickenator, Brandon5485,
Cablehorn, Cacycle, Calicocat, Can't sleep, clown will eat me, Celuici, Cgingold, Chinju, Chris Capoccia, Chrisorapello, Chrissmith, Citizen Premier, Cmdrjameson, CobraWiki, Codicorumus,
CoombaDelray, Crystalroseluv, Csernica, Cybercobra, Cynwolfe, DSatz, Daniel J. Leivick, Darrenhusted, David Kernow, DavidWBrooks, Davkal, Deflective, Deglr6328, Dirkbike, Djdole,
Docether, Dorgan, Drbreznjev, DreamGuy, Drugonot, Duomillia, Dureo, Dyanega, Ec5618, Edhubbard, Embe111, Emilylbaker, Emurphy42, Exok, Fabrice Ferrer, Fama Clamosa, Flcelloguy,
Fotaun, Gadz, Gelbukh, Genghisgandhi, Geoff B, Ghelae, Ghostexorcist, GorillaWarfare, Grendelkhan, Groyolo, Gwalla, Halsteadk, HamburgerRadio, Heah, Hellbus, Hormigo,
ILoveGarrysmod, Icy118, Incnis Mrsi, IsmaelCavazos, JATorres6, JFlav, JJ Harrison, JMCC1, JamesMLane, Jchthys, Jclerman, Jeffq, Johann Gambolputty, Johnuniq, Joseph Banks,
JoshHolloway, JuanTres, Justaperfectday, Karada, Kazvorpal, Kencf0618, KennethBarnes, Keraunoscopia, Kieff, Koven.rm, Krsont, Kwamikagami, Kylemcinnes, Light current, Lotje,
MacGyverMagic, Macedonian, Machead, Magnetic hill, Maproom, Mark Renier, Martarius, Martinevans123, Marx01, Master Jay, Maury Markowitz, Mauveipedia, McSly, MementoVivere,
Mendaliv, Metapsych27, Michael Hardy, Miklos legrady, Mimzy1990, Mizunoryu, Mogism, Nagelfar, Nameyxe, Nestify, Nivas28, Nufy8, Omnipaedista, Osiris333, Ost316, Pablo-flores, Paul
Richter, Paulburnett, Paulnasca, Perey, Perfectblue97, Plantigrade, Plasmic Physics, PoccilScript, Portillo, President Rhapsody, Preslethe, Psychonaut3000, Pt, Quiddity, Quinet, Raistlinknight,
Randallmcdabb, Rjwilmsi, Robin Johnson, Roi 1986, Rui Silva, Rursus, Saintlink, Saltywood, Scientific29, Seduisant, Seraphita, Sergeant Cribb, Shawn81, Shenme, Shirahadasha, Shirudo,
Sibanak, Sonjaaa, Speedoflight, Spiral5800, Subfonic, SudoGhost, Swdev, Synthe, THB, Taak, Tecsaz, Teratornis, Tevildo, ThePedanticPrick, Thrissel, Til Eulenspiegel, Timotheus Canens II,
Toddst1, Tommy2010, Treeman1234, Tronno, Twang, Twthmoses, Undomelin, V-Man737, Valugi, Vegaswikian, Velella, Violetmermaid, Viriditas, Vsion, Vuo, Wetman, Wikipedian231,
WikipedianProlific, WurmWoode, Xanzzibar, Xasodfuih, Yomna 1, Zzyzx11, 207 anonymous edits
Pessimism bias Source: http://en.wikipedia.org/w/index.php?oldid=491636057 Contributors: Bearcat, George Ho, Grutness, MartinPoulter, Rjwilmsi, Tisane, Xezbeth, 4 anonymous edits
Planning fallacy Source: http://en.wikipedia.org/w/index.php?oldid=512859155 Contributors: 4RugbyRd, A bit iffy, Aaron Kauppi, Acidjazz1, Allion, Boffob, Cheapskate08, DJ Clayworth,
DavidLevinson, Dontdoit, Ehheh, Elpincha, Engi08, Epeefleche, Eric Hawthorne, Gsaup, Ike9898, Jackvinson, JonDePlume, Karada, Kwhitten, MartinPoulter, MathewTownsend, Mbarbier,
Michael Hardy, Pilgaard, Pm master, Poli08, SchuminWeb, Sonderbro, Taak, The Anome, Trilliumz, TuomoPaqvalin, Van helsing, Widefox, 17 anonymous edits
Post-purchase rationalization Source: http://en.wikipedia.org/w/index.php?oldid=512281749 Contributors: Aeluwas, Aymatth2, Benlisquare, Blazemore, Bluetooth954, CanadianPenguin,
Closedmouth, Corbenine, Denisarona, Dizzious, Excirial, Gfoley4, Googfan, Grayfell, GregorB, Grutness, Havermayer, Hqb, Jim1138, Jtneill, Kbh3rd, Kostmo, LFaraone, MartinPoulter,
Medicineluke, Mejogid, Nixeagle, Odie5533, Oligophagy, PKT, Penbat, Pikamander2, PlasticPackage, Snigbrook, Tbhotch, The Anome, Wikipelli, ZildjianAVC, Zzuuzz, 141 anonymous
edits
Pro-innovation bias Source: http://en.wikipedia.org/w/index.php?oldid=524507238 Contributors: Buddy23Lee, Chowbok, DoctorKubla, Emeraude, Malcolma, Mbamark, 1 anonymous edit
Pseudocertainty effect Source: http://en.wikipedia.org/w/index.php?oldid=488718082 Contributors: Aaron Kauppi, Bluemoose, Charles Matthews, Grutness, Jodyng888, Loudsox,
MathewTownsend, Mattisse, Maurreen, Mostargue, RichardF, Rjwilmsi, Smmurphy, The Anome, 7 anonymous edits
Reactance (psychology) Source: http://en.wikipedia.org/w/index.php?oldid=506340880 Contributors: Abiliocesar, Adambro, Atlantia, B.S. Lawrence, Chemturion, Curlie, Dicklyon, Dinomite,
DrMel, Dravecky, Efiiamagus, Ferrarimangp, Fifelfoo, Fratrep, Fubar Obfusco, GregorB, Gypsydancer, Henrysteinberger, Internoob, Iridescent, IronGargoyle, JamesAM, Jjron, Jmah, Johnor,
JorisvS, KHAAAAAAAAAAN, Mandarax, Mattisse, Pathoschild, Pbugyi, Pegship, PhnomPencil, Piercetheorganist, Poco a poco, Psywikiuser, Recury, Sam Spade, SchreiberBike, Sketchmoose,
Steve carlson, Stifle, That Guy, From That Show!, Topbanana, Versageek, Wikieditor1988, 50 anonymous edits
Reactive devaluation Source: http://en.wikipedia.org/w/index.php?oldid=522654680 Contributors: Magioladitis, Paul Magnussen, Taak, 2 anonymous edits
Serial position effect Source: http://en.wikipedia.org/w/index.php?oldid=524639218 Contributors: 12dstring, Aaron Kauppi, Beland, Bfinn, Brossow, Dhaluza, Discospinster, Doczilla,
Egboring, Elizkatz, Eshana89, François Pichette, Gary King, HereToHelp, Hintswen, IanManka, Jarble, Jason Quinn, Jeff3000, Kevin.strong, Kku, Kpmiyapuram, Lova Falk, MER-C,
Malcolmxl5, Mattisse, Mikemoral, Mindmatrix, Mkahana, Obli, Piledhigheranddeeper, RichardF, Rkj22, SJS1971, Sonicyouth86, Taak, 61 anonymous edits
Recency illusion Source: http://en.wikipedia.org/w/index.php?oldid=528104307 Contributors: Bedivere, Cnilep, Entail, Jmk, Keith Cascio, Kyoakoa, Pol098, Tabledhote, Windharp, 23
anonymous edits
Restraint bias Source: http://en.wikipedia.org/w/index.php?oldid=513536079 Contributors: Ahart1psy, Sun Creator, ThePastIsObdurate, 2 anonymous edits
Rhyme-as-reason effect Source: http://en.wikipedia.org/w/index.php?oldid=506094709 Contributors: Archieboy2, Magioladitis, Taak, 4 anonymous edits
Risk compensation Source: http://en.wikipedia.org/w/index.php?oldid=528120836 Contributors: ADM, Adambro, Alanbraggins, Alex Sims, AmericanEnglish, Aslakken, Bearian, Beefman,
Beland, Boonukem, Brian the Editor, Cutler, Dan100, Daniel J. Leivick, DeFacto, Dennis Bratland, Dhodges, Ephebi, Fmalina, Fratrep, George Ponderevo, Gfbs, GregorB, InTheZone, J36miles,
JHP, John Nevard, Just zis Guy, you know?, JzG, LMB, Mattisse, Mokgand, Nubiatech, Old Moonraker, PPBlais, Parrot of Doom, Peregrine981, PeterEastern, Prumpf, Pruss, RedWolf, Richard
Keatinge, RichardF, Rjwilmsi, Russell Thomas, SabreWolfy, Sf, Sharkford, Snori, SrJoben, Srich32977, Steinsky, TeamZissou, Triku, Varitek, Walraslaws, 45 anonymous edits
Selective perception Source: http://en.wikipedia.org/w/index.php?oldid=526104389 Contributors: Aaron Kauppi, Adresearchpro, Alan Au, Amtiss, Bjoram11@yahoo.co.in, Duke toaster,
Embram, Emrahertr, Gyan Veda, Jleboeuf, Karada, Lithoderm, Mattisse, Myasuda, Nsanshaman, Pmanderson, Rschmertz, Skrshnn, Spidern, TSullivan, Taak, Tiffuny, Ugncreative Usergname,
20 anonymous edits
Semmelweis reflex Source: http://en.wikipedia.org/w/index.php?oldid=499015448 Contributors: Adam78, AguC, Auric, Bdamokos, Bender235, Boffob, DragonflySixtyseven, Florian
Blaschke, LilHelpa, LordIlford, Margaret9mary, Misarxist, Polisher of Cobwebs, Power.corrupts, Seb az86556, Seren-dipper, Tapalmer99, William Avery, Wireless Keyboard, Zellskelington, 7
anonymous edits
Selection bias Source: http://en.wikipedia.org/w/index.php?oldid=515277688 Contributors: Aaron Kauppi, Adrian J. Hunter, Amatulic, Anders Sandberg, AoS1014, Arcadian, Beland, Boffob,
Cureden, Darrenhusted, Dbachmann, Den fjättrade ankan, Diza, Doors22, Dr.K., Drae, Ed Poor, Esben.juel, Farmanesh, Gazpacho, Giftlite, Graeme Bartlett, HaeB, Henrygb, Hlovdal, Hstovring,
Hylas Chung, Impossiblepolis, Insanity Incarnate, Iris lorain, Isomorphic, Jpatokal, Kanodin, Karada, Kiefer.Wolfowitz, Kvng, Lamro, Luciuskwok, Madhero88, Michael Hardy, Mikael
Häggström, Millahnna, MishaPan, MistyMorn, Naadia07, Oddity, PeR, Quantanew, Raimundo Pastor, Reyk, Rjwilmsi, Roadrunner, RodC, RoyBoy, Salih, Securiger, Sigma 7, Sked123,
SlayerBloodySlayer, StAnselm, Tabbbycat, Tabletop, Taxman, Thosjleep, Tobacman, Tom Lougheed, WinstonSmith, Zvika, 41 anonymous edits
Social comparison bias Source: http://en.wikipedia.org/w/index.php?oldid=503122275 Contributors: Fæ, Grutness, John, Lish792, LittleWink, Pjoef, Redrose64, Seren-dipper,
ThePastIsObdurate, Zach014, 2 anonymous edits
Social desirability bias Source: http://en.wikipedia.org/w/index.php?oldid=527123437 Contributors: Ancodia, Anschelsc, Arno Matthias, BDD, Bender235, Biglovinb, Brentt, Chris the speller,
Cjmclark, Dcoetzee, DisplayGeek, Dpaulhus, Ecrooker, Etfp2008, Fallenangei, Htra0497, Joejones2028, Joseph Solis in Australia, LilHelpa, Mattisse, Melcombe, Michaelcarraher, Mycatharsis,
NaBUru38, Nick Number, Nick Wilson, NisJorgensen, Outercell, Polocrunch, Psinu, Scientific29, Sioraf, Someones life, TigerShark, Trevinci, Underpants, Wandatheavenger, Ybbor, 20
anonymous edits
Status quo bias Source: http://en.wikipedia.org/w/index.php?oldid=530608856 Contributors: 806f0F, ASmartKid, Ackatsis, Anders Sandberg, Andrewaskew, Arcandam, Btyner, Byelf2007,
CALR, Chameleon, Cherubino, Colipon, Comrade Graham, Cretog8, Eumolpo, Futurix, Giftlite, Grumpyyoungman01, Grunfe07, Grutness, Gurchzilla, Gwern, JHP, Jeneralist, Justinfr, Karada,
Liza Freeman, Luna Santin, MathewTownsend, Mlichter, Netsumdisc, Psychobabble, R'n'B, Rjwilmsi, Smyth, Taak, TheJJJunk, Ute in DC, Wykypydya, 32 anonymous edits
Stereotype Source: http://en.wikipedia.org/w/index.php?oldid=527535391 Contributors: (, -- April, 159753, 21655, 78.26, A-giau, A3RO, A8UDI, ABF, Aasb, Abce2, Abeg92, Abomasnow,
Abusive Aussie Husband-Battered Southern Wife stereotype, Ace of Spades, Ace ofgabriel, Acroterion, Adam78, Addihockey10, Addshore, Aditya, AdjustShift, Aeusoes1, Ahoerstemeier,
Airpirate545, Ajo Mama, Aka042, Alan Liefting, Alansohn, Alexandre Vassalotti, Alexf, AlexiusHoratius, Alexwany, All Hallow's Wraith, Allens, Allstar86, Alphachimp, Altenmann, Andrew
Levine, AndrewHowse, Andrewaskew, Andy Marchbanks, Andy pyro, Angela, Animum, Anna Lincoln, Anshuk, Anthius, Aranel, Arbor, ArchonMagnus, Arkwatem, Artdemon01, Ashley
Pomeroy, Attys, Auntof6, Auric, Avalean, Avenged Eightfold, Avjoska, Avoided, Avono, Awesomeguy92, B, BD2412, Bazonka, Bbbrown, Bearcat, Beezhive, Beland, Bellerophon5685, Ben
Ward, Ben@liddicott.com, Benc, Bencherlite, Benefros, BennettL, Benson85, Bfoxius, BigHairRef, Bigcitydeserter, Billinghurst, Bit Lordy, Bjsmd, BjörnEF, Blaise Joshua, Blazin213,
Bloodkith, Boaby, Bobbaxter, Bobo192, Boffob, Boing! said Zebedee, Bonadea, Bongwarrior, BoogieRock, Bookgrrl, BorderlineWaxwork, Boris Barowski, Bows&Arrows, Brendan Moody,
Brews ohare, Bsadowski1, Bubblegumwrapper, Bz2, C xong, CJGB, CTF83!, Cab88, Cajade, Calicore, Calineed, Callmarcus, Calmer Waters, Caltas, Cameron Dewe, Capricorn42,
Captain-n00dle, Capybara21, CardinalDan, Carlsotr, Carolinamnz, Cat10001a, Cautioned band, Cdc, Cenarium, Chamal N, Changchih228, CharlesC, Chartran, Chickenfeeders, Chris the speller,
Christopher Connor, Christopher Kraus, Cinco555, Ck lostsword, ClanCC, Classicstruggle, ClaudineChionh, Clegs, Cleopatra*Cate, Cliffy01, Clintville, Clovis Sangrail, ClydeOnline,
Cmptrsvyfm, Cobi, Cocytus, Cogibyte, Cogito-ergo-sum, Computerjoe, Conical Johnson, Cpiral, Cramyourspam, Cremepuff222, Crimzon Sun, Crito2161, Cro fever, Crohall, Crosbiesmith,
Cryptic, Cspalletta, Cst17, Curps, D6, DMacks, DVD R W, DVdm, Dancingwombatsrule, Daniel Quinlan, Darth Panda, Davemarshall04, Dc freethinker, Deb, Deitrib, Delbart27, DeltaQuad,
Dennisthe2, DennyColt, Deon, Der Falke, Der kenner, DerHexer, Deutschgirl, Dgreen34, Dgw, Diddims, Difluoroethene, Dina, Djm256, Dneyder, Dnvrfantj, DoctorW, Donreed, Doris Don't,
Doulos Christos, Dpbsmith, Dpr, DrOliPo, Drbreznjev, Dreaded Walrus, Dreamafter, Drib55, Drilnoth, Dubious Irony, Dylan Lake, Dysepsion, E2eamon, Eb00kie, Edhabib, Edwalton, Edward,
El aprendelenguas, Elagatis, Elias Enoc, Elipongo, Eliteunited, Emeraldcityserendipity, EncephalonSeven, Endy Leo, Ensrifraff, Enviroboy, Epbr123, Equalityactiv, Escape Orbit, Etafly,
Euchiasmus, Euryalus, Everyking, Exert, Extransit, Fagtard123, Falcon8765, Falconleaf, Favonian, Fieldday-sunday, Fisher.G, FlareNUKE, Fluffernutter, Flyer22, FlyingToaster, Flyspeck,
FonsScientiae, Formeruser-81, Frecklefoot, FredR, Freechild, Fritz freiheit, Fritzpoll, Froid, FrostyBytes, Frymaster, Furrykef, Fyyer, Fæ, GB fan, Gaff, Gambiteer, Gatorgirl7563, Gdo01,
Gene.arboit, GeorgeBuchanan, Gilliam, Gjd001, Glane23, Goatasaur, GoingBatty, Gointemm, Goplat, GorillaWarfare, Govus, Graeme Bartlett, Grafen, Greatrobo76, Grim-Gym, Grim23, Gssq,
Gurch, Gurchzilla, Guybrarian, Gw2005, Gökhan, Hadal, Haham hanuka, Hajahmz, Hamera123, HappyCamper, Happyapples19, Hariva, Hauskalainen, Haze120190, Hbackman, Hbent, Hdt83,
Hello71, Hemanshu, Henry W. Schmitt, Heracles31, Hersfold, Herunumen, Highvale, Hignopulp, Hmains, Hoof Hearted, Howth575, Hunt567, I run like a Welshman, IGeMiNix, IPb0mb3r,
IZAK, Ian Moody, Iantnm, IceCreamSammich, IceUnshattered, Ikanreed, Iluvdawgs, Imaperson123, ImperatorExercitus, ImperfectlyInformed, Inferno, Lord of Penguins, Inklein, Insanity
Incarnate, InverseHypercube, Ipharvey09, Iridescent, Iritakamas, IronGargoyle, Irunwithscissors, Ixfd64, J.delanoy, J3ff, JD554, JDoorjam, JForget, JNW, JSpung, Ja 62, Jacek Kendysz, Jagged
85, Jagz, Jambronination, JamesAM, JamieJones, Janejellyroll, Janus Shadowsong, JayFout, Jayinhar, Jcbutler, Jclemens, Jcw69, Jd027, Jeandré du Toit, Jengod, Jeodesic, Jiang, Jim.henderson,
Jim1138, Jimmy da tuna, Jimphilos, JmanofAus, Jmatter1, JoanneB, Joe9320, John of Reading, JohnBlackburne, JohnInDC, Johnny 42, Jokestress, Joost26, Jorobeq, Jovianeye, Jtneill, Jtoomim,
Juffodnreofdniruneo, Juliaguar, Jumping cheese, JustPhil, Justinfr, Justinphd, Kan06e, KaragouniS, Karpouzi, Kaszeta, Katalaveno, Kathleen.sheedy, Katieh5584, Kavehmz, Keitei, Keith D,
Kerotan, Kevinngo1234, Keyblade5, Kg3042, Khazar2, Kiiimiko, Kilo-Lima, King Lopez, King of Hearts, Kingpin13, Kingturtle, Kirkevan11, Kirzmac, Kiteinthewind, Kittykat94,
Kiyokoakiyama, Kjell Knudde, KlappCK, Knight of Truth, KnowledgeOfSelf, KnowlegeFirst, Koavf, Konye obaji ori, Kowkamurka, Krychek, Kräuter-Oliven, Kschutz, Kubigula, Kukini,
Kurt10, Kyoko, Kzzl, L Kensington, L.to.the.P, LERK, Lanztrain, LcawteHuggle, LeeJ55, LeedsKing, Leonardo2505, Levineps, Light current, Lijnema, Likeminas, LilHelpa, Lilleskvat,
Llykstw, Lokionly, Loodog, Lord Lugie, Lordoliver, Loremaster, Lova Falk, Lowtech42, Lph, Lu-igi board, Lucas Duke, Lucidish, Lukefulford, MNAdam, Maberk, Macedonian, MagicBear,
Magiclite, Mahmud Halimi Wardag, Mailer diablo, Makeemlighter, MalakronikMausi, Malone23kid, Mancl20, Manticore, MapsMan, Marek69, Maris stella, Mark Arsten, Martarius,
Martian.knight, MartinPoulter, Master Jay, MatthewVanitas, Mattis, Maximillion Pegasus, Maywoods, Mboverload, Mc95, Meaghan, Memo@sdsu.edu, Menchi, Mendaliv, Mentifisto, Meol,
Meph1986, Mephistoe, MichaK, Michael Hardy, Mihai Capot, Mihalis, Mike Klaassen, Mike2000, Milnivri, Mirokado, Mirv, Mishatx, Miss Madeline, Miss Mama Bear, Miss kat,
MissQCgold2005, Moe Epsilon, Monnicat, Moonriddengirl, Morenoodles, Mr. Billion, Mr. Stradivarius, Mrmuk, Mrvoid, Msikma, Mwelch, My76Strat, Mygerardromance, N. Harmonik, N5iln,
NByz, Nabeth, Navy Blue, NawlinWiki, Nazgul812, Nburden, Neg, NellieBly, Nemesis 961, Neo-Jay, NeonNiteLite, Neuropsychology, Neutrality, Niceguyedc, Nightenbelle, Nightscream,
Ninja-4976, Nkocharh, Nmatavka, Nnp, Noctibus, Noleander, Northamerica1000, NorthernThunder, NorwegianBlue, Notay001, Nowheresville, OGoncho, ONEder Boy, Oda Mari,
Oddball31593, Ohnoitsjamie, Oleg Alexandrov, Oli Filth, Oliver Lineham, Omicronpersei8, Omnipaedista, Onceonthisisland, Onexdata, Optiguy54, Optoi, Oranjeboom31, Oxymoron83,
PDXblazers, PL290, PM800, Packages, Pastinakel, Patrick, Paul A, Paul Magnussen, Pax85, Penbat, Pengyanan, Philip Trueman, Phoenix7777, PhoenixWing, Piano non troppo, Pietru,
Pink!Teen, Piotrus, Planetary, Plushpuffin, Polozooza, PonileExpress, Ponyo, PotentialDanger, Pretzelpaws, Protonk, Pryd3, Psy463 1029, Pundit, PurpleAlex, Qtoktok, QuixoticKate, Qwyrxian,
R-41, R.G., RA0808, RB972, RJHall, Ranjithsutari, Rapturerocks, Ratemonth, Razorflame, ReZips, Reach Out to the Truth, Realismadder, Rebeleleven, Recardojoe, Recognizance, RedWolf,
Rednbluearmy, Reenem, Res2216firestar, ResearchRave, Revolutionary, Rhotard, Rich Farmbrough, Richardspraus, RickDC, Rippa76, Rjwilmsi, Rlquall, Robbie098, RobbieTitwank, Robert K
S, Robertvan1, Ronhjones, Roodaman1, Rotem Dan, Rowmn, Rrburke, Rushbugled13, Rx4evr, Ryan032, SAE1962, SJP, SMC, Saint-Paddy, Salvio giuliano, Sandwichsauce, Sapphire Flame,
Sardanaphalus, SatyrTN, SaveThePoint, Sceptre, SchfiftyThree, Schickel, ScholarK93, Schroeder74, Seb144, Secretlondon, Semperf, Sentenal01, Sephiroth BCR, Seraphcrono, Sfrostee,
Shadowjams, Shannon.jones553, Shanny98pretty, Shedlund, Shifter95, Shirik, Shnitzled, Shovan Luessi, Siebrand, Sifaka, SigPig, SimonP, Sizzlefoshizzle, SkyWalker, Slakr, Slawojarek,
Slysplace, Smallman12q, Smaug123, Snowdog, Snowmanradio, Socialpsychra, Soetermans, Soliloquial, Sonicyouth86, Sophixer, Southafrican41, SpaceFlight89, SpeedyGonsales, Stacin61,
StaticGull, Stefanomione, StillmakerR, StradivariusTV, Stuartewen, Suffusion of Yellow, Suidafrikaan, Suncrafter, SuperHamster, Superking, Sutcher, Sven Manguard, Svetlana Miljkovic,
Sweetfreek, Swimmerz, Syst3mfailur3, T-borg, TFOWR, THB, Taak, Tanaats, Temporaluser, Terracciano, Th1rt3en, Thand, The Anome, The Iconoclast, The MoUsY spell-checker, The
Rambling Man, The Squicks, The Thing That Should Not Be, TheDoober, TheLadyRaven, TheTechieGeek63, Thebanjohype, Theli34, Thingg, Thisis0, ThomasO1989, Thomasmeeks,
TiagoTiago, Tide rolls, TigerBasenji, Titoxd, Tktktk, Tobby72, Tommy2010, Tomsega, Tonsofpcs, Toolboks, Tregoweth, Trusilver, Tstormcandy, Tucker001, Twinsday, U3964057, U4667275,
USN1977, Ubardak, Ughmypussyhurts, Ukexpat, Ulric1313, Ultraexactzz, UnDeRsCoRe, User2004, User92361, Uvmcdi, Vague, Vanished user e99239jf9rf980239ifmlsmlsi4u, Vdegroot,
Vegetator, Vemblut, Verne Equinox, Versus22, Vicarious, Vicenarian, Vicpro, Vinny Burgoo, Violetriga, Vlad2000Plus, Voivod616, Vrenator, Vwu, Vzbs34, WBardwin, WODUP, WTucker,
WadeSimMiser, Wafulz, Wakaw, Wavehunter, Wavelength, Wcp07, Welshleprechaun, Westendgirl, WhatamIdoing, Whomp, WikHead, Wiki13, Wikidenizen, Wikiwatcher1, Will Beback,
WillMak050389, William Avery, Willie44, Wimt, Wiwaxia, Wknight94, Wolfdog, WolfgangFaber, Woohookitty, Writ Keeper, Wtmitchell, Wykypydya, Xerodn, Xiner, YUL89YYZ, YVNP,
Yahel Guhan, Yamamoto Ichiro, Ydong2, Yopie, Yuvn86, Z-d, Zachary8222, Zadcat, Zane RH, Zanibas, Zanimum, Zeboko13, ZeiP, Zeraeph, Zib Blooog, Zigger, Ziggurat, Zimmygirl7,
Zorro-the-coyote, 2182 anonymous edits
Subadditivity effect Source: http://en.wikipedia.org/w/index.php?oldid=517392441 Contributors: Aaron Kauppi, CRGreathouse, Craig Pemberton, GoingBatty, JeffreyN, Jon.baron, Jweiss11,
Pedrobh, The Anome, 2 anonymous edits
Subjective validation Source: http://en.wikipedia.org/w/index.php?oldid=517612569 Contributors: Aaron Kauppi, Argumzio, Big Bird, Fasten, Gregbard, Grutness, Ilikeliljon, Jokestress,
MartinPoulter, Mattisse, Neothunder, Robofish, Sgerbic, Wavelength, 7 anonymous edits
Survivorship bias Source: http://en.wikipedia.org/w/index.php?oldid=528174026 Contributors: Alternator, BunnyandYummy, Charliebruce, Den fjättrade ankan, Destynova, DireColt,
DocendoDiscimus, Ehn, Farmanesh, Foobaz, Gettingtoit, Gracefool, Iridescent, JimHardy, Jonathanstray, Koavf, MartinPoulter, Rahulkamath, Saxifrage, Shawnc, Skarsa72, UnitedStatesian,
Wordsmith, Wragge, Ze miguel, 26 anonymous edits
Texas sharpshooter fallacy Source: http://en.wikipedia.org/w/index.php?oldid=529286319 Contributors: A. di M., AlexWangombe, Amcbride, An Sealgair, Andeggs, Auto469680, Bassington,
BenFrantzDale, BrainMagMo, Bryan Derksen, BryanD, Chardish, Cold Light, Cydmab, DavidWBrooks, Dcljr, Dom Kaos, Duoduoduo, Dysmorodrepanis, Editor2020, Erebos12345, Everyking,
Gdr, George100, Gwern, Hippo43, Hu, JH-man, Jemmy Button, Kvn8907, L33th4x0rguy, Lazarus666, Lo2u, Logan, Lova Falk, Machine Elf 1735, Matt Gies, Mrdice, Mukadderat, Namangwari,
NantucketNoon, NiD.29, Omedalus, Penfield, Planet-man828, Primarscources, Pudge MclameO, Redfell, Rfl, Rjwilmsi, Robert K S, Rumping, ShowToddSomeLove, Silence, Skaaii, Skeptiker,
Slyguy, Stefanomione, StradivariusTV, Taak, Terpsichoreus, The Anome, Thunderbunny, Tktktk, User2004, Xerces8, Yworo, 54 anonymous edits
Time-saving bias Source: http://en.wikipedia.org/w/index.php?oldid=526804394 Contributors: Delusion23, Eyal.peer, Gregbard, Kolbasz, Righteousskills, RudolfRed, SwisterTwister
Well travelled road effect Source: http://en.wikipedia.org/w/index.php?oldid=505913326 Contributors: Aaron Kauppi, Chris the speller, Gregbard, Jeffpc2, SHIMONSHA, Shadowjams, 2
anonymous edits
Zero-risk bias Source: http://en.wikipedia.org/w/index.php?oldid=518648158 Contributors: Aspects, Cesiumfrog, Effie.wang, Evercat, Jeepday, Jon.baron, Mrwojo, Omnipaedista,
Rodneylbrownjr, Schutz, Tlogmer, Wk muriithi, ZildjianAVC, 7 anonymous edits
Actor–observer asymmetry Source: http://en.wikipedia.org/w/index.php?oldid=529824516 Contributors: 1000Faces, Akegarasu, Arno Matthias, BD2412, Bovineone, Chronulator, David0811,
Editor64, Elimegrover, Frédérick Lacasse, Gyrobo, JorisvS, Kanadajinlee, Koavf, MartinPoulter, Mattg82, Mboverload, Northamerica1000, Peace01234, Phorapples, Rlove, Ruakh,
Rul3rOfW1k1p3d1a, Unara, Ynhockey, 14 anonymous edits
Defensive attribution hypothesis Source: http://en.wikipedia.org/w/index.php?oldid=525696550 Contributors: DoctorKubla, Dr Ashton, Funnyfarmofdoom, JorisvS, Rich Farmbrough,
SwisterTwister, Taak, 2 anonymous edits
Dunning–Kruger effect Source: http://en.wikipedia.org/w/index.php?oldid=530362582 Contributors: 19cass20, 2over0, 49oxen, AKAF, Aaron Kauppi, AaronTovo, Abominatorz,
Airplaneman, Al E., Alansohn, Algebraist, Andres, Andycjp, Antandrus, Anthon.Eff, Anthonyhcole, Antonielly, Aprock, Argumzio, Arjuna909, Arthur Rubin, Ashahmur, Aunt Entropy,
AussieScribe, Avish, Awgy, Aymatth2, BaShildy, Badger151, Bearian, Ben Standeven, Billswelden, Blaxthos, Brian A Schmidt, Bryan Derksen, Callidior, Captain obtuse, Carmichael,
Chameleon, Chasingtheflow, Chowbok, Chrylis, Chuunen Baka, Cogiati, Cognita, Colorfulharp233, CommunistPancake, Control.valve, Cowtown850, DANKASHEN, DMacks, DRTllbrg,
DVdm, Dabomb87, DanielCD, Darrell Greenwood, David Gerard, Davididd, Dennis Bratland, Dicklyon, Diego Moya, Dogface, Dusti, ERcheck, ElbridgeGerry, Enigmocracy, Enquire,
Enzodogoslo, Ernestfax, Eshafto, Everdred, Ewlyahoocom, Extremidiz2000, EyeKnows, F5487jin4, Famspear, Fastily, Fifty53, Florian Blaschke, Fæ, Gabriel Kielland, Garkbit, Gary King,
Gavin.collins, Geoffrey.landis, Geometry guy, Glrx, Gracefool, Gstrz, Gzuckier, Hallows AG, Hans Adler, HonoreDB, Hu, IRWolfie-, IjonTichyIjonTichy, Immanos, InBalance,
Informationtheory, Jack Merridew, JamesBWatson, Janeky, Jarbon, Jesusaurous, JoeSmith9751, Jonathanischoice, Jrtayloriv, Jtneill, Julia Rossi, Julian Herzog, Julianonions, Jumbolino,
Jusdafax, Just plain Bill, Kakofonous, Kazrak, KennethSides, Kephir, KillerChihuahua, KimDabelsteinPetersen, Kintetsubuffalo, Kmpolacek, Koavf, Krausertoss, Kuru, L Kensington, LZ6387,
Lawrencekhoo, Leandrod, Leftmostcat, Lesath, LiberalDiggingEffect, LilHelpa, Logos5557, Lova Falk, MZMcBride, Mandarax, MarSch, MartinPoulter, Materialscientist, Mattisse, Mavigogun,
Mbmiller, McGeddon, MelbourneStar, Mephistophelian, Metazeno, Mgiganteus1, Michael C Price, Micmachete, MikeDawg, Mindmatrix, MonoApe, Mooveoveryou, Nial2k7, OisinisiO,
PatrickFisher, Patrickgoold, Penbat, Perkerk, Phil Boswell, Pietrow, Pigsonthewing, Pinethicket, Power.corrupts, Prari, Pyrospirit, Raithlin, Raul654, Reconsider the static, RedHouse18,
RevWaldo, Ritartederchild, Rjwilmsi, Robert1947, Rolo Tamasi, Roman clef, Ronbtni, Ronz, Rtyq2, Salamurai, SamJain1975, Secretza, Shadowjams, Shaggorama, Siawase, SkepticalRaptor,
Soap, Splashburn, Sroc, Suffusion of Yellow, SunshineSet, Superborsuk, Svick, TJPotomac, Tagishsimon, TangLung, Tbhotch, Tempodivalse, Teratornis, The Thing That Should Not Be,
TheFSaviator, Thine Antique Pen, ThreeOfCups, Thumperward, Timp21337, Tom Sauce, Tommy2010, Tritium6, Tulkss, Tyrol5, Unomi, Utcursch, Uucp, V2Blast, Vaughan, Vicki Rosenzweig,
Vrenator, VsevolodKrolikov, Wegesrand, WhatamIdoing, Wikipelli, Willerror, William Avery, William Pietri, Wingman4l7, Winston365, Wyldweasil, XP1, Xanzzibar, Xerographica, Xmacro,
Yakushima, Yamamoto Ichiro, Yayay, Youremyjuliet, ZX81, Zachweiss0491, ZoneSeek, 376 anonymous edits
Egocentric bias Source: http://en.wikipedia.org/w/index.php?oldid=527533557 Contributors: Aaron Kauppi, Archie06, Aventureuse, Bovineone, Diberri, Ellery7, George100, Grutness,
J.delanoy, Jahiegel, JorisvS, Koavf, Ktspectate, MartinPoulter, Melaniegyq, Mikimacmiki, Rebrane, Schwnj, Silkroses123, Sly1993, Taak, The wub, WikHead, Wmahan, 16 anonymous edits
Extrinsic incentives bias Source: http://en.wikipedia.org/w/index.php?oldid=506088658 Contributors: Magioladitis, Taak, 2 anonymous edits
Halo effect Source: http://en.wikipedia.org/w/index.php?oldid=528555846 Contributors: ***Ria777, 16@r, 2004-12-29T22:45Z, Aaron Kauppi, Andycjp, Anna Frodesiak, Anna Lincoln, Arno
Matthias, Ashmoo, Audriusa, Bagatelle, Bedson21, Betacommand, Bryantl05, CZmarlin, Calen11, Carmichael, CatherineMunro, Cavenba, Chameleon, ColdFeet, Cretog8, D o m e,
DanEdmonds, David Hoeffer, Dcflyer, Dddaye, Dorftrottel, Download, DropDeadGorgias, Dryman, E2eamon, Henrygb, Hobartimus, Ibn Battuta, Im Kwando, Inhumandecency,
InverseHypercube, Ishikawa Minoru, Joriki, JoshuaZ, Jrockley, JustAGal, Karada, Kent Wang, Kimhaney3, Kjkolb, Koavf, Kramsti, Kripkenstein, Kukini, L Square, Leosdad, Lionel Allorge,
Liquidblue8388, Lord Spring Onion, Lova Falk, Lue3378, MartinPoulter, MathewTownsend, Matthew.murdoch, Mattis, McSly, Mild Bill Hiccup, Mindloss, Mrpaintedwings, Nesbit, Netkinetic,
Nuujinn, Ocaasi, Oliphaunt, Patriarch, PaulAndrewAnderson, Photobiker, Pinethicket, Prof. Squirrel, Quenjames, RadManCF, Raymondwinn, Recognizance, Recury, Redbull addict, Rfl,
RichardF, Rjwilmsi, Robin S, Rorro, Rosalien, Rossami, SP612, Salavat, Slogby, SlubGlub, Soosim, Spencer, Spencerk, Squiddy, Suidafrikaan, Superking, Susko, SynergyStar, Taak, Tesseran,
Tom.k, TomTheHand, Usernamefortonyd, Vegetator, Viriditas, Waldo333, WereSpielChequers, Whaiaun, WhyBeNormal, Xyzzyplugh, Yaksar, 2009, 207 anonymous edits
Illusion of asymmetric insight Source: http://en.wikipedia.org/w/index.php?oldid=498447704 Contributors: Andrewaskew, Arno Matthias, Cat Cubed, M4gnum0n, MathewTownsend, Pol098,
RDBrown, Rjwilmsi, Sketchmoose, The Anome, Yngvadottir, 3 anonymous edits
Illusion of external agency Source: http://en.wikipedia.org/w/index.php?oldid=481451264 Contributors: Giraffedata, Taak, TucsonDavid
Illusion of transparency Source: http://en.wikipedia.org/w/index.php?oldid=516534076 Contributors: Aaron Kauppi, Cat Cubed, George Ponderevo, GregorB, Int3gr4te, KJamison7,
MTHarden, Mandarax, Mgiganteus1, Pnrj, Reaper Eternal, Robin klein, Sharktopus, The Anome, Yngvadottir, 3 anonymous edits
Illusory superiority Source: http://en.wikipedia.org/w/index.php?oldid=529544667 Contributors: AKAF, Aaron Kauppi, AdrianLozano, Alansohn, Andrewaskew, Andycjp, Antonielly,
Ayvengo21, BGH122, Barticus88, Bayardo San Roman, Bender235, Brooke87, Bruce1ee, Chris the speller, Chronulator, Coldnorth, Crossmr, Crzer07, Cybercobra, Darrell Greenwood,
DreamGuy, Dvd-junkie, Eastlaw, Edward, Elvey, Florian Blaschke, Gioto, Groyolo, Imersion, Jack Merridew, Jake Wartenberg, Jakebarrington, JamesDC, Jeff Silvers, Jim1138, Johnuniq,
Kai-Hendrik, Koavf, Ksyrie, LittleHow, Male1979, MartinPoulter, Michael C Price, Monkats, MoraSique, Northamerica1000, OpenFuture, PatrickFisher, Penbat, Pietrow, Pinethicket, Psych
psych, Rinick, RiverDesPeres, Rjwilmsi, Safety Cap, Slmcguinness, Sue Rangell, Sun Creator, SuzanneIAM, Svick, Teratornis, Timstreet1, Usernameandnonsense, Viniciusmc, WhatamIdoing,
Wjejskenewr, Z8, Zanotam, Zenomax, 74 anonymous edits
In-group favoritism Source: http://en.wikipedia.org/w/index.php?oldid=529771533 Contributors: Aaron Kauppi, Acadēmica Orientālis, Alliefaye13, Antaeus Feldspar, Avb, B528491, Beland,
Ben Ben, Chameleon, ChrisGualtieri, Coreyrudd, CourtsW, Dicklyon, Donkeykong0303, Drmies, Eey, GhostDude, Iaoth, Inkblot svr, Jlchenn, Johnkarp, Karada, Kentma, Khazar2, Koavf,
Manop, MartinPoulter, Michael Snow, Myriddin07, Nyttend, Okcmorgan, Pace212, Penbat, R'n'B, RichardF, Robertsteadman, Roop25, SchreiberBike, Spokewrote, SummerWithMorons, Sun
Creator, Taak, Tanner Swett, Tassedethe, Tectonicura, Thishyperreality, Trnj2000, U3964057, Unara, Vodafone3, ZigZagZoug, 20 anonymous edits
Naïve cynicism Source: http://en.wikipedia.org/w/index.php?oldid=511263453 Contributors: Bgwhite, Darkwind, Magioladitis, Taak
Worse-than-average effect Source: http://en.wikipedia.org/w/index.php?oldid=528073309 Contributors: Aaron Kauppi, CWenger, Fanra, Fenice, Gioto, Graymornings, HerbertHuey,
Johnkarp, Karada, Kephir, MartinPoulter, Mattisse, Mike-stalkfleet, RichardF, Rushbugled13, Schwnj, Taak, 7 anonymous edits
Google effect Source: http://en.wikipedia.org/w/index.php?oldid=520275788 Contributors: BDS2006, Baseball Watcher, Evasivo, Grutness, Hajatvrc, Macrakis, Nohomers48, SchreiberBike,
Ser Amantio di Nicolao, Tim!, Vchimpanzee, 4 anonymous edits
Image Sources, Licenses and Contributors
File:Daniel KAHNEMAN.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Daniel_KAHNEMAN.jpg License: Public Domain Contributors: Ephraim33, InverseHypercube,
Tabularius, Urbourbo
File:Fred Barnard07.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Fred_Barnard07.jpg License: Public Domain Contributors: Fred Barnard (1846-1896)
File:MRI-Philips.JPG Source: http://en.wikipedia.org/w/index.php?title=File:MRI-Philips.JPG License: Creative Commons Attribution 3.0 Contributors: Jan Ainali
File:Handgun collection.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Handgun_collection.JPG License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Joshuashearn
File:Pourbus Francis Bacon.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pourbus_Francis_Bacon.jpg License: Public Domain Contributors: BurgererSF, Hsarrazin, Shakko
Image:Klayman Ha1.svg Source: http://en.wikipedia.org/w/index.php?title=File:Klayman_Ha1.svg License: Public Domain Contributors: MartinPoulter
Image:Klayman Ha2.svg Source: http://en.wikipedia.org/w/index.php?title=File:Klayman_Ha2.svg License: Public Domain Contributors: MartinPoulter
Image:Klayman ha3 annotations.svg Source: http://en.wikipedia.org/w/index.php?title=File:Klayman_ha3_annotations.svg License: Creative Commons Attribution 3.0 Contributors:
MartinPoulter
Image:Witness impeachment.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Witness_impeachment.jpg License: Creative Commons Attribution 2.0 Contributors: Eric Chan from
Palo Alto, United States
Image:Simultaneous Contrast.svg Source: http://en.wikipedia.org/w/index.php?title=File:Simultaneous_Contrast.svg License: Public Domain Contributors: This hand-written SVG version by Qef. Original bitmap version by English Wikipedia user Xanzzibar. Based on a similar bitmap image by K. P. Miyapuram.
Image:Successive contrast.svg Source: http://en.wikipedia.org/w/index.php?title=File:Successive_contrast.svg License: Public Domain Contributors: This hand-written SVG version by Qef. Original bitmap version by K. P. Miyapuram (public domain).
Image:Contrast.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Contrast.jpg License: Public Domain Contributors: Nuvitauy07 (talk)
File:ValunFunProspectTheory2.png Source: http://en.wikipedia.org/w/index.php?title=File:ValunFunProspectTheory2.png License: Creative Commons Attribution-ShareAlike 3.0 Unported
Contributors: ValunFunProspectTheory.png: Valuefun.jpg: Rieger at en.wikipedia; derivative work: JohnKiat (talk)
File:Simple-indifference-curves-2.png Source: http://en.wikipedia.org/w/index.php?title=File:Simple-indifference-curves-2.png License: Creative Commons Attribution 2.5 Contributors:
Simple-indifference-curves.svg: Original uploader was SilverStar at en.wikipedia; derivative work: JohnKiat (talk)
File:Toronto Maple Leafs bild.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Toronto_Maple_Leafs_bild.JPG License: Creative Commons Attribution-ShareAlike 3.0 Unported
Contributors: Egon Eagle
File:Little Rock integration protest.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Little_Rock_integration_protest.jpg License: Public Domain Contributors: John T. Bledsoe
Image:genimage.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Genimage.jpg License: Public Domain Contributors: Karl Duncker
File:Lawoflargenumbersanimation2.gif Source: http://en.wikipedia.org/w/index.php?title=File:Lawoflargenumbersanimation2.gif License: Creative Commons Zero Contributors:
User:Sbyrnes321
File:Hyperbolic vs. exponential discount factors.svg Source: http://en.wikipedia.org/w/index.php?title=File:Hyperbolic_vs._exponential_discount_factors.svg License: Creative Commons
Attribution-Sharealike 3.0 Contributors: Moxfyre
Image:Martian face viking cropped.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Martian_face_viking_cropped.jpg License: Public Domain Contributors: Viking 1, NASA
File:Fakeface.svg Source: http://en.wikipedia.org/w/index.php?title=File:Fakeface.svg License: Public Domain Contributors: Germo
File:Galle crater.gif Source: http://en.wikipedia.org/w/index.php?title=File:Galle_crater.gif License: Public Domain Contributors: Foroa, Lotse, Ruslik0, Waldir, WinstonSmith
Image:Pedra da Gavea proche.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pedra_da_Gavea_proche.jpg License: Public Domain Contributors: Anaximander, Carlos Luis M C
da Cruz, CarolSpears, Chronus, Dantadd, Parigot, Zephynelsson Von, 4 anonymous edits
Image:Garuda_Pareidolia.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Garuda_Pareidolia.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors:
Kavyagandhi21
Image:Apache head in rocks, Ebihens, France.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Apache_head_in_rocks,_Ebihens,_France.jpg License: Public Domain Contributors:
Erwan Mirabeau
Image:Tirupati2.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Tirupati2.jpg License: Creative Commons Zero Contributors: Nivas28
Image:Paridolie Cians.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Paridolie_Cians.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Agota
Image:Gardienne.Daluis.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Gardienne.Daluis.jpg License: Creative Commons Attribution-Sharealike 3.0 Contributors: User:Agota
File:Ivy on tree in Burn anne Woodland.JPG Source: http://en.wikipedia.org/w/index.php?title=File:Ivy_on_tree_in_Burn_anne_Woodland.JPG License: Public Domain Contributors: Roger
Griffith
File:Bucegi Sphinx - Romania - August 2007.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bucegi_Sphinx_-_Romania_-_August_2007.jpg License: Creative Commons
Attribution 2.0 Contributors: Cristian Bortes from Cluj-Napoca, Romania
Image:Pareidolia 3.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pareidolia_3.jpg License: Creative Commons Attribution-Sharealike 2.0 Contributors: Bukk, Geofrog,
Imbrettjackson, JMCC1, Jat, Markus3, Nikola Smolenski, Wst, 1 anonymous edit
Image:Pareidolia false wood.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pareidolia_false_wood.jpg License: Public Domain Contributors: Paulnasca, 1 anonymous edit
Image:Pareidolia.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Pareidolia.jpg License: Creative Commons Attribution 3.0 Contributors: Thom Quine
Image:box-pareidolia-2011-01-30.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Box-pareidolia-2011-01-30.jpg License: Creative Commons Attribution-Sharealike 3.0
Contributors: Bostwickenator
File:107-2-D1 - Danish electrical plugs - Studio 2011.jpg Source: http://en.wikipedia.org/w/index.php?title=File:107-2-D1_-_Danish_electrical_plugs_-_Studio_2011.jpg License: Creative
Commons Attribution-Sharealike 3.0 Contributors: User:Atomicbre
Image:Serial position.png Source: http://en.wikipedia.org/w/index.php?title=File:Serial_position.png License: GNU Free Documentation License Contributors: Obli (talk) (Uploads)
Image:Dingyjump.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Dingyjump.jpg License: Public Domain Contributors: Alext606, 1 anonymous edit
File:Cops in a Donut Shop 2011 Shankbone.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Cops_in_a_Donut_Shop_2011_Shankbone.jpg License: Creative Commons Attribution
3.0 Contributors: David Shankbone
File:Mixed stereotype content model (Fiske et al.).png Source: http://en.wikipedia.org/w/index.php?title=File:Mixed_stereotype_content_model_(Fiske_et_al.).png License: Creative
Commons Attribution-Sharealike 3.0 Contributors: User:Sonicyouth86
File:Bettie Page driving.jpg Source: http://en.wikipedia.org/w/index.php?title=File:Bettie_Page_driving.jpg License: Public Domain Contributors: Handcuffed
File:Stereotype threat - osborne 2007.png Source: http://en.wikipedia.org/w/index.php?title=File:Stereotype_threat_-_osborne_2007.png License: Creative Commons Attribution-Sharealike
3.0 Contributors: Sonicyouth86
File:TheUsualIrishWayofDoingThings.jpg Source: http://en.wikipedia.org/w/index.php?title=File:TheUsualIrishWayofDoingThings.jpg License: Public Domain Contributors: Chechof,
Editor at Large, GeorgHH, Infrogmation, Innotata, InverseHypercube, Mdd, Sreejithk2000, Timeshifter, 1 anonymous edit
License
Creative Commons Attribution-Share Alike 3.0 Unported
//creativecommons.org/licenses/by-sa/3.0/