Psycasm is an exploration of the psychological world: everyday phenomena explained and manipulated to one's own advantage. Written by a slightly overambitious undergrad, Psycasm aims to explore a whole range of social and cognitive processes in order to best understand how our minds, and the mechanisms that drive them, work.
My posts are presented as opinion and commentary and do not represent the views of LabSpaces Productions, LLC, my employer, or my educational institution.
Normally I avoid writing about things I learn in class. I try to use this blog, and the associated podcast, to research topics outside of the boundaries of my normal schooling.
This topic struck me, however.
There's a phenomenon called the False Consensus Effect (FCE) which basically states that we, as individuals, view our own preferences, behaviours and judgements as typical, normal and common within a broader context; it also suggests we judge alternative characteristics to be more deviant and atypical than they actually are.
I asked my tutor, 'Is this a kind of logical fallacy?', being new to the topic and a little surprised I'd never heard of it before...
He responds, 'No, not really. It's basically just a cognitive error. Once you know about it, you really won't ever feel confident in offering an opinion again'. Or something to that effect.
And he's right.
As a self-identified Skeptic, a member of the campus Skeptic's group, and a consumer of skeptical media (SGU, Skeptically Speaking, Skeptoid, etc.) I was shocked that this effect is not as well known as Confirmation Bias, Pareidolia, and Hypnagogia (to name only a few). The phenomena just listed are well known to psychologists, and are equally well known to active skeptics. Confirmation Bias refers to one's tendency to seek out confirming evidence for an argument; Pareidolia is the tendency for humans to perceive patterns in randomness; and Hypnagogia is a lucid dream-like state which explains a great many alien abductions and other hallucinations.
The face of the Devil in the smoke of the burning Twin Towers. Perceived patterns in the chaotic randomness of smoke...
The False Consensus Effect, in my opinion, rightly deserves to be acknowledged as a skeptical and psychological tool as well as any of these. Last week I invited you to take a survey that assessed your preferences towards a number of things, including your favourite colour, preferred type of snack, your poison of choice, and whether or not you had a moderate aversion/phobia.
You may be thinking, at this point, why even do the survey? Why not just report on what we know from the literature?
Well, first off, it's fun. Yup, I said it. Getting data is fun.
Second, I reckon that just explaining the FCE would likely lead to people going 'Huh, interesting. This certainly doesn't happen to me though, it's definitely someone else's problem'. The survey, then, was a chance for me to demonstrate that this happens to you, and to everyone else, too.
Here, again, are the relative rates of preferences for the various measures.
[Values represent %'s; n=100]
The trick was in the follow up question:
What proportion of the population also [reflects your opinion]?
I collected ~150 results on the survey, of which SurveyMonkey released 100 to me. Interestingly, a large proportion of respondents were from the Skeptic group I belong to, some more were from friends of friends, a big chunk was from twitter (both from my account and the LabSpaces accounts) and some were direct from the blog. I assume that a great many of the people who took this were tertiary educated, considering who follows me on twitter, who my friends are, and who bothers reading science-blogs.
So I manually entered 10 data points for each of the 100 individuals; I calculated the actual popularity of each option (i.e. the actual popularity of red, or of savoury snacks, or of beer) and compared that to the popularity estimated by each individual for their specific preference. That is, if you said blue was your favourite colour and estimated that it was also the favourite colour of 45% of the population, I was able to compare your estimation with the actual popularity of blue (according to the results reported in my survey; Blue is popular among 31% of the population, btw).
I then calculated a delta value, the difference between one's estimate and the actual popularity of a given target, by subtracting the actual value from the estimate. Thus, a positive delta corresponds to an overestimate, and a negative delta corresponds to an underestimate.
Finally, I ran some t-tests comparing the mean of the group's estimations with the mean of the actual values they corresponded to. If people were accurate in their estimations, one would expect to find no significant difference between estimates and actual values.
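The delta-and-t-test procedure described above can be sketched in a few lines. The numbers below are invented for illustration (the real data are the survey responses); comparing each person's estimate against the actual popularity of their chosen option with a paired t-test is equivalent to testing the deltas against zero:

```python
# A minimal sketch of the delta calculation and t-test described in the post.
# The estimates and actuals here are hypothetical placeholder values.
import numpy as np
from scipy import stats

# Each respondent's estimated popularity of their own choice (%)
estimates = np.array([45.0, 60.0, 30.0, 50.0, 70.0, 40.0, 55.0, 65.0])
# The actual popularity of each respondent's chosen option (%)
actuals = np.array([31.0, 32.0, 31.0, 12.0, 31.0, 32.0, 12.0, 31.0])

# Delta = estimate - actual, so a positive delta is an overestimate
deltas = estimates - actuals

# Paired t-test of estimates vs. actuals (equivalently, a one-sample
# t-test of the deltas against zero)
t_stat, p_value = stats.ttest_rel(estimates, actuals)
print(f"mean delta = {deltas.mean():.3f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

With real data you would substitute the 96 retained respondents' estimates and the actual popularity of whichever option each of them chose.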
For the record, I scratched 4 participants from the set. These people either did not record estimates (1), recorded all estimates at floor (5%) (2) or ceiling (100%) (0), or recorded that 100% of people their age answered exactly the same as they did (1). These criteria were determined before viewing the data.
And so, here are the results. I have also included a break-down by each specific choice. I think this is important in persuading you that the FCE is a real concern. Why? If everyone picked their own preference honestly but also reported, say, 40%, then some people would necessarily be closer to being right than others. So, if those who like blue (the majority) estimated that 40% also like blue, they wouldn't be very wrong (40% vs. 32%). But if everyone else also picked 40%, then those who selected grey would be very wrong (3% vs. 40%). If this were the case it might look like the greys and the browns skewed the results, while the majority of respondents (blues and greens) would actually be mostly right. This actually kind of happens, but the numbers are too small to compare meaningfully (in some instances only 3 people picked a certain response). That's why we need to look at the mean delta value, and not the smaller subgroups. Though some of those individual differences are certainly not significant, it's clear that the trend is always to overestimate.
The mean delta value for estimated colour preference and population agreement was +11.563, with a standard deviation of 20.86. A t-test revealed this difference was significant.
For the record, 29 individuals recorded a negative delta value (i.e. underestimated).
The mean delta value for estimated snack preference and population agreement was +18.094, with a standard deviation of 23.02. A t-test revealed this difference was significant.
Only 20 individuals recorded a negative delta.
Aversions / Phobias
The mean delta value for estimated aversions and population congruence was +20.969, with a standard deviation of 27.27. A t-test revealed this difference was significant.
26 individuals recorded a negative delta.
Drink of Choice
The mean delta value for estimated drink preference and population agreement was +20.083, with a standard deviation of 19.85. A t-test revealed this difference was significant.
A paltry 14 individuals recorded a negative delta.
Representativeness of the Age-Band
I asked a final question 'Of people in your age range (see Question 1) what proportion of people answered exactly the same as you across all measures?'.
The vast majority of people answered 5%. This represented a huge floor effect. I suspect that if a 0% option was offered, almost every 5% response would actually be a 0% response. Not everyone recorded at floor though.
I converted every 5% response to 0%, and left all other values untouched. This makes the following statement more conservative than it would otherwise be.
The mean percentage estimate of individuals who think that people the same age as them match them on all measured preferences is... drum roll... 10.833% (SD = 17.36)! Not hugely informative on the individual level, but quite surprising nonetheless. Intuition, however, tells me to ignore this completely. Some people reported a very high value for this question (in the 80-90 range), which makes it hard to believe they answered honestly, or understood the question as I intended. ...but perhaps this is just the FCE taking hold of me, who knows?
When my tutor said 'you'll never feel confident in making broad claims, ever again', he meant it. We all overestimate our representativeness to the group. The next time someone claims that 'everybody knows...' or 'everybody is doing it...' you can call bullshit with absolute confidence. Even smaller claims like 'I don't think that many people actually believe in Ghosts', or 'If I'm struggling in this class, I bet a few other people are too', are now entirely suspect.
The number of people who recorded a negative delta was less than a third of respondents in each case, and close to an eighth in one particular instance. That's low. Just eyeballing the data, there doesn't seem to be any consistency between individuals underestimating in one domain and underestimating in others too, though I haven't run the stats, so I may be wrong. My point is: you, I, and everyone else do this. Try not to be tempted by the thought that you somehow have special knowledge about a given thing if it relates to a large group. Shockingly, and as counter-intuitive as it feels, we don't.
As I wrote earlier in this post, I suspect a large proportion of those who took this survey were quite educated and quite skeptical. They read science blogs, they belong to skeptics' groups, and many of my friends (most of whom are tertiary educated) reported to me that they'd done the survey. Friends, readers, and skeptics - take note! You, me and everyone...
I can't help but wonder how successful actively downgrading our initial estimates would be as a counter to this particular effect. It just leads to more questions, like 'by how much do I downgrade?' and 'what is the possible base-rate of this x'.
The upside, in my opinion, is that by hesitating to act on, or articulate, these intuitions, we can spend more time listening to those of others, and evaluating their arguments accordingly. So the next time you hear a politician claim that they know the minds and opinions of those in their electorate; claim they are privy to the will of the people on a national scale; or just have someone try to convince you of the popularity of a restaurant... think again.
*What other kinds of liquor are there??
reposted from facebook...
Aah... yeah, the t-test tests something slightly different from your hypothesis. Quick example: say with two options 95 people guessed 50%, the other 5 people guessed 5%, and the true rate was 40%. The t-test could tell you that the population systematically UNDER-estimates - whereas you can obviously tell that's not what you're saying. (The problem is that t-tests measure the magnitude of your deltas - you're really only interested in over- vs. under-estimation.) However, the Binomial test on the same data shows over-estimation with p<2.2e-16 (which is what your hypothesis is testing)... it's HEAPS more powerful than the (incorrect) t-test. Just set alpha=0.05/4 (Bonferroni correction), run a Binomial test for each question, and you're set. More info on wiki: http://en.wikipedia.org/wiki/Binomial_test
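For what it's worth, the commenter's suggestion can be sketched with SciPy. The counts below are the numbers of under-estimators reported in the post (29, 20, 26 and 14); treating all 96 retained respondents as either over- or under-estimators on each question, with no ties, is an assumption of this sketch:

```python
# A sketch of the suggested sign/binomial test: under the null hypothesis,
# each respondent is equally likely to over- or under-estimate (p = 0.5).
from scipy.stats import binomtest

n = 96                      # respondents after exclusions
under = {"colour": 29, "snack": 20, "aversion": 26, "drink": 14}
alpha = 0.05 / 4            # Bonferroni correction across the four questions

for question, k in under.items():
    # alternative="less" asks: is the count of under-estimators
    # significantly lower than the 50/50 split would predict?
    result = binomtest(k, n, p=0.5, alternative="less")
    print(f"{question}: {n - k} over / {k} under, "
          f"p = {result.pvalue:.2e}, significant = {result.pvalue < alpha}")
```

Unlike the t-test, this ignores the magnitude of each delta and looks only at its sign, which is exactly the over- vs. under-estimation question the post is asking.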
Noted, and will rerun in the next few days.
The results are what I expected. I agree that the false consensus effect may be an issue, but as I mentioned in a previous comment, there is also the fact that people don't usually take into account the conditional probabilities.
The likelihood that two different people give similar answers to the questions is very low, unless the percentages of similarity are very, very high. Most people don't think about this, therefore giving you an overestimated percentage.
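To illustrate this point with made-up numbers: even if the chance of two people agreeing on any single question is fairly high, the chance of agreeing on every question (assuming the questions are independent) is the product of the per-question probabilities, which shrinks quickly:

```python
# Hypothetical probabilities that two randomly chosen people agree
# on each of four independent questions.
per_question_match = [0.6, 0.5, 0.7, 0.4]

# The chance they agree on ALL questions is the product of the terms
p_all_match = 1.0
for p in per_question_match:
    p_all_match *= p

print(f"P(match on every question) = {p_all_match:.3f}")  # 0.084
```

So even with per-question agreement of 40-70%, agreement across the board is under 10% here, which is why the "answered exactly the same as you" estimates should be so low.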
I'm sure this has already been considered by the people who write about the False Consensus Effect, but, rather than/as well as it being the "cognitive illusion" of only being able to imagine other people liking the same stuff you do, it could also be the result of a fairly accurate estimation process applied to the biased sample of people-you-happen-to-know.
As a friend of mine said, "I keep wondering why conservative politicians keep getting voted for, but that's because almost everyone I know is 20 - 30 and tertiary educated."
Great concept, now do one on Global Warming.