By Lee Jussim
THE LONGSTANDING AND LOGICALLY INCOHERENT EMPHASIS ON STEREOTYPE INACCURACY
Psychological perspectives once defined stereotypes as inaccurate, casting them as rigid (Lippmann, 1922/1991), rationalizations of prejudice (Jost & Banaji, 1994; La Piere, 1936), out of touch with reality (Bargh & Chartrand, 1999), and exaggerations based on small “kernels of truth” (Allport, 1954/1979; Table 1). These common definitions are untenable. Almost any belief about almost any group has been considered a “stereotype” in empirical studies. It is, however, logically impossible for all group beliefs to be inaccurate. This would make it “inaccurate” to believe that two groups differ or that they do not differ.
Alternatively, perhaps stereotypes are only the inaccurate group beliefs, and therefore accurate beliefs are not stereotypes. If this were true, one would first have to establish empirically that a given belief is inaccurate; otherwise, it would not be a stereotype. The rarity of such demonstrations would mean that there are few known stereotypes. Increasing recognition of these logical problems has led many modern reviews to abandon “inaccuracy” as a core definitional component of stereotypes (see Jussim et al, 2016 for a review).
Nonetheless, an emphasis on inaccuracy remains, and it is broadly inconsistent with empirical research. My book, Social Perception and Social Reality: Why Accuracy Dominates Bias and Self-Fulfilling Prophecy (hereafter SPSR), reviewed 80 years of social psychological scholarship and showed that the emphasis on inaccuracy was widespread. Some social psychologists have argued that the “kernel of truth” notion means social psychology has long recognized stereotype accuracy, but I do not buy it. It creates the impression that, on an almost entirely rotten cob, there is a single decent kernel, the “kernel of truth.” And if you doubt that is what this means, consider a turnabout test (Duarte et al, 2015): How would you feel if someone described social psychology as having a “kernel of truth”? Would that be high praise?
THE EMPIRICAL EVIDENCE
This blog is not the place to review the overwhelming evidence of stereotype accuracy, though interested readers are directed to SPSR and our updated reviews that have appeared in Current Directions in Psychological Science (Jussim et al, 2015) and Todd Nelson’s Handbook of Stereotypes, Prejudice and Discrimination (Jussim et al, 2016).
Summarizing those reviews:
Over 50 studies have now been performed assessing the accuracy of demographic, national, political, and other stereotypes.
Stereotype accuracy is one of the largest and most replicable effects in all of social psychology. Richard et al (2003) found that fewer than 5% of all effects in social psychology exceeded r’s of .50. In contrast, nearly all consensual stereotype accuracy correlations and about half of all personal stereotype accuracy correlations exceed .50.
The evidence from both experimental and naturalistic studies indicates that people apply their stereotypes approximately rationally when judging others. When individuating information is absent or ambiguous, stereotypes often influence person perception. When individuating information is clear and relevant, its effects are “massive” (Kunda & Thagard, 1996, yes, that is a direct quote, p. 292), and stereotype effects tend to be weak or nonexistent. This puts the lie to longstanding claims that “stereotypes lead people to ignore individual differences.”
Only a handful of studies have examined whether relying on stereotypes when judging individuals increases or reduces the accuracy of person perception. Although those studies typically show that doing so increases accuracy, there are too few to support any general conclusion. Nonetheless, that body of research provides no support whatsoever for the common presumption that the ways and conditions under which people rely on stereotypes routinely reduce person perception accuracy.
BIAN AND CIMPIAN’S “GENERIC” CRITIQUE
Bian and Cimpian step into this now large literature and simply declare it to be wrong. They do not review the evidence. They do not suggest the evidence is flawed or misinterpreted. Bian and Cimpian simply ignore the data. That sounds like a strong charge, but, if you think it is too strong, I ask that you re-read their critique. The easiest way to maintain any cherished belief is to ignore contrary data – something that is distressingly common, not only in social psychology (Jussim et al, in press), but in medicine (Ioannidis, 2005), astronomy (Loeb, 2014), environmental engineering (Kolowich, 2016), and across the social sciences (Pinker, 2002).
How, then, do Bian and Cimpian aspire to reach any conclusion about stereotype accuracy without grappling with the data? Their critique rests primarily on declaring (without empirical evidence) that most stereotypes are “generic” beliefs, which renders them inherently inaccurate, so no empirical evidence of stereotype inaccuracy is even necessary. This is the first failure of this critique. They report no data assessing the prevalence of stereotypes as generic beliefs. An empirical question (“what proportion of people’s stereotypes are generic beliefs?”) can never be resolved by declaration.
That failing alone is sufficient to render their analysis irrelevant to understanding the state of the evidence regarding stereotype accuracy. However, their critique also fails on other grounds, which are instructive to consider because they are symptomatic of a common error among social psychologists: the processistic fallacy, which was addressed in SPSR. Thus, my response to these critiques begins by quoting that text (p. 394):
To address accuracy, research must somehow assess how well people’s stereotypes (or the perceptions of individuals) correspond with reality. The evidence that social psychologists typically review when emphasizing stereotype inaccuracy does not do this. Instead, that evidence typically demonstrates some sort of cognitive process, which is then presumed – without testing for accuracy – to lead to inaccuracy…
Social psychologists have many “basic phenomena” that are presumed (without evidence) to cause inaccuracy: categorization (which supposedly exaggerates real differences between groups), ingroup biases, illusory correlations, automatic activation of stereotypes, the ultimate attribution error, and many more. None, however, has ever been linked to the actual (in)accuracy of lay people’s stereotypes. Mistaking processes speculatively claimed to cause stereotype inaccuracy for evidence of actual stereotype inaccuracy is the prototypical example of the processistic fallacy.
Their prototypical cases of supposedly inherently erroneous generic beliefs are those such as “mosquitos carry the West Nile virus” and “ducks lay eggs” (Leslie, Khemlani, & Glucksberg, 2011). They cite evidence that people judge such statements to be true. They argue that this renders people inaccurate because few mosquitos carry West Nile virus and not all ducks lay eggs. Ipso facto, according to their argument, stereotypes that are generic beliefs also cannot be accurate.
Even if people’s beliefs about ducks’ egg laying were generic and wrong, we would still have no direct information about the accuracy of their beliefs about other people. So, how does this translate to stereotypes? Bian and Cimpian cite another paper by Leslie (in press) in support of the claim that “more people hold the generic belief that Muslims are terrorists than hold the generic belief that Muslims are female.” What was Leslie’s (in press) “evidence”? Quotes from headline-seeking politicians and a rise in hate crimes post-9/11. In short, there is no evidence whatsoever here that bears on the claim that “more people believe Muslims are terrorists than Muslims are female.” Even if the claim were valid, how it would bear on stereotype accuracy is entirely unclear, because that would depend not on researcher assumptions about what people mean when they agree with statements like “Muslims are terrorists,” but on evidence assessing what people actually mean. The stereotyping literature is so riddled with invalid researcher presumptions about lay people’s beliefs that, absent hard empirical evidence about what people actually believe, researcher assumptions do not warrant credibility.
If, as seems to be widely assumed in discussions such as Bian and Cimpian’s, agreeing that “Muslims are terrorists” means “all Muslims are terrorists,” then such stereotypes are clearly inaccurate (indeed, SPSR specifically points out that all or nearly all absolute stereotypes of the form ALL of THEM are X are inherently inaccurate, because human variability is typically sufficient to invalidate almost any such absolutist claim). The problem, however, is the presumption that agreeing that “Muslims are terrorists” is equivalent to believing that “all Muslims are terrorists.” Maybe it is, but that cannot be empirically established just because researchers say so. I suspect many would agree that “Alaska is cold” (indeed, I would myself), but doing so does not entail the assumption that every day in every location in Alaska is always frigidly cold. Juneau routinely hits the 70-degree mark, which I do not consider particularly cold, yet I would still agree that “Alaska is cold.” Whether any particular generic belief is, in fact, absolutist requires evidence. In the absence of such evidence, researchers are welcome to present their predictions as speculations about stereotypes’ supposedly absolute or inaccurate content, but they should not present their own presumptions as facts.
Bian and Cimpian acknowledge that statistical beliefs are far more capable of being accurate, but then go on to claim that most stereotypes are not statistical beliefs, or, at least, generically based stereotypes are more potent influences on social perceptions. They present no assessment, however, of the relative frequencies with which people’s beliefs about groups are generic versus statistical. Again, there is an assumption without evidence.
But let’s consider the implications of their claim that most people’s stereotypes include little or no statistical understanding of the distributions of characteristics among groups. On this view, laypeople would have little idea about racial/ethnic differences in high school or college graduation rates or about the nonverbal skill differences between men and women, and would be clueless about differences in the policy positions held by Democrats and Republicans. That leads to a very simple prediction: people’s judgments of these distributions would be almost entirely unrelated to the actual distributions; correlations of stereotypes with criteria would be near zero, and discrepancy scores would be high. One cannot have it both ways. If people are statistically clueless, then their beliefs should be unrelated to the statistical distributions of characteristics among groups. If people’s beliefs do show strong relations to statistical realities, then people are not statistically clueless.
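To make the prediction concrete, here is a minimal sketch of a discrepancy score: the average absolute gap between a perceiver’s estimates and the real group statistics. All the numbers below are invented for illustration, not taken from any study; a statistically clueless perceiver should show a large gap, an informed one a small gap.

```python
# Hypothetical illustration of discrepancy scores; all numbers are invented.

def mean_absolute_discrepancy(estimates, criterion):
    """Average absolute gap between estimated and actual group values."""
    return sum(abs(e - c) for e, c in zip(estimates, criterion)) / len(criterion)

real_rates = [0.62, 0.48, 0.55, 0.70, 0.40]   # actual (made-up) graduation rates
clueless   = [0.50, 0.50, 0.50, 0.50, 0.50]   # no statistical knowledge: flat guesses
informed   = [0.60, 0.50, 0.55, 0.68, 0.42]   # estimates that track reality

print(mean_absolute_discrepancy(clueless, real_rates))  # sizable gap (~0.10)
print(mean_absolute_discrepancy(informed, real_rates))  # small gap (~0.02)
```

A near-zero discrepancy (and, equivalently, a high stereotype-criterion correlation) is exactly what the “statistically clueless” account predicts should never happen.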
We already know that the predictions generated from the claim that “most stereotypes are generic and people are therefore statistically clueless” are disconfirmed by the data summarized in SPSR and in Jussim et al (2015, 2016). Bian and Cimpian have developed compelling descriptions of processes that they believe should lead people to be inaccurate. In point of empirical fact, however, people have mostly been found to be fairly accurate. Disconfirmation of such predictions can occur for any of several reasons:
The processes identified as “causing” inaccuracy do not occur with the frequency that those offering them assume (maybe most stereotypes are not generic).
The processes are quite common and do cause inaccuracy, but are mitigated by other countervailing processes that increase accuracy (e.g., perhaps people often adjust their beliefs in response to corrective information).
The processes are common, but, in real life, lead to much higher levels of accuracy than those emphasizing inaccuracy presume (see SPSR for more details). Regardless, making declarations about levels of stereotype inaccuracy on the basis of a speculative prediction that some process causes stereotype inaccuracy, rather than on the basis of evidence that directly bears on accuracy, is a classic demonstration of the processistic fallacy.
THE BLACK HOLE AT THE BOTTOM OF MOST DECLARATIONS THAT “STEREOTYPES ARE INACCURATE”
In science, the convention is to support empirical claims with evidence, typically via a citation. This should be an obvious point, but far too often, scientific articles have declared stereotypes to be inaccurate either without a single citation, or by citing an article that itself provides no empirical evidence of stereotype inaccuracy. My collaborators and I (e.g., Jussim et al, 2016) have taken to referring to this as “the black hole at the bottom of declarations of stereotype inaccuracy.” For example:
“… stereotypes are maladaptive forms of categories because their content does not correspond to what is going on in the environment” (Bargh & Chartrand, 1999, p. 467).
There is no citation here. It is a declaration without any provided empirical support.
Or consider this:
“The term stereotype refers to those interpersonal beliefs and expectancies that are both widely shared and generally invalid (Ashmore & Del Boca, 1981).” (Miller & Turnbull, 1986, p. 233).
There is a citation here – to Ashmore and Del Boca (1981). Although Ashmore and Del Boca (1981) did review how prior researchers defined stereotypes, they did not review or provide empirical evidence that addressed the accuracy of stereotypes. Thus, the Miller and Turnbull (1986) quote also ends in an empirical black hole. Bian and Cimpian’s argument that “stereotypes are inaccurate” based on studies that did not assess stereotype accuracy is a modern and sophisticated version of this argument from a black hole.
IS YOUR BELIEF IN STEREOTYPE INACCURACY FALSIFIABLE?
That question is directed to all readers of this blog entry who still maintain the claim that “stereotypes are inaccurate.” Scientific beliefs should at least be capable of falsification and correction; otherwise, they are more like religion.
Bian and Cimpian follow a long and venerable social psychological tradition of declaring stereotypes inaccurate without (1) grappling with the overwhelming evidence of stereotype accuracy and (2) providing new evidence that directly assesses accuracy. This raises a question: if 50 high-quality studies demonstrating stereotype accuracy across many groups, many beliefs, many labs, and many decades are not enough to get you to change your mind, what could?
I can tell you what could change my belief that the evidence shows most stereotypes are usually at least fairly accurate. If most of the next 50 studies on the topic provide little or no evidence of inaccuracy, I would change my view. Indeed, in our most recent reviews (Jussim et al, 2015, 2016) we pointed out two areas in which the weight of the evidence shows inaccuracy. National character stereotypes are often inaccurate when compared against Big Five Personality measures (interestingly, however, they are often more accurate when other criterion measures are used); and political stereotypes (e.g., people’s beliefs about Democrats versus Republicans, or liberals versus conservatives) generally exaggerate real differences. Show me the data, and I will change my view.
If no data could lead you to change your position, then your position is not scientific. It is completely appropriate for people’s morals to inform or even determine their political attitudes and policy positions. What is not appropriate, however, is for that to be the case, and then to pretend that one’s position is based on science.
Stereotype accuracy is one of the largest effects in all of social psychology. Given social psychology’s current crisis of replicability, and widespread concerns about questionable research practices (e.g., Open Science Collaboration, 2015; Simmons et al, 2011), one might expect that social psychologists would be shouting to the world that we have actually found a valid, independently replicable, powerful phenomenon.
But if one did think that, one could not possibly be more wrong. Testaments to the inaccuracy of stereotypes still dominate textbooks and broad reviews of the stereotyping literature that appear in scholarly books. The new generation of scholars is still being brought up to believe that “stereotypes are inaccurate,” a claim many will undoubtedly take for granted as true and then promote in their own scholarship. Sometimes these claims manifest as definitions of stereotypes as inaccurate; even when stereotypes are not defined as inaccurate, they manifest as declarations that stereotypes are inaccurate, exaggerated, or overgeneralized. Social psychologists are unbelievably terrific at coming up with reasons why stereotypes “should” be inaccurate, typically presented as statements that they “are” inaccurate. Social psychologists are, however, often less good at correcting their cherished beliefs in the face of contrary data than many of us would have hoped (Jussim et al, in press).
Self-correction is, supposedly, one of the hallmarks of true sciences. Failure to self-correct in the face of overwhelming data is, to me, a threat to the scientific integrity of our field. Perhaps, therefore, most of us can agree that, with respect to the longstanding claim that “stereotypes are inaccurate,” a little scientific self-correction is long overdue.
Allport, G. W. (1954/1979). The nature of prejudice (2nd edition). Cambridge, MA: Perseus Books.
Ashmore, R. D., & Del Boca, F. K. (1981). Conceptual approaches to stereotypes and stereotyping. In D. L. Hamilton (Ed.), Cognitive processes in stereotyping and intergroup behavior (pp.1-35). Hillsdale, NJ: Erlbaum.
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462-479.
Duarte, J. L., Crawford, J. T., Stern, C., Haidt, J., Jussim, L., & Tetlock, P. E. (2015). Political diversity will improve social psychological science. Behavioral and Brain Sciences, 38, 1-54.
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
Jost, J. T., & Banaji, M. R. (1994). The role of stereotyping in system‑justification and the production of false consciousness. British Journal of Social Psychology, 33, 1‑27.
Jussim, L. (2012). Social perception and social reality: Why accuracy dominates bias and self-fulfilling prophecy. New York: Oxford University Press.
Jussim, L., Cain, T., Crawford, J., Harber, K., & Cohen, F. (2009). The unbearable accuracy of stereotypes. In T. Nelson (Ed.), Handbook of prejudice, stereotyping, and discrimination (pp.199-227). Hillsdale, NJ: Erlbaum.
Jussim, L., Crawford, J. T., Anglin, S. M., Chambers, J. R., Stevens, S. T., & Cohen, F. (2016). Stereotype accuracy: One of the largest and most replicable effects in all of social psychology. In T. Nelson (Ed.), Handbook of prejudice, stereotyping, and discrimination (2nd edition, pp. 31-63). New York: Psychology Press.
Jussim, L., Crawford, J. T., Anglin, S. M., Stevens, S. M., & Duarte, J. L. (In press). Interpretations and methods: Towards a more effectively self-correcting social psychology. Journal of Experimental Social Psychology.
Jussim, L., Crawford, J. T., & Rubinstein, R. S. (2015). Stereotype (In)accuracy in perceptions of groups and individuals. Current Directions in Psychological Science, 24, 490-497.
Kolowich, S. (2016, February 2). The water next time: Professor who helped expose crisis in Flint says public science is broken. Chronicle of Higher Education. Retrieved February 3, 2016, from: http://chronicle.com/article/The-Water-Next-Time-Professor/235136/
Kunda, Z., & Thagard, P. (1996). Forming impressions from stereotypes, traits, and behaviors: A parallel-constraint-satisfaction theory. Psychological Review, 103, 284-308.
LaPiere, R. T. (1936). Type-rationalizations of group antipathy. Social Forces, 15, 232-237.
Leslie, S.J. (in press). The Original Sin of Cognition: Fear, Prejudice and Generalization. The Journal of Philosophy.
Leslie, S., Khemlani, S., & Glucksberg, S. (2011). Do all ducks lay eggs? The generic overgeneralization effect. Journal of Memory and Language, 65, 15–31.
Loeb, A. (2014). Benefits of diversity. Nature: Physics, 10, 616-617.
Lippmann, W. (1922/1991). Public opinion. New Brunswick, NJ: Transaction Publishers.
Miller, D.T., & Turnbull, W. (1986). Expectancies and interpersonal processes. Annual Review of Psychology, 37, 233-256.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. doi: 10.1126/science.aac4716
Pinker, S. (2002). The blank slate. New York City: Penguin Books.
Richard, F. D., Bond, C. F. Jr., & Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7, 331-363.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.
Consensual stereotypes refer to beliefs shared by a group and are usually assessed via group means. For example, you might teach a psychology class of 30 students and ask them to estimate the college graduation rates for five demographic groups. Consensual stereotype accuracy can then be assessed by correlating the class mean estimates with, e.g., Census data on graduation rates for those groups. Personal stereotype accuracy is assessed identically, but for each person separately: one would assess Fred’s personal stereotype accuracy by correlating Fred’s estimates for each group with the Census data. See SPSR, Chapter 16, for a much more detailed description of the different aspects of stereotype accuracy and how they can be assessed.
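The two measures described above can be sketched in a few lines of code. Everything here is hypothetical: the graduation rates, the student names, and their estimates are invented for illustration, not drawn from any study or the Census.

```python
# Sketch of consensual vs. personal stereotype accuracy; all data are invented.

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Criterion: hypothetical graduation rates for five demographic groups.
criterion = [0.62, 0.48, 0.55, 0.70, 0.40]

# Each (hypothetical) student's estimates for the same five groups.
estimates = {
    "Fred":  [0.60, 0.50, 0.50, 0.75, 0.35],
    "Maria": [0.70, 0.40, 0.60, 0.65, 0.45],
    "Sam":   [0.55, 0.55, 0.50, 0.60, 0.50],
}

# Consensual accuracy: correlate the class mean estimate per group with the criterion.
class_means = [sum(person[i] for person in estimates.values()) / len(estimates)
               for i in range(len(criterion))]
consensual_r = pearson_r(class_means, criterion)

# Personal accuracy: one correlation per student against the same criterion.
personal_r = {name: pearson_r(est, criterion) for name, est in estimates.items()}
```

Note the design point this illustrates: averaging across perceivers cancels idiosyncratic errors, which is why consensual accuracy correlations typically run higher than personal ones.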