(Sigh. You Can’t Fix Stupid)


[I] get a lot of criticism from my friends for trying to ‘help’ idiots. And yes, it is often a waste of time in the sense that you can’t change their thinking (much). On the other hand, I learn a lot about how to debate when I do argue with simple folk.

I saved today’s conversation with (well-meaning person) Wes Lysander, and another twit or two. I can’t post a PDF here, so I’ll put it on my site.

But when I criticize ‘meaning’ rather than ‘truth’, and require definitions, that’s because meaning is dependent upon the imbecile’s abilities and knowledge, whereas truth is not.

Now, truth is yet another problematic word whose ‘meaning’ is degraded into analogy after analogy. Because the truth content of a term is that which survives testing, not that from which we derive meaning.

This is why I ask people in propertarianism to use terms only when they understand the entire spectrum in which that terminological point addresses a limited context. This is to ensure that we are not making argument by loose imprecise analogy.

Often arguments require multiple axes of causality and therefore multiple spectra.

So meaning is an exceptional device for deception, self-deception, and error. (Yes, I think I have settled that matter now – self-deception is possible by intuitive desire.)

And the reduction of any term to that which survives the process of elimination by the use of multiple axes of constraint defines the necessary properties of the term (true), and not the abuses of that term (meaning).

Just because I can use a shoe to hammer a nail does not mean it is honest to refer to a shoe as a hammer.

That is what appeals to ‘meaning’ attempt to do.


(1) Wes Lysander – Meritocracies_ China and India Democracies_..


Twenty Concepts for Public Intellectuals, Journalists, Advisors, Politicians and Bureaucrats.

Calls for the closer integration of science in political decision-making have been commonplace for decades. However, there are serious problems in the application of science to policy — from energy to health and environment to education.

One suggestion to improve matters is to encourage more scientists to get involved in politics. Although laudable, it is unrealistic to expect substantially increased political involvement from scientists. Another proposal is to expand the role of chief scientific advisers, increasing their number, availability and participation in political processes. Neither approach deals with the core problem of scientific ignorance among many who vote in parliaments.

Perhaps we could teach science to politicians? It is an attractive idea, but which busy politician has sufficient time? In practice, policy-makers almost never read scientific papers or books. The research relevant to the topic of the day — for example, mitochondrial replacement, bovine tuberculosis or nuclear-waste disposal — is interpreted for them by advisers or external advocates. And there is rarely, if ever, a beautifully designed double-blind, randomized, replicated, controlled experiment with a large sample size and unambiguous conclusion that tackles the exact policy issue.

In this context, we suggest that the immediate priority is to improve policy-makers’ understanding of the imperfect nature of science. The essential skills are to be able to intelligently interrogate experts and advisers, and to understand the quality, limitations and biases of evidence. We term these interpretive scientific skills. These skills are more accessible than those required to understand the fundamental science itself, and can form part of the broad skill set of most politicians.

To this end, we suggest 20 concepts that should be part of the education of civil servants, politicians, policy advisers and journalists — and anyone else who may have to interact with science or scientists. Politicians with a healthy skepticism of scientific advocates might simply prefer to arm themselves with this critical set of knowledge.

We are not so naive as to believe that improved policy decisions will automatically follow. We are fully aware that scientific judgement itself is value-laden, and that bias and context are integral to how data are collected and interpreted. What we offer is a simple list of ideas that could help decision-makers to parse how evidence can contribute to a decision, and potentially to avoid undue influence by those with vested interests. The harder part — the social acceptability of different policies — remains in the hands of politicians and the broader political process.

Of course, others will have slightly different lists. Our point is that a wider understanding of these 20 concepts by society would be a marked step forward.


  1. Differences and chance cause variation. The real world varies unpredictably. Science is mostly about discovering what causes the patterns we see. Why is it hotter this decade than last? Why are there more birds in some areas than others? There are many explanations for such trends, so the main challenge of research is teasing apart the importance of the process of interest (for example, the effect of climate change on bird populations) from the innumerable other sources of variation (from widespread changes, such as agricultural intensification and spread of invasive species, to local-scale processes, such as the chance events that determine births and deaths).
  2. No measurement is exact. Practically all measurements have some error. If the measurement process were repeated, one might record a different result. In some cases, the measurement error might be large compared with real differences. Thus, if you are told that the economy grew by 0.13% last month, there is a moderate chance that it may actually have shrunk. Results should be presented with a precision that is appropriate for the associated error, to avoid implying an unjustified degree of accuracy.
  3. Bias is rife. Experimental design or measuring devices may produce atypical results in a given direction. For example, determining voting behaviour by asking people on the street, at home or through the Internet will sample different proportions of the population, and all may give different results. Because studies that report ‘statistically significant’ results are more likely to be written up and published, the scientific literature tends to give an exaggerated picture of the magnitude of problems or the effectiveness of solutions. An experiment might be biased by expectations: participants provided with a treatment might assume that they will experience a difference and so might behave differently or report an effect. Researchers collecting the results can be influenced by knowing who received treatment. The ideal experiment is double-blind: neither the participants nor those collecting the data know who received what. This might be straightforward in drug trials, but it is impossible for many social studies. Confirmation bias arises when scientists find evidence for a favoured theory and then become insufficiently critical of their own results, or cease searching for contrary evidence.
  4. Bigger is usually better for sample size. The average taken from a large number of observations will usually be more informative than the average taken from a smaller number of observations. That is, as we accumulate evidence, our knowledge improves. This is especially important when studies are clouded by substantial amounts of natural variation and measurement error. Thus, the effectiveness of a drug treatment will vary naturally between subjects. Its average efficacy can be more reliably and accurately estimated from a trial with tens of thousands of participants than from one with hundreds.
  5. Correlation does not imply causation. It is tempting to assume that one pattern causes another. However, the correlation might be coincidental, or it might be a result of both patterns being caused by a third factor — a ‘confounding’ or ‘lurking’ variable. For example, ecologists at one time believed that poisonous algae were killing fish in estuaries; it turned out that the algae grew where fish died. The algae did not cause the deaths.
  6. Regression to the mean can mislead. Extreme patterns in data are likely to be, at least in part, anomalies attributable to chance or error. The next count is likely to be less extreme. For example, if speed cameras are placed where there has been a spate of accidents, any reduction in the accident rate cannot be attributed to the camera; a reduction would probably have happened anyway.
  7. Extrapolating beyond the data is risky. Patterns found within a given range do not necessarily apply outside that range. Thus, it is very difficult to predict the response of ecological systems to climate change, when the rate of change is faster than has been experienced in the evolutionary history of existing species, and when the weather extremes may be entirely new.
  8. Beware the base-rate fallacy. The ability of an imperfect test to identify a condition depends upon the likelihood of that condition occurring (the base rate). For example, a person might have a blood test that is ‘99% accurate’ for a rare disease and test positive, yet they might be unlikely to have the disease. If 10,001 people have the test, of whom just one has the disease, that person will almost certainly have a positive test, but so too will a further 100 people (1%) even though they do not have the disease. This type of calculation is valuable when considering any screening procedure, say for terrorists at airports.
  9. Controls are important. A control group is dealt with in exactly the same way as the experimental group, except that the treatment is not applied. Without a control, it is difficult to determine whether a given treatment really had an effect. The control helps researchers to be reasonably sure that there are no confounding variables affecting the results. Sometimes people in trials report positive outcomes because of the context or the person providing the treatment, or even the colour of a tablet. This underlies the importance of comparing outcomes with a control, such as a tablet without the active ingredient (a placebo).
  10. Randomization avoids bias. Experiments should, wherever possible, allocate individuals or groups to interventions randomly. Comparing the educational achievement of children whose parents adopt a health programme with that of children of parents who do not is likely to suffer from bias (for example, better-educated families might be more likely to join the programme). A well-designed experiment would randomly select some parents to receive the programme while others do not.
  11. Seek replication, not pseudoreplication. Results consistent across many studies, replicated on independent populations, are more likely to be solid. The results of several such experiments may be combined in a systematic review or a meta-analysis to provide an overarching view of the topic with potentially much greater statistical power than any of the individual studies. Applying an intervention to several individuals in a group, say to a class of children, might be misleading because the children will have many features in common other than the intervention. The researchers might make the mistake of ‘pseudoreplication’ if they generalize from these children to a wider population that does not share the same commonalities. Pseudoreplication leads to unwarranted faith in the results. Pseudoreplication of studies on the abundance of cod in the Grand Banks in Newfoundland, Canada, for example, contributed to the collapse of what was once the largest cod fishery in the world.
  12. Scientists are human. Scientists have a vested interest in promoting their work, often for status and further research funding, although sometimes for direct financial gain. This can lead to selective reporting of results and occasionally, exaggeration. Peer review is not infallible: journal editors might favour positive findings and newsworthiness. Multiple, independent sources of evidence and replication are much more convincing.
  13. Significance is significant. Expressed as P, statistical significance is a measure of how likely a result is to occur by chance. Thus P = 0.01 means there is a 1-in-100 probability that what looks like an effect of the treatment could have occurred randomly, and in truth there was no effect at all. Typically, scientists report results as significant when the P-value of the test is less than 0.05 (1 in 20).
  14. Separate no effect from non-significance. The lack of a statistically significant result (say a P-value > 0.05) does not mean that there was no underlying effect: it means that no effect was detected. A small study may not have the power to detect a real difference. For example, tests of cotton and potato crops that were genetically modified to produce a toxin to protect them from damaging insects suggested that there were no adverse effects on beneficial insects such as pollinators. Yet none of the experiments had large enough sample sizes to detect impacts on beneficial species had there been any.
  15. Effect size matters. Small responses are less likely to be detected. A study with many replicates might result in a statistically significant result but have a small effect size (and so, perhaps, be unimportant). The importance of an effect size is a biological, physical or social question, and not a statistical one. In the 1990s, the editor of the US journal Epidemiology asked authors to stop using statistical significance in submitted manuscripts because authors were routinely misinterpreting the meaning of significance tests, resulting in ineffective or misguided recommendations for public-health policy.
  16. Study relevance limits generalizations. The relevance of a study depends on how much the conditions under which it is done resemble the conditions of the issue under consideration. For example, there are limits to the generalizations that one can make from animal or laboratory experiments to humans.
  17. Feelings influence risk perception. Broadly, risk can be thought of as the likelihood of an event occurring in some time frame, multiplied by the consequences should the event occur. People’s risk perception is influenced disproportionately by many things, including the rarity of the event, how much control they believe they have, the adverseness of the outcomes, and whether the risk is voluntary or not. For example, people in the United States underestimate the risks associated with having a handgun at home by 100-fold, and overestimate the risks of living close to a nuclear reactor by 10-fold.
  18. Dependencies change the risks. It is possible to calculate the consequences of individual events, such as an extreme tide, heavy rainfall and key workers being absent. However, if the events are interrelated (for example, a storm causes a high tide, or heavy rain prevents workers from accessing the site), then the probability of their co-occurrence is much higher than might be expected. The assurance by credit-rating agencies that groups of subprime mortgages had an exceedingly low risk of defaulting together was a major element in the 2008 collapse of the credit markets.
  19. Data can be dredged or cherry picked. Evidence can be arranged to support one point of view. To interpret an apparent association between consumption of yoghurt during pregnancy and subsequent asthma in offspring, one would need to know whether the authors set out to test this sole hypothesis, or happened across this finding in a huge data set. By contrast, the evidence for the Higgs boson specifically accounted for how hard researchers had to look for it — the ‘look-elsewhere effect’. The question to ask is: ‘What am I not being told?’
  20. Extreme measurements may mislead. Any collation of measures (the effectiveness of a given school, say) will show variability owing to differences in innate ability (teacher competence), plus sampling (children might by chance be an atypical sample with complications), plus bias (the school might be in an area where people are unusually unhealthy), plus measurement error (outcomes might be measured in different ways for different schools). However, the resulting variation is typically interpreted only as differences in innate ability, ignoring the other sources. This becomes problematic with statements describing an extreme outcome (‘the pass rate doubled’) or comparing the magnitude of the extreme with the mean (‘the pass rate in school x is three times the national average’) or the range (‘there is an x-fold difference between the highest- and lowest-performing schools’). League tables, in particular, are rarely reliable summaries of performance.
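The base-rate arithmetic in point 8 can be checked directly. A minimal sketch, using the figures given in the list above (10,001 people tested, one true case, a test that wrongly flags 1% of healthy people):

```python
# Base-rate fallacy: a "99% accurate" test for a rare disease.
# Figures follow point 8 above: 10,001 people tested, 1 has the disease,
# and the test wrongly flags 1% of the healthy.
population = 10_001
true_cases = 1
false_positive_rate = 0.01

healthy = population - true_cases
false_positives = healthy * false_positive_rate   # about 100 people
total_positives = true_cases + false_positives    # about 101 people

# Probability that someone who tests positive actually has the disease:
p_disease_given_positive = true_cases / total_positives
print(f"P(disease | positive test) = {p_disease_given_positive:.4f}")
```

Despite the “99% accurate” label, a positive result corresponds to roughly a 1-in-101 chance of disease, because healthy people vastly outnumber the sick.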
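Points 13 and 14 (significance versus the absence of an effect) can be made concrete with a power calculation. The sketch below uses a one-sided, one-sample z-test approximation; the effect size (0.2 standard deviations) and the two sample sizes are illustrative assumptions, not figures from the article:

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(effect_sd: float, n: int) -> float:
    """Probability that a one-sided z-test at the 5% level detects a
    true effect of `effect_sd` standard deviations with n observations."""
    z_crit = 1.645  # one-sided 5% critical value
    return normal_cdf(effect_sd * math.sqrt(n) - z_crit)

# The same real effect, two study sizes:
print(f"n = 25:   power = {power(0.2, 25):.2f}")    # small study: usually misses it
print(f"n = 1000: power = {power(0.2, 1000):.2f}")  # large study: almost always finds it
```

The small study fails to reach significance most of the time even though the effect is real, which is exactly why a non-significant result must not be read as “no effect.”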
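The co-occurrence arithmetic behind point 18 is easy to sketch. The individual probabilities below are invented for illustration; the point is only the ratio between the independent and correlated cases:

```python
# Point 18: independent vs correlated events.
# Assumed illustrative probabilities (not from the article):
p_high_tide = 0.05
p_heavy_rain = 0.05
p_workers_absent = 0.05

# If the three events were independent, all three coinciding is very rare:
p_independent = p_high_tide * p_heavy_rain * p_workers_absent  # 0.05**3

# But if a single storm drives the tide, the rain, and the absences,
# the joint probability collapses to roughly the probability of the storm:
p_storm = 0.05
p_correlated = p_storm

print(f"independent: {p_independent:.6f}")
print(f"correlated:  {p_correlated:.6f}")
print(f"ratio:       {p_correlated / p_independent:.0f}x")
```

With these assumed numbers the correlated case is 400 times more likely than the independent calculation suggests, which is the subprime-mortgage mistake in miniature.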

Nature 503, 335–337 (21 November 2013)


Tilting Against The Market’s Use of Available Information

Guest Post by Michael Phillip

[A]ll the efficient market hypothesis (EMH) says is that markets use all available information. Which does not sound like much until one works through the implications. One of which is, as William Easterly states, economists correctly predicted that they could not correctly predict. In Cochrane’s words:

“It’s fun to say we didn’t see the crisis coming, but the central empirical prediction of the efficient markets hypothesis is precisely that nobody can tell where markets are going – neither benevolent government bureaucrats, nor crafty hedge-fund managers, nor ivory-tower academics. This is probably the best-tested proposition in all the social sciences. Krugman knows this, so all he can do is huff and puff about his dislike for a theory whose central prediction is that nobody can be a reliable soothsayer.” – John Cochrane


The Portfolio of Privileges and Underprivileges

Guest Post by Michael Phillip

[S]ince social status is conferred in many different ways — everything from race to geography to class — all people are both privileged and non-privileged in certain aspects of their life. Furthermore, since the dynamics of social status are highly dependent on situation, a person can benefit from privilege in one situation while not benefiting from it in another. It is also possible for a person to be simultaneously the beneficiary of privilege and the recipient of discrimination in an area in which they do not benefit from privilege.

Racism Is Curable: By Eliminating Demand for It


In response to Matt Zwolinski : http://bleedingheartlibertarians.com/2014/11/on-reverse-racism-three-thought-experiments/


1) The distribution of physical desirability for mating, the demonstrated behaviors of impulsivity and time preference, aggression, and demonstrated intelligence vary between individuals. (true)
2) The social classes are organized by these distributions due to reproductive desirability, status utility, and cooperative (economic) utility. (true)
3) The races demonstrate different relative distributions of these classes. (true)
4) Racial groups demonstrate kin selection in mating, neighborhoods, friendship, social organizations, and business organizations. (true).
5) The norms demonstrated by racial groups reflect behavior at the mean. (true) This means that lower-trust, less intelligent groups must compete against the norms of higher-trust, more intelligent groups. (true) It also means that the group that holds dominant political power, and biases policy toward its norms, determines the economic velocity of the entire polity. (true)
6) Racial groups demonstrate kin selection in voting (true).
7) INABILITY to use the state for rents and privileges limits political competition and conflict, whereas ABILITY to use the state for rents and privileges increases political competition and conflict. (true)
8) Economic Wealth reduces dependence upon kin for mutual insurance under kin selection. (true). Economic stress increases dependence upon kin for mutual insurance via kin selection. (true)
9) The difference between economic, political, social, reproductive and status success of one race or another is due to the distribution of superior talents versus inferior liabilities of the members of those races – plus normative factors, the most important of which is in-group trust, and the second is the degree of the suppression of free riding. (true)
10) As such the only reason for racism is the rates of reproduction between the classes. And the only possible means of achieving equality in any and all cases is to suppress the reproduction of the lower classes of the races whose distribution is bottom weighted.
11) It is non-rational to treat unknown individuals who are visually indistinguishable by anything other than the properties of their peer groups. (true) (which is what people do). One cannot both demand rational action and defend Praxeology while denying this statement.
12) Equality is achievable and desirable in just four generations. But upward reproductive redistribution must match downward economic redistribution for equality to be possible. If China can do this, so can the rest of the world.

Otherwise, it is non-rational for people with higher reproductive desirability, lower impulsivity, lower aggression, and higher intelligence to tolerate political competition from those who are less desirable and in the net, parasitic, just as it is politically preferable to compete via parasitism if one is less desirable at the bottom.

Human beings are not unique and precious snowflakes. It is only that disregard for life is a moral hazard. The fact that mothers MUST believe their dysgenic offspring are precious is an evolutionary convenience, not a demonstrable fact.

The purpose of science is quite often to force us to acknowledge uncomfortable truths. Equality is not a problem of belief (lying), but one of fact (truth).

Try not to lie.

It hurts the discipline of philosophy. It hurts mankind.


Um … The Market Is A Computer.

—“When I was in high school, I thought we should put the robots in charge. … Then I realized the market is a computer.”—
Eli Harman


Leninism’s Atavism

—“Leninism’s development of the totally expropriating state was profoundly atavistic. So atavistic that the Soviet Union managed to pass through ibn Khaldun’s state cycle in a single lifetime.”— Michael Phillip

–“Lenin famously claimed that communism was socialism + electricity. Actually, it was an attempted return to the origins of the state + electricity. But bargaining states had let loose technological dynamism on the world, and mere expropriation was no longer the cutting edge in organising societies. The gap between Leninist pretension and economic reality became de-stabilisingly obvious. So, we have collapsed Leninist regimes or societies with notionally Leninist ruling regimes ruling very not-totalitarian societies or, in the case of North Korea, a regime that has embraced its atavism. History is how the present was created, but only provides understanding if we accurately grasp that history.”— Michael Phillip

Ibn Khaldun’s State Cycle 
(I might recommend Carroll Quigley instead)

Atavism: “The tendency to revert to ancestral type.”