“…no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous, than the fact, which it endeavors to establish”
— David Hume
— An Enquiry Concerning Human Understanding, 1748
— (I can’t find said direct quote in google books, but I agree this is the gist of the matter)
“So. Alternate hypothesis. About one million people view Reddit every day. Let’s assume 10% of those see threads like the above – which were pretty popular and which I think both made it to the front page. That’s 100,000 people. Now let’s assume that even 1/10,000 people on the Internet are annoying trolls, which is maybe the easiest assumption we’re ever going to have to make. If each of those annoying trolls posts one fake story to a thread like that for the lulz, that’s enough for ten really convincing stories per thread – which is really all there are, the other fifty or sixty are just the usual friend-of-a-friend-had-a-vague-feeling stuff.”
— Scott Alexander
— Slate Star Codex: Redditors Lie?
— (Since Moldbug retired, Scott might be the most verbose guy on the internet)
It’s all part of the same problem. (Warning, Math):
If a drug researcher discovers a drug which has an effect at p<0.001, what are the odds it was a fake discovery?
Of course, the answer is "not enough information."
You can't do a real calculation until you figure out how likely the drug was to work in the first place.
The real decision matrix requires a comparison of how likely something really is to be true, vs how likely it is to be measured true, when false.
If drugs are effective 1/2 the time, and the test is only wrong 1/1000 of the time, then the odds that the drug works are pretty good. Start with 2,000 drugs, and here's what happens.
Look, there are 1,000 cases where the drug tests true, and only one of them is a mistake. The odds of the drug working, once it tests at p<0.001, are 99.9%.
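The arithmetic can be checked with a quick sketch. The numbers (2,000 drugs, half effective, test wrong 1/1000 of the time) come from the example above; the one assumption I'm making explicit is that the test errs at the same 1/1000 rate in both directions, which is what makes the positive count come out to exactly 1,000:

```python
total = 2000
effective = total // 2            # 1,000 drugs that actually work
ineffective = total - effective   # 1,000 duds
error_rate = 1 / 1000             # test wrong 1/1000 of the time, either direction (assumed)

true_positives = effective * (1 - error_rate)    # 999 working drugs test true
false_positives = ineffective * error_rate       # 1 dud tests true
positives = true_positives + false_positives     # 1,000 drugs test true overall

print(true_positives / positives)  # roughly 0.999, i.e. a positive is real 99.9% of the time
```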
But that's not the reality. Far fewer than half of all substances are effective towards our goals. If we instead suggest that only 1 in a million drugs is effective towards a goal, then the numbers look very different. Start with a billion drugs, and here's what happens.
Look, there are 1,000,998 cases of the drug testing true, and only about 1 in 1,000 of those is actually true. Think about that. If a drug has a prior probability of 1/1,000,000 of being effective, then a test that tells you it is effective with p<0.001 is wrong 1,001/1,002 times. Prior probability matters. Prior probability overwhelms the measured probability an awful lot of the time.
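Same sketch, rare-drug version. With a billion candidates and a 1/1,000,000 prior, the false positives swamp the true ones (again assuming the test errs at 1/1000 in both directions, which is how the 1,000,998 figure above is reached):

```python
total = 1_000_000_000
prior = 1 / 1_000_000
error_rate = 1 / 1000             # test wrong 1/1000 of the time, either direction (assumed)

effective = total * prior                            # 1,000 drugs that actually work
true_positives = effective * (1 - error_rate)        # 999 of them test true
false_positives = (total - effective) * error_rate   # 999,999 duds test true
positives = true_positives + false_positives         # 1,000,998 positives overall

print(true_positives / positives)  # roughly 1/1,002: a positive is almost always wrong
```

The posterior is just Bayes' rule with the prior doing all the work: the same p<0.001 test flips from 99.9% reliable to wrong 1,001 times out of 1,002 purely because the prior dropped.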
At the end of Scott’s piece, he asks: “If you’re like me, and you want to respond to this post with “but how do you know that person didn’t just experience a certain coincidence or weird psychological trick?”, then before you comment take a second to ask why the “they’re lying” theory is so hard to believe. And when you figure it out, tell me, because I really want to know.”
And this is where it gets interesting. This means that, to a first approximation, people's habit of accepting evidence when they already agree with it and dismissing it when they don't is the rational thing to do. Does "XYZ study on climate change" change your opinion on climate change? Odds are strongly no. If you agree, you now have extra evidence, and you hold the same opinion. If you disagree, then you dismiss the study as probably being wrong, and don't change your opinion. AND this is the smart thing to do. In the short term, anyhow.
In the long term, I continue to insist that the smart thing to do is hold low certainty levels. If new information comes out, update a little bit. But that’s hard to do. And odds are the other side is trying to get you to change your mind regardless of the truth. But still, low certainty levels and actual updating are the best answer I’ve run into.