
What is falsifiability in science?

Many people think that science works by formulating a hypothesis (based on observation and measurement) about a particular natural phenomenon, and then trying to prove that hypothesis correct. While that might sound very reasonable at first glance, it's actually a naive and even incorrect approach, because it can easily lead to wrong conclusions through confirmation bias.

Rather than trying to prove the hypothesis, the better method is, as contradictory as it might sound at first, to try to disprove it. In other words, don't construct tests that simply confirm the hypothesis; instead, construct tests that, if successful, will disprove the hypothesis and show that it's wrong.

And "trying to disprove the hypothesis" is not always as straightforward as "if the test fails, it disproves the hypothesis". In many cases the hypothesis must be falsifiable even if the test succeeds.

An example of this is controlled testing. It might not be immediately apparent, but the "controls" in a "controlled test" are, in fact, there to try to disprove the hypothesis being tested, even if the actual test turns out to be positive (ie. apparently proving the hypothesis correct).

A "control" in a test is an element or scenario for which the test is not being applied, to see that there isn't something else affecting the situation. For example, if what's being tested is the efficacy of a medication, the "control group" is a group of test subject for which something inert is being given instead of the medication. (In this particular scenario this tests, among other things, that the placebo effect plays no significant role.)

If the medication were tested without a control group, a positive result (ie. the medication apparently remedies the ailment) would be unreliable. The result might look like it supports the hypothesis, but it doesn't rule out the possibility that an external factor, something else (eg. the placebo effect), caused the positive result instead of the medication.
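To make the difference concrete, here's a minimal sketch in Python (not part of the original argument; the improvement rates, the simulate_group helper and the sample size are made-up assumptions purely for illustration). It simulates a medication that does nothing at all, yet looks effective when only the treated group is observed; comparing against the control group reveals that the apparent effect comes entirely from the placebo rate.

```python
# Minimal sketch (with made-up numbers) of why a control group matters.
# We simulate a trial of a medication that does nothing, in a situation
# where roughly 30% of subjects report improvement just from being treated.

import random

random.seed(42)

PLACEBO_RATE = 0.30   # assumed chance of "improvement" from the placebo effect alone
DRUG_EFFECT  = 0.00   # the hypothetical drug adds nothing in this scenario

def simulate_group(n, extra_effect):
    """Return how many of n subjects report improvement."""
    improved = 0
    for _ in range(n):
        if random.random() < PLACEBO_RATE + extra_effect:
            improved += 1
    return improved

n = 1000
treated = simulate_group(n, DRUG_EFFECT)   # group receiving the medication
control = simulate_group(n, 0.0)           # group receiving an inert placebo

# Without a control group we only see the treated numbers, which look impressive:
print(f"Treated group: {treated}/{n} improved ({treated / n:.0%})")

# With a control group the comparison shows the medication adds essentially nothing:
print(f"Control group: {control}/{n} improved ({control / n:.0%})")
print(f"Apparent effect of medication: {(treated - control) / n:+.0%}")
```

Looking only at the treated group, around 30% of subjects improve, which seems to "prove" the hypothesis; the control group improves at roughly the same rate, which is exactly the kind of result that disproves it.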

It's very important that a hypothesis can be proven wrong in the first place: it must be possible to construct a test that, if positive, actually disproves the hypothesis (or, at the very least, a test that disproves it if negative).

That is the principle of falsifiability. The worst kind of hypothesis is one that can't be proven wrong, ie. when there is no test that would show it to be incorrect.

For example, if somebody believes in ghosts and spirits, ask them if there is any test or experiment that could be constructed that would prove that they don't actually exist. I doubt they could come up with anything. The same is true for psychics, mediums and the myriad other such things. They will never come up with a test or experiment whose result they would accept as definitive proof that those things are not real. (Any unfavorable results of any experiments on these subjects will be dismissed with hand-waving, like the psychic not feeling well that day, or whatever.)

The hypothesis that ghosts exist is pretty much unfalsifiable. While people can come up with experiments that, if positive, would "prove" their existence, not many can come up with an experiment that would disprove it. And that's a big problem. The "positive" experiment results are not reliable because, like with uncontrolled medical tests, they don't account for other reasons for the observed results.

That's why it's more important to be able to prove a hypothesis wrong than right. If numerous negative experiments (ie. ones that if successful would prove the hypothesis wrong) fail, that will give credibility to the hypothesis. But if no such experiments are possible, then the hypothesis becomes pretty much useless.
