Let's say a bunch of different scientists around the world are measuring some factor X, and a bunch of different studies, using different methods, come up with similar values. The common belief is that the result must be right, given the agreement between the different methods.
But this idea is false. Scientists can come to agreement even when the underlying claim is false. How does this happen? Say the first team to publish, Team T, gets a result XT. The next team, Team U, tries their own method. Now let's consider two counterfactual worlds, World A and World B.
In World A, when Team U finishes, their result is reasonably close to XT. Team U is pretty happy and publishes.
In World B, Team U's result is very different from XT. Team U is less happy, so they recheck their results again and again. They find some legitimate errors in their methods. They are now able to get a result closer to XT, though still not in good agreement.
Now Team V enters the picture, and when it does the world splits again. Again, Team V is pressured to look for mistakes only when its results disagree with the previously established ones. The more teams that publish, the stronger the pressure and the bias. Thus a scientific consensus is born.
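The dynamic above can be sketched as a toy simulation. All numbers and names here are made up for illustration: every team draws an honest, unbiased measurement, but a team keeps "digging" (re-measuring) only while its result disagrees with the running mean of previously published results, exactly the asymmetric rechecking described above.

```python
import random

random.seed(42)

TRUE_X = 10.0     # the true value of factor X (hypothetical units)
NOISE = 2.0       # spread of an honest, independent measurement
TOLERANCE = 1.0   # how much disagreement with the consensus a team tolerates

def measure():
    """An honest, unbiased but noisy measurement of X."""
    return random.gauss(TRUE_X, NOISE)

def publish_with_bias(n_teams):
    """Each team re-measures only while it disagrees with the
    mean of previously published results (World B), and publishes
    as soon as it agrees (World A)."""
    published = [measure()]  # Team T publishes whatever it gets
    for _ in range(n_teams - 1):
        consensus = sum(published) / len(published)
        result = measure()
        # Disagreement triggers "digging and digging" until agreement.
        while abs(result - consensus) > TOLERANCE:
            result = measure()
        published.append(result)
    return published

results = publish_with_bias(20)
spread = max(results) - min(results)
```

Even though every individual measurement is unbiased, the published record clusters tightly around wherever Team T happened to land: the spread of the "biased" literature is far narrower than twenty honest draws would produce, and the apparent agreement says little about whether the consensus value is near TRUE_X.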
A real-world example of this phenomenon can be seen in a NASA feature on the measurement of global ocean cooling by Josh Willis:
In 2006, he co-piloted a follow-up study led by John Lyman at Pacific Marine Environmental Laboratory in Seattle that updated the time series for 2003-2005. Surprisingly, the ocean seemed to have cooled
Not surprisingly, says Willis wryly, that paper got a lot of attention, not all of it the kind a scientist would appreciate. In speaking to reporters and the public, Willis described the results as a “speed bump” on the way to global warming,
“Basically, I used the sea level data as a bridge to the in situ [ocean-based] data,” explains Willis, comparing them to one another, figuring out where they didn’t agree. “First, I identified some new Argo floats that were giving bad data; they were too cool compared to other sources of data during the time period. It wasn’t a large number of floats, but the data were bad enough, so that when I tossed them, most of the cooling went away. But there was still a little bit, so I kept digging and digging.”
What I find amazing about this is that we have a NASA scientist admitting to throwing away data, and it's a feature on the NASA website. What is also interesting is that he never bothered investigating the reason for the supposed "bad data". And notice the one-way nature of his corrections: he only threw away data that was "too cool". And when he didn't get agreement, he kept digging and digging. This illustrates, better than anything I have ever seen, how scientists come to agreement on things.