In a leaky capacitor comprising a 20 nm-thick film of silica sandwiched between two gold electrodes, the current measured at 0.1 V increases with ambient humidity.
In plain language, we want to know whether electrical conduction through a thin layer of glass improves when the air around it gets more humid. This may or may not be true. The purpose of the experiment is to test the hypothesis and provide a reasonably convincing answer – in this case, “true” or “false.” We don’t know which one it is, and we are here to find out. This approach is valid not only in pure science but also in applied science – even if your end goal is just to make something work and then sell it, you usually won’t get far trying random things and relying on luck alone, without any guidance from theory – and theory is built from hypotheses that have been repeatedly tested.
If we are in a universe where this hypothesis is true, the system is well behaved (for example, the mechanism is not defect-driven), and you do a decent job of carrying out the experiment (a sufficient number of measurements, all important variables controlled), you might get the following relationship between the current i and relative humidity (RH):
…or you might do a half-assed job (not calibrating the instruments, not controlling the temperature, being inconsistent with equilibration times, collecting only one data point per measurement, ignoring unfavorable results) and get any of these:
(If you are a graduate student or a postdoc in the sciences, try to imagine how each of these possibilities plays out in terms of the reaction from your supervisor, your career, your esteem, and so on. It’s a useful thought experiment.)
The same goes for the case where, in reality, the hypothesis is false: there is no significant change in resistivity with the relative humidity of the air. In such a universe, you can again end up with any of the possibilities listed above. Depending on how you do things, and on luck, you might get it right…
…or you might get it wrong for whatever legitimate reason (including sloppiness).
The conclusion that is often not appreciated is that, even without any intentional bias, research findings are, by their nature, often false. This is normal, inevitable, and no cause for alarm. It is ultimately a consequence of the finite time and resources available for carrying out experiments. If we do a good job of designing and carrying out experiments, and the system under test is well defined and well behaved (in other words, we know and control all important variables; put differently, the system is not intrinsically stochastic), our answers will hopefully be correct most of the time – but they will not be correct every time. Hence the importance of meta-studies.
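The arithmetic behind this can be made concrete with the standard false-discovery calculation – the framework behind Ioannidis’s “Why Most Published Research Findings Are False.” A minimal sketch; the numbers for `prior`, `power`, and `alpha` are illustrative assumptions, not values from any particular study:

```python
# Positive predictive value: the fraction of claimed "discoveries" that are
# actually true, given the usual hypothesis-testing parameters.

def ppv(prior, power, alpha):
    """Fraction of positive findings that are real.

    prior -- fraction of tested hypotheses that are actually true
    power -- probability that a true effect is detected (1 - beta)
    alpha -- false-positive rate of the test
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A careful regime: plausible hypotheses, well-powered experiments.
print(ppv(prior=0.5, power=0.8, alpha=0.05))  # most findings hold up

# A sloppy regime: long-shot hypotheses, underpowered experiments.
print(ppv(prior=0.1, power=0.3, alpha=0.05))  # most findings are false
```

Even with honest statistics, the second regime produces more false findings than true ones – no misconduct required, only weak priors and low power.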
If we don’t understand this, we may be in a little bit of trouble. The real trouble, however, starts when we introduce bias by not testing the hypothesis but working to prove it true (just because it’s ours), or sometimes working to prove somebody else’s hypothesis false (just because it’s theirs). Then we waste resources, cherry-pick the desired data, hide conflicting data, deceive ourselves and others, and occasionally simply cook the books. This happens all the time.
Beyond the bias stemming from perceived “ownership” of a pet hypothesis, we should not ignore the fact that doing a half-assed rather than a proper job has certain advantages: with only a little bit of cherry-picking (let the one without sin cast the first stone…), a sloppy approach opens up a world of possibilities instead of forcing the scientist to build upon one – often inconvenient – correct answer. Besides making one’s life easier, sloppy science also enables faster career advancement, because more data is generated per unit time. In a typical academic setting, with all the publish-or-perish stress and the competition for limited funding, this is highly valuable, and it is sometimes disguised as “working hard.”

There is a very strong incentive for supervisors and students to rely on sloppiness and bias as tools of success, and there is almost nothing in the way to stop them. Peer review is dysfunctional, as demonstrated by the numerous ridiculously impossible, fraudulent findings that were successfully published only to be refuted later. In most cases, the realization that some published work is not reproducible results in one simple action: forget about it and move on. The reason, again, is that there is no incentive to investigate the problem at the expense of more pressing ones: publish or perish, secure the funding, do not make enemies, protect the public image of your institution, and so forth. Whistle-blowing in response to sloppiness, bias, or fraud should ideally lead to an adequate institutional response and the retraction of unreliable findings, yet this happens extremely rarely compared to the 50–80% rate of irreproducible claims found in biomedical publications (see Academic bias & biotech failures, Reliability of ‘new drug target’ claims called into question and Is Reproducibility on New Drug Targets 50% or 20%?).
To fully profit from the sloppy and/or fraudulent approach to science, one must resort to a secret ingredient: arrogance maintained by self-deception. Surveys show that 94% of academic professionals claim to be in the top 50% of their field. Without self-deception, it would be hard to deceive others.
Look, mom – no hands!
In modern science, reported outcomes serve to prove a particular idea “correct,” consequently boosting the recognition and sometimes also the financial success of the investigator. In recent decades, a new trend has become noticeable whereby the reported outcome does not even prove anything crucial other than the authors’ supposed ability to make it happen. In fact, being the only one able to obtain a certain result is portrayed not as a sign of scientific failure, but as a sign of skill and merit. Science is nowadays more of a sport or a performing art than organized knowledge in the form of testable explanations and predictions about the universe.
In reality, the reported outcome may have happened only once in tens or hundreds of attempts – or, even worse, never happened at all. Getting funding to replicate other people’s results is not only nearly impossible; the very idea is often ridiculed, which makes the story of science as a self-correcting mechanism rather naive. While there certainly is a self-correcting aspect to science, most resources are nowadays devoted to correcting trivial errors that would have been easy to avoid in the first place. Self-deception outweighs both self-correction and prevention.
A frequent response to critiques of modern science is to point to all the progress that science has enabled over the past centuries. Such a tip-of-the-iceberg perspective neglects the scale of the failure underneath. Just how efficient is science, and can it be made more efficient? A typical branch of modern science may have had tens of thousands of person-years invested in it, along with billions of dollars appropriated mostly from public funds. Is the resulting value reasonable? How much of this effort and investment has been wasted? Was this waste avoidable? How many of the promises have materialized? How many of the supposedly materialized promises actually existed only in written reports? These questions pertain not only to direct economic value, but also to things that are not easily expressed in monetary terms – for example, the training of future generations of engineers and scientists. To what extent does the present system yield young PhDs who are capable of critical thinking, problem solving, good management, and realistic assessment of hypotheses?

Notes:
- For a statistical analysis of the odds of a scientific finding being false, see “Why Most Published Research Findings Are False.”
- For a collection of representative stories of fraud in modern science, refer to Chapter 3 of “The Great Betrayal” by Horace Freeland Judson.
- Overconfidence is nothing unique to scientists; for example, only 1% of drivers report that they consider themselves worse than the average driver. See, for example, Alfred Mele, “Real Self-Deception,” Behavioral and Brain Sciences 20 (1997): 91–102, or this article: mail.ny.acog.org/website/EFM/Overconfidence.pdf
- See “The Truth Wears Off” by Jonah Lehrer, and the follow-up