How Science Works?



We shall begin with a trivial example. Let’s assume we want to test the following hypothesis using real-life technologies and scientific methods, with all their real-life shortcomings:

In a leaky capacitor comprising a 20-nm thick film of silica sandwiched between two gold electrodes, the current measured at 0.1V increases with ambient humidity.

In plain language, we want to know if electrical conduction through a thin layer of glass improves when the air around it gets more humid. This may or may not be true. The purpose of the experiment is to test the hypothesis and provide a reasonably convincing answer – in this case, “true” or “false.” We don’t know which one it is, and we are here to find out. This approach is valid not only in pure but also in applied sciences – even if your end goal is just to make something work and then sell it, you usually won’t get far trying random things and relying on luck alone, without any guidance from theory – and theory is built from hypotheses that have been repeatedly tested.

In the universe where this hypothesis is true, where the system is well behaved (for example, the mechanism is not defect-driven), and where you do a decent job carrying out the experiment (a sufficient number of measurements, all important variables controlled), you might get the following relationship between the current i and relative humidity (RH):

Note that the above may be the most likely outcome, but due to the limited number of measurements and the large variability, there is some finite probability that you will get something like this:

…or you might do a half-assed job (not calibrating the instruments, not controlling the temperature, not being consistent with equilibration times, collecting only one data point per measurement, ignoring unfavorable results), and get any of these:


(If you are a graduate student or a postdoc in the sciences, try to imagine how each of these possibilities plays out in terms of the reaction from your supervisor, your career, your esteem, and so on. It’s a useful thought experiment.)

The same goes for the case where, in reality, the hypothesis is false: there is no significant change in resistivity with the relative humidity of the air. In such a universe, you can again end up with any of the possibilities listed above. Depending on how you do things, and on luck, you might get it right…

…or you might get it wrong for whatever legitimate reason (including sloppiness).
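How easily luck alone produces a wrong answer can be sketched with a quick simulation (all numbers below are hypothetical, chosen only for illustration): even when the true current is completely flat with humidity, a handful of noisy measurements will regularly show an apparently clear trend.

```python
# Sketch: spurious trends from few, noisy measurements.
# The humidity points, noise level, and correlation threshold are
# illustrative assumptions, not values from any real experiment.
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_trials, spurious = 10_000, 0
rh = [20, 40, 60, 80, 95]  # relative humidity, %
for _ in range(n_trials):
    # True current is flat (1.0) with humidity; only noise varies.
    i = [1.0 + random.gauss(0, 0.2) for _ in rh]
    if abs(pearson(rh, i)) > 0.8:  # looks like a "clear" trend
        spurious += 1

print(f"apparent strong trend in {100 * spurious / n_trials:.1f}% of runs")
```

With only five data points, roughly one run in ten shows a strong correlation that is pure noise – which is exactly the “you might get it wrong for legitimate reasons” scenario above.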

The conclusion that is often not appreciated is that, even without any intentional bias, research findings are, by their nature, often false.[1] This is normal, inevitable, and no cause for alarm. It is ultimately a consequence of the finite time and resources available for carrying out experiments. If we do a good job designing and carrying out experiments, and the system under test is well defined and well behaved (in other words, we know and control all important variables – or, put differently, the system is not intrinsically stochastic), our answers will hopefully be correct most of the time – but they will not be correct every time. Hence the importance of meta-studies.
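The arithmetic behind this claim is simple. In a back-of-the-envelope sketch (the prior, significance level, and statistical power below are illustrative assumptions, in the spirit of reference [1]), the probability that a “significant” finding is actually true falls out directly:

```python
# Positive predictive value of an unbiased "significant" finding.
# prior = fraction of tested hypotheses that are actually true,
# alpha = false-positive rate, power = true-positive rate.
# All three values are illustrative assumptions.
def ppv(prior, alpha=0.05, power=0.8):
    """Probability that a statistically significant result is true."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return true_pos / (true_pos + false_pos)

# If only 1 in 10 tested hypotheses is true, a significant result is
# correct only about 64% of the time -- with no bias or sloppiness at all.
print(f"{ppv(0.10):.2f}")  # -> 0.64
```

Note the design of the exercise: nothing here models fraud or sloppiness; a substantial false-finding rate emerges from honest testing of mostly-wrong hypotheses alone.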

If we don’t understand this, we may be in a little bit of trouble. The real trouble, however, starts when we introduce bias by not testing the hypothesis, but working to prove it true (just because it’s ours), or sometimes to prove somebody else’s hypothesis false (just because it’s theirs). Then we waste resources, cherry-pick desired data, hide conflicting data, deceive ourselves and others, and occasionally simply cook the books. This happens all the time.[2]

Further to the bias stemming from perceived “ownership” of a pet hypothesis, we should not ignore the fact that doing a half-assed rather than a proper job has certain advantages: with only a little bit of cherry-picking (be the first to throw the stone…), a sloppy approach opens up a world of possibilities instead of forcing the scientist to build upon one – often inconvenient – correct answer. Besides making one’s life easier, sloppy science also lets one advance one’s career faster, because more data is generated per unit time. In a typical academic setting, with all the publish-or-perish stress and competition for limited funding, this is highly valuable, and it is sometimes disguised as “working hard.” There is a very strong incentive for supervisors and students to rely on sloppiness and bias as tools of success, and there is almost nothing in their way to stop them.

Peer review is dysfunctional, as demonstrated by the numerous ridiculously impossible, fraudulent findings that were successfully published only to be refuted later. In most cases, the realization that some published work is not reproducible results in one simple action: forget about it and move on. The reason, again, is that there is no incentive to investigate the problem at the expense of more pressing ones: publish or perish, secure the funding, do not make enemies, protect the public image of your institution, and so forth. Whistle-blowing in response to sloppiness, bias, or fraud should ideally lead to an adequate institutional response and the retraction of unreliable findings, yet this happens extremely rarely compared to the 50%–80% rate of irreproducible claims found in biomedical publications (see Academic bias & biotech failures, Reliability of ‘new drug target’ claims called into question, and Is Reproducibility on New Drug Targets 50% or 20%?).

In order to profit fully from the sloppy and/or fraudulent approach to science, one must resort to a secret ingredient: arrogance maintained by self-deception. Surveys show that 94% of academic professionals claim to be in the top 50% of their field.[3] Without self-deception, it would be hard to deceive others.

Look, mom – no hands!

In modern science, reported outcomes serve to prove a particular idea “correct,” thereby boosting the recognition, and sometimes also the financial success, of the investigator. In recent decades a new trend has become noticeable, whereby the reported outcome does not prove anything crucial other than the authors’ supposed ability to make it happen. In fact, being the only one able to obtain a certain result is portrayed not as a sign of scientific failure, but as a sign of skill and merit. Science is nowadays more of a sport or a performing art than organized knowledge in the form of testable explanations and predictions about the universe.

In reality, the reported outcome may have happened only once in tens or hundreds of attempts – or, even worse, never happened at all. Getting funding to replicate other people’s results is next to impossible, and the very idea is often ridiculed, which makes the story of science as a self-correcting mechanism rather naive. While there certainly is a self-correcting aspect to science, most resources are nowadays devoted to correcting trivial errors that would have been easy to avoid in the first place. Self-deception outweighs both self-correction and prevention.[4]

A frequent response to the critique of modern science is to point to all the progress that science has enabled over the past centuries. Such a tip-of-the-iceberg perspective neglects the scale of failure underneath. Just how efficient is science, and can it be made more efficient? A typical branch of modern science may have had tens of thousands of person-years invested in it, along with billions of dollars appropriated mostly from public funds. Is the resulting value reasonable? How much of this effort and investment has been wasted? Was this waste avoidable? How many of the promises have materialized? How many of the supposedly materialized promises actually existed only in written reports? These questions pertain not only to direct economic value, but also to things that are not easily expressed in monetary terms – for example, the training of future generations of engineers and scientists. To what extent does the present system yield young PhDs who are capable of critical thinking, problem solving, good management, and realistic assessment of hypotheses?

[1] For statistical analysis of odds of a scientific finding being false, see Why Most Published Research Findings Are False.
[2] For a collection of representative stories of fraud in modern science, refer to Chapter 3 of “The Great Betrayal” by Horace Freeland Judson.
[3] This is nothing unique to scientists; for example, only 1% of drivers report that they consider themselves worse than average. See, for example, Alfred Mele, “Real Self-Deception,” Behavioral and Brain Sciences 20 (1997): 91–102.
[4] See “The Truth Wears Off” by Jonah Lehrer, and the follow-up.

Enjoy your meal



This is interesting: greenwashed packaging of Meyer lemons carrying a recipe for a cake containing lemon zest, alongside information that the same zest contains thiabendazole, imazalil, fludioxonil, and pyrimethanil.



I was going to explain the basics, but somebody has beaten me to it. Here you can read a brief description of each of the chemicals. Enjoy your meal.

Impact Factor

Why do scientists pay so much attention to the impact factors of scientific journals? The answer is: because they do. As with many other clichés in modern science, there is no scientific rationale involved.
The impact factor (IF) of a journal for a given year is, by definition, the number of citations received that year by the papers the journal published in the preceding two years, divided by the number of those papers. Note that the impact factor is based solely on the number of citations and involves no other metric of value (for example, comments from peer review). Also note that the IF only counts citations from the first two years after publication, disregarding 60%–90% of actual citations, depending on the field. What the IF actually tells anyone who cares to know is simply how influential or prominent a given journal is. The problem is that, in reality, most scientists promote or buy the idea that the IF of a journal says something about the quality of any particular manuscript published in it.
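The arithmetic is trivial, which makes the mystique around it all the stranger. A minimal sketch (the citation and paper counts below are hypothetical):

```python
# Impact-factor arithmetic. All numbers are hypothetical.
def impact_factor(citations_this_year, papers_prev_two_years):
    """IF for year Y: citations received in Y by the papers the journal
    published in Y-1 and Y-2, divided by the number of those papers."""
    return citations_this_year / papers_prev_two_years

# A journal that published 400 papers over the previous two years and
# drew 2000 citations to them this year has an impact factor of 5.0.
print(impact_factor(2000, 400))  # -> 5.0
```

Nothing in this quotient refers to any individual paper; it is a property of the journal's aggregate output in a two-year window, and of that window only.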
An in-depth report from the International Mathematical Union (IMU) summarizes this aspect of the problem:

For papers, instead of relying on the actual count of citations to compare individual papers, people frequently substitute the impact factor of the journals in which the papers appear. They believe that higher impact factors must mean higher citation counts. But this is often not the case! This is a pervasive misuse of statistics that needs to be challenged whenever and wherever it occurs.

On one hand, you have easy access to a direct metric of what you are interested in: the number of times the paper of interest has been cited. On the other hand, with the IF you decide to forget this direct metric and replace it with an average over all papers that happen to be published in the same journal – and then use this as a presumably valid indication of the quality of the one paper you were interested in. To make matters worse, the arithmetic mean used in calculating the IF is inappropriate, as the number of citations does not follow a normal distribution.
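A tiny example makes the problem with the mean concrete. With a heavily skewed, hypothetical set of citation counts, the mean and the median tell very different stories:

```python
# Why the arithmetic mean misleads for skewed citation counts.
# The citation counts below are hypothetical but typical in shape:
# most papers barely cited, one runaway hit.
from statistics import mean, median

citations = [0, 0, 0, 1, 1, 2, 3, 5, 8, 180]

print(mean(citations))    # -> 20, inflated by the single outlier
print(median(citations))  # -> 1.5, what a typical paper actually gets
```

An IF-style average of 20 citations per paper describes none of the ten papers; nine of them sit at 8 citations or fewer.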
Nowadays papers published in thousands of peer-reviewed journals are all indexed, regardless of which journal they appear in. Finding a paper based on keywords or authors is an easy task, and so is finding out the number of times the paper has been cited. These tasks have nothing to do with the journal or the IF. If I tell you that I published a paper in a journal with an impact factor of 5, you still have no clue whether my paper has been cited zero times or 134 times. You may say that the best guess is about five citations per year in the first two years, but why in the world would anyone deal with probabilities when they have access to the actual answer?
One situation where looking at the IF may be meaningful is when a non-expert is trying to guess the quality of a paper that has just been published. Even then, one should remember that while the IF is an average, the uncertainty of such a guess is high, owing to the broad distribution of citation counts.

A Brief History of Sense of Scale

I just learned that, in a new Discovery Channel documentary, Stephen Hawking warns that alien life might turn out to be hostile:

If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans

This is a rather naive and anthropocentric view that disregards the facts of time, and Hawking should know better. Why in the world would he compare a potential encounter between humans and an alien race capable of colonizing Earth to that between Native Americans and Columbus’ crews? Just look at the diversity of life that evolved here, on one tiny planet, in this random instant of time. Then try to imagine just how different a colonizing alien species would be from us (and that includes Native Americans, Stephen!) if it started evolving on a different planet at a different time. It would be much more appropriate to compare the presumed impact of our encounter with aliens to the impact of Columbus’ landing on snails, or redwood trees, or Mississippi alligators, or bacteria in the American soil. In fact, even these comparisons are narrow-minded.

Dumbing down versus messing up

I am really bothered by experts who mess things up when attempting to dumb them down for the lay public. Take physicists, for example – particularly cosmologists. Paul Davies in Superforce, in the very first chapter, writes that “…the expanding universe is rather like a three-dimensional version of the expanding balloon.” While (or because?) I am not an expert on the subject, I find such mental images obviously and offensively wrong, even subversive. They are founded on the nonsensical concept of a viewer outside the universe, holding a frame of reference that is in some ways absolute, since the expansion of the universe (that is, the stretching of the balloon’s surface in the above analogy) is described as such in this “outsider’s” frame of reference.

I heard an entertaining lecture by Brian Greene several years ago, where he painted similar cartoonish concepts, teaching the unsuspecting audience wrong intuition.

It is perplexing: why and how do these otherwise great minds look at the universe from the outside?

To my great satisfaction, I am not alone in this dismay.  Raphael Bousso, for example, got rid of this “God’s eye view.” As a consequence, he was able to make an important breakthrough in the field of cosmology.

