We often hear people say that ‘Economics’ is a difficult field. My opinion differs a bit: econ is not difficult, but the application of it is, i.e. applied econ. Since we are all aware of the difference between reading theory and using it in practice, it should not be difficult to understand why research is a complicated field, yet a significant one. There is a growing demand for evidence-based research, impact evaluations, and applied-econ research. Researchers devote years to establishing ‘causal’ relationships, spending countless hours on research designs, questionnaires, and what not, all to be able to tell the world that their ‘research’ could (or could not) support a certain hypothesis. But saying this to the world is not that easy. It requires an excellent research paper with ‘low’ p-values, a high R-squared, the expected signs on coefficients, and maybe many more things; which is definitely easier said than done. If we fail to get all of the above right, it becomes challenging to get our work published, and hence telling our story to the world might be a different scenario altogether.
But should we only believe the researchers who have more publications? Should we not consider something more neutral, like ‘more contextual research in view of public policy’? If we do so, the debate takes a different angle altogether, as we begin giving importance to research work that is not often recognized. This is important to consider, because good policy decisions are the outcome of a contextual research question, a well-thought-out research design, useful quality data, a credible methodology, and compelling results. Sadly, not all such research outcomes are known to the world, and the most common reason is that the results are not captivating enough.
We discuss two problems with the current publication process: (a) specification searching, the selective reporting of analyses within a particular study; and (b) publication bias, which occurs when the outcome of an experiment or research study influences the decision of whether to publish or otherwise distribute it. Both of these problems are deep-rooted, as researchers sometimes fudge the data, manipulate the methodology, or modify the research question to make the work publishable. Many research ideas die in infancy when the researchers are unable to reject the null hypothesis, i.e. they are ‘unable to obtain statistically significant’ results. What we miss in this picture is that failing to reject the null hypothesis is not the same as ‘accepting the null hypothesis’; it only means that the study could not provide enough evidence in favour of the alternative hypothesis.
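To see why failing to reject is not the same as accepting, here is a minimal simulation sketch in Python (synthetic data; the effect size and sample size are illustrative assumptions, not taken from any real study). A small but real effect, tested in underpowered studies, goes undetected most of the time:

```python
# A real (non-zero) effect tested in small studies: most of them fail to
# reject the null, yet the null is false. Low power, not "no effect".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.2           # the null (effect = 0) is actually false
n, n_studies = 50, 10_000   # many small, underpowered studies

rejections = 0
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treated, control)
    rejections += p < 0.05

print(f"Share of studies rejecting the null: {rejections / n_studies:.2f}")
# Only about 0.17 in this setup: the other ~83% of studies "fail",
# even though the effect they are looking for is real.
```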
During such times, one must remember: “There are lies, damned lies, and statistics.” Statistics and econometrics are mere tools to analyse and interpret the data, but not the only tools we should depend on. Practical significance is as important as, if not more important than, statistical significance, because statistically significant results might be impractical. For example, suppose the outcome variable is ‘female labour force participation’ and the explanatory variable ‘age squared’ has a positive coefficient, statistically significant at a p-value below 5%. Taken at face value, this says that participation eventually rises with age at an increasing rate, whereas we would normally expect an inverted-U pattern, i.e. a negative coefficient on the squared term.
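As a rough illustration of this example, here is a hedged sketch in Python with statsmodels; the data are simulated and every coefficient in it is made up, not a real labour-force estimate. It fits a simple linear probability model of participation on age and age squared:

```python
# Simulated example: participation built with an inverted-U in age,
# then fitted as a linear probability model on age and age squared.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000
age = rng.uniform(20, 65, n)
# Synthetic data: participation peaks in mid-career (around age 42 here).
latent = -4.0 + 0.25 * age - 0.003 * age**2 + rng.normal(0.0, 0.5, n)
participates = (latent > 0).astype(float)

X = sm.add_constant(np.column_stack([age, age**2]))
fit = sm.OLS(participates, X).fit()
print(fit.params)    # constant, age, age squared
print(fit.pvalues)
# With this data the age-squared coefficient comes out negative (the
# inverted-U). A positive, significant coefficient on age squared would
# instead imply participation rising with age at an increasing rate,
# which is the counter-intuitive case discussed in the text.
```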
This is counter-intuitive, and in such a scenario a researcher has two options: one, keep adding explanatory variables until the desired result (a negative coefficient) appears, making the paper publishable because the results are now compelling; or two, stay with the model and look for the intuitive reasons behind the result (as in Japan, where demographics are such that older people work more). Let’s consider another scenario, in which ‘age squared’ has a positive coefficient but is statistically insignificant. In this case, the researcher might play with different models until a ‘significant’ coefficient appears, which is the problem of ‘specification searching’. Such scenarios often happen when researchers want to make their work publishable with ‘compelling’ results. Journals more often than not reject papers with negative results, and this is why most experiments that fail are never heard of, let alone studied.
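To make specification searching concrete, here is an illustrative sketch in Python (everything in it is hypothetical: pure noise, with no true effect anywhere). A simulated “researcher” keeps adding controls until the coefficient of interest turns ‘significant’, and this alone pushes the false-positive rate above the nominal 5%:

```python
# Specification searching on pure noise: keep adding controls and stop
# at the first specification where x looks "significant".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, n_sims, n_specs = 100, 2_000, 20
found = 0

for _ in range(n_sims):
    x = rng.normal(size=n)              # variable of interest: no true effect
    y = rng.normal(size=n)              # outcome: pure noise
    controls = rng.normal(size=(n, n_specs))
    for j in range(1, n_specs + 1):     # try specs with 1..20 controls
        X = sm.add_constant(np.column_stack([x, controls[:, :j]]))
        if sm.OLS(y, X).fit().pvalues[1] < 0.05:   # p-value on x
            found += 1                  # report the "significant" spec
            break

print(f"Share of pure-noise runs reported as significant: {found / n_sims:.2f}")
# Noticeably above the nominal 0.05: every extra specification is one
# more chance to mistake noise for a finding.
```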
“I have not failed, I have just found 10,000 ways that won’t work” – Thomas A. Edison. The importance of publishing failed experiments is highly underrated. There is very little literature on which experiments failed, what methodology they used, and why they failed. Such literature is imperative for understanding what changes we should look at and what new strategies we should follow to get the desired results. There are plenty of theories about what works in an experiment, but nobody talks about what happens in the field when we work, and why we fail to get the desired results. Since failed experiments are not published, we only get to read about those ‘wonderful’ experiments that managed to reject the null hypothesis.
All researchers have a common goal: to portray the reality of the world. Why is it, then, that researchers with ‘concrete’ results can reach heights in their careers, and not the failed ones? Isn’t it essential to look at the reasons behind the failure: the research design, the methodology, the data quality? It is entirely possible that two researchers work on the same research question using two different methodologies and datasets. One of them may get their work published because they obtained better results with a not-so-reliable methodology, while the other had an incredible methodology but poor data quality, due to which the undesired results appeared. When we ignore the latter, we miss out on interesting negative findings, which may help us ask more relevant questions about the research.
There are organizations, like the WHO and TESS, that have dedicated sections for publishing failed experiments. Recently, a proposal was made in the U.S.A. under which, if the abstract of a PhD student’s research paper is reviewed and accepted in advance, the findings will be published no matter what the outcome of the study is. However, the implementation of this policy is still awaited, and until then, we should be hopeful about minimizing publication bias by optimizing research designs and methodologies and improving data quality, to ensure unbiased research.
“Whether you fear it or not, true disappointment will come, but with disappointment comes clarity, conviction, and true originality.” – Conan O’Brien.