One interesting development in science funding over the last few decades, and thus in the progress of science as a whole, has been a slow but steady transition toward management methods derived from the world of business.
This development responds to the need for an objectively measurable output on which to base the awarding of monetary compensation, jobs, and even prizes. Scientists are then held responsible for producing that output themselves.
The most obvious outputs to measure are the number of academic articles a scientist produces and the citations they receive, although in practice slightly more elaborate aggregate measures, such as the h_hep index, are commonly used. The result of using such a research metric to award jobs is effectively a model resembling a free-market economic system.
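To make concrete what such an aggregate measure looks like, here is a sketch of the standard h-index (the h_hep variant mentioned above weights citations differently, so this is only an illustration of the general idea, not its exact definition): an author's h-index is the largest number h such that they have at least h papers with at least h citations each.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:  # the rank-th paper still has >= rank citations
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how the metric compresses a whole publication record into a single number, which is precisely what makes it convenient for rankings and, as argued below, what makes optimizing it distinct from optimizing scientific understanding.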
Such a system has the advantage of being objective and providing a sense of fairness. However, one disadvantage of this practice is what has become known as the trivialization of research: circumstances in which research is coldly approached solely as a means of producing citations, recognition, or funding.
From a sociological perspective, an interesting feedback loop is also generated within the academic system. The method described filters the field so that only those researchers who produce the most output, as measured by the chosen metric, remain. They in turn train the next generation of researchers, and the system as a whole will tend to optimize that metric within certain given constraints, such as the total funding available.
The optimization, however, is with respect to the chosen metric and nothing else, which raises the question of whether it could discourage scientific breakthroughs from happening. One way to visualize this question is by comparison with business settings, in which the measurable metric is usually monetary profit. Assuming this profit comes from a set of customers, the system tends automatically to optimize some combination of production volume, quality, and cost, based on the aggregate preferences of those customers. This is a perfectly reasonable method, which is why it permeates the business corporations of the capitalist world, where profit is indisputably the proxy to optimize.
On the other hand, many would argue that the variable scientists actually want to optimize is our scientific understanding of Nature. It is far from obvious whether the output metrics described above are directly related to this variable.
The reason for this lack of intuition becomes clear on deeper analysis: there is no independent set of customers. Rather, paper citations are given by other researchers who are themselves participants in the same social system. Feedback loops such as the one described above are thus spontaneously generated. Another such loop is one whose effect is to concentrate all interest in a scientific subfield onto the small subset of questions most likely to yield citations within a short time frame.
What I have presented here is merely a superficial look at what a study of sociological tendencies in science can unveil. There are consequences to ignoring this. If we hold on to the unquestioned belief that researchers are free to pursue their own interests and therefore exercise that freedom, then we will not be the standard-bearers of rational thinking that we hold ourselves to be.
Congratulations on the website. Very good and informative. I hope you have luck with the Kickstarter!
Excited about this project! Wish you all good luck!
Something that I’m curious about: in what way were funding decisions, say, less dependent on bibliometrics 50 years ago? Were the decisions based more on the judges’ understanding of a researcher’s scientific output rather than on citation counts, or was there some more fundamental difference?
That’s a good question, Edo. As you said, it’s interesting to read interviews with scientists from 50 years ago, since they describe a very different picture of the academic world from the one we have today. The impression I get is that the main difference is the number of people involved in research. As the world progressed, academic fields started to require a systematic procedure for evaluation, making it less personal and more number-based.
I’d like to look further into this, though, so thank you for pointing this out, and for your kind wishes!
I was interested to see that this initiative aims to start with a focus on theoretical high energy physics.
You might be interested in Nicholas Maxwell’s work. He also proposes reforming the methods of science by starting with the metascience of theoretical physics, then scaling this out to the other natural sciences and then to social inquiry.
Also, I posted your project to the Institute for Globally Distributed Open Research and Education (IGDORE). I think the community there would generally be interested in your initiative.
What do you think of the Russian COVID vaccine? Will you take it, or do you prefer the German one?
Why is Russia discriminated against? Or is Germany scientifically better than Russia?