One interesting development in science funding over the last few decades, with consequences for the progress of Science as a whole, has been a slow but steady shift towards management methods borrowed from the world of business.
This development responds to the need for an objectively measurable output on which to base the award of monetary compensation, jobs and even prizes. Scientists are then required to be responsible for producing the necessary output themselves.
The most obvious outputs to measure are the number of academic articles a scientist produces and the citations those articles receive, although in practice slightly more elaborate aggregate measures, such as the h_hep index, are commonly used. The result of using such a research metric to award jobs is effectively a model resembling a free-market economic system.
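To make concrete how simple such aggregate measures are, here is a minimal sketch of the standard h-index (the basis of citation-metric variants like the one mentioned above): a researcher has index h if h of their papers have at least h citations each.

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with h or more citations each."""
    # Rank papers by citation count, highest first.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the entire research record is collapsed into a single integer, which is precisely what makes the metric convenient to optimize and easy to reward.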
Such a system has the advantage of being objective and providing a sense of fairness. One disadvantage of this practice, however, is what has become known as the trivialization of research: circumstances under which research is approached coldly and solely as a way to produce citations, recognition or funding.
From the sociological perspective, an interesting feedback loop is also generated within the academic system. The method described above filters the field so that only those researchers who produce the most output, as measured by the chosen metric, remain. They become the ones who train the next generation of researchers, and as a whole the system will tend to optimize that metric within certain given constraints, such as the total funding available.
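This selection loop can be caricatured with a toy model (every parameter here is a hypothetical choice for illustration, not an empirical claim): each generation, the top producers by the metric are retained, and each survivor trains a successor who inherits the mentor's output tendency plus some noise, while the total population size, standing in for the funding constraint, stays fixed.

```python
import random

def select_generation(outputs, keep_fraction=0.5, noise=1.0, rng=None):
    """One round of metric-driven selection: keep the top producers,
    then let each survivor train a successor whose output tendency is
    the mentor's plus Gaussian noise. Population size (the 'funding
    constraint') is held fixed."""
    rng = rng or random.Random()
    n_keep = max(1, int(len(outputs) * keep_fraction))
    survivors = sorted(outputs, reverse=True)[:n_keep]
    trainees = [max(0.0, s + rng.gauss(0, noise)) for s in survivors]
    # Refill the population to its original size.
    population = (survivors + trainees)[: len(outputs)]
    while len(population) < len(outputs):
        population.append(rng.choice(trainees))
    return population

rng = random.Random(0)
population = [max(0.0, rng.gauss(10, 3)) for _ in range(100)]
for _ in range(20):
    population = select_generation(population, rng=rng)
# The mean metric output drifts upward generation after generation,
# regardless of whether the metric tracks scientific understanding.
```

The point of the sketch is that the drift happens mechanically: nothing in the loop knows or cares what the metric actually measures.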
The optimization, however, is with respect to the chosen metric and nothing else, which raises the question of whether it could discourage scientific breakthroughs from happening. One way to frame this question is by comparison with business settings, in which the measurable metric is usually monetary profit. Assuming this profit comes from a set of customers, the system tends to automatically optimize some combination of the quantity of goods produced, their quality, and their cost, based on the aggregate preferences of those customers. This is a perfectly reasonable method, which is why it permeates the business corporations of the capitalist world, where profit is indisputably the proxy to optimize.
On the other hand, many would argue that the variable scientists would actually want to optimize is our scientific understanding of Nature. It is far from obvious whether the output metrics described above are directly related to this variable.
The reason becomes clear on deeper analysis: there is no independent set of customers. Rather, paper citations are given by other researchers who are themselves participants in the same social system. Thus feedback loops such as the one described above are generated spontaneously. Another such loop is the one that concentrates all interest in a scientific subfield onto a small subset of specific questions, namely those most likely to yield citable results within a short time frame.
What I have presented here is merely a superficial look at what a study of sociological tendencies in Science can unveil. There are consequences to ignoring this. If we hold on to the belief that researchers are free to pursue their own interests and therefore exercise that freedom, and we let this belief go unquestioned, then we will not be the paragons of rational thinking we hold ourselves to be.