Quantitative evaluation of scientific production isn’t disrupting science. On the contrary, it’s the best we can do. The alternative is to give in to bias and prejudice.
Put two or more scientists together at a social gathering and, within a couple of minutes, they will start complaining about how the pressure to publish more, better, and sexier papers is killing science.
If you’re among those whiners, I’m sorry, but I beg to disagree.
Academic research is mostly funded by public resources. It’s only fair that some kind of product comes out of this investment. If not papers (or patents, software, and other measurable products), what would you suggest scientists show for their work?
Should society blindly fund their peripatetic dilettantism, expecting that after decades of seclusion a professor will appear on the university balcony and triumphantly reveal the hidden truths of the universe?
No. I know you don’t expect that. But, seriously, what could be better than requiring that scientists regularly show the progress of their work?
Sometimes we seem to forget that scientists are human. Without any pressure for results, what would keep them from anchoring themselves to tenured laziness and eating their own brains like the proverbial tunicate? Just look at any of those dusty departments in low-ranked universities and you’ll see that this is exactly what happens when scientists are left on their own.
I don’t dispute that evaluation and reward systems based on counts of peer-reviewed papers, citations, and other scientometric figures may be ill-conceived and outdated in many ways. In fact, I’ve discussed this at length before (check the to-do list at the end of this post). However, rather than simply shutting those systems down, we should aim at adjusting their goals and procedures.
Is the pressure to publish yielding an overproduction of irrelevant results? Is the appraisal of citation indexes inducing unethical behavior? Is the competition within and between labs leading to poorer outputs than we would get under a cooperative system?
If any such distortions are happening, they should be fixed. But we should fix them just as we do in the lab: objectively diagnosing problems, proposing solutions, quantifying results, analysing uncertainties, and adjusting criteria as needed to reach our goals.
Of course, personally, I don’t like having my work scrutinized, criticized, and squeezed into a soup of numbers and indexes every time I apply for a promotion or a new grant. But, sincerely, I can’t think of any better way of evaluating and distributing scarce resources among thousands of competing scientists than doing exactly that.
Surely, being treated as a number is unpleasant. However, I enjoy even less the prospect of being subjected to a “qualitative analysis,” as often praised by people angry at quantitative evaluation criteria. If the cost of running such a system doesn’t impress you, just consider that qualitative evaluation could well be translated as envy, frustration, arrogance, condescension, vengeance, and all the other kinds of bias and prejudice humans are so skilled at.
Therefore, if we want to improve the evaluation systems, we should aim at making them more quantitative, not less!
Maybe part of the antagonism toward quantifying scientific production is a reaction to the feeling of intimidation that contemporary science causes. We see science as a world that can no longer be grasped by a single clever mind. Practiced by a dozen million professionals worldwide, branched into countless subspecialties, with results published in tens of thousands of overspecialized journals, our eyes get lost in an oppressively near horizon of the scientific landscape. We dream of simpler times, when scientists dedicated their lives to a single subject, papers had a single author, and nobody cared whether you got cited or not. (Did such a time ever exist?)
However, scary and intimidating as it may be, I’m sure that this massive, uncontrollable, unfathomable amount of scientific information we are producing is a good thing. But to lie comfortably in this bright corner, I have to accept science as a wilderness, of which I’ve explored only the hills and rivers around my village (with occasional excursions into foreign grounds).
From this perspective, I have no alternative but to also accept that the only chance of building a functional, efficient, and unbiased bureaucracy to map and administer this scientific world is to have me and my work reduced to numbers in a spreadsheet.
At this point, my message is simple: publish more, publish well. And don’t be afraid of quantitative evaluations. Without them—be sure—things could be much worse.
The MBO’s to-do list in science:
- Improve peer-review.
- Improve selection procedures.
- Take care of gender inequalities.
- Modernize papers.
- Better define intellectual contributions.
- Tackle the job problem.