Among equals, throw dice

White dice

One of the main problems in science nowadays is the distribution of resources. Academic positions, grants, and lab space are all under fierce competition. Evaluation procedures focus on attributing production scores, but they do not consider the uncertainty in these estimates. Random choice among qualified candidates is the best strategy to distribute resources.

1. Who is the best candidate?

The idea is simple, and it is already expressed in the title: to pick among equals, throw dice. All that remains is to build my argument, and I start with an example.

Imagine that a company wants to select a new employee for a specific role. Twenty candidates apply, all of them meeting the minimum requirements. How can one make the best choice?

A selection committee is formed, and a series of tests is prepared. The candidates are examined and scores are given. Most people would say that the fair procedure is to hire the candidate with the highest score. My point here is precisely that this may be far from the best decision.

Suppose that among the 20 candidates, 5 cluster at very high scores, say 9.0, 9.2, 9.4, 9.6, and 9.8. All the others end up with lower scores, around 8. Naturally, the best option should be among those top 5. But is the 9.8 candidate really better than the 9.0 one? That depends on the margin of error.

Consider all the factors that the tests could not capture: a subjective bias of an interviewer, a candidate's bad disposition on the day of the tests, a positive quality that the tests were not designed to measure. Together, these uncertainties lead to error bars that envelop each score, exactly as they do for our experimental data in the lab.
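A quick way to see how fragile the "highest score wins" rule becomes under such error bars is to simulate it. The sketch below is a minimal illustration, assuming (purely for the sake of the example) that each observed score is the candidate's true ability plus Gaussian noise with a standard deviation of 0.5; all the numbers are hypothetical, not from any real selection.

```python
import random

# Hypothetical illustration: five shortlisted candidates whose true
# abilities are 9.0, 9.2, 9.4, 9.6, 9.8, observed through tests with
# Gaussian noise. The noise level (SIGMA = 0.5) is an assumed error
# bar for illustration, not a measured quantity.
TRUE_ABILITY = [9.0, 9.2, 9.4, 9.6, 9.8]
SIGMA = 0.5
TRIALS = 100_000

top_pick_is_truly_best = 0
for _ in range(TRIALS):
    observed = [a + random.gauss(0, SIGMA) for a in TRUE_ABILITY]
    # "Hire the highest score": does it land on the truly best candidate?
    if observed.index(max(observed)) == len(TRUE_ABILITY) - 1:
        top_pick_is_truly_best += 1

print(f"Highest observed score belongs to the truly best candidate "
      f"in {top_pick_is_truly_best / TRIALS:.0%} of trials")
# With these assumed numbers, the answer comes out well short of
# certainty -- roughly half the trials.
```

In other words, under error bars of this size, the ranking within the top cluster carries far less information than it appears to.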

OK, we have attributed scores and accepted that a margin of error is in play; what can we do about it? Computing these uncertainties is not really practical. My humble solution is to accept a certain degree of ignorance and make a qualified random choice.

In my example, 5 out of 20 candidates stand out. Probably any of them would be a good professional for the job. Then why not just throw the dice to decide which of the 5 will take the position?
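To make the proposal concrete, here is a minimal sketch of such a qualified random choice. The margin of 0.8 points, the function name, and the candidate labels are all illustrative assumptions; the point is only that everyone within the error bar of the top score enters the draw with equal probability.

```python
import random

def qualified_random_choice(scores, margin=0.8):
    """Draw uniformly among all candidates whose score lies within
    `margin` of the top score (the margin is an assumed error bar)."""
    best = max(scores.values())
    shortlist = [name for name, score in scores.items()
                 if score >= best - margin]
    return random.choice(shortlist)

scores = {"Ana": 9.0, "Bo": 9.2, "Cai": 9.4, "Dee": 9.6, "Eva": 9.8,
          "Flo": 8.1, "Gus": 7.9}
# Any of the five top-cluster candidates wins with probability 1/5;
# the two clearly lower scores never enter the draw.
print(qualified_random_choice(scores))
```

Under the error bars assumed above, choosing Ana this way is no less fair than choosing Eva: statistically, the committee cannot tell them apart.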

2. A simple but discomforting idea

I have been defending this idea for a long time, and I always meet the same kind of resistance. People look at me as if I were asking them to give up their free will. They do not like the feeling of not controlling the process. They also often argue that introducing a random variable into the game is somewhat unfair.

But in selecting the highest score, the control was just an illusion. We have no guarantee that the 9.8 candidate was really better than the 9.6 or even the 9.0 candidate. And as soon as we look at the selection from a statistical point of view, there is absolutely nothing unfair about choosing a lower score over a higher one, since the estimates overlap within the error bars.

Although this should be obvious, I often see selection procedures moving in exactly the opposite direction, toward hyper-determination. In a tie, say two candidates scoring 9.8, the committee will apply more tests until the tie is broken: “Ah, 9.87 against 9.85! We have a winner.” It is ridiculous, but incredibly common.

And it is not only that hyper-determination hurts my statistical sense. I am quite convinced that it is at the root of many problems faced in diverse areas, such as the hyper-specialization of student exams or fraud in science.

3. Pro-choice in science

One of the main problems we face in science nowadays is the set of evaluation procedures used to distribute resources. Academic positions, grants, and physical space in the labs are all under fierce competition.

These resources are usually distributed after a productivity evaluation, perhaps complemented by a peer-review process, which sorts researchers according to their qualifications and production levels. Not bad. I agree that this should be the way to do it.

The problem starts with the analysis of the results. Hyper-determination is the rule: higher scores, more resources. It is a complete disregard for statistics from the very people who should be most conscious of it.

This pressure for productivity has had a bad effect on science. Fraud, shallow publications, plagiarism, and the Matthew effect are spreading through the labs.

The qualified random selection that I advocate would have a positive effect on these issues. If, instead of aiming at maximum scores, evaluation processes aimed only at placing candidates within the qualified range, the hypertrophy of scientific production would likely recede.

Scientists, in their eternal struggle for resources, would then work to do good science and to be among the best. But they would have no reason to maximize parameters like the number of publications, the H-index, impact factors, and other scientometric paraphernalia that have little to do with the main reason for doing science. Do you still remember what that is?

MB



