1. A plainly wrong paper
The main results of the paper were obviously wrong. This was clear to me and my colleagues as soon as we laid eyes on the paper. We wondered how the authors, the reviewers, and the editor failed to notice and allowed it to be published.
I won’t name names here, but it’s a real case, and it took place not long ago. And I’m not talking about a publication lost in one of those obscure journals flooding our spam boxes. The paper appeared in a traditional, high-impact-factor journal.
It may have helped that the bylines carried three well-known names in the field. Did the reviewers feel intimidated? After all, if those people say the phenomenon should behave like that, they are probably right, right? Or maybe the reviewers didn’t even notice the error: anyone who publishes regularly knows how lousy and lazy reviewers can be.
This case would be a mere anecdote if it were not a symptom of something more serious happening in academia. I have seen too many papers with obvious mistakes, flawed logic, misinterpretations, and unacceptable methodologies to believe this was an isolated case.
It worries me that papers, the main medium for scientific reports, are becoming unreliable.
It’s ironic that 20 years after the Sokal affair—when the physicist Alan Sokal published a hoax paper in a prominent cultural studies journal to expose the low intellectual rigor in that field—the hard sciences may be suffering from the same problem.
I’m far from having a diagnosis of the situation. That would be a research project in itself. But my bet is that people are simply publishing too much. It is easy to find leading researchers publishing 30 or 40 papers a year. And to sustain these sky-rocketing numbers, how much quality control has to be compromised?
I don’t believe the authors of the wrong paper did it on purpose. Most likely, a young postdoc did sloppy work; his boss was too busy managing his several grad students to notice; and the external senior co-authors lacked the expertise to judge the results properly.
2. Towards corporate science
I see these things happening, and it truly scares me that they may happen to me too. I’m involved in more projects than I probably should be. In some of them, I don’t play any major role and my participation is secondary: I may be acting more as a consulting specialist, or as a contractor performing specialized tasks.
In other projects, although I’m leading them, an assistant may be doing the lab work, and I may never even see the raw data. You can guess the danger.
What I do to keep standards up is to be very critical of any manuscript I get from my co-authors, questioning everything from the data presented to the style of the figures. I’m pretty sure some of my colleagues are quite annoyed by that; but, from my viewpoint, even all this care may not be enough.
The problem lies at the roots of contemporary science: while our research has, say, a corporate character, we still think of it (and present it) as if it were a piece of individual intellectual work.
Just take a second to appreciate the figure below: three snapshots of important moments in science—Newton’s Principia, Einstein’s special relativity, and the ATLAS collaboration’s detection of the Higgs boson.
From Latin, through German, to English; from a book, through a paper, to a public digital report; from an invited book, through a peer-reviewed paper, to an open, community-reviewed report; from single-author works to a collaboration involving over 3,000 scientists.
The ways of doing and reporting science have changed a lot. Recognizing that is the first step toward improving the way we manage our projects.
3. Project managing 101
The ATLAS collaboration is an extreme case, but, in fact, any scientific project today is the result of the work of many different people: technicians (who may not even appear in the paper’s bylines), grad students, postdocs, research assistants, and senior professors.
Each of these people has different expertise and responsibilities. But the moment the final product of this enterprise is reduced to a paper signed by “authors,” without any specific role attribution, the whole accountability chain is contaminated and individual responsibilities are diluted.
And the problem goes deeper.
Our conventional job titles—grad student, postdoc, professor—were forged in a very different scholastic context and are quite inappropriate for managing contemporary research projects. We would probably do better if we had, for instance, project assistants, project leaders, and project managers.
It may sound like heresy, but academia should seek help from the private sector’s experience with research to improve its project-management skills. No matter how exciting the results in our papers are, when it comes to “initiating, planning, executing, controlling, and closing the work of a team to achieve specific goals and meet specific success criteria,” we are truly amateurs.
We simply have no training for it, we don’t know the best practices, we don’t recognize it as important, and we presumptuously think it is enough to play it by ear.
This amateurism in project management may be the ultimate reason why unreliable or plainly wrong papers get published, even in what we consider to be the best journals.
Probably, given time, the scientists, research institutions, and funding agencies that find better ways to manage science will prevail. Survival of the fittest, I dare say. I hope that our generation, with our academic titles and wrong papers, won’t just be a big joke to them.
- I invite you to take a look at the post “Who are the paper’s authors?”, where I discuss how authorship attribution should evolve into a system of explicit credits.
Categories: Science Policy