The Flawed System for Science

November 12, 2022

Science is awesome. Armed with our senses, tools, and reasoning, we can unveil amazing facts and even predict the future. But for all the beauty, elegance, and simplicity of the scientific method, the process of contributing to the scientific community has become inane. It's exploitative for scientists and unreliable for society.

The scientific method is the well-known procedure of observation, question, hypothesis, experiment, analysis, and conclusion. But how do these conclusions become part of the scientific canon? That is, how do they become accepted as genuine? Currently, there are a few institutions that act as gatekeepers. They are called scientific journals. Some famous ones are Nature, Cell, Science, and The Lancet. When a study is accepted into a journal, it's considered scientific knowledge.

Over time, society has ascribed rigor, objectivity, and prestige to these journals. So much so that the careers of scientists depend on the number of publications and citations they have in them. But when we look deeper into how the system operates, we discover that the fame is far from justified. Many studies with obvious flaws have been published in journals, only to be retracted years later, after they had already been cited multiple times. Others never passed replication.

This process lays out a perverse set of incentives. As Stephen Buranyi explains in his article "Is the staggeringly profitable business of scientific publishing bad for science?":

Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place.

So, while big publishers boast huge profit margins (Reed-Elsevier, now RELX, reported 36%), academic researchers have low salaries and get no money for the grueling process of peer review. Some might argue that this is fine because research is rewarding in its own right. But while it's true that many scientists love their jobs, this doesn't mean that they should get the short end of the stick. In fact, people working at universities don't get to do a lot of research. David Matthews writes in Times Higher Education:

"Those lucky enough to have become full professors – supposedly the light at the end of the tunnel for struggling junior scholars – spend just 17 per cent of their time on their own research. Teaching, research supervision and “management and organisational tasks” were all bigger commitments."

Given these concerning issues, we are left to wonder why scientists, perhaps the smartest among us, are subjugated to a system in which they hold the least power. The way to get ahead in the scientific community is to earn prestige, as measured by the number of publications and citations in scientific journals. So the goal of the academic researcher is not to investigate the areas most promising for humanity, but to focus on what will get published and cited. These objectives are at odds. Agnes Callard, Associate Professor at the University of Chicago, laments in her essay Publish or Perish that

academic writing is obsessed with other academic writing—with finding a “gap in the literature” as opposed to answering a straightforwardly interesting or important question

Another worrying effect of this system is called publication bias. Scientists are more likely to submit studies if the results are surprising or statistically stronger, because those are the kinds of studies that journals want to publish. This means that lots of important studies will never be submitted, either because the effect is expected or because it isn't big enough. Worse, two of the main reasons why studies get extraordinary results are methodological errors and statistical anomalies. Just like a broken clock is right twice a day, if data is collected and analyzed enough times, strange effects are bound to appear. The process of trying different analyses and conclusions until something interesting turns up is called p-hacking. It's true that p-hacked studies usually fail replication, but not before they are published, cited, and picked up by mass media.
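The broken-clock point can be made concrete with a small simulation (mine, not from any study cited here). Each "study" below collects pure noise; an honest study runs one statistical test, while a p-hacked study runs twenty analyses and reports only the best p-value. Even though there is no real effect anywhere, the second strategy finds a "significant" result most of the time:

```python
import math
import random

random.seed(0)

def z_test_p(sample, mu=0.0):
    """Two-sided z-test p-value for the mean of `sample` against mu,
    using the sample standard deviation (a fine approximation for n=100)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    z = (mean - mu) / math.sqrt(var / n)
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(n_analyses):
    """Simulate one study: each analysis looks at a fresh batch of pure
    noise (a stand-in for testing different subgroups, outcomes, or model
    specifications) and the smallest p-value found is reported."""
    return min(
        z_test_p([random.gauss(0, 1) for _ in range(100)])
        for _ in range(n_analyses)
    )

trials = 1000
honest = sum(run_study(1) < 0.05 for _ in range(trials)) / trials
hacked = sum(run_study(20) < 0.05 for _ in range(trials)) / trials
print(f"false-positive rate, one test per study:  {honest:.0%}")
print(f"false-positive rate, best of 20 analyses: {hacked:.0%}")
```

The honest rate stays near the nominal 5%, while cherry-picking from 20 analyses pushes it toward 1 − 0.95²⁰ ≈ 64% — a majority of pure-noise studies come out "significant".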

In summary, scientific publishing is broken:

  • Journals are expensive. Universities must pay for them or risk having their staff miss out on the latest discoveries.
  • Universities pay low salaries, considering the high demands of the job and the credentials required to get the position in the first place.
  • Academic scientists don't have a lot of time for research.
  • Many studies are government-funded through grants, yet they are behind paywalls.
  • Scientists have to pay high publication fees, yet they have to review studies for free.
  • Journals have the incentive to publish positive and interesting results instead of negative ones (publication bias). They are also more likely to accept studies on more fashionable subjects, and novel and challenging results, over more important and solid work. This undermines the validity of published studies.
  • Most people cannot read papers since the language is aimed at other scientists. Often this is the result of scientists trying to sound smart to their peers, though the effect tends to be the opposite.
  • Researchers rely too much on universities. They depend on them to get their salaries, equipment, grants, and access to other journals. So they are less likely to speak their minds on controversial topics, or to try lines of research that run counter to the line set by the university.
  • Source data is not accessible. Most papers show only a summary of the results, not the data used for the analysis. Raw data makes it easier for others to spot errors, so publishing it would increase the reliability of the analysis.

For the fundamental endeavor of creating scientific knowledge, we rely on a process that exploits scientists so that a few companies can make big profits. Most scientists have chosen their work out of love for seeking out the truth. They fight to get funding for pricey projects. They move to places with poor living conditions for their research. That we have a system that serves them so poorly should be a scandal.

Alternatives for Scientists

Until we change things, there are alternatives for scientists who refuse to play this rigged, senseless game:

New Promising Journals

The big players do provide higher prestige, but they don't have a monopoly on it. Randy Schekman is a Nobel Prize winner and the editor of eLife, an internet journal based on open science and technology innovation. He pledges:

I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.

Granted, it's easier to go against the establishment after winning the Nobel Prize, but that doesn't taint the effort.

Independent Research

This is the path taken by scientists turned bloggers or cutting-edge technology consultants. They do not work as staff for any university, government, or private company. For example, Alexey Guzey runs his own website and is the president of New Science, an institution created with the goal of "do[ing] to science what Silicon Valley did to entrepreneurship".

This path has a big caveat: it depends on how good someone is at internet marketing. I should note, however, that traditional research also has its fair share of popularity contests.


Entrepreneurship

An option for courageous scientists is to start a business, to become entrepreneurs. The business can be loosely or tightly related to their main area of research. Of course, they need to gauge how much time the business side will require, so as to leave enough for actual research.

The most common objection to this option is that running a business leaves no time for research but, as shown earlier, people in academia don't get to spend much time on it either.

Colin Percival is a computer scientist, PhD, and winner of multiple national math competitions. He runs Tarsnap, a company that provides high-reliability cloud backup services. He wrote a heartfelt article called On The Use of a Life where he explains why he decided to leave academia:

In many ways, starting my own company has given me the sort of freedom which academics aspire to. [...] academic institutions systemically promote exactly the sort of short-term optimization of which, ironically, the private sector is often accused. Is entrepreneurship a trap? No; right now, it's one of the only ways to avoid being trapped.