Canada Kicks Ass
Nobel Prize winner: "How journals... are damaging science"


DrCaleb @ Tue Dec 10, 2013 10:08 am

Quote:
How journals like Nature, Cell and Science are damaging science
The incentives offered by top journals distort science, just as big bonuses distort banking

Randy Schekman
The Guardian, Monday 9 December 2013 19.30 GMT

I am a scientist. Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives. The prevailing structures of personal reputation and career advancement mean the biggest rewards often follow the flashiest work, not the best. Those of us who follow these incentives are being entirely rational – I have followed them myself – but we do not always best serve our profession's interests, let alone those of humanity and society.

We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.

These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals' reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.

These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called "impact factor" – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.

It is common, and encouraged by many journals, for research to be judged by the impact factor of the journal that publishes it. But as a journal's score is an average, it says little about the quality of any individual piece of research. What is more, citation is sometimes, but not always, linked to quality. A paper can become highly cited because it is good science – or because it is eye-catching, provocative or wrong. Luxury-journal editors know this, so they accept papers that will make waves because they explore sexy subjects or make challenging claims. This influences the science that scientists do. It builds bubbles in fashionable fields where researchers can make the bold claims these journals want, while discouraging other important work, such as replication studies.

In extreme cases, the lure of the luxury journal can encourage the cutting of corners, and contribute to the escalating number of papers that are retracted as flawed or fraudulent. Science alone has recently retracted high-profile papers reporting cloned human embryos, links between littering and violence, and the genetic profiles of centenarians. Perhaps worse, it has not retracted claims that a microbe is able to use arsenic in its DNA instead of phosphorus, despite overwhelming scientific criticism.

There is a better way, through the new breed of open-access journals that are free for anybody to read, and have no expensive subscriptions to promote. Born on the web, they can accept all papers that meet quality standards, with no artificial caps. Many are edited by working scientists, who can assess the worth of papers without regard for citations. As I know from my editorship of eLife, an open access journal funded by the Wellcome Trust, the Howard Hughes Medical Institute and the Max Planck Society, they are publishing world-class science every week.

Funders and universities, too, have a role to play. They must tell the committees that decide on grants and positions not to judge papers by where they are published. It is the quality of the science, not the journal's brand, that matters. Most importantly of all, we scientists need to take action. Like many successful researchers, I have published in the big brands, including the papers that won me the Nobel prize for medicine, which I will be honoured to collect tomorrow. But no longer. I have now committed my lab to avoiding luxury journals, and I encourage others to do likewise.

Just as Wall Street needs to break the hold of the bonus culture, which drives risk-taking that is rational for individuals but damaging to the financial system, so science must break the tyranny of the luxury journals. The result will be better research that better serves science and society.

Quote:
The journal Science has recently retracted a high-profile paper reporting links between littering and violence. Photograph: Alamy/Janine Wiedel



http://www.theguardian.com/commentisfre ... ge-science
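
As a rough illustration of the "impact factor" score the article describes, here is a minimal sketch of the standard two-year calculation; the journal and all its numbers are invented:

Code:
# Standard two-year journal impact factor: citations received this year
# to papers the journal published in the previous two years, divided by
# the number of citable items it published in those two years.

def impact_factor(citations_this_year: int, citable_items: int) -> float:
    """Two-year impact factor. All inputs here are hypothetical."""
    return citations_this_year / citable_items

# A made-up 'luxury' journal: 800 papers over two years, 24,000 citations.
print(impact_factor(24_000, 800))  # 30.0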

Some may remember that the journal The Lancet published a study linking the MMR vaccine to rising rates of autism, which had to be retracted because it was entirely false. There are outbreaks of preventable disease in North America today because of that manufactured link, since some parents no longer vaccinate their kids. And the children pay the price, because one doctor wanted to manufacture evidence for a civil suit.

   



BartSimpson @ Tue Dec 10, 2013 10:30 am

Science is and always has been corrupted by the influence of those who pay for it. Just the way things work.

   



DrCaleb @ Tue Dec 10, 2013 10:43 am

BartSimpson:
Science is and always has been corrupted by the influence of those who pay for it. Just the way things work.


It used to be that universities did research for research's sake alone. Now they use the number of published articles as the measure of a researcher's value.

Professor Higgs published only five papers in his career, and won a Nobel Prize. He was recently quoted as saying that, because of the new 'Publish! Publish! Publish!' mentality, he'd never have kept his job in today's work environment.

But the point I think this guy is making is that science belongs to everyone, not just to people who subscribe to a certain journal, so it should be published where it's freely accessible to everyone.

   



N_Fiddledog @ Tue Dec 10, 2013 11:34 am

Also Climategate showed us peer review is vulnerable to ideological corruption when a cabal of politically motivated ideologues get control of the back channels of respected journals.

   



BartSimpson @ Tue Dec 10, 2013 11:56 am

N_Fiddledog:
Also Climategate showed us peer review is vulnerable to ideological corruption when a cabal of politically motivated ideologues get control of the back channels of respected journals.


+5

   



Zipperfish @ Tue Dec 10, 2013 12:04 pm

It goes a lot further than just the big journals. In a fascinating (though strangely ignored) study:

Why Most Published Research Findings Are False


Quote:
Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

It can be proven that most claimed research findings are false
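
The arithmetic behind that headline claim is worth seeing. Here is a minimal sketch of the paper's positive-predictive-value formula, with its bias term omitted and purely illustrative parameter values:

Code:
# Ioannidis (2005): the probability that a claimed finding is true, given
# alpha (type I error rate), beta (type II error rate), and R, the
# pre-study odds that a tested relationship is real (bias term omitted).

def ppv(alpha: float, beta: float, R: float) -> float:
    """Positive predictive value: (1 - beta) * R / (R - beta * R + alpha)."""
    return (1 - beta) * R / (R - beta * R + alpha)

# With a p < 0.05 threshold and 1 in 10 tested hypotheses actually true:
print(ppv(alpha=0.05, beta=0.20, R=0.1))  # ~0.62 at 80% power
print(ppv(alpha=0.05, beta=0.80, R=0.1))  # ~0.29 at 20% power: mostly false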

   



sandorski @ Tue Dec 10, 2013 5:16 pm

BartSimpson:
N_Fiddledog:
Also Climategate showed us peer review is vulnerable to ideological corruption when a cabal of politically motivated ideologues get control of the back channels of respected journals.


+5


-5

   



BartSimpson @ Tue Dec 10, 2013 5:57 pm

sandorski:
BartSimpson:
N_Fiddledog:
Also Climategate showed us peer review is vulnerable to ideological corruption when a cabal of politically motivated ideologues get control of the back channels of respected journals.


+5


-5


You're a dick.

   



DanSC @ Tue Dec 10, 2013 6:04 pm

sandorski:
BartSimpson:
N_Fiddledog:
Also Climategate showed us peer review is vulnerable to ideological corruption when a cabal of politically motivated ideologues get control of the back channels of respected journals.


+5


-5

We're taking the side of poor science now?

   



DrCaleb @ Wed Dec 11, 2013 7:48 am

Zipperfish:
It goes a lot further than just the big journals. In a fascinating (though strangely ignored) study:

Why Most Published Research Findings Are False


Quote:
Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

It can be proven that most claimed research findings are false


I read that one too, and I agree; why are people not shouting this from the rooftops? If it's not reproducible, it's not science. Science that lacks credibility is Science Fiction.

I recall one well-respected researcher in Asia (can't recall who) admitting that pretty much all of his experimental data was made up, and that the papers based on it were worthless. How does society progress based on lies?

   



andyt @ Wed Dec 11, 2013 7:58 am

In the end it's all about what we believe: believe it and you will see it. That seems most prevalent in medical research, because of the enormous bucks involved.

   



DrCaleb @ Wed Dec 11, 2013 8:31 am

andyt:
In the end it's all what we believe. Believe it and you will see it. Seems to be most prevalent with medical research, because of the enormous bucks involved.


That's actually the exact opposite of science. Belief does not enter into it. That's the trap many fall into: they can't believe it, so to them it isn't real.

Data is the key to good science: reproducible results that confirm or deny a conclusion. Anyone can drop a feather and a rock off the Leaning Tower of Pisa, and anyone can see and measure the result. If the feather hits the ground first even once, that tells us much more than the rock hitting first a million times.

   



andyt @ Wed Dec 11, 2013 8:53 am

If you believe the feather got there first, it did. And I think we have a lot of that in science. Take the paradigm shift, for instance: proposition X just has to be true, everybody agrees, until somebody manages to break through and demonstrate that Y seems more likely. Maybe many earlier champions of Y were ignored or ridiculed because they didn't have the right PR.

BTW, I thought it was two rocks, no feather.

   



DrCaleb @ Wed Dec 11, 2013 9:27 am

andyt:
If you believe the feather got there first, it did. And I think we have a lot of that in science.


What you describe isn't science, and that is the whole point of the article. In an experiment there is no 'belief'; there is measurement. Detectors will say which got there first, the rock or the feather. There are no opinions involved. Belief in place of measurement is pseudo-science; we thought we had eliminated that back in the Renaissance, but apparently Texas wants a return to the Middle Ages.

andyt:
The paradigm shift for instance, where proposition X just has to be true, everybody agrees, until somebody manages to break through and demonstrate that Y seems more likely. Maybe many other champions of Y were ignored or ridiculed because they didn't have the right pr.


People confuse the malleability of science, its willingness to draw new conclusions from new observations, with the notion that it was false to begin with. The problem with modern science is that it's no longer the easily observable things we are looking at. All the easy stuff, like the feather and the rock, has been discovered. Instead, we have things called 'five sigma' probabilities. That means we run millions or billions of experiments (like the search for the Higgs boson) and build a statistical case that a given conclusion is true, with only about a one-in-3.5-million chance that the result is a statistical fluke. It's never 100%, because of the nature of statistical analysis.

That leaves the door open to the rare results that don't fit the conclusion. And some people look at those and, rather than discounting them as measurement error or some other mistake, see them as a falsification of the conclusion. These are the people lotteries were designed for. Even the ancient Greeks and Romans knew lotteries always favour whoever is selling the tickets. ;)
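
For a concrete sense of what those odds mean, here is a minimal sketch converting sigma levels into fluke probabilities (one-sided, the particle-physics convention; assumes scipy is available):

Code:
# Convert sigma levels to the probability of a result at least that
# extreme arising by chance (one-sided), using the normal distribution.
from scipy.stats import norm

for sigma in (3, 4, 5, 6):
    p = norm.sf(sigma)  # survival function: P(Z > sigma)
    print(f"{sigma} sigma: fluke probability {p:.1e} (about 1 in {1 / p:,.0f})")

# 5 sigma, the discovery threshold used for the Higgs boson, comes out
# to roughly 1 in 3.5 million.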

andyt:
BTW, I thought it was two rocks, no feather.


Galileo is believed to have devised the thought experiment around a one-pound ball and a hundred-pound ball dropped from the tower. The accepted conclusion at the time was that the hundred-pound ball would fall a hundred times faster. I think his daughter wrote about someone actually doing it, and there was only a fractional difference in fall time, not a hundredfold difference. There was enough error in releasing the balls at the same instant, and in observing them, that modern science would rate the result at only four or five sigma.
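
For reference, the ideal fall time is easy to work out and does not depend on mass; a minimal sketch, with the tower height only approximate:

Code:
# Ideal (no air resistance) fall time from the tower: t = sqrt(2h / g),
# independent of the ball's mass.
import math

g = 9.81  # m/s^2, gravitational acceleration at the Earth's surface
h = 56.0  # m, approximate height of the Leaning Tower of Pisa

t = math.sqrt(2 * h / g)
print(f"Both balls land after about {t:.2f} s")  # ~3.38 s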

Astronauts on the moon used a feather and a hammer. The result was conclusive to the observer.

That's the beauty of science, you can design your experiment any way you want, so long as the theory fits the available data and is reproducible.

   



Zipperfish @ Wed Dec 11, 2013 9:36 am

DrCaleb:

I read that one too, and I agree; why are people not shouting this from the rooftops? If it's not reproducible, it's not science. Science that lacks credibility is Science Fiction.

I recall one well respected researcher in Asia (can't recall who) admitted that pretty much all his experimental data was made up, and his papers based on it were worthless. How does society progress based on lies?


There's a rampant confirmation bias, with scientists rejecting the null hypothesis in order to find effects that aren't really there.

For non-science types: when you run a study, you don't actually test your hypothesis; you test the null hypothesis:

Hypothesis: Sugar makes kids hyperactive
Null Hypothesis: Sugar has no effect on children's hyperactivity.

In order to find an effect, you have to "reject the null hypothesis" (that sugar has no effect). Failing to reject the null hypothesis means you didn't find anything, which isn't going to get you published anywhere.
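
A minimal sketch of that procedure, with simulated scores standing in for a real sugar study (assumes numpy and scipy):

Code:
# Two-sample t-test of the null hypothesis that sugar has no effect.
# The scores are simulated from identical distributions, so the null
# is true by construction.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
sugar_group = rng.normal(loc=50, scale=10, size=30)
control_group = rng.normal(loc=50, scale=10, size=30)

t_stat, p_value = ttest_ind(sugar_group, control_group)
print(f"p = {p_value:.3f}")
if p_value < 0.05:
    print("Null rejected: an 'effect' (here, necessarily a false positive)")
else:
    print("Failed to reject the null: nothing found, and hard to publish")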

The use of statistics is also a problem: uncertainties are not being accurately communicated. When you go through a contrived statistical exercise to show the effect you're looking for, you tend to multiply the existing uncertainties until the uncertainty itself is greater than the effect you're after. And if the uncertainty is greater than the effect, you can't reject the null hypothesis, and you're not going to get published.
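
A small sketch of how that happens, using the standard quadrature rule for combining independent relative uncertainties; the percentages are invented:

Code:
# For a quantity built by multiplying independent measurements, relative
# uncertainties combine in quadrature, so a derived result can end up
# more uncertain than the effect being claimed.
import math

relative_errors = [0.05, 0.08, 0.10, 0.12]  # invented per-input uncertainties
combined = math.sqrt(sum(e ** 2 for e in relative_errors))
print(f"Combined relative uncertainty: {combined:.1%}")  # ~18.3%

effect = 0.10  # a hypothetical 10% effect
print("Uncertainty exceeds the effect:", combined > effect)  # True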

Finally, the peer-review system doesn't seem to be working well. There have been some pretty obvious errors in some pretty big papers, Sheldon Cooper's new element being but one example.

And then, on top of that, you've got these quasi-scientific "research institutes", advocacy groups masquerading as scientists, clogging up the interwebz with their crappy research.

No wonder science has lost its authority.

   


