Science Has an Enormous Credibility Problem With No Solution in Sight

In 1912, amateur archaeologist Charles Dawson claimed that he had discovered the "missing link" between early apes and humans. Dawson said that a workman at the Piltdown gravel pit had given him a fragment of the skull four years earlier after the crew had broken the skull into pieces, believing it was a fossilized coconut.

Dawson revisited the Piltdown gravel pit several times, finding additional parts of the skull, some teeth, and half of a lower jaw. Other paleontologists examined the find and confirmed that the bones were human, except for the jawbone, which was ape-like.

Dawson announced his discovery of "Piltdown Man" and generated enormous excitement. The "missing link" was akin to the Holy Grail, and Dawson was celebrated worldwide by scientists and the general public.

In 1953, Piltdown Man was exposed as a forgery. Dawson had created the impression of an ancient hominid by combining the skull of a modern human with an orangutan's jawbone and filed-down teeth. The hoax fooled many scientists for 41 years.

A fraud like Dawson's couldn't happen today. Scanning equipment would immediately reveal a mismatched jawbone and teeth, and chemical analysis would show that the skull was far too recent to belong to an ancient hominid.

But scientific fraud remains a big problem today. "The credibility crisis of science is not about scientific progress invalidating previously held scientific beliefs, which is intrinsic to the very nature of scientific revolutions," write Thomas Plümper and Eric Neumayer in Nautilus. "Rather, the crisis has been caused by scientists who deliberately publish overconfident, misleading, and often simply false empirical results based on research designs or model specifications they have intentionally specified to give the desired results."

"Cooking the books," in ways large and small, has become commonplace today, and there doesn't appear to be anything science can do to regain its credibility. 

If this were a problem limited to scientists trying to fool other scientists or their research and academic institutions, the fraud might go largely unnoticed by the public.

But that's not the case. John Ioannidis, a professor of medicine at Stanford University, wrote a 2005 paper with the ominous title "Why Most Published Research Findings Are False." Ioannidis "showed that the statistics of reported positive findings were not consistent with how often one should expect to find them," according to Nautilus's Philip Ball. Ioannidis concluded in 2014 that "many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted."

Many scientists use the technique of "tweaking." Tweaking is publishing overconfident, misleading, and often manipulated results based on specifications they have intentionally altered to give the desired results. There's nothing accidental about it. It's cheating.

But the practice is so widespread and so thoroughly cooked into modern science that even if it were easier to spot, it's unlikely that it could be adequately policed.

Nautilus:

Tweaking is potentially more damaging to science in the long run than data manipulation and fabrication. That might be hard to believe, since tweaked empirical results are likely to have smaller effects on the fabric of science than cases of data fabrication and manipulation. But the cumulative effect of tweaking can still be larger than that of data fabrication and manipulation because these strategies are rare, whereas tweaking is common.

Ever since the online platform Retraction Watch began monitoring and reporting retractions in 2010, the number of retracted articles per year has steadily increased. Some of this is due to “bulk retractions” of thousands of articles published by so-called paper mills, where authors pay to have fake articles published. We are not interested in these retracted paper-mill publications but in variants of data fraud, a subset of retractions that have also been steadily increasing. Most notably, there have been several high-profile retractions involving work by Francesca Gino of Harvard University and Marc Tessier-Lavigne of Stanford University. And these are just the most recent cases—the ones that stick in the public mind for a while before attention is drawn to other, more spectacular cases of scientific fraud.

"All of this is to say that scientists no longer sit at God’s table, so to speak," write Plümper and Neumayer. "They have become mere mortals in the midst of a massive crisis of trust."

Make no mistake: Tweaking is not about changing the course of science. Nor is it, at least not primarily, about the misuse of public research funds (although it is a scandal that hard-working taxpayers fund the research of tweakers). Rather, tweaking is about scientists pursuing their own interests in a competitive, vulnerable system based on trust and on freedom from control by institutions that enforce rules.

One common form of tweaking arises when researchers don't get the results they expected once an experiment or study is complete. By increasing the sample size or changing the specification of the model, they can nudge the results closer to their expectations.

But it's still fraudulent. Unless the researcher honestly discloses the enlarged sample or other changes, readers are left with the impression that the results emerged exactly as the researcher predicted.
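To see why quietly extending a sample is not harmless, consider a toy simulation (my illustration, not from the article, and assuming NumPy is available). Even when there is no real effect, a researcher who re-tests after adding data whenever the result "fails" will cross the conventional p < 0.05 threshold far more often than the nominal 5% of the time:

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical simulation of "tweaking" by optional stopping: under a true
# null hypothesis (mean = 0), a fixed-sample test rejects ~5% of the time,
# but peeking and extending the sample until p < 0.05 inflates that rate.

rng = np.random.default_rng(0)

def z_p_value(sample):
    """Two-sided p-value for H0: mean == 0, with known sd == 1 (z-test)."""
    z = sample.mean() * sqrt(len(sample))
    return erfc(abs(z) / sqrt(2))  # 2 * P(Z > |z|) for standard normal

def fixed_n_rejects(n=50):
    # Honest design: collect n observations once, test once.
    data = rng.normal(0.0, 1.0, n)
    return z_p_value(data) < 0.05

def optional_stopping_rejects(n=50, extensions=5, step=25):
    # Tweaked design: if the result "fails," add more data and test again.
    data = rng.normal(0.0, 1.0, n)
    for _ in range(extensions):
        if z_p_value(data) < 0.05:
            return True  # stop as soon as the result looks significant
        data = np.concatenate([data, rng.normal(0.0, 1.0, step)])
    return z_p_value(data) < 0.05

trials = 2000
fixed_rate = sum(fixed_n_rejects() for _ in range(trials)) / trials
peek_rate = sum(optional_stopping_rejects() for _ in range(trials)) / trials

print(f"fixed-sample false-positive rate:     {fixed_rate:.3f}")
print(f"optional-stopping false-positive rate: {peek_rate:.3f}")
```

The honest design stays near the advertised 5% error rate; the peek-and-extend design does not, which is exactly why undisclosed sample-size changes mislead readers.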

Science has lost some of its standing with the public. While skepticism about scientific findings can be healthy and is an inherent part of the scientific process, general disbelief and distrust pose significant challenges. Scientists have a vested interest in regaining some of that lost trust. This is easier said than done. But much would be gained if scientists were honest about the uncertainties associated with scientific results—honest with other scientists in scientific publications and honest in public statements. Scientists must learn to distinguish between scientific results and their personal opinions, promote full transparency in scientific research—not hide potential conflicts of interest—and find ways to improve communication with the public to rebuild trust.

That's a tall order. And given the competition for jobs and for research dollars, and the need to publish to survive, it doesn't seem likely that science as an industry will change anytime soon. 
