
In late July, the Star Tribune published the discouraging news that Sylvain Lesné, a University of Minnesota neuroscientist, is alleged to have doctored data in an influential paper exploring the causes of Alzheimer's disease. A few days later we learned that an Ohio State University cancer research lab manipulated images in its publications.

It is disheartening to learn that scientists cheat. Misbehavior among researchers slows the search for cures for diseases like Alzheimer's and cancer and adds to a growing skepticism about science.

As one who has studied cheating in science, I have good news and bad.

The good news: Very few scientists falsify their data. A survey I conducted with colleagues at the University of Minnesota and HealthPartners revealed that fewer than 1% of scientists admitted to falsifying or making up data. A more recent study from the Netherlands found a slightly higher but still small percentage of scientists admitting to manipulating their data.

The bad news: Even if only a few scientists cheat, their false findings can have an outsized effect. When bad data enter the scientific record, future research by honest scientists is distorted. This is the concern with the "groundbreaking" but problematic research done at the University of Minnesota. For more than 15 years, other researchers used Lesné's questionable findings as the basis for their work.

And there's more bad news: Altering data is not the only way scientists cheat. Conflicts of interest can bias results, sometimes by leading researchers to ignore data that do not support their theories. Unfortunately, many scientists admit to less-than-honest practices such as these. Fully 33% of the scientists we surveyed admitted to at least one of 10 serious misbehaviors, and more than half of the scientists in the Dutch study reported that they engaged frequently in at least one "questionable research practice."

Why do scientists cheat? Consider these three possibilities. Scientists who cheat:

1) Are bad people.

2) Are good people who do not understand the rules of science.

3) Are caught in a system that encourages them to "cut corners" to succeed.

Many people assume that bad science is done by bad people, the kind who also might cheat on their taxes, in a poker game or on their partners. There are, of course, dishonest people in every profession. But it is unlikely that a career in science would attract such a high proportion of cheaters — the 30 to 50% who admit to misbehavior.

Perhaps scientists aren't cheating at all; perhaps they simply do not understand the rules governing how data should be collected and analyzed. Are there situations that allow you to ignore data that contradict your theories? When is it acceptable to take money from an industry that has a strong financial interest in your results? As a supervisor of graduate students, do you really need to check their findings before they are published?

If the problem is ignorance of the rules, the solution must be more training. Enter "Responsible Conduct of Research" courses. A typical RCR course reviews the "range of accepted scientific practices for conducting research" and informs scientists about the "regulations, policies, statutes, and guidelines that govern the conduct of federally funded research" to "promote compliance with the same."

RCR training is now mandatory for all researchers funded by the National Institutes of Health and the National Science Foundation.

Yet our research found that RCR education has no consistent effect on cheating. In fact, in some cases training actually increased the likelihood of dishonesty.

Which leaves us with the third possibility: that cheating is generated by the way we organize and pay for science. In the U.S., we believe that competition, not collaboration, is the best way to increase scientific knowledge. We ask scientists to compete for research grants, for publications, for recognition via traditional and social media. To the winner go the spoils: the best jobs, promotions and continued funding. And the loser? Driving a taxi.

This kind of competition, especially in the current market where too many graduate students are looking for too few jobs, creates pressure to cut corners both small (eliminating outliers) and large (manipulating data). Instead of laboratories working together to find the cause of and cure for Alzheimer's, they work against each other. Instead of sharing data, researchers are forced to find ways to get ahead of the competition.

One researcher told us he would intentionally publish misleading reports to slow the progress of competing labs.

If we wish to prevent dishonest science, we must find ways to use funding to drive collaboration. An obvious way to do this is to fund consortia, not individuals. Bring together scientists who are working on the same problem and reward them for sharing their research. Help redirect researchers' goals from protecting their jobs to the true objective of science: unraveling the mysteries of nature, conquering disease, protecting our environment, and promoting the health and welfare of all.

Consortia offer the added benefit of engaging science skeptics who doubt expert advice on vaccines and public health measures. Invite members of the public (the more skeptical the better) to join the research team, and they will see firsthand how science is done.

We expect better. Let's create conditions that encourage scientists to do their work honestly.

Raymond De Vries is a professor at the Center for Bioethics and Social Sciences in Medicine at the University of Michigan.