
In response to growing concern over flawed research practices, Stanford University has created a new center dedicated to studying and improving the process of research itself. We’ll talk about the most common scientific shortcomings, proposed solutions and how to judge whether research is reliable.

Guests:
John Ioannidis, professor of medicine at Stanford University and co-director of the Meta-Research Innovation Center at Stanford (METRICS)
Steven Goodman, professor of medicine at Stanford University and co-director of METRICS
Barbara Spellman, professor of law at the University of Virginia and editor-in-chief of "Perspectives on Psychological Science"

  • Guest

    How reliable is the mainstream media?

    In 1985 a jury concluded that the CIA killed Kennedy, and the media never once reported on it.
    http://www.globalresearch.ca/miami-jury-cia-involved-in-jfk-assassination/5379965

  • Peter

    I’m looking forward to hearing Dr. Ioannidis, who wrote an article with the arresting title “Why Most Published Research Findings Are False” (PLoS Medicine, 2005).
    It’s the most downloaded of all articles ever published in the Public Library of Science:
    http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124
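
    For anyone curious before the show, the core of that paper is a short piece of arithmetic: the chance that a “statistically significant” finding is actually true (its positive predictive value) depends on the field’s pre-study odds that a tested relationship is real, the study’s power, and the significance threshold. Here is my own rough sketch of that calculation with illustrative numbers, ignoring the bias and multiple-teams terms the paper also models:

      # Positive predictive value of a "significant" finding, following the
      # structure of Ioannidis (2005). All numbers below are illustrative only.
      def ppv(pre_study_odds, power, alpha):
          true_positives = power * pre_study_odds   # true relationships correctly detected
          false_positives = alpha                   # null relationships passing the threshold
          return true_positives / (true_positives + false_positives)

      # A field with pre-study odds of 1 true relationship per 10 null ones (R = 0.1),
      # 80% power, alpha = 0.05:
      print(ppv(0.1, 0.8, 0.05))   # ~0.62
      # The same field with small, underpowered studies (20% power):
      print(ppv(0.1, 0.2, 0.05))   # ~0.29, i.e. most "positive" findings are false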

    • Chemist150

      A difference between believing a study is false or true can come down to subtle techniques that are never expressly stated but are assumed to be known to “someone skilled in the art”.

      Let’s say you’re loading test plates, and the amounts used are very small. First you dispense a key component (say, the compound you’re testing for activity) into the well; there is a meniscus, and when you move the plate, droplets get stuck on the well wall. You then add the other components that are key to your test. Because those droplets are stuck on the sides, their contents never mix into the reaction, so the concentration of a component critical to the reactivity and the final measurement is reduced. In this example, you end up with a much lower concentration of the test compound than you intended.

      The end result could be that the original experimenter added all the reagents and then vortexed the plate, ensuring the proper concentration and keeping the error bars small, while the people repeating the experiment lack that finely tuned technique: they find inactivity where the paper found activity because their error bars are high from inexperience. (The toy numbers below put a rough scale on that kind of handling error.)

      Something to chew on. Even the idea that it’s been “proven” false is in question.
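
      To make that concrete with purely illustrative numbers of my own: suppose 10 µL of a 100 µM compound stock is meant to go into a 100 µL assay, but 2 µL of it is left clinging to the well wall and never mixes in.

        # Toy calculation of how droplets stuck on the well wall shift the
        # effective compound concentration. Every number here is made up.
        dispensed_uL = 10.0       # compound solution dispensed into the well
        stock_uM = 100.0          # concentration of the compound stock
        stuck_uL = 2.0            # volume stranded on the wall, never mixed in
        other_reagents_uL = 90.0  # everything else added to the well

        intended = dispensed_uL * stock_uM / (dispensed_uL + other_reagents_uL)
        actual = (dispensed_uL - stuck_uL) * stock_uM / (dispensed_uL - stuck_uL + other_reagents_uL)
        print(intended, actual)   # 10 uM intended vs ~8.2 uM actual: an ~18% shortfall

      A shortfall of that size can easily be the difference between “active” and “inactive” near a detection threshold.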

  • smolsen

    Fascinating research. I’m curious whether this phenomenon has been consistent over time, holding the total amount of published research constant, or whether it has been growing, and what that might suggest.

  • dave

    One of the biggest problems, particularly in my field (economics), is what’s called “internal validity”, meaning the results can’t be replicated *even* starting from the exact data and analysis the paper describes, often because researchers aren’t required to submit their data and analysis code along with it.

    It’s always struck me that one solution would be requiring the release of both data *and* the code used in analysis. Is this catching on at all?
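
    To make the suggestion concrete, here is the kind of check a journal or referee could run if both were deposited. This is only a sketch of the idea; the file name, column names, and the reported coefficient are all hypothetical:

      # Re-run a deposited analysis and confirm it reproduces a published number.
      # Everything here (results.csv, column names, the 0.42 estimate) is hypothetical.
      import pandas as pd
      import statsmodels.formula.api as smf

      df = pd.read_csv("results.csv")                  # data deposited with the paper
      model = smf.ols("outcome ~ treatment + controls", data=df).fit()

      reported = 0.42                                  # coefficient claimed in the paper
      assert abs(model.params["treatment"] - reported) < 1e-3, "does not reproduce the published estimate"
      print(model.summary())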

    • Bob Dole

      Good point. In psychosocial intervention research, limited treatment fidelity is also a barrier to replication. Since the intervention is delivered by humans, a lot of adaptation happens, and very little is even specified about what the human provider/performer should do. It’s not an exaggeration to say that a staged Shakespeare play is delivered with far greater fidelity from night to night.

  • fred

    How can non-statisticians learn to read the research? I like three older books that serve as guides, and I wonder whether the guests can recommend others. These books explain experimental design, the various tests, appropriate use of tests, etc.

    The older ones are PDQ Epidemiology, PDQ Statistics, and Interpreting the Medical Literature.

  • Steven

    Wakefield has been exonerated.

    • Seth Katzman

      Not by any reliable research. Find another religion.

      • Steven

        Check again:

        “New Published Study Verifies Andrew Wakefield’s Research on Autism – Again (MMR Vaccine Causes Autism)”

  • Seth Katzman

    There are fascinating examples of acupuncture researchers claiming that their research shows specific positive results even when sham acupuncture shows the same degree of effectiveness.

  • Bob Dole

    Regarding replication, it seems to me that original work is prized and replication is not. The new carrots offered by NIH to people who share their data could be amplified by providing a carrot to those who are willing to replicate rather than originate.

    Regarding the larger issue of the reliability of research, one surprising (shocking, actually) aspect of even the best scientific papers is that once you get to the conclusion and discussion sections, there is little obligation to follow the scientific method. You are free to speculate without regard to the scientific method, to spin the results in whatever hopeful way you like. I’d love to see the conclusion and discussion sections made into either separate papers or headlined with “leaving the scientific domain” warnings :-), so that the merits of the methods and results are not conferred upon the goofy, narrative inferences that may arise in the conclusion and discussion. If someone wants an inference included with the main paper, then the inference should be stated in symbolic language and subjected to a formal proof test. I’ll follow up offline with the Stanford group with dozens of examples.

    Great show.

    FCW (aka Bob Dole 🙂 )

    • Christine Blasey

      Great points, Bob. I agree that replication is critical and that funders and journal editors should place higher value on replication studies.

      • Bob Dole

        Thanks. Is it crazy to ask for a formal proof of inferences in a “scientifically verified inference” section?

        Until something along those lines can be hammered out, I’d love to see a group like the Stanford meta-research group act as an independent “inference checker” of past papers, encouraging grad students and the public to find and submit such inferences for confirmation, and peer-reviewed journals to invite crowdsourced fact-checkers to do the same.

  • Stacy Sterling

    The focus in this discussion on the issue of replicability is most relevant to the types of randomized efficacy trials used in, for example, drug trials. However, there is a huge body of effectiveness and implementation research going on for which the issues of validity are very important, but in those studies so many factors vary (these studies, by their very nature, have to do with how something will work in the real world as opposed to a lab) that replicability is very difficult and, in some ways, beside the point.

  • Mennah

    It was encouraging to hear this issue being addressed on Forum! I enjoyed listening to the great insights from the group.

    I work for a company called Science Exchange, and I wanted to share a couple of the things that we are doing to help solve the reproducibility crisis. Because Science Exchange is a network of independent labs, we are utilizing their expertise to validate scientific findings and identify high-quality research.

    One of our most exciting projects is the Reproducibility Project: Cancer Biology, a collaboration with the Center for Open Science where we are replicating 50 of the top cancer biology studies from 2010-2012. The result will be an openly available dataset to study cancer biology reproducibility.

    I am thrilled to see that these programs are getting the funding and press required to raise continued awareness of the issue of scientific reproducibility.
