I was sitting at a typical conference room table at a biomedical startup interviewing for an internship. Everything in the room felt sterile and unfamiliar – exciting but scary. Even for an internship, this was a big-girl job and one that paid well. My thoughts swung between the reward of experience and the fear of underperforming at such a serious place. Before the interview was over, the quality director Rob Campbell (name changed) asked if I had any more questions. I did. At the threshold of a new journey filled with uncertainty, I needed to be certain of one thing. So I asked, “What are the ways I can fail at this job?”

Rob stood up and paced a little, a smile on his face. I braced myself for the generic response about not trying hard enough. Instead, I got something far more eye-opening.

“Nothing is ever a failure if you learn from it. The only way you can fail is by not learning from the things you attempt and not revising your methods. In fact, failure is at the heart of success.”

Until recently, it never occurred to me that this same lesson should be encouraged within the sciences.

On the surface, failures, unfinished projects, and inconclusive or null results don’t seem to offer a lot. They don’t attract positive media attention, they don’t persuade investors, and they simply don’t feel good. Because of that, null results and failed attempts go largely unpublished. According to the 2014 work of Annie Franco, Neil Malhotra and Gabor Simonovits, “strong results are 40 percentage points more likely to be published than are null results.” They added that strong results are 60 percentage points more likely to be written up at all. This leads to publication bias, and concerns about its implications for science have been raised for decades.
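To see how this filter skews the literature, consider a minimal simulation (a sketch with hypothetical numbers, not data from the Franco study): imagine many labs each testing an effect that is truly zero, with only the statistically significant results making it into print.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setup: 1,000 labs each run a two-group experiment
# in which the true effect is exactly zero.
n_labs, n_per_group = 1000, 30
published = []

for _ in range(n_labs):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(0.0, 1.0, n_per_group)  # no real effect
    _, p = stats.ttest_ind(control, treatment)
    if p < 0.05:  # the file drawer: only "significant" results get published
        published.append(treatment.mean() - control.mean())

# Roughly 5% of the labs publish, and every published effect looks
# sizeable, even though the true effect is zero.
print(f"published: {len(published)} of {n_labs}")
print(f"mean published |effect|: {np.mean(np.abs(published)):.2f}")
```

The published record ends up containing only false positives with inflated effect sizes; the null results that would correct the picture sit unseen in the file drawer.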

Null results and failed attempts are still results, and the information gathered from a project serves deeper, long-term learning.

So, in this article, I’ll look at this topic from several angles:

Why We Should Publish Null Results

Preventing Duplication

Publishing results that do not confirm expectations or that are not statistically significant is a lesson to the future. While the project itself might have been a financial bust, publishing the results prevents duplication and further financial stress.

And money is just half of it. Time spent on a doomed project becomes a huge personal investment that could be avoided altogether.

Answering Why It Didn’t Work

Going back to Rob’s advice about learning from failure – or really, learning from experience – sharing findings with the world allows others to determine why something didn’t work and to improve on it.

Take glowing-plant technology as one example. Glowing Plant was not just a technologically neat idea; it was a neat approach to funding. The project attracted a lot of press attention because it was one of the pioneers of crowd-funded science.

The aim of Glowing Plant was to engineer bioluminescent plants with the vision of one day lighting the world sustainably.

The project failed for two primary reasons. First, engineering the technology was more difficult and complex than expected. To make tobacco plants glow, the research team had to insert six genes, but they could never get all six in at once. The resulting plants glowed only very dimly; media images of the glowing plant were shot with a long exposure, making it appear much brighter. Second, the project ran out of money.

Along the way, however, Glowing Plant regularly published project updates on its blog. While these were not formatted like a typical research paper, the updates did contain information about results, changes in methods, obstacles and approaches.

Between Glowing Plant’s own updates and the press coverage, a natural archive of information and details about the project remains. Anyone can pick out fragments and attempt to rework, improve or be inspired, taking into account which aspects worked and which did not.

Looking outside of science, there are numerous tech startups that failed but taught huge lessons and paved the way for what came next.

One example is Napster. The digital music pioneer came on the scene in 1999, providing Internet users with the ability to share and download music. Not even a year later, copyright issues arose, and after multiple lawsuits, the original Napster filed for bankruptcy.

Knowing why and how Napster failed enabled innovations and technological integrations so prevalent today that we take them for granted. Among the most popular gadgets and services to stem from this idea of digital music sharing are the iPod and Apple Music. From there, the vision grew to incorporate music into phones, and then to equip cars with Bluetooth or built-in capabilities for music playing and communication.

All Research Contains Valuable Information

Whether a project had null results, was dead before it ever began, or went through numerous failed attempts, there is valuable implicit information knotted in with the outcomes. The New York Times highlighted research conducted in Mumbai, India, by Neena Shah More and her research team. More’s project implemented a community-based program in the slums of Mumbai that aimed to collectively educate participants and foster community support in order to reduce the infant mortality rate. Prior studies in rural communities had significantly positive results; however, this project did not.

Some of the reasons posed for the null results boiled down to how different an urban setting is from a rural community: in an urban setting, there are challenges in measuring improvements, and differences in the chronic nature of diseases.

More provided considerable detail about the project’s setup, methods used, challenges faced, insights and analysis. Though the results were not significant, the amount of information the study revealed can help prepare researchers for further studies of this nature.

In fact, the New York Times article mentioned that More revisited the project in 2012 using a different approach because of what was learned.

It Opens the Door for Entirely New Research

I cannot leave out one of the most famous failed experiments in recent history: the Michelson-Morley experiment.

In the 19th century, scientists believed that since sound waves require a medium to travel through, light waves must also require some type of medium. They called this medium the “luminiferous ether.” The ether was thought to fill the entire universe, and the Earth’s motion through it should have been detectable. In particular, physicists theorized that as the Earth traveled around the sun, the measured speed of light would change with direction.

The Michelson-Morley experiment set out to investigate this idea using a device called an interferometer, which could detect extremely small differences in the speed of two beams of light.

The device split a single beam of light into two beams. The two beams traveled out along separate arms, reflected off mirrors, and recombined into a single beam. Any difference in speed between the two beams could then be detected from the interference pattern in the recombined beam: a less intense beam meant a difference had occurred, while a more intense beam indicated the waves were completely in sync.

Michelson and Morley investigated the resulting light patterns and found no difference. They tweaked the approach, and still found no difference. This null result provided strong evidence against the existence of the ether.
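The sensitivity of this null result is easy to appreciate with a back-of-the-envelope calculation. Classical ether theory predicts a fringe shift of roughly ΔN ≈ 2Lv²/(λc²) when the apparatus is rotated 90 degrees. The sketch below plugs in figures commonly cited for the 1887 apparatus (an effective arm length of about 11 m, achieved via multiple reflections); treat the exact numbers as illustrative.

```python
# Fringe shift classical ether theory predicted for the
# Michelson-Morley interferometer, rotated through 90 degrees.
L = 11.0             # effective arm length in meters (multiple reflections)
v = 3.0e4            # Earth's orbital speed, ~30 km/s
c = 3.0e8            # speed of light in m/s
wavelength = 5.0e-7  # visible light, ~500 nm

# delta_N ~= 2 * L * v**2 / (wavelength * c**2)
delta_N = 2 * L * v**2 / (wavelength * c**2)
print(f"predicted fringe shift: {delta_N:.2f}")  # ~0.44 of a fringe
```

The apparatus could resolve shifts far smaller than the predicted ~0.4 fringe, and Michelson and Morley observed at most a small fraction of it – consistent with no ether drift at all.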

Years later, when Einstein came up with the theory of special relativity, the results of the Michelson-Morley experiment were consistent with his theory, contributing to a breakthrough in physics.

Who Publishes Null Results?

Some publishing outlets are embracing and encouraging the movement to publish null or inconclusive results. Right now there are a few avenues for publishing.

A Current List of Who Publishes

As of 2018, this list includes publication platforms that will accept null results.

  • Journal of Articles in Support of the Null Hypothesis: The mission of the JASNH is to minimize publication bias by offering a place to publish experimental results that are not statistically significant. They publish twice a year.
  • Journal of Negative Results – Ecology and Evolutionary Biology: The Journal of Negative Results in Ecology and Evolutionary Biology strives to break the trend of publication bias by providing an outlet for peer-reviewed publication of rigorous research that does not necessarily meet widely accepted significance standards. The JNR says the type of work they publish “includes studies that 1) test novel or established hypotheses/theories that yield negative or dissenting results, or 2) replicate work published previously (in either cognate or different systems).”
  • F1000research.com: Right on the homepage it says, “Publish all your findings including null results, data notes and more. …” F1000Research is open access and peer reviewed. The platform is open to life science researchers, and more information about the publishing process can be found here: https://f1000research.com/about
  • PeerJ: PeerJ is another open access, peer-reviewed publishing platform. Their website says publication selection is “based only on a determination of scientific and methodological soundness, not on subjective determinations of ‘impact,’ ‘novelty’ or ‘interest.’”
  • ClinicalTrials.gov: Publication bias spreads to all areas of research, including clinical studies. While there is no blanket legislation requiring all clinical results to be reported, there is a law requiring that results of clinical trials involving children be posted to ClinicalTrials.gov. ClinicalTrials.gov is only a database of raw information and has the drawback of not being very user-friendly; however, it is a platform available for information sharing.
  • Major Journals: Null results still find their way into major journals. The first step is writing the paper up and submitting it. Then comes the acceptance or rejection. It helps if your study counters a popular view or if your methods were stringent and detailed. Publishing in a major journal is still difficult, but scrutinizing your results and giving proper reasons as to why your result is what it is can intrigue editors.

A List of Archived Publication Sources

  • The Journal of Negative Results in BioMedicine: In September of 2017, the JNRBM officially ended publication. They provided this statement to address their decision, “The mission and purpose of JNRBM had always been to encourage the publication of null results, addressing bias in the literature. Since its inception, JNRBM provided a platform for results which would otherwise have remained unpublished, and many other journals followed JNRBM’s lead in publishing articles reporting negative or null results. As such, JNRBM has succeeded in its mission and there is no longer a need for a specific journal to host these null results. For authors seeking an alternative outlet for the publication of null results, a number of other BioMed Central journals will consider this content; please refer to the specific criteria for publication for each journal. In particular, please see BMC Research Notes.”
  • PLOS ONE’s Missing Pieces Collection: PLOS ONE’s Missing Pieces is a collection that highlighted null or inconclusive results. The collection included not just null results but also studies that were unable to replicate past research. The goal of the collection was to show how important negative results are within the research community.

Alternatives and Considerations

While there are platforms solely dedicated to sound science that lacked hopeful results, they are few in number. An article published on edgeforscholars.org provides tips on ways to get some of your research out to the public.

  • “Combine [null] results with significant results.”
  • Don’t shy away from rejection. Keep trying, and only call it quits when it’s really time.
  • Design or position your null results in interesting ways.
  • “Add power analyses.” (A sketch of what this can look like follows this list.)
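
As a sketch of what “add power analyses” can look like in practice, the snippet below uses statsmodels (one common tool; the effect size and sample sizes are illustrative, not from any study in this article) to ask two questions: how many subjects a two-sample t-test needs to detect a medium effect, and how much power an already-run study had. A well-powered study that still finds nothing makes a null result far more persuasive.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many subjects per group are needed to detect a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05?
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"subjects needed per group: {n_needed:.0f}")  # ~64

# Conversely: with 30 subjects per group, how much power did the
# study actually have to detect that same effect?
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=30)
print(f"achieved power with n=30: {achieved:.2f}")  # ~0.47
```

Reporting that second number alongside a null result tells readers whether “no effect found” plausibly means “no effect exists.”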

If you’re trying to publish these results in a major journal, there are a few other pointers to think about:

  • Make sure your methods are detailed, rigorous and thoroughly planned out.
  • Make sure your results answer questions: The editor-in-chief of Research for Evidence wrote an article titled “Null results should produce answers, not excuses.” The title speaks for itself. Brown suggests that null results should still point to answers – more answers than questions. She provides an example of how this could look: “the finding of no statistically significant effect demonstrates that community participation is not an effective enhancement for community-based education in Afghanistan.” Brown finishes her article with three steps to help ensure null results provide answers.
  • Publish your results somewhere: This alternative may not be peer-reviewed or standardized, but publishing results somewhere online allows the Internet to archive them and people to potentially find them. Consider Glowing Plant, which published regular updates about its work. Even though the project was ultimately incomplete, the world knew someone tried it. The world can investigate reasons it may not have worked well, and if someone wants to pick up the torch, they start the marathon a lot farther ahead than if there were nothing to go on.

Recall Neena Shah More’s project mentioned earlier in this article. One of the things that made her research useful is the level of detail provided. It likely also helped that other similar projects had impactful results, allowing a detailed comparison between two distinct settings.

The Reality Check

Publishing null results, failed results, inconclusive results and the like – it’s the right thing to do. Publication bias is a real problem and it can have real consequences. But as much as we preach in the name of science, we must acknowledge the realities and obstacles that come alongside these ideals.

Obstacles Faced by Researchers

The first issue is the time it takes to write up and submit research in general. It’s tedious, laborious, and could still come to nothing despite the effort. Moreover, if a lab runs out of funding or pivots in a different direction, the project may never even reach completion. What do you do then?

If that happens, consider some of the advice in alternative ways to publish. There is always the option of trying to include past, inconclusive research within the body of something more significant. The other option, especially if a project never got to a publishable point, is to publish it on your own platform such as a personal website, blog or collaborative page.

Another hurdle is the fear of what these results might do to your career. The perception is that a great amount of time and energy produced little impact. As a result, if the project is even deemed publishable, it might only be published in a lower-tier journal. The domino effect damages reputations and, ultimately, careers.

There is not a good answer to whether or not publishing null results leads to ruin. Sven Hendrix concluded in his article “Should I Publish Negative Results or Does this Ruin My Career in Science?” that the responsibility for this level of transparency should not rest on young scientists who are still building their careers. Instead, the burden should rest on seasoned scientists with years of impactful work behind them.

Some people would argue that getting published is more important than where you’re published. Unfortunately, however, opinion on that point is not consistent.

A final issue some researchers face is a combination of disappointment and a lack of awareness. According to a TIME article, investigators from Stanford uncovered some of the reasons scientists were not publishing in the first place. They found that some did not value their null results, claiming “… that null effects do not tell a clear story.” While there are several articles emphasizing the value of null results, awareness still appears to be low.

The Stanford investigators also received statements that there was nothing worth publishing or that the results were disappointing.

Obstacles with Funding

Everyone likes a story with a happy ending. More importantly, everyone likes a story that simply has an ending. The reality of research is that it often involves a lot of failure and unfinished business. Failure doesn’t make for good press for institutions, and it doesn’t do much to persuade organizations to back a project. The other side of all this is that when the money runs out, the story ends before it ever really began.

Funding will always remain an obstacle. Even when research is going well, getting the funds to keep it going often requires significant effort. When your research doesn’t yield the expected results, there are only a few ways to approach it, and they still offer no guarantees.

First, refer back to the advice of keeping your methods and analyses highly detailed, and investigating the project from all angles. Good science is interesting science, and having a sound examination can potentially better position the work.

Second, go back to DIY publishing when the plan pivots. It’s not ideal. But if it’s the only way, then it’s the only way.

Third, hold back the null results, and publish and market only the positive impactful ones. Of course, that advice is exactly the opposite of what this article is about. It’s being posed because it is still an option. And personally, I can’t fault anyone when their hands are tied and they’re backed into a corner. The realities of publishing sometimes make it hard to do what we would really like to do.

Obstacles with the Media

There is a saying in the public relations world: “spray and pray.” It refers to the practice of sending out press releases in high volume and hoping someone somewhere picks up the story.

When the press release has something really interesting going on, it’s a lot easier to get attention. In fact, if it’s interesting enough or the press spins it the right way (which can be the wrong way), hype can really snowball fast.

The obstacle media attention can cause is too much hype around the beginning of a project. The hype helps gain attention and backers; however, if the project goes under, the fall is that much harder because of how public it has become.

Consider the amount of press focused on IBM Watson’s work with MD Anderson Cancer Center. In the beginning, journalists tossed around the notion that Watson would “revolutionize cancer care.” I must admit that hope like that is easy to bite into, and we did.

Smaller-scale projects can still face premature media attention, and tougher circumstances when the project isn’t going as planned. It can be very tempting at that point to position results in the most positive way possible, or to highlight the more significant aspects of the research and minimize anything with less impact. That temptation, however, feeds the overall problem of publication bias.

Failure is messy and often crushing. It’s an inherent part of science that we tend to be ashamed to talk about. But when we change the definition and understand the underlying lessons, it can also be unbelievably rewarding.

References

American Physical Society. (n.d.). Michelson and Morley. Retrieved October, 2018.

BioSpace. (2017, March 16). True or False: Publishing Negative Results Ruins Your Science Career. Retrieved October, 2018.

Brown, A. (2017, January 18). Null results should produce answers, not excuses. Retrieved October, 2018.

Doronina, V. (2013, August 09). Where to publish negative results. Retrieved October, 2018.

Franco, A., Malhotra, N., & Simonovits, G. (2014, September 19). Publication bias in the social sciences: Unlocking the file drawer. Retrieved October, 2018.

Future Science Media. (2013, June 21). Retrieved October, 2018.

Glowing Plant. (n.d.). Glowing Plant. Retrieved October, 2018.

Hendrix, S. (2018, September 22). Should I publish negative results or does this ruin my career in science? Retrieved October, 2018.

Hubbard, R., & Armstrong, J. S. (1992). Are null results becoming an endangered species in marketing? Marketing Letters, 3(2), 127-136. doi:10.1007/bf00993992

Kluger, J. (2014, August 28). Why Scientists Should Celebrate Failed Experiments. Retrieved October, 2018.

Kwak, S., Giraldo, J. P., Wong, M. H., Koman, V. B., Lew, T. T., Ell, J., . . . Strano, M. S. (2017). A Nanobionic Light-Emitting Plant. Nano Letters, 17(12), 7951-7961. doi:10.1021/acs.nanolett.7b04369

MD Anderson Cancer Center’s IBM Watson project fails, and so did the journalism related to it. (2017, February 23). Retrieved October, 2018.

PLOS ONE. (n.d.). PLOS Collections. Retrieved October, 2018.

Publishing Null Results. (2018, August 19). Retrieved October, 2018.

Rosoff, M. (2011, October 04). Napster Is Finally Dead — Here’s A Look Back At What It Once Meant.

The Nobel Prize in Physics 1907. (n.d.). Retrieved October, 2018.

Trafton, A., & MIT News Office. (2017, December 12). Engineers create plants that glow. Retrieved October, 2018.

Zaringhalam, M. (2017, October 13). An Experiment That Didn’t Work.

Zhang, S. (2017, April 20). Whatever Happened to the Glowing Plant Kickstarter? Retrieved October, 2018.