

Led by Shyam Saladi

We need continuous integration (CI) for papers.


Aim: There is a fundamental disconnect between what publications are meant to do and how science happens. Authors attempt to establish, and reviewers to assess, an intellectual step forward through a story weaving together facts and data. Yet the scientists who receive that work must translate the intellectual step into actionable experiments, with new assays, protocols, and theory. Here, the details matter: a single missing detail or piece of important data makes all the difference.

The open-source community has addressed an analogous durability problem with continuous integration (CI). Each time a contribution is made, no matter who the contributor is, rigorous test code must certify the change before it formally enters the codebase. Unit tests ensure the low-level details are right; integration tests maintain high-level functionality.
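To make the two test levels concrete, here is a toy illustration (not from the post; the functions and data are invented for the example): unit tests pin down a low-level detail, while an integration test exercises the pieces working together.

```python
import re

def parse_doi(text):
    """Extract a bare DOI from a citation string (a low-level detail)."""
    m = re.search(r"10\.\d{4,9}/\S+", text)
    return m.group(0) if m else None

def format_citation(author, year, doi):
    """Assemble a citation string (higher-level functionality)."""
    return f"{author} ({year}). doi:{doi}"

# Unit test: the fine-grained detail is right.
assert parse_doi("See doi:10.1234/abcd for details") == "10.1234/abcd"

# Integration test: the parts work together end to end.
citation = format_citation("Doe", 2019, parse_doi("10.1234/abcd"))
assert citation == "Doe (2019). doi:10.1234/abcd"
```

In a CI pipeline, both kinds of tests run automatically on every contribution; a change that breaks either is rejected before it enters the codebase.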

I see an analogy with scientific publishing. If each manuscript is a changeset and the corpus of literature is the codebase, traditional peer review is the “integration test”: it certifies a full-length manuscript. What’s missing are the unit tests that check the fine-grained details, for example against best practices, and, where necessary, solicit targeted input from crowd workers or experts.

A CI infrastructure would allow easy, community-driven development of programmatic checks, and would decouple check development from the user interfaces that help authors and reviewers understand the issues detected. I’m working out a specification to serve pieces of preprints through an intuitive API and building such a system, which I expect to be in beta this September.

Work at the Sprint

I am proposing work at the Sprint to define easily agreed-upon checks for preprints that build upon, and improve where necessary, the digest-preprints API. Such checks might include identifying references to retracted papers, highlighting poor practices such as “data available upon request”, flagging a missing license in a referenced repository, and detecting a lack of necessary statistical detail. Given the two-day time period, the checks should be fairly self-contained and tend toward engineering (vs. scientific) pursuits. For example, an image-manipulation checker would be great and certainly feasible, but would require significant scientific effort.
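Two of the checks above can be sketched in a few lines. This is a hypothetical, self-contained illustration, not the Sprint code: the phrase list and the retracted-DOI set are made-up stand-ins, and a real check would draw on a curated source such as a retraction database.

```python
import re

# Placeholder data for the sketch -- a real implementation would query
# a maintained list of retracted DOIs and a vetted phrase catalogue.
RETRACTED_DOIS = {"10.1000/retracted.001"}
BAD_PHRASES = [r"data\s+available\s+(?:up)?on\s+request"]

def check_data_availability(text):
    """Flag 'data available upon request'-style statements in the text."""
    return [p for p in BAD_PHRASES if re.search(p, text, re.IGNORECASE)]

def check_retracted_references(dois):
    """Return any cited DOIs that appear in the retracted set."""
    return sorted(set(dois) & RETRACTED_DOIS)

manuscript = "Data available upon request from the corresponding author."
assert check_data_availability(manuscript)  # the poor practice is flagged

refs = ["10.1000/retracted.001", "10.1000/fine.002"]
assert check_retracted_references(refs) == ["10.1000/retracted.001"]
```

Checks of this shape are attractive Sprint targets precisely because each one is a small, independent function over a preprint’s text or metadata, so sub-teams can develop and test them in parallel.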

I am looking for…

Software developers, paired with researchers and policymakers/research-culture experts, to form sub-teams that identify and build checks.