If peer review were something to look forward to, what would it look like?
We are all too familiar with the failings of the current scientific publishing infrastructure:
- A lack of transparency in journal review timelines
- A lack of transparency about reviewer identities
- A process that puts authors and reviewers in antagonistic positions, where a discovery goes through multiple rounds of peer review only to receive a one-time “branding” that permanently attaches it to the journal it is published in, with the journal assumed to signify both the impact AND the quality of a scientific discovery.
We know this is not how science works. Community assessment of ideas and science is ongoing, and the antiquated ‘journal’ format fails to keep pace with the science it is meant to process and curate. But does it have to be this way?
Good peer review sets standards for scientific rigor and should counterbalance the biases inherent in human nature, giving the broader community valuable insight into the validity and meaning of a study.
How do we preserve the benefits of peer review, while separating it from the current science publishing industry that has evolved to serve its own infrastructure rather than science? What are the most important elements of peer review that make it good?
In 2019, a group of cancer immunologists came together to attempt to answer this question with the proposal of “Universal Principled Review” (UPR), which distills the essence of good peer review into assessing the Quality and Impact of each study as independent attributes.
“The primary roles of the peer-review process should be to vet the quality of the data using field-specific criteria and to request a balanced discussion of its validity and meaning.”
While the principles of UPR speak to reviewers and authors alike, its execution is not easy. With further review and discussion, it became apparent that merely reconfiguring the review process within an all-too-familiar framework is a band-aid. Many of the ideas behind UPR were solid, idealized best practices, but the bigger problem is the infrastructure, and the profit and reward structure created by commercial publishing.
As we approach five years of UPR implementation, Solving For Science is now building upon what we learned to create the next iteration: Discovery Stack.
Originators of the Discovery Stack Idea
The following people originated, debated, and developed this idea:
Mark Ansel
Sammy Bedoui
Jan Botcher
Casey Burnett
Vincent Chan
Alexis Combes
Stacie Dodgson
Keke Fairfax
Ananda Goldrath
Alex Hoffmann
Ken Hu
Max Krummel
Mike Kuhns
Gabe Murphy
Shalin Naik
Liz Neeley Yong
Andrew Oberst
José Ordovás-Montañés
Thales Papagiannakopoulos
Marion Pepper
Philippe Pierre
Weston Porter
Ferdinando Pucci
Brooke Runnette
Chris Schaffer
Tiffany Scharschmidt
Carly Strasser
Nina Serwas
Richard Sever
Rich Trott
Roxane Tussiwand
Alana Welm
Bryan Welm
John Wherry
Starting in 2023, a core team formed to further develop the idea and implement an actionable pilot experiment and plan—the Discovery Stack Pilot—through SolvingFor’s idea engine process.
With Discovery Stack, we improved upon and streamlined the workflow of UPR to make it more intuitive for implementation, while preserving the core of scientific rigor and clarity.
The first step is to reframe Peer Review entirely by separating assessment of the Quality of science (the more objective metric) from its Impact, or perceived importance.
Then, to reframe how we think about our role in improving the quality (and assessing the impact) of science, we are recasting our reviewer-selves as copy editors: correcting, clarifying, and keeping authors from overstating their conclusions. Put simply, we are repositioning ourselves from peer reviewers to Peer Improvers. We have also redesigned the experience of going through a peer’s manuscript to feel more like what a good peer review should be: reading through a colleague’s work to make it better.
How changing peer review can impact the academic community
The current peer review and publication system has over a hundred thousand stakeholders, and any change must consider how it would affect each of them. Each stakeholder has specific challenges and needs within the system.
- Graduate Students
- Postdocs
- Pre-tenure PIs
- Senior PIs
- Department Chairs
- Promotion/Tenure Committees
- Granting and Hiring Agencies
- Professional Societies
- Librarians
- Existing Curators (Editors)
- Existing Businesses (e.g., publishing houses, editing/typesetting…)
- Government Agencies
Elements of Good Peer Review:
- Provide rapid feedback and assessment so that quality scientific experiments and ideas are shared sooner
- Educate reviewers in normative behavior and provide mechanisms to 1) score, 2) honor, and 3) compensate reviewers for their service to authors and their community. (coming in v.2 of this experiment!)
- Center objectivity in evaluating both the experiments and the conclusions derived from them, and allow this objective measure of ‘Quality’ to be updated over time by additional vetted peers (see the sketch after this list).
- Give authors options as to what steps to take to improve the quality of their scientific work before it is shared.
- Provide both an initial and ongoing assessment of ‘Impact’ or ‘Importance’ of the work, while acknowledging that it is arguably the most subjective aspect of scientific assessment.
- Generate a transparent platform with improved access to metrics: the history of Quality and Impact assessments, and reviewer quality and behavior.
- Recapture the tremendous advertising revenue and return it to authors, to this process, and to science more generally. In short, to recapture our own value.
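To make the separation of assessments concrete, here is a minimal sketch of one way Quality and Impact could be tracked as independent, append-only histories that vetted peers add to over time. This is a hypothetical illustration, not the actual Discovery Stack implementation; the names (Assessment, ManuscriptRecord) and fields are assumptions for the example only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Assessment:
    """One vetted-peer assessment; every assessment is kept so the history stays visible."""
    reviewer_id: str
    score: int      # e.g., a rubric score on a shared scale (hypothetical)
    comment: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class ManuscriptRecord:
    """Quality and Impact are tracked as independent, append-only assessment histories."""
    manuscript_id: str
    quality_history: list[Assessment] = field(default_factory=list)
    impact_history: list[Assessment] = field(default_factory=list)

    def add_quality_assessment(self, a: Assessment) -> None:
        # Quality (the more objective metric) can be re-vetted over time
        self.quality_history.append(a)

    def add_impact_assessment(self, a: Assessment) -> None:
        # Impact (perceived importance) stays separate and remains ongoing
        self.impact_history.append(a)


# Example: two independent assessments for the same manuscript
record = ManuscriptRecord(manuscript_id="ms-001")
record.add_quality_assessment(Assessment("reviewer-A", score=4, comment="Controls are adequate."))
record.add_impact_assessment(Assessment("reviewer-B", score=2, comment="Niche but useful."))
print(len(record.quality_history), len(record.impact_history))  # 1 1
```

The point of the sketch is simply that neither score overwrites the other and neither is ever final: each lives in its own history, so the record reflects ongoing community assessment rather than a one-time branding.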
Testing our System
We believe that a new process is needed. Peer Review should not be based on what commercial publishing infrastructure has created, but on how we consume ideas and what we need to assess and honor the best science.
But we are also scientists. We need metrics. Simply creating an improved workflow and rubric is not enough. We have to test it. And we’re starting with the review system itself.
As scientists, we build our hypotheses and models on data, implement them, and learn from them to design the next experiments and iterations. As in our lab work, we are bringing this process of iteration to Peer Review.
We need you to be your best scientist and be an innovator in a new domain.
Please join us in the Discovery Stack Pilot as either an Author or a Reviewer, or both!
Together, we will test a workflow of peer review that could one day become a rite of passage for a manuscript that both authors and reviewers could look forward to.
For detailed information on the study design of the Discovery Stack workflow and rubrics, click here.
Written and Compiled
Beiyun C. Liu is a postdoctoral fellow at St. Jude Children’s Research Hospital.
Ferdinando Pucci is an Assistant Professor at Oregon Health & Science University.