Publishing is broken… so we’re trying something new
We're creating a dynamic system where science thrives, objectivity is centered and “impact” can evolve. Be a part of this groundbreaking shift: a transparent, community-driven platform for storing and assessing discoveries. Together, we'll transform the way we share and curate science!
You can read about The Bigger Picture below, but first we’ve got to test it.
Testing Our System
We developed a pilot to evaluate an estimated 50-100 immunology pre-prints with “peer improvement review” (reframing peer review as an opportunity to make papers better) while also quantifying a paper’s quality and impact through a transparent scoring system. Ultimately, we will compare the scoring results to where the pre-prints are eventually published, while also assessing speed and author and reviewer satisfaction.
The Big Picture: The ultimate goal is to implement this new reviewing paradigm as part of a community-led, journal-free space, where science will be evaluated and curated by and for scientists: Discovery Stack/Discovery Curator.
Who’s Behind This
Discovery Stack/Discovery Curator (DS/DC) is the product of Solving For Science, a new distributed community of scientists and allies that aims to change the culture of science from within. This campaign is designed to address our collective frustration with publishing and how we currently share our scientific ideas and results. Through a series of “idea engine” meetings with input from early career researchers to senior scientists and former journal editors, we developed our pilot program: a solution that we hope is fast, fair, and formative.
What We’re Proposing
Something fast, fair, formative and a bit fun! Discovery Stack/Discovery Curator is a new way to store and curate discoveries, owned and operated by a distributed community of scientists, completely independent of the journal system. Our pilot is the first step in developing a system where scientific results are more objectively judged and where impact can evolve. Plus, the proposed system will provide the curation we need, because right now our options are journals (no, thank you) or the overwhelming ecosystem of open science (great, but so much to navigate).
The Pilot
Our pilot focuses on the field of immunology as a first step toward a new method for storing, evaluating, and curating discoveries. The program will be owned and operated by Solving For Science, a distributed community of scientists that aims to change the culture of science from within.
We will subject an estimated 50-100 pre-prints submitted to bioRxiv to a peer improvement review process, assigning Quality and Impact scores. Ultimately, we will compare the scoring results to where the pre-prints are eventually published, while also assessing author and reviewer satisfaction and speed of review.
After running the pilot, soliciting feedback, and adjusting the process design, we aim to implement this new reviewing paradigm as part of a community-led, journal-free space where science will be evaluated and curated by and for scientists: Discovery Stack/Discovery Curator.
A new reviewing paradigm…
Quality review (Q-review) has two parts: peer-improvement in-line edits to the manuscript and a completed scoresheet. The primary goal of Q-review is to evaluate whether the authors’ conclusions match the experimental data. Three Q reviewers are encouraged to point out conceptual issues, flaws in experimental design, and issues with data quality that do not support the authors’ claims, all in the spirit of helping a friend or colleague with their grants or manuscripts.
Impact review (I-review) involves ten I reviewers, each of whom assigns an I score reflecting whether the paper is important and timely and whether the findings are novel or unexpected.
After Q and I review, authors can choose to respond; scores are shared, reviews are disclosed, and the paper is placed in the Stack, a database that can be sorted by score.
Pilot Timeline
The Bigger Picture
We’re testing a new home for scientific discoveries. Building on the success and transparency of bioRxiv, we have developed a solution that separates quality from impact, removes the “reviewer activist” phenomenon (Itchuaqiyaq and Walton, 2022), and is grounded in peer feedback and curation.
Discovery Stack/Discovery Curator (DS/DC): a fundamental shift in how we approach each step of the publishing process.
Submit
We harness the success of bioRxiv as a free, open-access tool for paper dissemination: submitting through bioRxiv means your science reaches the world as soon as it’s ready.
Review
With peer improvement, we will eliminate the gate-keeping and antagonism seen in the current publishing landscape. In a fully transparent, un-blinded system, reviewers can focus on actually improving the paper: Quality review provides in-line edits and checks whether the data support the conclusions, while a separate Impact review lets reviewers discuss how the paper will move the field.
Once a paper receives its Q&I scores, it’s placed into the “Stack”, a central database that can be sorted by score (sometimes you want to see a high-quality, low-impact paper), field, topic, and more!
Community Curation & Evaluation
Instead of relying on the opinion of three reviewers, with DS/DC the scientific community can continuously score papers, provide comments, and evaluate reviewers, too! This continuous scoring allows papers to evolve over time, with their positions in the Stack changing according to their adjusted scores. Readers can also sort and filter the Stack based on their field of interest or even specific reviewers they “follow,” for a fully curated experience.
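To make the continuously re-ranked Stack concrete, here is a rough sketch, in Python, of how entries could be sorted and filtered as community scores arrive. The field names, the simple averaging, and the follow-based filter are illustrative placeholders, not the final DS/DC design.

```python
# Illustrative sketch only: the real DS/DC data model and scoring rules are
# still being designed. Field names and simple averaging are placeholders.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class StackEntry:
    title: str
    topic: str
    q_scores: list = field(default_factory=list)  # quality scores accumulated over time
    i_scores: list = field(default_factory=list)  # impact scores accumulated over time
    scored_by: set = field(default_factory=set)   # reviewers who have scored this paper

    @property
    def q(self) -> float:
        return mean(self.q_scores) if self.q_scores else 0.0

    @property
    def i(self) -> float:
        return mean(self.i_scores) if self.i_scores else 0.0

def sort_stack(stack, by="quality"):
    """Re-rank the Stack whenever new community scores come in."""
    key = (lambda e: e.q) if by == "quality" else (lambda e: e.i)
    return sorted(stack, key=key, reverse=True)

def filter_stack(stack, topic=None, followed=None):
    """Show only papers in a given topic and/or scored by reviewers you follow."""
    papers = stack
    if topic is not None:
        papers = [e for e in papers if e.topic == topic]
    if followed:
        papers = [e for e in papers if e.scored_by & set(followed)]
    return papers
```

As new community scores arrive, a paper’s averages shift, and its position in any sorted view shifts with them.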
Features Beyond the Pilot
- Q and I scores do not end with the first reviewers! In Discovery Stack/Discovery Curator, the community will be able to contribute their own scores over time.
- Reviewers will also be subject to review: the community will be able to “review” the reviewers, and reviewers’ scores can be compared and normalized (a simple sketch of one possible normalization appears after this list).
- Financial models we’re considering:
  - Fixed cost for handling
  - Reviewer compensation
  - Return ad revenue to authors
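As a purely hypothetical illustration of the “compared and normalized” idea above, one simple option would be to z-score each reviewer’s history so that a habitually harsh reviewer and a habitually generous one land on a common scale:

```python
# Hypothetical normalization, not a committed DS/DC method: rescale each
# reviewer's scores to z-scores so systematic harshness or generosity washes out.
from statistics import mean, stdev

def normalize(reviewer_scores):
    """Map each reviewer's raw scores to z-scores; returns {reviewer: [z, ...]}."""
    normalized = {}
    for reviewer, scores in reviewer_scores.items():
        if len(scores) < 2 or stdev(scores) == 0:
            # Not enough spread to rescale; leave these centered at zero.
            normalized[reviewer] = [0.0] * len(scores)
            continue
        mu, sigma = mean(scores), stdev(scores)
        normalized[reviewer] = [round((s - mu) / sigma, 2) for s in scores]
    return normalized

# A harsh reviewer (3-5) and a generous one (7-9) land on the same scale:
print(normalize({"harsh": [3, 4, 5], "generous": [7, 8, 9]}))
# {'harsh': [-1.0, 0.0, 1.0], 'generous': [-1.0, 0.0, 1.0]}
```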
Why We’re Doing This
Something has to change. The current publishing process is slow and profit-driven. Peer review and publishing decisions are often opaque and unfair. Pre-prints and policy changes that favor open science have opened the door to easy access to new findings, but they also create a world where we’re overwhelmed by all of the science that’s now available. We still need to know: is the study any good, and will it affect our field? We still need review and we still need curation, since they are the foremost tools used for promotion and evaluation. But we want to put them back in the hands of scientists, in a community-led space.
We Know You’ll Have Questions…
How will we know if the pilot is successful?
Every campaign at SolvingFor is an experiment. We’ll be measuring review time, comparing Q&I scores to impact factors, and collecting author and reviewer survey feedback. These metrics will guide our process and give us concrete direction for future improvements. We’re also not afraid of calling this iteration of the pilot a fact-finding mission. If Q&I scores don’t align perfectly with journal impact factors, that’s not a failure; that’s incredibly interesting! Our biggest metric for success will be the very thing that grounds SolvingFor as an organization: community buy-in. We’re tackling something big, so we expect surprises, frustrations, and even failure, but with the help of our community we’re poised to iterate and make a difference.
How fast is fast?
Seven weeks is what we’re testing in the pilot: five weeks for Q-review (including two weeks for authors to respond) and two weeks to collect I-reviews. Importantly, authors will have access to Q-review feedback as it comes in, so we’re hoping for faster!
How are reviewers selected?
SolvingFor will serve as ‘editors’ for this pilot program and collect reviewer suggestions from authors and the SolvingFor community. Reviewers will be matched based on expertise.
Is the review anonymous?
No, both Q and I reviewer identities will be made public after the conclusion of the pilot (once all papers are published in their destination journals).
How does peer improvement differ from normal peer review?
The goal of peer improvement is to edit scientific manuscripts as if a colleague or friend asked you to review their paper or grant before submission. The focus should be on creating clarity and aiding the narrative the authors are trying to build. Reviewers will use Hypothes.is to edit text directly or add comments that highlight anything incorrect, as well as conclusions that are overstated or need further clarification. They will then complete a scoresheet that guides them through a more objective process of evaluating the science.
What is an Impact score and why are there 10 reviewers?
We aim for Impact review (I-review) to be a better measure of potential impact than traditional citation-based impact factors. I scores will reflect the inherent variability in assessing novelty, importance, the utility of new technologies or resources, surprising new perspectives, and therapeutic or clinical impact. We are using ten reviewers per manuscript to see the variance in what the community views as high or low impact. In Discovery Stack/Discovery Curator (our plan for after the pilot), the community will provide I scores as well, allowing impact to evolve over time.
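Purely for illustration (the I-score scale and the aggregation method are still being finalized), here is how the spread across ten hypothetical I reviewers might be summarized:

```python
# Hypothetical example: summarizing agreement among ten I reviewers.
# The 1-10 scale and the summary statistics are assumptions for illustration.
from statistics import mean, median, stdev

i_scores = [8, 9, 7, 9, 4, 8, 6, 9, 5, 8]  # made-up scores from ten I reviewers

print(f"mean   = {mean(i_scores):.1f}")   # central tendency (7.3)
print(f"median = {median(i_scores)}")     # robust to outliers (8.0)
print(f"stdev  = {stdev(i_scores):.1f}")  # how much reviewers disagree (~1.8)
```

A wide spread is itself informative: it tells readers that the community disagrees about how impactful the work is.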
How will reviewers be held accountable to ‘Peer Improvement’ standards?
Because both Q and I reviewers’ identities are disclosed, we hope that transparency will prevent bad reviewer behavior overall. In the future of DS/DC, reviewers will also be scored by the community based on their comments and feedback, as a form of accountability.
Are authors expected to do follow up experiments based on Q&I review?
Authors are encouraged, but not obligated, to respond to reviewers’ in-line edits as well as comments. Importantly, reviewers’ comments focus on whether the conclusions match the experimental data. Reviewers will point out conceptual issues, flaws in experimental design, and issues with data quality that do not support the claims, and provide suggestions for how a claim could be better supported or improved. This review-and-response process is a simulation: authors are not expected to perform experiments in response to reviewers’ comments.
How are scores calculated?
Reviewers fill out their scoresheets, and SolvingFor will calculate the final scores based on {equation to be determined, we’re working on it!}
Who sees the scores?
Reviewer comments from the Q scorecard and in-line edits will be available to authors once each reviewer has finished their edits. Numeric Q and I scores will not be shared with authors or the public until the pilot has concluded (once all papers are published in their destination journals).
Okay, I’m in! How do I become a reviewer?
Stellar! We’re looking for immunology PIs to sign up as reviewers; declaration of co-reviewers (grad students, postdocs, etc.) will be encouraged.
Sign up here!
Okay, I’m in! How can I join the pilot?
Huzzah! We’re so happy to have you. We’re looking for immunology pre-prints that are under review this winter. Sign up here!