This article was written for and published during the beginning of Peer Review Week 2017. Here on the Impact Blog, we’ll be featuring posts covering a variety of perspectives on and issues relating to peer review, which also consider this year’s theme of “Transparency”. To kick things off, Jon Tennant, Daniel Graziotin, and Sarah Kearns consider what can be done to address the various shortcomings and problems of the peer review process. While there is obviously substantial scope for improvement, none of the ideas proposed here are beyond our current technical and social means. The key challenge may lie in galvanising our scholarly communities. (Originally posted on the LSE Impact Blog.)

Keywords: open access, peer review, social media, community.


Peer review of scientific research papers forms one of the cornerstones of our knowledge generation process. Since its origins in the 19th century, it has become a diverse and complex process. The purpose of peer review is to evaluate and improve the quality, novelty, validity, and impact of submitted papers and the underlying research. Despite being viewed by many as the gold standard of certification for research, there is now increasing evidence that the ideal of peer review is not equally matched by its process or practice.

Research has shown that peer review is prone to bias in numerous dimensions, frequently unreliable, and can even fail to detect fraudulent research. This is a critical issue at a time when public education and engagement with science, and trust in research, are needed due to the proliferation of “alternative facts”, where expertise is often casually dismissed in important socio-political domains. While we believe that the ideal of peer review is still needed, it is its implementation, and the present lack of any viable alternative, that must be improved.

In our latest research paper, published in F1000Research, we brought together an international, cross-disciplinary team of 33 authors to look at the history, present status, and potential future of peer review. As a team, we felt there were some important questions about peer review that needed to be examined in greater detail. For example, what role does it play in our modern digital research and communications infrastructure? Does it perform to the high standards with which it is generally regarded? How can the power and practices of the web, particularly the social aspects of Web 2.0, be leveraged to think about innovative models for peer review?

We showed that there has been an explosion in innovation and experimentation in peer review in the last five years. This has been fuelled by the advent of web technologies, and an increasing realisation that there is substantial scope to improve the process of peer review. By combining the knowledge and experiences from across a diverse range of disciplines, we took an introspective look at peer review, one we hope will be useful for future discussions on the topic.

We believe that there are three core traits that underpin any viable peer-review system: quality control and moderation, performance and engagement incentives, and certification and reputation. We also strongly believe that any new system of peer review must be able to demonstrate that it not only outperforms the current models, but also that it avoids or eliminates as many of the biases in existing systems as possible.

Quality control and moderation

Quality control is the core function of peer review, and is what distinguishes the scholarly literature from almost any other type of literature. Typically this has been administered in a closed, venue-coupled system with few actors: authors, reviewers, and editors, with the latter managing the process. A strong coupling of peer review to journals plays an important part in this due to the common, albeit deeply flawed, association of researcher prestige with journal brand. The issue here is that the “quality” of peer review remains based on trust, rather than anything substantive. While, intuitively, the quality of peer review at more prestigious journals might be considered higher than at smaller journals, we cannot objectively state that it is: the opacity of the process means there is simply not enough evidence either way.

Other social knowledge-exchange platforms such as Wikipedia, Stack Exchange, and Reddit have self-organised communities and governance structures (e.g. moderators) that represent possible alternative models. Here, moderators have the same operational functionality as journal editors in terms of gate-keeping and facilitating the process of engagement. Individual research communities could transparently elect groups of moderators based on expertise (e.g. automated through ORCID, using the number of previous publications as a threshold), prior engagement with peer review, and assessment of their reputation. Different communities could use specific social norms and procedures to govern content and engagement, and to self-organise into individual but connected platforms.
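As a rough sketch of how such an automated, threshold-based pre-screen for moderator elections might work, consider the Python fragment below. The Candidate record, the thresholds, and the idea of counting publications from an ORCID profile are all illustrative assumptions, not features of any existing platform.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical researcher record, e.g. assembled from an ORCID profile."""
    orcid: str          # persistent researcher identifier
    publications: int   # number of published works on record
    prior_reviews: int  # previous engagements with peer review
    reputation: float   # community-assessed reputation, 0.0-1.0

def eligible_for_moderation(c: Candidate,
                            min_publications: int = 5,
                            min_reviews: int = 3,
                            min_reputation: float = 0.7) -> bool:
    """Automated pre-screen for moderator elections; all thresholds are
    community-specific and purely illustrative."""
    return (c.publications >= min_publications
            and c.prior_reviews >= min_reviews
            and c.reputation >= min_reputation)

# Placeholder ORCID iD; this candidate clears the illustrative thresholds.
print(eligible_for_moderation(Candidate("0000-0000-0000-0000", 12, 8, 0.9)))  # True
```

The automated check would only be a first filter; the actual election, as described above, would remain a transparent community decision.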

In such a system, published objects could be preprints, data, software, or any other digital research output. Quality control would be provided by a system of semi-automated but managed and open peer review, with public interaction, collaboration, and transparent refinement through version control. Community moderation and crowdsourcing would play an important role in filtering out unconstructive, underdeveloped feedback that could delay efficient research progress.

When authors and moderators collectively deem the peer-review process to have been sufficient for an object to have reached a community-decided level of quality or acceptance, the review is complete. Some journals, such as the Journal of Open Source Software, already implement this process successfully. While traditional editorial roles are not foreseen in our vision, we recognise there are still potential extreme cases where consensus is not achieved and third-party involvement is required. This can be achieved, for example, through impromptu election of a highly ranked super-moderator or arbiter, or an F1000-like system of discontinued peer review. Following this process, the objects can be indexed, and the updated version can be assigned a persistent identifier such as a DOI, as well as an appropriate license allowing for maximum reuse (including subsequent submission to a traditional journal) and process sustainability.
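One way to picture this lifecycle is as a small state machine: an object is revised publicly across versions, marked complete only once both authors and moderators agree it has reached the community’s quality bar, and only then indexed with a persistent identifier. Everything in the sketch below (the stage names, the mint_doi stand-in, the placeholder DOI prefix) is hypothetical, not a description of any existing platform.

```python
from enum import Enum

class Stage(Enum):
    SUBMITTED = "submitted"
    UNDER_REVIEW = "under review"
    COMPLETE = "complete"
    INDEXED = "indexed"

class ReviewedObject:
    """A preprint, dataset, or other research output under open review."""
    def __init__(self, title: str):
        self.title = title
        self.version = 1
        self.stage = Stage.SUBMITTED
        self.doi = None

    def revise(self):
        """Transparent refinement: each public revision bumps the version."""
        self.version += 1
        self.stage = Stage.UNDER_REVIEW

    def try_complete(self, authors_agree: bool, moderators_agree: bool):
        """Review ends only when authors and moderators collectively deem
        the community-decided level of quality to have been reached."""
        if authors_agree and moderators_agree:
            self.stage = Stage.COMPLETE

    def index(self, mint_doi):
        """Assign a persistent identifier (and, in practice, a reuse licence)."""
        if self.stage is Stage.COMPLETE:
            self.doi = mint_doi(self)  # stand-in for a DOI registration service
            self.stage = Stage.INDEXED

obj = ReviewedObject("Example preprint")
obj.revise()
obj.try_complete(authors_agree=True, moderators_agree=True)
obj.index(lambda o: f"10.9999/example.v{o.version}")  # placeholder DOI prefix
print(obj.stage, obj.doi)  # Stage.INDEXED 10.9999/example.v2
```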

The important distinction here from the traditional model is the active promotion of inclusive participation and community interaction, with quality defined and controlled by a process of engagement and digestion of content. If desired, these objects could then form the basis for manuscript submissions to journals, perhaps even fast-tracking them as the quality assessment would already have been completed. The role of peer review would then be coupled with the concept of a “living published unit”, and with dissemination and validation of research occurring independent of journals.

Performance and engagement incentives

To motivate wider engagement with peer review, incentives are needed. Lowering the threshold of entry for different research communities makes open peer review more accessible and less burdensome. One of the most widely held reasons for performing peer review is a quid pro quo sense of academic altruism or duty to the research community. However, at present this exchange is imbalanced, and researchers still receive far too little credit or recognition for their efforts. Tying incentives directly to certification and reputation is the ultimate goal of most academic incentive systems.

New ways of encouraging peer review can be developed by quantifying engagement with the process and linking this to academic profiles. To some extent, this is already performed on platforms like Publons or ScienceOpen, where the records of individuals reviewing for a particular journal can be integrated into ORCID. Platforms such as Reddit, Amazon, and Stack Exchange use gamification and represent a model in which participants receive virtual rewards for engaging with review. These activities are further evaluated and ranked by the community via upvotes and usefulness scores. A hierarchical reward system based on badges could be integrated into this, including features for content or individuals such as “Top 5% reviewer”, “Successfully replicated”, “500 upvotes”, or whatever individual communities decide is best.
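To make the badge idea concrete, here is a minimal sketch of rule-based badge assignment. The badge names echo the examples above; the cut-offs and the notion of a reviewer percentile are invented for illustration and would, in practice, be set by each community.

```python
def award_badges(upvotes: int, reviews_completed: int, percentile: float) -> list[str]:
    """Rule-based badge assignment; each community defines its own rules."""
    badges = []
    if percentile <= 5.0:            # within the top 5% of reviewers by rating
        badges.append("Top 5% reviewer")
    if upvotes >= 500:
        badges.append("500 upvotes")
    if reviews_completed >= 50:      # illustrative threshold
        badges.append("Veteran reviewer")
    return badges

print(award_badges(upvotes=730, reviews_completed=12, percentile=3.2))
# ['Top 5% reviewer', '500 upvotes']
```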

The distinction from the traditional process is that highly rated reviews gain more exposure, more scrutiny and recognition, and ultimately more credit. This creates the incentive to engage with the process in a way that is most beneficial to the community, which can then be used as a way of establishing prestige for individuals and for content.

Certification and reputation

The current peer-review process is generally poorly rewarded as a scholarly activity. Performance metrics provide one way of certifying peer review, and a basis for incentivising participation. As outlined above, a fully transparent and interactive process, combined with reviewer identification, makes clear the level of engagement and added value from each participant.

Certification can be provided to contributors based on their engagement with the process: community evaluation of their contributions (e.g. as implemented on Amazon, Reddit, or Stack Exchange), combined with their reputation as authors. Rather than allowing anonymous or pseudonymous participants, peer review in this open system requires full identification in order to connect on-platform reputation with authorship history. Rather than being journal-based, certification is granted on the basis of continuing engagement with the research process and is revealed at both the object and the individual level.

The distinction from the traditional model here is that achievement of certification takes place via an evolving and continuous process of community engagement and can be quantified. Models like Stack Exchange are ideal candidates for such a system, and operate through a simple and transparent up-voting and down-voting scheme, combined with achievement badges. Such quantifiable performance metrics could easily be tied to the academic reputation of individuals. As this is decoupled from journals, it alleviates many of the well-known issues with journal-based ranking systems and is fully transparent. This should be highly appealing not just to researchers, but also to those in charge of hiring, tenure, promotion, grant funding, and research assessment, and therefore could become an important factor in future policy development.
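A minimal sketch of how such a voting scheme could be turned into a single quantifiable metric might look as follows. The weights, the zero floor, and the per-review record format are all illustrative, community-tunable assumptions, not a description of how Stack Exchange or any other platform actually computes reputation.

```python
def reputation_score(reviews: list[dict],
                     upvote_weight: float = 1.0,
                     downvote_weight: float = 2.0,
                     badge_bonus: float = 25.0) -> float:
    """Aggregate a reviewer's on-platform reputation from community votes.

    Each review is a dict like {"up": 12, "down": 1, "badges": 0}.
    Downvotes are weighted more heavily to discourage low-quality reviews;
    all weights are illustrative parameters a community could tune.
    """
    score = 0.0
    for r in reviews:
        score += upvote_weight * r.get("up", 0)
        score -= downvote_weight * r.get("down", 0)
        score += badge_bonus * r.get("badges", 0)
    return max(score, 0.0)  # floor at zero in this sketch

print(reputation_score([{"up": 40, "down": 2, "badges": 1},
                        {"up": 12, "down": 0, "badges": 0}]))  # 73.0
```

Because every vote and weight is public, anyone could recompute such a score, which is precisely the transparency that journal-based proxies lack.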

Challenges and future considerations

None of the ideas proposed above is radical, unachievable, or beyond current technical and social means. There are working models that demonstrate the potential feasibility of such a system, as exemplified by a huge range of web-native platforms, many of which scholars already engage with. Instead, what we suggest here is the simple recombination of existing traits from successful social platforms into a single, hypothetical hybrid platform.

A key challenge our proposed hybrid system will have to overcome is simultaneous uptake across the whole scholarly ecosystem. In particular, this proposal involves a requirement for standardised communication between a range of key participants. Real shifts will occur where elements of this system can be taken up by specific communities, but remain interoperable between them. Identifying sites where stepwise changes in practice are desirable to a community is an important next step. Increasing the – currently almost non-existent – role and recognition of peer review in promotion, hiring and tenure processes could be a critical step forward for incentivising the changes we have discussed. As such, we expect that research funders at a range of levels will be interested in pooling knowledge and resources to build such a platform as a consortium.

By looking at the increasing adoption of social technologies by digital communities, we can see that there is considerable scope and appetite for the development and adoption of the new peer-review initiatives proposed here. Such an initiative has the potential to resolve many of the technical and social issues associated with peer review, and even to disrupt our entire system of scholarly communication. High-quality implementations of these ideas in systems that communities can choose to adopt may act as de facto standards that help build towards consistent practice and adoption. We look forward to seeing progressive developments in this domain in the near future.

Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.


About the authors

Jon Tennant completed his PhD at Imperial College London and his research looks at deep time evolutionary patterns in groups like dinosaurs and crocodiles. Alongside this, he currently works as a PLOS Paleo Community Editor, is Communications Director for ScienceOpen, a freelance science writer, founder of paleorXiv and the Open Science MOOC, and author of the kids’ dinosaur book Excavate! Dinosaurs. He can be found on Twitter at @Protohedgehog, talking about open access.

Daniel Graziotin received his PhD at the Free University of Bozen-Bolzano, Italy. He is a postdoctoral researcher at the Institute of Software Technology, University of Stuttgart, Germany. His research interests include behavioral software engineering, empirical software engineering, open science, and studies of science in general. He is associate editor at the Journal of Open Research Software and academic editor at the Research Ideas and Outcomes (RIO) journal. Daniel is recipient of the Alexander von Humboldt Fellowship (2017), the European Design Award (bronze; 2016), and the Data Journalism Award (2015). He is an open science advocate; lately, he is attempting to open up software engineering academic conferences. He can be found on Twitter at @dgraziotin and on his website ineed.coffee.

Sarah Kearns is currently in the University of Michigan’s Chemical Biology PhD program, where she studies non-conventional chemical interactions to improve drug development. Outside of the lab, she writes for Michigan Science Writers, where she serves as the communications director, and her own blog Annotated Science. She is interested in improving scientific literacy and fostering transdisciplinary science, along with promoting open access and open science and advancing the current scholarly infrastructure. Sarah can be found on Twitter @annotated_sci talking about science and open access.

