2

Peer review in a rapidly evolving publishing landscape

Irene Hames

Abstract:

Pre-publication peer review has long been recognised as a cornerstone of scholarly publishing. Despite various criticisms and a number of shortcomings, this scrutiny and critical assessment by experts is still considered essential by many. This chapter describes the realistic expectations of peer review and what constitutes good practice, emphasising the important role of the editor. It also outlines the many ways traditional peer review is being adapted, the new models that are appearing, and the increasing emphasis on openness and transparency. Various problems are addressed, including difficulties in finding reviewers, the imbalance between publication output and participation in peer review worldwide, the ‘wastage’ of reviews that accompanies the repeated submission of manuscripts to journal after journal following rejection, and the increasing pressure on researchers not only to publish but to publish in high-impact journals. All these are impacting the peer-review processes of journals and editorial workload. The recent innovation of concentrating pre-publication assessment only on the soundness of research methodology and reporting is gaining acceptance among researchers keen to publish their work without undue delay, and is being adopted by an increasing number of publishers. In this model, the evaluation of interest, importance and potential impact is left for after publication. The possibilities for post-publication review and evaluation in an online world are many and varied, but there are also many challenges. It is clear that, nearly three and a half centuries on from the appearance of the first journals, the opportunities for innovation and experimentation in peer review are greater than ever before.

Key words

Peer review

scholarly publishing

manuscript management

reviewers

blinded review

open review

editors

editorial decision-making

post-publication review

peer-review models

ethics

transparency

Introduction

Peer review in scholarly publishing is the process by which research output is subjected to scrutiny and critical assessment by individuals who are experts in those areas. Traditionally this process of ‘editorial’ peer review (to distinguish it from the peer review of grant applications) takes place before publication. Researchers prepare reports of their work (commonly referred to as manuscripts) and submit them to journals for consideration for publication. Such scrutiny by external experts has been used in journal publishing for nearly 300 years, going back to the early to mid-18th century when the Royal Societies of Edinburgh and London started to consult their members on publication decisions in a regulated way, calling on those most knowledgeable in the subject matter (Kronick, 1990; Spier, 2002). Peer review only became widespread, however, after the middle of the 20th century, when the numbers of articles being produced, and their specialisation and complexity, grew beyond what most editors could handle on their own or had the expertise to assess critically (Burnham, 1990). Today, the scale of the operation is enormous, with over 1.5 million articles being published each year in around 25 000 peer-reviewed journals (Ware and Mabe, 2009).

Peer review is considered by many to be a critical and key element in journal publishing – not only by editors, who have long recognised and appreciated its value (e.g. Laine and Mulrow, 2003), but also by the research community. The overwhelming majority of researchers in the large international surveys carried out by Ware and Monkman (2008) and Sense About Science (2009) considered that peer review greatly helps scientific communication, that without it there would be no control, and that the accuracy and quality of work that has not been peer reviewed cannot be trusted. Around 90 per cent of the respondents in both surveys felt that their own last accepted paper had been improved by peer review – an important element that is often overlooked. That is not to say that peer review is perfect – it isn’t. Even though the two surveys found much support for peer review, there was a degree of dissatisfaction (12 and 9 per cent, respectively). Also, around a third in each survey did not agree that the current peer-review system is the best that can be achieved.

The shortcomings of peer review have been highlighted (Wager and Jefferson, 2001) and criticisms levelled on a number of fronts (Smith, 2006, 2010a): about its quality and fairness, that it is open to abuse and bias, that it is expensive, slow and lacks consistency, and that it is conservative. There are concerns about delays, both in the time taken for review and decision at individual journals, and in the time taken for a piece of work to be published because it sometimes has to go from journal to journal, being rejected a number of times before finally being accepted (a manuscript can only be sent to one journal at a time). This multiple reviewing of what might essentially be the same manuscript has led to growing concern about the ‘wastage’ in reviewer effort (see below).

There have also been criticisms that in some top journals the weight given to reviewer suggestions for additional experiments is too great, with escalating and unrealistic demands being made and insufficient critical editor intervention (Lawrence, 2003; Ploegh, 2011). This highlights the central importance of the editor in the peer-review process. As put so well by one respondent in the Ware and Monkman survey (2008, p. 4): ‘…[peer review] works as well as can be expected. The critical feature that makes the system work is the skill and insight of the editor. Astute editors can use the system well, the less able who follow reviewer comments uncritically bring the system into disrepute.’

A common misconception is that reviewers ‘accept’ and ‘reject’ manuscripts. They don’t. They assess, advise and make recommendations, and it is editors who make the publication decisions for their journals. If extra experiments are being suggested, it is their role to decide whether they are required. It is always a fine balance, and there are many authors who have reason to be grateful to the editors who have prevented them from publishing prematurely and provided the feedback that has led to the eventual publication of good, sound work. But equally, most researchers can relate ‘horror stories’ about their experiences of having work reviewed by journals. It is a human activity and so prone to the usual failings. There are also editors who are not as good, thorough or impartial as they should be, and reviewers who are slow, biased or rarely return adequate reviews.

As peer review determines what is published and where it is published, it plays a central role in the career prospects and funding opportunities of researchers as these depend heavily on publication records. Because of the importance of peer review there have been attempts to find evidence for its effectiveness and contest the proposition that ‘a faith based rather than an evidence based process lies at the heart of science’ (Smith, 2010a, p. 1). Jefferson et al. (2002, 2007), for example, have carried out systematic reviews on studies of the peer-review process in biomedical journals. They found little empirical evidence in support of its use as a mechanism to ensure the quality of biomedical research, but cautioned that ‘the absence of evidence on efficacy and effectiveness cannot be interpreted as evidence of their absence’ (Jefferson et al., 2007, p. 2). They actually found very few well-designed studies that could be included in their reviews and concluded that a well-funded programme of research was urgently required, although they acknowledged that ‘the methodological problems in studying peer review are many and complex’ (Jefferson et al., 2007, p. 2). There is ongoing research into peer review, but this comes mostly from the biomedical field. Many high-quality studies are presented at the International Congresses on Peer Review and Biomedical Publication, which have been held every four years since 1989 (see http://www.ama-assn.org/public/peer/peerhome.htm). A review has also been carried out of studies published between 1945 and 1997 on a wide range of aspects of editorial peer review and covering the scholarly literature spectrum (Weller, 2001).

Publishers of scholarly journals play an important role in peer review, having invested heavily in both the supporting technological infrastructure and management of the peer-review process. Development of web-based online systems for manuscript submission and peer review began in the mid-1990s, and by the late 1990s they were entering the mainstream (Tananbaum and Holmes, 2008). In the following decade, these systems were refined and increased in sophistication, and adoption grew as authors, reviewers, editors and publishers became keen to reap the many benefits (Ware, 2005). At the beginning of the second decade of the 21st century, the great majority of journals, especially in STM (Scientific, Technical and Medical) areas, have web-based online submission and review. Part of this rapid adoption has been due to the large investment by publishers in the implementation of online systems and in the provision of training and support to editors and editorial offices. The importance of training cannot be overestimated, both for the technological aspects and for induction into good and ethical practice. Online systems are a tool, and just knowing how to use them is not the same as operating peer review according to good practice.

One of the problems with peer review, and probably why there is a degree of dissatisfaction with it, is that the quality across journals is very variable. There is little standardisation, and some editors and journals do not even know what good practice is. The situation is exacerbated by the nature of editorial work, where most people learn ‘on the job’. Editors are usually appointed on the basis of their research records, reputation and ‘vision’ for the journal. Often they will come to the role without much experience of the editorial side of research publication. They will inevitably have been authors and reviewers, perhaps have been on editorial boards, but taking responsibility for peer review and editorial decision-making may be new to them. Considering the great power that comes with being an editor and the effects the actions of editors can have on the careers, reputations and research-funding opportunities of researchers, it is essential that they are properly equipped to take on this important role. Publishers have large pools of expertise they can draw on, both in-house and in the editorial offices of their journals. They should have induction and training programmes for their new editors and editorial staff, refreshing this regularly, and providing updates on new developments. In addition to the high-quality information some publishers provide on their own websites, a number of excellent organisations exist where editors can find guidelines on various aspects of peer review. Publishers have a duty to make their editors aware of these and to provide guidance on which can be trusted as sources of good and reliable advice.

Peer review is currently the subject of intense debate, with sometimes quite extreme and polarised views being put forward, from the belief that ‘the peer review system is breaking down’ (Fox and Petchey, 2010) to the recommendation ‘the time has come to move from a world of “filter then publish” to one of “publish then filter” ’ (Smith, 2010b). Blog postings on peer review tend to become vigorous discussions, receiving numerous comments (see, for example, Grant, 2010; Neylon, 2010; Rohn, 2010). Sometimes there is confusion about basic issues. What needs to be made very clear is that good practice in peer review is system and business-model independent, i.e. it doesn’t matter whether a paper-, email- or web-based system is used, or whether a journal is subscription based, open access, with author-side payment, or has a hybrid model. Good and bad examples exist in all, and global statements linking quality with one model cannot, and should not, be made. It would be naive, however, to ignore the fact that author-side-payment models have presented the opportunity for abuse and unscrupulous business ventures where authors may be charged to have their work published in journals purporting to be peer reviewed, but without much, or even any, quality review taking place. Indeed, Davis (2009a) has reported the case of a ‘nonsense’ manuscript being submitted and accepted for publication after ‘peer review’ and being sent a request for payment of the publication fee.

There is another common misconception. It is often assumed that the larger and more important or high impact a journal is, the better is its quality of peer review. This is not necessarily so. The large established journals do have very experienced teams and sophisticated processes built up over many years, but there are also many small, specialist journals operating rigorous peer review, often led by dedicated and knowledgeable editors and their equally committed editorial teams.

Peer review as the foundation of the primary literature

Peer review in perspective

Peer review is widely thought to be the cornerstone of scholarly publishing. But why is it important for researchers to publish their work? There are two main reasons. The first is to add their findings to the scholarly record (the primary literature). This is a reporting function and it is here that peer review has a central and critical role, making sure that the ‘minutes of science’ (Velterop, 1995) are a sound basis for further research, checking that work has been reported correctly and clearly, preventing inclusion of studies with flawed design or methodology.

The second reason researchers need to publish is linked to an assessment function. Researchers are judged by the journals in which they publish, and institutions are judged by the publication records of their researchers. There is currently a journal hierarchy based on Impact Factor (Garfield, 2006; and see Thomson Reuters, http://thomsonreuters.com/products_services/science/free/essays/impact_factor/) and this journal attribute is used, but not without considerable criticism (Lawrence, 2007; UK House of Commons Science and Technology Committee, 2011, paragraph 177), as a ‘proxy’ evaluation for the merits of articles and researchers. Decisions on appointment, promotion and grant funding are influenced very heavily by it.

There is real concern that researchers are being put under increasing pressure not only to publish, but to publish in high-impact journals, many of which will accept only what they consider to be the most important and novel work (consequently some have rejection rates of over 90 per cent; McCook, 2006). The use by certain countries of cash incentives for publication in international journals, with very large payments for acceptance by journals with high Impact Factors (Fuyuno and Cyranoski, 2006; Shao and Shen, 2011), has also led to concerns that integrity may sometimes be compromised. The increased pressure to publish can have effects on journals and editorial offices, and their peer-review processes – greatly increased numbers of submissions, authors aiming too high in their choice of journal and being rejected (sometimes a number of times at different journals as they work their way down the journal pecking order), work being sliced too thinly so as to increase publication output (‘salami’ publishing), premature submission of research (with, again, consequent rejection) and decisions being contested more frequently because so much hinges on a positive outcome. There may also be ethical implications, with increasing levels of questionable behaviour and misconduct (Martinson et al., 2005; Fanelli, 2009). This is not the healthiest of situations for researchers, scholarly publishing or peer review (Lawrence, 2003).

A quote by Stephen Lock made in 1991 (p. xi), the year he retired as editor of the British Medical Journal (BMJ), still sums up the situation well: ‘And underlying these worries was yet another: that scientific articles have been hijacked away from their primary role of communicating scientific discovery to one of demonstrating academic activity.’

Submissions to journals are set to go on increasing. Also, the number of submissions from countries such as China and India, where research activity is expanding rapidly, is growing, resulting in an increasing proportion of global publications (Adams et al., 2009a,b). In materials science, China has already become the largest single-country producer, having overtaken both Japan and the USA, and it will soon surpass the combined output of the EU-15 (Adams and Pendlebury, 2011). It is predicted to become the largest producer of research papers in the next decade, possibly by as early as 2013 (Royal Society, 2011a). The quality of the work from China is improving, and is making up an increasing proportion of the papers in high-impact journals (Nature, 2011). Publication is therefore likely to become even more competitive, especially in high-impact journals. This change in the ‘geography of publication’ is also impacting submission–reviewing dynamics (see below).

Peer review is used by many journals as a tool for selection as well as a quality-control step to check on the soundness of research methodology, results and reporting, to help editors choose those articles most suitable for their journal readership, and to meet the standards that have been set by their journals for quality, novelty, breadth of interest and potential impact. This grouping of articles into a hierarchy of journals has been shown to be important to researchers (Tenopir et al., 2010). It is, however, a very much more subjective activity than the screen for soundness. The two functions of peer review have traditionally gone hand-in-hand. There is now, however, a new model, where the two functions have been separated. In 2006, the journal PLoS ONE was launched by the Public Library of Science (PLoS), with the remit to publish all submitted scientific and medical research that was sound – the peer-review process would concentrate only on ensuring scientific rigour, and not on the novelty, importance, interest or potential impact of the work. The evaluation of that would be left for the post-publication phase. This would ensure publication of sound research wasn’t delayed as a result of authors needing to go to more than one journal because of rejection on the grounds of insufficient novelty or impact. The PLoS ONE model is being emulated by a number of other publishers, who are producing their own similar journals (e.g. BMJ Open, Sage Open, Scientific Reports from Nature Publishing Group, Biology Open from the Company of Biologists, AIP Advances from the American Institute of Physics), and is attracting a lot of interest from both researchers and grant funders. It is leading to a series of ‘repository-type’ journals – large online, open-access collections of research articles, either multi-disciplinary or subject-specific, that have been peer reviewed for soundness, with no evaluation beyond that required for publication. This is a rapidly evolving area, closely tied to increasing moves to open access, concerns about the proxy use of publication in high-impact journals in research/researcher assessment, and the opportunities arising for post-publication evaluation in an online world.

What peer review can and can’t do

Peer review is sometimes portrayed as an infallible gold standard. It isn’t. But in the hands of an experienced and knowledgeable editor it is a powerful tool. What are the realistic expectations of peer review? Ideally, it should (adapted, with permission, from Hames, 2007, pp. 2–3):

1. prevent the publication of bad work – filter out studies that have been poorly conceived, designed or executed;

2. check as far as possible from the submitted material that the research reported has been carried out well and there are no flaws in the design or methodology;

3. ensure that the work is reported correctly and unambiguously, complying with reporting guidelines where appropriate, with acknowledgement to the existing body of work and due credit given to the findings and ideas of others;

4. ensure that the results presented have been interpreted correctly and all possible interpretations considered;

5. ensure that the results are not too preliminary or too speculative, but at the same time not block innovative new research and theories;

6. provide editors with evidence to make judgements as to whether articles meet the selection criteria for their particular publications, for example on the level of general interest, novelty or potential impact;

7. provide authors with quality and constructive feedback;

8. generally improve the quality and readability of articles;

9. help maintain the integrity of the scholarly record.

What is it unrealistic to expect of peer review? It is not a guarantor of absolute truth. It is not its role to police research integrity or to determine whether there has been misconduct. The peer-review process cannot, for example, detect fraud that has occurred in the laboratory, and if an author is set on presenting falsified or fabricated results this can only be determined at laboratory level. Research and publication misconduct do occur, as has been demonstrated by surveys (Martinson et al., 2005; Fanelli, 2009) and as can be seen by viewing the case studies brought to organisations such as COPE (the Committee on Publication Ethics, http://www.publicationethics.org/cases) and the retractions highlighted almost daily on the Retraction Watch blog (http://retractionwatch.wordpress.com/). However, if suspicions or allegations of misconduct arise during the peer-review process, editors cannot ignore them and must investigate at journal level to see whether there is enough substance to pass them on for proper investigation (see Hames, 2007, pp. 173–99). It is the responsibility of the institutions to which the individuals suspected of misconduct belong to carry out investigations into whether misconduct has or has not occurred. COPE is an organisation editors can turn to for help and advice on all aspects of publication ethics. Besides the many resources available on its website (codes of conduct, flowcharts on how to handle ethical problems, and other guidance), it holds a quarterly forum where editors can bring cases of ethical concern for discussion and advice.

The peer-review process

Requirements of a good peer-review system

What are the requirements of a good peer-review system? It should:

■ involve assessment by external reviewers (not just by editors or editorial staff – that isn’t peer review);

■ be fair and without bias, either positive or negative;

■ be timely;

■ be kept confidential for any aspect that isn’t operating under open review (see below);

■ have systems and guidance in place to allow reviewers to provide quality and constructive feedback for both the editor and the authors;

■ have decision-making based solely on the merits of the work, not influenced by who the authors are or where they come from, and not dictated by commercial considerations.

Peer review should not be a one-way street, with the editor working autonomously and rigidly, disregarding the views of others. Rather, it should be considered a dialogue – between the authors and the editor, between the reviewers and the editor, and between the readers and the journal, with all parties acting ethically and treating one another with respect. It is, however, expected that reviewers do not generally communicate direct with the authors unless this is approved by the journal. Clarity is of the essence in all the interactions, and helps avoid delays, misunderstanding and disputes: authors need to know the scope and quality of work a journal is interested in receiving, reviewers need to know the parameters within which they are reviewing, and editors need to make their instructions and decisions clear.

Journals vary in size and so do editorial offices, ranging from ones where one person does everything to those where there are many staff. The editor him/herself may carry out all the roles, seeing all manuscripts on submission and steering them through the whole peer-review process. They may share responsibilities with members of an editorial board, with individual editors handling manuscripts in different subject areas or from different geographical regions. The principles of good practice are the same whatever the size or organisation of a journal.

Publishers vary in the type of editor they use for their journals. Some (e.g. Nature and the BMJ) appoint internal, ‘professional’, staff editors, who are usually individuals with relevant research or practitioner experience but no longer active in research or in practice. Others (the large majority of specialist journals) have editors who are active researchers (sometimes called ‘academic’ editors), based in either academia or industry, or practitioners in their fields. Some journals (such as Science) have a combination of the two, where external editors are used for their subject knowledge and research expertise, and the staff editors manage the peer-review process and make the decisions on acceptance.

In all journals, the workflows and procedures (which should all be documented) need to be evaluated regularly to ensure they are still optimal and up to date with new developments, and are not either superfluous or unduly cumbersome.

‘Traditional’ peer review

The ‘traditional’ model of peer review includes, irrespective of whether web-based, email or paper systems are used, the following steps:

■ The author submits his or her manuscript, acting on behalf of his or her co-authors in multi-author papers. This author may also take on the role of ‘corresponding author’, being the author with whom the journal corresponds during the peer-review process, or another author may be nominated for this.

■ Manuscript receipt is acknowledged and an identification number allocated (this happens automatically with most online systems).

■ The manuscript goes through a ‘triage’ step, where it is (i) checked for completeness and adherence to journal policies and ethical guidelines (effectively a technical/administrative check), and (ii) assessed editorially by someone with editorial decision-making responsibility (i.e. staff or academic editors, acting alone or in consultation), for such things as topic, scope, quality and interest, to determine whether these fall within the areas and standards defined by editorial policy.

■ If deficiencies are found in the technical check that are extensive enough to preclude further consideration by a journal, the manuscript is returned to the authors for improvement and resubmission. Some guidance may also be given as to the level of interest in seeing it come back so that authors do not embark on a substantial effort only to have the manuscript rejected on editorial grounds once it is resubmitted. In less severe cases the manuscript may be put on hold while the authors provide the missing items, and moved on in the editorial process once these are in (usually with a time limit set for compliance).

■ If a manuscript is found to be unsuitable on editorial grounds, it is rejected without further review – this is an ‘editorial rejection’, i.e. it hasn’t been subjected to peer review. Rejection rates at this stage vary from journal to journal, but can be very high (60–80 per cent) in the top, highly selective journals.

■ Those manuscripts that pass the initial triage are then sent for external, ‘peer’, review.

■ The experts to whom manuscripts are sent for external assessment are usually called ‘reviewers’ or ‘referees’ (although the latter rather implies that the person is acting as an umpire or arbitrator, whereas that is the role of the editor).

■ An editorial decision is made on the basis of the reviewers’ reports and all the other information that the editor has related to that submission, and communicated to the author.
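
The flow above lends itself to a schematic summary. The following minimal Python sketch mirrors the triage and decision steps; the status names, function signatures and manuscript number are illustrative assumptions, not taken from any real manuscript-management system.

```python
from dataclasses import dataclass, field

# Status names and signatures below are illustrative assumptions,
# not drawn from any real manuscript-management system.
@dataclass
class Manuscript:
    ms_id: str                        # identification number allocated on receipt
    status: str = "submitted"
    reviews: list = field(default_factory=list)

def triage(ms, passes_technical_check, suitable_on_editorial_grounds):
    """Triage: (i) technical/administrative check, then (ii) editorial assessment."""
    if not passes_technical_check:
        ms.status = "returned_to_authors"   # for improvement and resubmission
    elif not suitable_on_editorial_grounds:
        ms.status = "editorial_rejection"   # rejected without external review
    else:
        ms.status = "in_external_review"    # sent to external, 'peer', reviewers
    return ms.status

def decide(ms, editor_judgement):
    """Reviewers advise and recommend; the editor decides, using the reports
    plus all other information related to the submission."""
    ms.status = editor_judgement(ms.reviews)  # e.g. 'accept', 'revise', 'reject'
    return ms.status

ms = Manuscript("MS-2011-0042")
triage(ms, passes_technical_check=True, suitable_on_editorial_grounds=True)
```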

With increasing awareness of the incidence of plagiarism, many publishers have adopted a new online service to help in its detection – CrossCheck (http://www.crossref.org/crosscheck/index.html), which can be integrated into online manuscript systems. This enables screening for textual duplication and similarity between submitted manuscripts and work that has already been published in one form or another (screening against articles submitted to other journals, to enable duplicate submission to be picked up, is not something that is currently available). Because of the large number of publishers of scholarly related content who have signed up to the initiative, and so allow DOI-identified content that is held behind pay walls to be added to the CrossCheck database and crawled by the iParadigms text-comparison software behind CrossCheck – iThenticate (http://www.ithenticate.com/) – it is a very powerful tool. Areas of textual duplication are highlighted, but it requires human intervention to determine whether plagiarism has occurred. Often specialist subject knowledge is required to make that assessment, along with a good understanding of the features and limitations of the system, and these should always be mixed with a healthy dose of common sense. Journals vary as to the stage at which the duplication check is done. Some scan all submissions, some only those selected for external review, some only those recommended for publication, prior to formal acceptance, others just a random selection or only when there is reason to suspect plagiarism. Journals also need to think about what they will do when they discover plagiarism and whether different circumstances, for example the type and extent of the plagiarism or the seniority of the authors involved, warrant different actions. COPE has prepared a discussion paper to address these sorts of issues (COPE, 2011a).
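
iThenticate’s matching technology is proprietary, but the general principle – finding stretches of overlapping text and leaving a human to judge them – can be illustrated with a simple word-shingle comparison. The sketch below is a toy illustration of that principle only, not CrossCheck’s actual algorithm.

```python
def shingles(text, k=5):
    """The set of overlapping k-word phrases ('shingles') in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap(submission, published_source, k=5):
    """Fraction of the submission's shingles that also occur in a published
    source. A high value flags the pair for human inspection; as noted above,
    it takes specialist judgement to decide whether plagiarism has occurred."""
    a, b = shingles(submission, k), shingles(published_source, k)
    return len(a & b) / len(a) if a else 0.0
```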

Screening for inappropriate digital image manipulation may also be carried out. Following the pioneering work by the Journal of Cell Biology in this area (Rossner and Yamada, 2004), other journals have introduced checks into their workflows.

The introduction of new technologies to help detect problems with submissions has been a great benefit to journals and the peer-review process. However, this has also brought issues that have to be addressed by journals and their publishers – the increased workload on editors and editorial staff, and the need to ensure that the expertise required, not only to use the new screening tools but also to interpret the results, is in place. Increasing detection of suspect cases is also leading to increased pressures on editors, in terms of both the time needed to look into them and knowing what to do if misconduct rather than honest error appears to have occurred. COPE has produced a number of flowcharts to help editors faced with cases of suspected misconduct.

Types of peer review

There are a number of types of peer review (see Table 2.1).

Table 2.1

The common forms of pre-publication peer review

Single-blind review: the reviewers know who the authors are, but the authors do not know who the reviewers are.

Double-blind review: neither the reviewers nor the authors know one another’s identities.

Open review: the authors and the reviewers know one another’s identities; increasingly extended to include publication of reviewers’ names and/or reports with articles, and open commenting.

Single-blind review

In single-blind review, the reviewers know who the authors are but the authors do not know who the reviewers are. This has long been the traditional form used by the majority of STM journals. Reviewer anonymity is considered to enable reviewers to provide honest, critical appraisals and unfavourable reviews without fear of reprisal from authors, this being particularly so in the case of younger reviewers and more senior authors, who may at some time be involved in their future job applications and funding proposals. The argument against this system is that there is a lack of accountability, providing unscrupulous reviewers with the opportunity to make unwarranted negative comments, to intentionally delay the peer-review process so that they or others with whom they are connected can get or keep a head start on similar work, or even to steal ideas from the authors. Unconscious bias may go unchallenged. Editors can also get away with not being as transparent as they should be, for example saying that an external reviewer has been used when they themselves have carried out the review.

Double-blind review

In double-blind review, which is the main form used in the humanities and social sciences, neither the reviewers nor the authors know one another’s identities. The argument is that this removes the potential influence (either positive or negative) of who the authors are (such as their status, gender or relationship to the reviewer) or where they come from (e.g. organisation, institution or country). One problem is that it can involve considerable effort to ensure manuscripts are anonymised, and another is that reviewers can frequently identify who the authors are (van Rooyen et al., 1998). Some organisations and journals (e.g. the American Economic Association and Political Analysis) that have traditionally used double-blind review are moving to single-blind review because they consider the effectiveness of the former has decreased in the age of online search engines, with reviewers being able to search for titles similar to that of the submitted manuscript, coming up with working papers, conference contributions and other items linked to the work submitted, and so identifying the authors (see Jaschik, 2011).

Open review and ‘transparent’ approaches

In open review the authors know the identities of the reviewers and the reviewers know the identities of the authors. This is felt by some to be a much more transparent approach than blinded review, with greater accountability (Godlee, 2002), and has been used successfully by the BMJ since 1999. Open review has taken on a broader meaning in recent years (see Table 2.1), being extended to include, for example, publication of the reviewers’ names or their names and reports with published articles, and open commenting, either from a restricted community or from anyone. Authors and reviewers need therefore to check carefully what ‘open review’ means when they submit or agree to review manuscripts and that they are happy with the degree of openness. Release of reviewers’ names, reviews and editorial correspondence extends only to cases where submissions are accepted for publication; these details, including that a submission has been made, are generally kept confidential for those manuscripts not accepted.

Some publishers and journals have gone further with open review to increase transparency. BioMed Central, for example, provides a link to a ‘pre-publication history’ for each article published in its BMC series medical journals. The named reviewers’ reports, the authors’ responses and all the versions of the manuscript are posted online with the published article. Another journal, The EMBO Journal, has introduced what it calls a ‘transparent editorial process’ (Pulverer, 2010). Since 2009 it has invited authors to have a ‘peer review process file’ included online with their published articles. Around 95 per cent do, and the files are apparently popular and reasonably accessed (Pulverer, 2010). Included in the file are the timeline and correspondence relevant to the processing of the manuscript: the reviewers’ reports (but with the reviewers’ identities kept anonymous), the authors’ responses, the editorial decision letters, and other correspondence between the editor and authors. There are no confidential comments in the review forms, and reviewers who have any concerns, for example about ethical issues or data integrity, are asked to write direct to the editor. As well as making transparent the ‘black box’ of the editorial process (for articles accepted for publication), the journal sees the editorial information it is providing as an educative tool for researchers, and evidence that the peer-review process is usually constructive and effective. BMJ Open posts signed reviewers’ reports and authors’ responses in a ‘peer review history’ file attached to the articles it publishes, along with previous versions of the manuscript.

Reviewer selection and finding reviewers

Selection of the most appropriate reviewers for each manuscript is critical to quality peer review. A good editor will carefully match reviewers with manuscripts, basing selection on appropriate area and level of expertise, effectiveness as a reviewer, and absence of conflicting interests that might prevent a fair and unbiased appraisal. The calibre of the reviewers is also critical. The editor of the Journal of the American College of Cardiology (JACC) has made the pertinent observation that ‘the reviewers themselves are the weakest (or strongest) links’ in peer review (DeMaria, 2010). Good reviewers who respond quickly to communications and provide timely and constructive reviews without bias are invaluable; those who are always slow, regularly fail to submit their reviews or provide superficial or inadequate reviews are a liability to editors who want to earn and keep the respect of their communities and attract and retain good authors.

Potential reviewers are identified from a number of places: the journal’s own database, the specialist editors, the manuscript bibliography, subject and publication databases and suggestions from the authors (many journals ask authors to provide these, as well as offering them the opportunity to list, with reasons, individuals they feel should not be approached to review). They may come from academia, research institutions, professional practice or industry – wherever the best people for the job are. The journal’s own database should become a powerful resource, with up-to-date information on all aspects of reviewer expertise and performance as well as contact details. It should also grow, with new names being added, especially as new research disciplines and geographical areas open up and develop.
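
As an illustration of what such a reviewer database might hold, here is a hypothetical record schema in Python; every field name is an assumption made for illustration rather than a description of any particular system.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ReviewerRecord:
    # Hypothetical fields; real journal databases differ.
    name: str
    email: str
    affiliation: str
    expertise: list = field(default_factory=list)   # subject areas, kept current
    reviews_completed: int = 0
    reviews_declined: int = 0
    mean_days_to_review: Optional[float] = None     # timeliness record
    editor_quality_rating: Optional[float] = None   # review-quality record
    unavailable_until: Optional[date] = None        # declared absences
    current_assignments: int = 0                    # to avoid overloading
```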

Journals vary in the number of reviewers they assign to each manuscript, but most commonly two to three are used (ALPSP/EASE, 2000; ALPSP and Kaufman-Wills Group, 2005; Ware and Monkman, 2008). Many editors will, however, choose to send some manuscripts to a greater number than usual, for example those covering controversial topics or where widely held views are being overturned. Multidisciplinary research also presents challenges, and a number of reviewers will generally be chosen with expertise in one or more of the areas if it proves impossible to find reviewers with expertise across the whole paper. A trusted reviewer with more general expertise may also be brought in to provide a broad overview. Editor guidance is critical in the review of multidisciplinary work if quality reviews are to be obtained. Reviewers need to be directed to focus their efforts appropriately and so avoid wasting time struggling with parts that fall outside the areas they are competent to assess.

Is it getting harder to find reviewers? This is difficult to judge as few data exist. Informal conversations with editors and general comments suggest that some journals are finding it more difficult. There has been some speculation (Fox and Petchey, 2010, p. 325) that ‘the peer review system is breaking down and will soon be in crisis’ because of a ‘tragedy of the reviewer commons’, where individuals exploit the system by submitting manuscripts but have little incentive to review manuscripts from others. Fox and Petchey (2010) have proposed a new system where authors ‘pay’ for their submissions using a ‘currency’ called PubCreds, which they earn by carrying out reviews. A number of arguments have been put forward against this model (Davis, 2010), and it is difficult to see it working, but suggestions for innovation should always be welcomed for discussion. Petchey and Fox (2011) are, in collaboration with some ecology journals, accumulating data on reviewing. Editors at the journal Molecular Ecology have published some data (Vines et al., 2010). They analysed the number of requests to review that had to be sent to get a review between 2001 and 2010. The mean number did increase (from 1.38 in 2001 to 2.03 in 2010), suggesting that it was harder to find reviewers in 2010 than in 2001. However, as the change occurred mostly in 2008, coinciding with the journal’s move from an email-based editorial system to an automated one, the authors suggest that invitations from 2008 on might have been blocked by spam filters, with some invitations never reaching the intended recipients. They also found their reviewer pool increased in proportion to increased submissions, and there was no increase in the average number of reviews by individual reviewers. From their results they concluded that there is ‘little evidence for the common belief that the peer-review system is overburdened by the rising tide of submissions’, and that their reviewer pool is accommodating the increased submissions. Other editors have reported similar experiences (British Antarctic Survey, 2011).
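
The Molecular Ecology metric is straightforward to compute: review invitations sent divided by reviews obtained. In the sketch below, only the 1.38 and 2.03 means come from Vines et al. (2010); the raw tallies are invented to reproduce them.

```python
def requests_per_review(invitations_sent, reviews_received):
    """Mean number of requests needed to secure one review."""
    return invitations_sent / reviews_received

# Invented tallies chosen to reproduce the published means:
print(requests_per_review(276, 200))   # 1.38, the 2001 level
print(requests_per_review(406, 200))   # 2.03, the 2010 level
```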

One of the reasons some journals may be finding it harder to find reviewers is that there is currently an imbalance between where submissions are coming from and who is doing the reviewing. For example, the USA is producing about 20 per cent of papers globally but conducting about 32 per cent of the reviews, whereas China is producing about 12–15 per cent of the papers but probably doing only 4–5 per cent of the reviewing (Elsevier, 2011a). This is believed to be a temporary, transitionary situation, and likely to become better balanced as young researchers in the newly emerging scientific nations become more established and experienced in peer reviewing. Publishers and journals have a crucial role to play in providing training in reviewing and publication procedures and ethics in these areas of the world, and also the tools editors need to help them find reviewers. New author-identification and name-disambiguation initiatives such as Open Researcher & Contributor ID (ORCID; http://www.orcid.org/) will aid identification of specific reviewers from countries where many names are common. Training for all reviewers is something that is generally lacking, and 68 per cent of the researchers in the Sense About Science survey (2009) agreed that formal training would improve the quality of reviews. One of the key recommendations of a UK House of Commons Science and Technology Committee inquiry into peer review was that all early-career researchers should be given the option for training in peer review (UK House of Commons Science and Technology Committee, 2011, paragraph 119). The important role of publishers in training reviewers from countries which are not traditional scientific leaders was also stressed, with the comment that this should help alleviate the imbalance between publication output and participation in peer review (paragraph 130).
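
One simple way to express this imbalance is the ratio of a country’s share of world reviewing to its share of world papers, where 1.0 means reviewing in proportion to output. A quick calculation with the figures just quoted (taking midpoints of the ranges):

```python
def reviewing_balance(review_share, paper_share):
    """Share of world reviewing divided by share of world papers."""
    return review_share / paper_share

print(round(reviewing_balance(0.32, 0.20), 2))    # USA:   1.6  (over-contributing)
print(round(reviewing_balance(0.045, 0.135), 2))  # China: 0.33 (under-contributing)
```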

The whole peer-review process only works because enough willing and appropriately qualified reviewers can be found. Why do they agree to review? It seems that the reasons are more altruistic than self-promotional, with the main ones being to play a part as a member of the academic community and enjoying being able to improve papers (Ware and Monkman, 2008; Sense About Science, 2009). Peer review is also regarded as an integral part of the professional activity of researchers, although it is currently not generally acknowledged in any formal way that brings them professional credit (see below for ways reviewers are acknowledged).

It is courteous, and ultimately more effective and efficient, to check with reviewers whether or not they will review a manuscript rather than just sending it to them (see Hames, 2007, pp. 53–60). Before reviewers are approached, checks should be made to ensure there aren’t any reasons why they shouldn’t be used as reviewers: for example, they are already reviewing other manuscripts, they have indicated that they are not currently available for reviewing duties, their reviewing record is poor (for either timeliness or review quality), they are at the same institution(s) as the author(s) or have too close a relationship (professional or personal) to them, or there are other conflicting interests that might bias their evaluation.

Existence and recognition of bias and potentially conflicting interests

Godlee and Dickersin (2003) distinguish between ‘good’ and ‘bad’ biases, defining (p. 92) the former as ‘those in favour of important, original, well-designed and well-reported science’ and the latter as ‘those that reflect a person’s pre-existing views about the source of a manuscript (its authors, their institutions or countries of origin) or about the ideas or findings it presents’. It is probably impossible to eliminate all bad biases, but good editors and journals work hard to minimise them in their choice of reviewers and by the guidance they give them.

According to the ICMJE Uniform Requirements for Manuscripts (International Committee of Medical Journal Editors, http://www.icmje.org/): ‘Conflict of interest exists when an author (or the author’s institution), reviewer, or editor has financial or personal relationships that inappropriately influence (bias) his or her actions (such relationships are also known as dual commitments, competing interests, or competing loyalties)’ (ICMJE, 2011). However, just because a potentially conflicting relationship exists doesn’t mean that an individual shouldn’t be chosen to review a manuscript – that decision is up to the editor. The crucial issue is disclosure of all potentially competing interests. Journals handle this differently, ranging from those with very explicit guidelines and where specific and detailed questions are asked (e.g. those using the ICMJE-recommended form, http://www.icmje.org/ethical_4conflicts.html) to those that have just a general statement asking that all relevant potential conflicts of interest be disclosed. How much disclosure is required usually depends on how much potential financial influence there is from large commercial organisations.

Bias and potentially conflicting interests within a journal or editorial office have to be handled very stringently. For example, if one of the editors submits a manuscript to the journal they should not have any involvement with its processing or handling, or be able to access any of the details associated with its review or the decision process.

Monitoring the review process

It is important to monitor the progress of manuscripts through the review process. The status of manuscripts at all stages should be checked regularly to ensure that none is stuck anywhere for whatever reason. Online submission and review systems have made this task much easier, bringing up lists of submissions at various stages, highlighting where reviews are overdue or where responses and revisions are outstanding. They also allow complete records of activity and communications to be associated with manuscripts, making it easier to answer queries and make the checks required when working on a manuscript at the various stages. The importance of adding all relevant information to the system that isn’t automatically recorded – such as information from phone calls and emails – should be stressed to editorial users, so that records are complete and present a true record and audit trail that all members of the editorial team can consult, not only at the present time but also in the future, for example if a submission needs to be revisited, perhaps as part of a misconduct investigation.

Reminder schedules need to be set up for all stages. Personally tailored emails are important at times because authors and reviewers can get frustrated at automatic requests that may be inappropriate or reminders that are too demanding or badly timed, especially when reviewers have, for example, made arrangements with the editor or editorial office to submit their reviews later. One of the keys to a successful journal is the good relationships it establishes with its community of authors and reviewers (who are, after all, mainly the same group) and the goodwill it builds up.
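
A minimal sketch of the kind of reminder logic involved, assuming a hypothetical 21-day review period; the agreed_extension parameter models exactly the situation described above, where a reviewer who has arranged a later date with the editorial office should not be chased automatically.

```python
from datetime import date, timedelta

def needs_reminder(assigned_on, due_after_days=21, agreed_extension=None,
                   today=None):
    """True if a review is overdue, honouring any individually agreed date."""
    today = today or date.today()
    due = agreed_extension or assigned_on + timedelta(days=due_after_days)
    return today > due

print(needs_reminder(date(2011, 1, 3), today=date(2011, 2, 1)))       # True: overdue
print(needs_reminder(date(2011, 1, 3),
                     agreed_extension=date(2011, 2, 14),
                     today=date(2011, 2, 1)))                         # False: extension agreed
```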

Evaluation and decision-making

When all the reviews for a manuscript are in from the external reviewers, a decision needs to be made on its fate. As stressed above, it is not the reviewers who make the decision on whether a manuscript should be published, but the editor. It is also the editor’s role to advise the authors on what does or does not need to be done to make a manuscript acceptable for publication in their journal (and many offer advice that will be helpful to the authors even if the manuscript is going to be declined and will need to be submitted to another journal). Any editor who always says ‘carry out the revisions recommended by the reviewers’ isn’t doing their job. Sometimes this direction will be applicable, but in many cases some intervention and direction by the editor are needed. There are hard and soft reviewers: some ask for too much additional work, work that is better suited to a follow-up publication; some may have missed an important point or not expressed themselves clearly. Some of their comments may even be wrong. When things like this are spotted by the editor it inspires confidence in authors that the editor has actually read their manuscript and carried out a critical appraisal of the reviewers’ comments.

Making the decision and communicating it to the authors

The arrangements for decision-making vary from journal to journal, ranging from the situation in small ones where a single editor makes all the decisions to that in large journals where a number of editors may be involved. With multiple editors, each may have responsibility for making decisions independently, or there may be layers of editorial seniority, with recommendations from, for example, subject editors going to a more senior editor for approval and, probably, also editorial input.

Decision-making isn’t a matter of ‘counting votes’, especially as different reviewers may have different strengths and expertise, and be focusing on different aspects of a single manuscript. Some may also not be too familiar with a journal or its policies and aspirations, particularly if they weren’t provided with good guidance as part of the review process. It is the comments resulting from the critical appraisals from the reviewers that are important. An editor may decide that further review is needed before a decision can be made, in which case the manuscript will be sent to an additional reviewer (or reviewers), perhaps with special guidance, instructions and questions based on what is already in. Alternatively, some editors may want to have a conversation with the authors to get their responses to specific questions, or clarification on any areas of confusion before making a final decision – this is part of the ‘dialogue’ of quality and fair peer review.

Once the decision has been made, it needs to be communicated to the authors. It should be made absolutely clear what the decision is, and, if a revision is being invited, exactly what the conditions for acceptance are – such as, what extra work needs to be done, what missing controls should be provided, and which journal policy requirements need to be met and cannot be compromised on, for example regarding material and data availability and sharing. Authors may not, for a number of reasons, be able to fulfil these and they need to know this at the point of decision, not months later when they submit a new manuscript only to find they have wasted all that time because it is impossible for them to meet the requirements critical for acceptance. Authors shouldn’t hesitate to get back to editors for clarification if they have any doubts or concerns. Good editors will be happy to provide this; it is in their best interests to have things sorted out as early as possible. Good editors will also always provide reasons for their decisions and actions. Authors more readily accept negative decisions when they are backed up with solid reasoning. This also maintains good relations and helps prevent authors being reluctant to submit future manuscripts to the journal.

It is important that editors are given complete editorial independence in their decisions on what to accept and include in their journals. The owners and publishers of journals must not try to influence their decisions in any way; this would be unethical. Decisions should also not be dictated by commercial considerations or influenced by factors such as the nationality, ethnicity, political beliefs, race or religion of the authors. A number of organisations have codes of conduct on both these aspects, for example, the World Association of Medical Editors (WAME, 2011a,b) and COPE (2011b, sections 6 and 16). COPE has also published a Code of Conduct for Journal Publishers (COPE, 2011c) that emphasises the importance of editorial independence. The WAME guidelines (2011a) recognise that there are some limits to editorial freedom and that the owners of journals have the right to remove editors, ‘but they should dismiss them only for substantial reasons such as a pattern of bad editorial decisions, disagreement with the long-term editorial direction of the journal, or personal behavior (such as criminal acts) that are incompatible with a position of trust’.

Reviewer acknowledgement and appreciation

Reviewers should, each time they review a manuscript, be thanked for their review, notified of the decision and sent all the reviewers’ reports (anonymised unless open review with revealed reviewer identity is used). This is very easy to do with online systems so there is no excuse not to. Not only is it courteous, it helps reviewers see things they have missed and better understand the scope of a journal and the quality it is looking for. The EMBO Journal has started to send reviewers all the other reports before the decision is actually made (calling this ‘cross-peer review’), actively encouraging them to comment on each other’s reports. The feedback can then be used to inform the decision. Reviewers are not generally, except in a few journals or for specific tasks such as statistical appraisal, paid. This issue has often been debated, but payment is generally thought not to be an effective or appropriate method of compensation (see, for example, WAME, 2007). The opinions of reviewers on this have been found to be divided, but with a majority thinking it would make publishing too expensive (Ware and Monkman, 2008). Because most reviewers are also authors (Ware and Monkman, 2008), a give-and-take relationship exists, and they benefit from critical appraisal and feedback on their own manuscripts. This of course only works if individuals take on their fair share of the reviewing load, and (as mentioned above) doubts have been expressed about this and the ‘tragedy of the reviewer commons’, where there is every incentive to submit manuscripts but little incentive to review (Fox and Petchey, 2010). It is, to a certain extent, up to editors to make sure that at their journals there is some degree of equality in the submission–reviewing balance.

Journals have devised various non-cash ways to reward their reviewers, including such things as public acknowledgement in a list published, usually annually, in the journal, free journal subscriptions or discounts on books, waivers of publication charges, gifts of various sorts (e.g. CDs, calendars, offprints), invitations to special receptions, CME (Continuing Medical Education) credits, letters of endorsement and personalised certificates. And, of course, good reviewers are often invited to join the editorial boards of journals for which they review. These sorts of acknowledgement are, however, by no means universal, and greater recognition of the work carried out by reviewers, by both publishers and employers, has been called for (UK House of Commons Science and Technology Committee, 2011, paragraph 164). This brings with it the need for all publishers to have in place robust systems for recording the work carried out by reviewers, and is another area where schemes such as ORCID will make it easier to assign credit accurately.

Consistency in decision-making

Achieving consistency in decision-making across a journal that is selective in what it publishes (i.e. does not base acceptance on just soundness) is important, both for the quality of the journal and in fairness to all its authors. Each journal needs to work out how this will be achieved and whose responsibility it will be to monitor it. It is critical that all the individuals with editorial decision-making responsibilities are made aware of editorial policies and standards when they join the journal, and that the whole editorial board is kept updated when changes are made. Where there are restrictions on how many articles a journal can publish, these individuals also need to be kept informed on how many manuscripts are being accepted across the journal and advised whenever the threshold for acceptance is changed, so that a backlog isn’t built up. They also need to be told when the remit on scope or type of submission changes, so that manuscripts that fall outside these aren’t inappropriately accepted. Once authors have been told a manuscript has been accepted, that must be viewed as a commitment to publish. Decisions shouldn’t be reversed unless serious problems are found with a paper, and new editors shouldn’t overturn decisions made by the previous editor unless serious problems are identified (COPE, 2011b, sections 3.2 and 3.3 respectively).

Rejected manuscripts – review ‘wastage’?

What of the reviews for the manuscripts that are rejected? In most journals these will just be left in the records and archived along with the submission and its associated correspondence. In an attempt to alleviate the burden on the reviewer pool, some publishers have introduced a system of ‘cascading’ submissions and reviews. When a manuscript is rejected, the authors are given the opportunity to have it passed on to another journal, along with the reviews. This system is used successfully by a number of publishers (e.g. Nature Publishing Group, BioMed Central, PLoS, IOP Publishing, the Royal Society of Chemistry, and the European Molecular Biology Organization, EMBO), but it is always the author’s choice whether to take up the transfer option or to submit afresh and get new reviews. Journals vary as to whether or not the reviewers’ names are passed on with the reviews, and this probably affects what the new editor decides to do: they can elect to send manuscripts for additional review if they feel it is necessary, and are probably more likely to do this if they do not know the identities of the reviewers whose reports have been passed to them from another journal. Cascading becomes more problematical between publishers, as exemplified by The Neuroscience Peer Review Consortium (http://nprc.incf.org/). This was set up in January 2008 as an alliance of neuroscience journals whose editors were concerned that they were seeing many solid manuscripts being rejected because of space limitations or because articles weren’t suitable for their journals. Journals within the Consortium (at the end of 2011 there were about 40) agree to accept manuscript reviews from other members of the Consortium. There seems only to have been one public update report (Saper and Maunsell, 2009), but from that it appears that the percentage of manuscripts being received from/forwarded to other members is low (1–2 per cent). Aside from concerns about lack of awareness amongst authors, and the fact that there are many more neuroscience journals outside the Consortium than in it, there are issues of commercial advantage, as explained by Philip Campbell, the editor of Nature, who has said (Campbell, 2011, paragraph 58):

‘This facility is controversial within NPG. We invest significant sums of money in our professional editors spending time both in the office and in visits cultivating contacts with referees, and fostering insightful refereeing as best we can. To then hand on the reports and names to another publisher is to some extent undermining our competitive edge. Indeed, the principle [sic] competitor of NN [Nature Neuroscience], Neuron, is not a part of the experiment, and we might well not have joined the experiment if it was.’

New models of peer review

The Internet and electronic publishing have presented opportunities to experiment with new approaches, and although traditional pre-publication peer review is highly valued it needs to be open to improvement and innovation. A number of models have been introduced; some are now quite established, while others are being run experimentally for trial periods. A small selection is given below.

Combining traditional and open review

Attempts have been made to couple a period of public, open commenting with more conventional, confidential review. One of the best-known models is that in use at the journal Atmospheric Chemistry and Physics. Here, after pre-screening by the editorial board, submissions are posted on the journal’s website as ‘discussion papers’. A moderated, interactive public discussion period of 8 weeks follows, during which the reviewers chosen by the journal post their comments (remaining anonymous if they choose), as can anyone else who wants to comment, although these additional commentators must disclose their identity. After this period, authors are expected, within 4 weeks, to publish a response to all the comments. Only after this are the editorial decision and recommendations made. All discussion papers and the associated comments remain permanently archived. The model seems to have been successful for this journal (see Pöschl, 2010, for a progress report), but even so, the level of commenting from the wider scientific community is relatively low (see below for further discussion of this issue), with only about one in four papers receiving comments in addition to those from the chosen reviewers. The Journal of Interactive Media in Education is another journal that combines private and public review.

Such approaches may not be suitable for all journals or for all disciplines. For example, when the journal Nature experimented with optional open review running in parallel with its conventional, confidential peer-review process, uptake by authors was low (5 per cent), particularly so in cellular and molecular fields, and only 54 per cent of the papers that were available received comments (Nature, 2006a). On analysis, the comments were of limited use to the editors in decision-making. ‘Pre-print’ or ‘e-print’ posting, as it is known, together with open commenting, may not work in fast-moving areas such as molecular biology, where concerns about being scooped are common, or in disciplines such as chemistry, where patents may be involved (Parker, 2011). In the physics community, however, pre-publication e-print posting with the opportunity for community commenting is the norm and has been very successful: arXiv (http://arxiv.org/), the e-print server in the fields of physics, mathematics, non-linear science, computer science, quantitative biology, quantitative finance and statistics, was established in 1991, and in 2011 contained nearly 700 000 e-prints and served about 1 million full-text downloads to around 400 000 distinct users every week (Ginsparg, 2011). Authors can go on to submit their work to journals for publication, and in the majority of the areas covered most do (although there is considerable variation amongst the sub-disciplines; Davis, 2009b).

One journal that has experimented successfully with open, public peer review is the Shakespeare Quarterly, a well-established humanities journal. In 2010 it opened up the review of some articles submitted for a special issue on ‘Shakespeare and New Media’ to public commenting (Shakespeare Quarterly, 2010). To ensure there would be comments from experts, the guest editor invited around 90 scholars to comment. Some did, along with self-selected commentators, and high-quality feedback was received. Some junior scholars were, however, put off commenting in case they contradicted more senior ones. One reason the experiment was generally successful was probably the time and effort the editor put into the project: ‘it was as controlled a process as traditional peer review. It was just controlled in a different way’ (Howard, 2010).

Selection of articles to review by reviewers rather than selection of reviewers by editors

Examples include ‘open (participatory) peer review’, an open-peer-review experiment at the Journal of Medical Internet Research, where reviewers can sign themselves up either as reviewers for specific articles (the abstracts of submitted articles that authors have agreed can be posted are listed on the site) or to be added to the reviewing database. This is similar to the model used for many years at the British Journal of Educational Technology, where members of the reviewer panel of over 250 are invited to ‘bid’ for newly submitted articles. Once or twice each month, a list of the titles of new articles is circulated to the panel, who choose those that interest them and that fall within areas where they are familiar with the topic. In PeerChoice, being trialled by Elsevier, reviewers can use analytics software to select articles that match their interests and competency (Elsevier, 2011b).

Greater author control

At BMC Biology, authors are allowed to opt out of re-review by the reviewers after revising their manuscript to address the original criticisms. The editors must then decide whether the authors’ responses are reasonable. At Biology Direct, authors must themselves get three members of the editorial board to agree to be responsible for reviewing the manuscript.

Accompanying commentary on publication

The journal Behavioral and Brain Sciences has an ‘open peer commentary’ feature. Here, particularly significant and controversial work is accompanied by 10–25 invited commentaries from specialists, together with the authors’ responses to them. BMC Biology also publishes an invited commentary from an expert in those cases in its author-opt-out scheme (see above) where revisions aren’t as extensive as they should be or there are other limitations, so that readers are aware of these.

It is clear that experimentation is going on at a number of journals and that there is the potential to ‘pick and mix’ the various features being tried. It will therefore be invaluable to other journals for the results of such experiments to be reported, helping inform decisions on changes they may be thinking of making. There may be both discipline- and journal-related factors that determine successful adoption. Nature (2006b) has published a series of analyses of, and perspectives on, peer review from a range of stakeholders that includes new models and approaches. The archive of the now closed Peer-to-Peer blog on nature.com also contains much useful and insightful information on various aspects of, and innovations in, peer review (http://blogs.nature.com/peer-to-peer/).

Post-publication review and evaluation

Even though, traditionally, peer review of scholarly work has taken place before publication, there have been opportunities for further review and commenting after publication. These have, however, been limited. With the advent of new social media and networking channels not even dreamed of until relatively recently, the opportunity exists for peer review to continue after publication to a much greater extent and much more easily. This goes beyond the traditional context of, for example, ‘letters to the editor’, which represent a limited and mostly moderated and selective mechanism for views to be expressed. There is now a spectrum that runs from that rather restricted case to the situation where anyone from anywhere in the world can take part: from a researcher in the same field to the amateur with an interest in the area, from the fanatic who enters online discussions to promote their pet theories or grievances to the Nobel laureate whose theories have been recognised by the highest award. This openness brings with it some issues: for example, who and what to trust? In open forums the responsibility rests with the reader. But readers can be helped to judge the reliability and trustworthiness of contributions and the points made by having transparency – who the comments are from, the contributor’s background experience and, crucially, what their affiliations are (so that potential competing interests can be taken into account). Levels of moderation and the amount of information required from contributors vary from field to field and journal to journal. But in deciding where on this spectrum they want feedback on their journals’ articles to fall, editors should not underestimate the potential importance of comments from outside the immediate community.

Commentary, article-level metrics and e-letters

A number of publishers have opened up articles in their journals to post-publication commenting, most notably BioMed Central, PLoS and the BMJ Group. PLoS has also introduced ‘article-level metrics’, whereby all the articles published in its journals carry details of usage, citations, social bookmarks and blog posts, as well as reader (‘star’) ratings and comments (PLoS, 2011). The aim is to help readers determine the value of the articles, both to them and to the wider community. AIP Advances, a new journal from the American Institute of Physics, is also offering article-level metrics, in an effort to allow articles to be judged on their own scientific merit. A common problem with post-publication commentary is getting people to take part. The level of engagement – with the exception of the BMJ, which seems to receive many ‘rapid responses’ to its articles – is generally low (Priem, 2011), which has been a disappointment to many. Schriger et al. (2011) analysed post-publication commenting in medical journals and found that 82 per cent of the articles had no responses, prompting the authors to conclude (p. 153) that ‘post-publication critique of articles … does not seem to be taking hold.’ There are probably a number of reasons for low participation: people are busy enough doing all the other things they need to do; there is no incentive to engage because this activity attracts no ‘credit’; and there may be reluctance to criticise openly the work of others (or indeed fear of being publicly criticised in return). There is also a problem with author engagement: author participation is important, but a reluctance to respond has been found (Gøtzsche et al., 2010). Online commenting and collaboration can work, however, as in the Polymath Project (Gowers, 2009), where over a thousand comments were received and a major mathematics problem solved after only 7 weeks – so maybe lessons can be learnt from the area of open science (Nielsen, 2011)?

New social media

New social media – such as blogs, Facebook, Twitter – present the opportunity for rapid feedback with extensive reach, bringing enormous potential benefits: alerting others to work they may not otherwise have seen, offering refinement and analysis, and bringing together people from different fields and geographical areas who may then start to collaborate. Very important is the ability to alert people to published work that is problematical or suspect, either because of issues with the methodology or because of fabrication, falsification or other unethical behaviour. And because this can happen very quickly – sometimes within hours of publication – it can help minimise the damage that can occur when suspect work continues to appear sound in the scholarly literature. There have been some high-profile examples of work being criticised and found to be wrong very soon after publication as a result of vigorous activity in the blogosphere. For example, when in December 2010 a paper was published online in Science reporting a bacterium that can grow using arsenic instead of phosphorus and which incorporates arsenate into its nucleic acids (Wolfe-Simon et al., 2011), postings critical of the methodology and interpretation appeared almost immediately (Redfield, 2010). The story continued over the following months, predominantly through the Twitter hashtag #arseniclife, which has come to symbolise successful post-publication review and members of the scientific community working together openly, along the way also influencing the way the public thinks (Zimmer, 2011). Similar criticisms and blogosphere activity (Mandavilli, 2011) followed publication of a paper claiming to have identified genetic signatures allowing prediction of exceptional longevity in humans (Sebastiani et al., 2010). The same media, however, also present the opportunity for concerted and rapid criticism of individuals, which may be unwarranted or false, even defamatory, and that is an issue that concerns many.

Post-publication evaluation

There is one well-established ‘post-publication peer review’ service, Faculty of 1000 (F1000, http://f1000.com/), which uses ‘Faculties’ of experts to provide named evaluations and ratings of published research across biology and medicine. The service started in 2002 (Wets et al., 2003) with just over a thousand members (hence the name), and it prides itself that the majority of the articles it evaluates are not from what are thought of as the top-tier journals. It now has over 10 000 members, who are asked to select the best articles they have read in their specialties, highlighting the key findings and putting the work into context. As more repository-type journals, where sound work is published without any selection for interest, potential impact or various other parameters, come into existence (see above), the need for, and expectation of, such post-publication services – across all disciplines – will undoubtedly increase. There is great scope for expansion of post-publication evaluation in a number of forms, involving individuals, journals and organisations.

With work being discussed potentially via various social media and by numerous people there is also great interest in finding ways to aggregate all this information and quantify it in some meaningful way to gauge the impact of research. The field is very new, and the challenges considerable, but initiatives such as ‘altmetrics’ (Priem et al., 2010; Mandavilli, 2011) have already been set up and more are likely to follow.

Conclusion and outlook

So where does peer review currently stand? Despite some claims that it is ‘broken’ or ‘in crisis’, many feel that pre-publication peer review is not something they want to see disappear. Mark Ware reviewed the state of journal peer review at the end of 2010 and concluded that ‘far from being in crisis, peer review remains widely supported and diversely innovative’ (Ware, 2011, p. 23). Fiona Godlee (2011), editor of the BMJ, has commented of peer review: ‘At its best, I think we would all agree that it does improve the quality of scientific reporting and that it can improve, through the pressure of the journal, the quality of the science itself and how it is performed.’ The UK House of Commons Science and Technology Committee inquiry into peer review concluded (2011) that, although pre-publication peer review is not perfect and there is much that can be done to improve and supplement it (paragraph 278), ‘peer review in scholarly publishing, in one form or another, is crucial to the reputation and reliability of scientific research’ (paragraph 277).

There are without doubt problems, variations in quality and considerable scope for improvement. Editors and publishers need to work together to ensure not only that their peer-review processes are of the highest quality, but also that they evolve and adapt to what is required by the communities they serve. There is no room for complacency. At this stage in the history of journal publishing, three and a half centuries on from the appearance of the first journals, there is greater opportunity for innovation and experimentation than ever before, and this should be embraced. For the first time the two traditional functions of pre-publication review have been separated: a pre-publication check for methodological and reporting soundness, and post-publication evaluation for interest, importance and potential impact. This model is attractive to many because it allows research to be published without delay, enabling researchers to move on without wasting time submitting sound work to journal after journal in the quest for publication, and enabling others to benefit from their findings as soon as possible. The existence of a number of repository-type journals should help ensure healthy competition and the maintenance of high peer-review standards. Publication is closely linked to the assessment of researchers and their work by their institutions and research funders. For as long as these require the ‘badge’ of a high-impact journal, the pressure on researchers to publish in those journals will remain. Post-publication evaluation services, however, offer the potential to assess and highlight work not just in the short term, but over the longer period that it takes some work to be recognised. With the ‘publish all, filter later’ model (Smith, 2010b) it would be impossible to distinguish what is sound from what is not, or what is evidence-based rather than opinion, and so, in the view of many, this is not a realistic way forward. With the ‘publish all that is sound, evaluate later’ approach, however, researchers and the public can remain confident that what they are reading has been checked by experts.

Peer review is facing new challenges. Vast amounts of data are being generated in some disciplines. Some of these data need to be assessed by reviewers during peer review, or at least seen by them so that they can judge what needs to be included or made available in appropriate repositories for others to access and use. This raises issues of data availability, usability and storage, with all their technological and economic implications, areas that are under intense investigation and discussion (e.g. Royal Society, 2011b). Although reviewers have expressed willingness to review authors’ data (Ware and Monkman, 2008), the potential burden should not be underestimated, as expressed by one researcher (Nicholson, 2006): ‘The scientific community needs to reassess the way it addresses the peer-review problem, taking into account that referees are only human and are now being asked to do a superhuman task on a near-daily basis.’

Concerns about the increased burden on reviewers and the ‘wastage’ of reviews in the quest for publication have led to ‘cascading’ submission and review, and this may be something that more journals adopt, especially within individual publishers or between the sister journals of organisations.

The journal publishing landscape is evolving rapidly and it is likely that the diversity of peer-review models will increase. It is clear that different disciplines and communities, even different journals, have different requirements, and what works for one may not work for another. Decisions will also, to some extent, be dictated by authors and their research funders. But in this varied and rapidly evolving landscape there is little doubt that peer review – which is basically scrutiny and assessment by experts – will continue as an important component of the scholarly publishing process.

Sources of further information

Council of Science Editors (CSE, http://www.councilscienceeditors.org/). CSE’s White Paper on Promoting Integrity in Scientific Journal Publications, 2009 Update. http://www.councilscienceeditors.org/i4a/pages/index.cfm?pageid=3331 (accessed 3 January 2012).

Elsevier Peer Review resources. http://www.elsevier.com/wps/find/reviewershome.reviewers (accessed 3 January 2012).

Godlee, F. and Jefferson, T. (eds) Peer Review in Health Sciences, 2nd edn. London: BMJ Books, 2003.

Hames, I. Peer Review and Manuscript Management in Scientific Journals: guidelines for good practice. Oxford: Blackwell Publishing and ALPSP, 2007. http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1405131594.html

International Society of Managing and Technical Editors (ISMTE, http://www.ismte.org). Resource Central, a collection of resources, tools, instructions and articles to assist editorial offices in peer-review management processes. http://www.ismte.org/Resource_Central (accessed 3 January 2012).

Lock, S. A Difficult Balance: Editorial Peer Review in Medicine. Philadelphia: ISI Press; 1985.

Research Information Network. Peer Review: a guide for researchers. http://www.rin.ac.uk/peer-review-guide, 2010 (accessed 3 January 2012).

Wiley-Blackwell. Best Practice Guidelines on Publication Ethics: A Publisher’s Perspective. http://www.wiley.com/bw/publicationethics/ (accessed 3 January 2012).

References

Adams, J., King, C., Ma, N. Global Research Report: China. Thomson Reuters, November 2009. http://researchanalytics.thomsonreuters.com/m/pdfs/grr-china-nov09.pdf (accessed 4 January 2012).

Adams, J., King, C., Singh, V. Global Research Report: India. Thomson Reuters, October 2009. http://researchanalytics.thomsonreuters.com/m/pdfs/grr-India-oct09_ag0908174.pdf (accessed 4 January 2012).

Adams, J., Pendlebury, D. Global Research Report: Materials Science and Technology. Thomson Reuters, June 2011. http://researchanalytics.thomsonreuters.com/m/pdfs/grr-materialscience.pdf (accessed 4 January 2012).

ALPSP/EASE. Current practice in peer review: results of a survey conducted during Oct/Nov 2000. http://www.alpsp.org/Ebusiness/Libraries/Publication_Downloads/Current_Practice_in_Peer_Review.sflb.ashx?download=true, 2000 (accessed 6 January 2012).

ALPSP/Kaufman-Wills Group. The facts about open access: a study of the financial and non-financial effects of alternative business models for scholarly journals. http://www.alpsp.org/Ebusiness/ProductCatalog/Product.aspx?ID=47, 2005 (accessed 6 January 2012).

British Antarctic Survey. Written evidence submitted to the UK House of Commons Science and Technology Committee Inquiry into Peer Review (8 March 2011, PR 40). http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856vw_10.htm, 2011 (accessed 4 January 2012).

Burnham, J. C. The evolution of editorial peer review. JAMA. 1990; 263:1323–1329.

Campbell, P. Written evidence submitted to the UK House of Commons Science and Technology Committee Inquiry into Peer Review (10 March 2011, PR 60). http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856we11.htm, 2011 (accessed 4 January 2012).

COPE. How should editors respond to plagiarism? COPE discussion paper, April 2011. http://www.publicationethics.org/files/COPE_plagiarism_discussion_%20doc_26%20Apr%2011.pdf, 2011a (accessed 6 January 2012).

COPE. Code of Conduct and Best Practice Guidelines for Journal Editors, March 2011. http://www.publicationethics.org/files/Code%20of%20conduct%20for%20journal%20editors4.pdf, 2011b (accessed 6 January 2012).

COPE. Code of Conduct for Journal Publishers, March 2011. http://www.publicationethics.org/files/Code%20of%20conduct%20for%20publishers%20FINAL_1_0.pdf, 2011c (accessed 6 January 2012).

Davis, P. Open access publisher accepts nonsense manuscript for dollars. The Scholarly Kitchen, 10 June 2009. http://scholarlykitchen.sspnet.org/2009/06/10/nonsense-for-dollars/, 2009a (accessed 4 January 2012).

Davis, P. Physics papers and the arXiv. The Scholarly Kitchen, 17 June 2009. http://scholarlykitchen.sspnet.org/2009/06/17/physics-papers-and-the-arxiv/, 2009b (accessed 4 January 2012).

Davis, P. Privatizing peer review – the PubCred proposal. The Scholarly Kitchen, 16 September 2010. http://scholarlykitchen.sspnet.org/2010/09/16/privatizing-peer-review/, 2010 (accessed 4 January 2012).

DeMaria, A. N. Peer review: the weakest link. JACC (Journal of the American College of Cardiology). 2010; 55:1161–1162.

Elsevier. Evidence given by Mayur Amin to the UK House of Commons Science and Technology Committee Inquiry into Peer Review, 11 May 2011. Transcript of oral evidence, HC 856, Q127. http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf, 2011a.

Elsevier. PeerChoice pilot general information. http://www.elsevier.com/wps/find/P04.cws_home/peerchoice, 2011b (accessed 4 January 2012).

Fanelli, D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009; 4(5):e5738. doi:10.1371/journal.pone.0005738.

Fox, J., Petchey, O. L. Pubcreds: fixing the peer review process by ‘privatizing’ the reviewer commons. Bulletin of the Ecological Society of America. 2010; 91:325–333.

Fuyuno, I., Cyranoski, D. Cash for papers: putting a premium on publication. Nature. 2006; 441:792.

Garfield, E. The history and meaning of the journal impact factor. JAMA. 2006; 295:90–93.

Ginsparg, P. ArXiv at 20. Nature. 2011; 476:145–147.

Godlee, F. Making reviewers visible: openness, accountability, and credit. JAMA. 2002; 287:2762–2765.

Godlee, F. Evidence given to the UK House of Commons Science and Technology Committee Inquiry into Peer Review, 11 May 2011. Transcript of oral evidence, HC 856, Q97. http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf, 2011.

Godlee, F., Dickersin, K. Bias, subjectivity, chance, and conflict of interest in editorial decisions. In: Godlee F., Jefferson T., eds. Peer Review in Health Sciences. 2nd edn. London: BMJ Books; 2003:91–117.

Gøtzsche, P. C., Delamothe, T., Godlee, F., Lundh, A. Adequacy of authors’ replies to criticism raised in electronic letters to the editor: cohort study. BMJ. 2010; 341:c3926.

Gowers, T. Is massively collaborative mathematics possible? Gowers’s Weblog, 27 January 2009. http://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/, 2009 (accessed 4 January 2012).

Grant, R. P. On peer review. Confessions of a (former) Lab Rat, 15 April 2010. http://occamstypewriter.org/rpg/2010/04/15/on_peer_review/, 2010 (accessed 4 January 2012).

Hames, I. Peer Review and Manuscript Management in Scientific Journals: guidelines for good practice. Oxford: Blackwell Publishing and ALPSP; 2007.

Howard, J. Leading humanities journal debuts ‘open’ peer review, and likes it. The Chronicle of Higher Education, 26 July 2010. http://chronicle.com/article/Leading-Humanities-Journal-/123696/, 2010 (accessed 4 January 2012).

ICMJE. Uniform requirements for manuscripts submitted to biomedical journals. Ethical considerations in the conduct and reporting of research: conflicts of interest. http://www.icmje.org/ethical_4conflicts.html, 2011 (accessed 3 January 2012).

Jaschik, S. Rejecting double blind. Inside Higher Ed: Times Higher Education, 31 May 2011. http://www.timeshighereducation.co.uk/story.asp?storycode=416353, 2011 (accessed 4 January 2012).

Jefferson, T., Alderson, P., Wager, E., Davidoff, F. Effects of editorial peer review: a systematic review. JAMA. 2002; 287:2784–2786.

Jefferson, T., Rudin, M., Brodney Folse, S., Davidoff, F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database of Systematic Reviews. 2007; Issue 2. Art. No.: MR000016. doi:10.1002/14651858.MR000016.pub3.

Kronick, D. A. Peer review in 18th-century scientific journalism. JAMA. 1990; 263:1321–1322.

Laine, C., Mulrow, C. Peer review: integral to science and indispensable to Annals. Annals of Internal Medicine. 2003; 139:1038–1040.

Lawrence, P. A. The politics of publication. Nature. 2003; 422:259–261.

Lawrence, P. A. The mismeasurement of science. Current Biology. 2007; 17(15):R583–R585.

Lock, S. A Difficult Balance: Editorial Peer Review in Medicine. Third impression, with introduction. London: BMJ, 1991. (Originally published Philadelphia: ISI Press, 1985.)

Mandavilli, A. Peer review: Trial by Twitter. Nature. 2011; 469:286–287.

Martinson, B. C., Anderson, M. S., de Vries, R. Scientists behaving badly. Nature. 2005; 435:737–738.

McCook, A. Is peer review broken? The Scientist. 2006; 20(2):26.

Nature. Overview: Nature’s peer review trial. http://www.nature.com/nature/peerreview/debate/nature05535.html, 2006a (accessed 4 January 2012).

Nature. Nature’s peer review debate. http://www.nature.com/nature/peerreview/debate/index.html, 2006b (accessed 4 January 2012).

Nature. Nature Publishing Index 2010 China. http://www.natureasia.com/en/publishing-index/china/2010/, 2011 (accessed 4 January 2012).

Neylon, C. Peer review: what is it good for? Science in the Open, 5 February 2010. http://cameronneylon.net/blog/peer-review-what-is-it-good-for/, 2010 (accessed 4 January 2012).

Nicholson, J. K. Reviewers peering from under a pile of ‘omics’ data. Nature. 2006; 440:992.

Nielsen, M. Open science. Michael Nielsen, author blog, 7 April 2011. http://michaelnielsen.org/blog/open-science-2/, 2011 (accessed 4 January 2012).

Parker, R. Evidence given to the UK House of Commons Science and Technology Committee Inquiry into Peer Review, 4 May 2011. Transcript of oral evidence, HC 856, Q8. http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf, 2011.

Petchey, O., Fox, J. Progress on obtaining journal-level data on the peer review system. PubCreds: Fixing the Peer Review Process by ‘Privatising’ the Reviewer Commons, 18 February 2011. http://www.ipetitions.com/petition/fix-peer-review/blog/5040, 2011 (accessed 4 January 2012).

Ploegh, H. End the wasteful tyranny of reviewer experiments. Nature. 2011; 472:391.

PLoS. Article-level metrics information. http://www.plosone.org/static/almInfo.action, 2011 (accessed 4 January 2012).

Pöschl, U. Interactive open access publishing and public peer review: the effectiveness of transparency and self-regulation in scientific quality assurance. IFLA Journal. 2010; 36:40–46.

Priem, J., Taraborelli, D., Groth, P., Neylon, C. altmetrics: a manifesto, 26 October 2010 (modified September 2011). http://altmetrics.org/manifesto/, 2010 (accessed 4 January 2012).

Priem, J. Has journal article commenting failed? Jason Priem, author blog, 7 January 2011. http://jasonpriem.com/2011/01/has-journal-article-commenting-failed/, 2011 (accessed 4 January 2012).

Pulverer, B. A transparent black box. EMBO Journal. 2010; 29:3891–3892.

Redfield, R. Arsenic-associated bacteria (NASA’s claims). RRResearch, 4 December 2010. http://rrresearch.fieldofscience.com/2010/12/arsenic-associated-bacteria-nasas.html, 2010 (accessed 4 January 2012).

Rohn, J. Peer review is no picnic. guardian.co.uk, 6 September 2010. http://www.guardian.co.uk/science/blog/2010/sep/06/peer-review, 2010 (accessed 4 January 2012).

Rossner, M., Yamada, K. M. What’s in a picture? The temptation of image manipulation. Journal of Cell Biology. 2004; 166:11–15.

Royal Society. Knowledge, networks and nations: global scientific collaboration in the 21st century. RS Policy document 03/11, March 2011, 2011a.

Royal Society. Science as a public enterprise. http://royalsociety.org/policy/projects/science-public-enterprise/, 2011b (accessed 6 January 2012).

Saper, C. B., Maunsell, J. H. R. The Neuroscience Peer Review Consortium [editorial]. Neural Development. 2009; 4:10. http://www.neuraldevelopment.com/content/4/1/10

Schriger, D. L., Chehrazi, A. C., Merchant, R. M., Altman, D. G. Use of the internet by print medical journals in 2003 to 2009: a longitudinal observational study. Annals of Emergency Medicine. 2011; 57:153–160.

Sebastiani, P., Solovieff, N., Puca, A., et al. Genetic signatures of exceptional longevity in humans. Science. Published online 1 July 2010. doi:10.1126/science.1190532.

Sense About Science. Peer Review Survey 2009: Full Report. http://www.senseaboutscience.org/data/files/Peer_Review/Peer_Review_Survey_Final_3.pdf, 2009 (accessed 6 January 2012).

Shakespeare Quarterly. Shakespeare Quarterly Open Review: Shakespeare and New Media. http://mediacommons.futureofthebook.org/mcpress/ShakespeareQuarterly_NewMedia, 2010 (accessed 4 January 2012).

Shao, J., Shen, H. The outflow of academic papers from China: why is it happening and can it be stemmed? Learned Publishing. 2011; 24:95–97.

Smith, R. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine. 2006; 99:178–182.

Smith, R. Classical peer review: an empty gun. Breast Cancer Research. 2010a; 12(Suppl. 4):S13. doi:10.1186/bcr2742.

Smith, R. Scrap peer review and beware of ‘top journals’. BMJ Group blogs, 22 March 2010. http://blogs.bmj.com/bmj/2010/03/22/richard-smith-scrap-peer-review-and-beware-of-%E2%80%9Ctop-journals%E2%80%9D/, 2010b (accessed 6 January 2012).

Spier, R. The history of the peer-review process. Trends in Biotechnology. 2002; 20:357–358.

Tananbaum, G., Holmes, L. The evolution of Web-based peer-review systems. Learned Publishing. 2008; 21:300–306.

Tenopir, C., Allard, S., Bates, B., Levine, K. J., King, D. W., Birch, B., Mays, R., Caldwell, C. Research publication characteristics and their relative values: a report for the Publishing Research Consortium, September 2010. http://www.publishingresearch.net/documents/PRCReportTenopiretalJan2011.pdf, 2010.

UK House of Commons Science and Technology Committee. Peer review in scientific publications. Eighth Report of Session 2010–12, HC 856. London: The Stationery Office Limited, 2011. http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf; http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/85602.htm (accessed 4 January 2012).

van Rooyen, S., Godlee, F., Evans, S., Smith, R., Black, N. Effect of blinding and unmasking on the quality of peer review: a randomised trial. JAMA. 1998; 280:234–237.

Velterop, J. J. M. Keeping the minutes of science. In: Collier M., Arnold K., eds. Proceedings of the Second Electronic and Visual Information Research (ELVIRA) Conference. London: Aslib; 1995:11–17.

Vines, T., Rieseberg, L., Smith, H. No crisis in supply of peer reviewers. Nature. 2010; 468:1041.

Wager, E., Jefferson, T. Shortcomings of peer review in biomedical journals. Learned Publishing. 2001; 14:257–263.

WAME. Rewarding Peer Reviewers: Payment vs Other Types of Recognition. WAME Listserve Discussion, 12 February 2007 to 20 February 2007. http://www.wame.org/resources/wame-listserve-discussion/, 2007.

WAME. The Relationship Between Journal Editors-in-Chief and Owners. Policy statement posted 25 July 2009. http://www.wame.org/resources/policies#independence, 2011 (accessed 3 January 2012).

WAME. Geopolitical intrusion on editorial decisions. Policy statement posted 23 March 2004. http://www.wame.org/resources/policies#geopolitical, 2011 (accessed 4 January 2012).

Ware, M. Online Submission and Peer Review Systems: A review of currently available systems and the experiences of authors, referees, editors and publishers. ALPSP Research Report, 2005. http://www.alpsp.org/Ebusiness/ProductCatalog/Product.aspx?ID=40 (accessed 6 January 2012).

Ware, M. Peer review: recent experience and future directions. New Review of Information Networking. 2011; 16(1):23–53.

Ware, M., Mabe, M. The STM Report: an overview of scientific and scholarly journal publishing. International Association of Scientific, Technical and Medical Publishers, September 2009.

Ware, M., Monkman, M. Peer review in scholarly journals: perspective of the scholarly community – an international study. Publishing Research Consortium (PRC) Research Report, 2008. http://www.publishingresearch.net/documents/PeerReviewFullPRCReport-final.pdf

Weller, A. C. Editorial Peer Review: Its Strengths and Weaknesses. ASIST Monograph Series. Medford, NJ: Information Today, Inc.; 2001.

Wets, K., Weedon, D., Velterop, J. Post-publication filtering and evaluation: Faculty of 1000. Learned Publishing. 2003; 16:249–258.

Wolfe-Simon, F., Blum, J. S., Kulp, T. R., et al. A bacterium that can grow by using arsenic instead of phosphorus. Science. 2011; 332:1163–1166.

Zimmer, C. The discovery of arsenic-based Twitter: how #arseniclife changed science. Slate, 27 May 2011. http://www.slate.com/id/2295724/ (accessed 4 January 2012).
