Empirical Software Engineering, arXiv:2009.01209
2021
Peer review is a key activity intended to preserve the quality and integrity of scientific publications. However, in practice it is far from perfect. We aim to understand how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers, in order to identify both what makes a good reviewing approach and what makes a good paper. We first conducted a series of in-person interviews with well-respected reviewers in the software engineering field. We then used the results of those interviews to develop a questionnaire for an online survey, which we sent to reviewers from well-respected venues covering a number of software engineering disciplines, some of whom had won awards for their reviewing efforts. We analyzed the responses from the interviews and from 175 reviewers who completed the online survey (both reviewers who had won awards and those who had not). We report several descriptive results, including that 45% of award-winners review 20 or more conference papers a year, compared to 28% of non-award-winners, and that 88% of reviewers spend more than two hours on a journal review. We also report qualitative results. The most important criteria for writing a good review were that it be factual and helpful, which ranked above others such as being detailed or kind. The features of papers that most often lead to positive reviews are a clear and well-supported validation, an interesting problem, and novelty. Conversely, negative reviews tend to result from papers with a mismatch between the method and the claims, and from papers with overly grandiose claims. The main recommendation for authors is to make the contribution of their work very clear in the paper. In addition, reviewers viewed data availability and consistency as important.