Reviewer Instructions

Executive Summary

  • We have reduced your load to a maximum of 5 papers: please write detailed, thoughtful reviews.

  • Anonymous reviews of accepted papers will be made public.

  • Include pros and cons in your review.

  • The reviewing deadline is July 21st; borderline papers may receive an extra review during the discussion phase.

  • Please let us know if you identify dual submissions that violate our policy, as detailed in the author instructions.

Introduction

Thank you for reviewing for NIPS! Your help is a very valuable service to the community, since the technical content of the program is largely determined through the efforts and comments of the reviewers. Below is a set of instructions and guidelines that will help you perform your reviews efficiently and effectively.

2014 Reviewing

  • We have worked hard to recruit a large number of reviewers so that we can reduce everyone's reviewing load: this year most reviewers have four papers or fewer, and some have five. In return we ask for thorough reviews.

  • As last year, we will make all the (anonymous) reviews of accepted papers public on the NIPS website. Reviewers will have the chance to edit these after the rebuttal period. This will provide context for these papers, help ensure that authors pay heed to reviewer comments, give younger researchers examples of how to review, and generally improve the transparency of the process.

Reviewing Timeline

  • July 1st: Start of reviewing process.
  • July 21st: Reviewing deadline. Reviews must be completed and entered into CMT.
  • July 21st - Aug 4th: Discussion period and additional reviews for borderline and low-confidence papers.
  • Aug 4th - Aug 11th: Author rebuttal period.
  • Aug 11th - Aug 19th: Final discussion period between reviewers and Area Chairs.

Conference system access

All reviews must be entered electronically into the CMT System; login help is available from this page. Reviewers may visit the site multiple times and revise their reviews as often as necessary before the reviewing deadline. When you were invited to become a reviewer, CMT sent you an automated email with instructions on how to log in. Use your email address as the login ID; you can change your password from the login screen.

During the review period, you will probably receive many emails from CMT (e.g., those notifying you of your paper assignments). Please make sure emails from CMT are not snagged by your spam filter!

Confidentiality

By viewing the papers, you agree that the NIPS review process is confidential. Specifically, you agree not to use ideas and results from submitted papers in your work, research or grant proposals, unless and until that material appears in other publicly available formats, such as a technical report or as a published work. You also agree not to distribute submitted papers or the ideas in the submitted papers to anyone unless approved by the program chairs.

Double blind reviewing

This year, we will continue to use double blind reviewing. Of course, the authors do not know the identity of the reviewers; this also holds for authors who are on the program committee. But in addition, the reviewers do not know the identity of the authors. Note, however, that the area chairs do know the author identities, to avoid accidental conflicts of interest and to help determine novelty.

Of course, double blind reviewing is not perfect: by searching the Internet, a reviewer may discover (or think he/she has discovered) the identity of an author. We encourage you not to actively attempt to discover the identities of the authors. If you have good reason to suspect that a paper has been published in the past, you may search the Internet, but we ask that you first read the paper in full. Also, based on the experience of other double-blinded conferences, we caution reviewers that the assumed authors may not be the actual authors; multiple independent invention is common, and different groups build on each other's work.

If you believe that you have discovered the identity of the author, we ask that you explain how and why in the "Confidential comments to PC members" in your review (see below). This will help the (non-blind) program committee determine the novelty of the work. This will also help us assess the effectiveness of double blind reviewing.

Supplementary material

You will notice that some authors submit supplementary material. Submission of additional material is allowed: authors can submit up to 10 MB of material, containing proofs, audio, images, video, or even data or source code.

Your responsibility as a reviewer is to read and judge the main paper. It is optional for you to read or view the supplementary material. However, keeping in mind the space constraints of a NIPS paper, you may want to consider looking at the supplementary material before complaining that the authors did not provide a fully rigorous proof of their theorem, or only demonstrated qualitative results on a small number of examples.

The supplement can either be a PDF file, or an archive, either compressed (gz, tgz or zip) or uncompressed (tar). On UNIX computers, you can use gunzip followed by tar -xvf to unpack the archives. On Windows computers, WinZip should be able to unpack tar.gz files.
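If it is more convenient, the supplement can also be unpacked programmatically. The short Python sketch below uses only the standard library to handle the allowed formats; the file name supplement.tar.gz and the output directory are placeholders chosen for illustration, not names prescribed by CMT.

    # Minimal sketch: unpack a supplementary archive (zip, tar, tar.gz, tgz).
    # "supplement.tar.gz" and "supplement_contents" are placeholder names.
    import tarfile
    import zipfile

    path = "supplement.tar.gz"  # replace with the downloaded supplement file

    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            zf.extractall("supplement_contents")
    elif tarfile.is_tarfile(path):  # covers .tar, .tar.gz and .tgz
        with tarfile.open(path) as tf:
            tf.extractall("supplement_contents")
    else:
        print("Not an archive; the supplement is probably a single PDF.")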

Formatting issues

We ask that you double-check that the papers assigned to you follow the submission guidelines with respect to length (i.e., they are at most 9 pages, where the ninth page must contain only references), format, and anonymity, and that they are not jokes! Please notify the program chairs immediately, at program-chairs@nips.cc, if you find any serious formatting or anonymity problems with a submitted paper.

Previously published work and dual-submissions

Where possible, reviewers should identify submissions that are very similar (or identical) to versions that have been previously published, or that have been submitted in parallel to other conferences. A clarification of this policy is given in the Author and Submission Instructions. If you detect violations of the dual submission policy please note this in the "Confidential comments to PC members" section of the review form.

Writing your reviews: Review Content

We ask that reviewers pay particular attention to the question: does the paper add value to the NIPS community? We would prefer to see solid, technical papers that explore new territory or point out new directions for research, as opposed to ones that may advance the state of the art, but only incrementally. A solid paper that heads in a new and interesting direction is taking a risk that we want to reward. For more background on what makes a good NIPS paper, please see these guidelines.

The high quality of NIPS depends on having a complete set of reviews for each paper. Reviewer scores and comments provide the primary input used by the program committee to judge the quality of submitted papers. Far more than any other factor, reviewers determine the scientific content of the conference. However, we also stress that short superficial reviews that venture uninformed opinions about a paper are worse than no review, since they may result in the rejection of a high quality paper that the reviewer simply failed to understand.

Reviewer comments have two purposes: to provide feedback to authors, and to provide input to the program committee. Reviewer comments to authors whose papers are rejected will help them understand how NIPS papers are rated, and how they might improve their submissions in the future. Reviewer comments to authors whose papers are accepted will help them improve the paper for the final conference proceedings. Reviewer comments to the program committee are the basis on which accept/reject decisions are made. Your comments are seen by the area chair and the other reviewers. Please do express your honest opinions. However, please also make sure that your comments are considerate; this will help maximize the effectiveness of the overall process, in particular the author rebuttal phase.

Overview

You'll be asked to give a Quality Score and Confidence for each paper (see Quantitative evaluation section, below). In addition, this year we are adding an "Impact Score". The idea is to estimate the value of the paper for the wider NIPS community. We will be using this score to help sort papers where the other measures are less informative. Please explain your evaluation and impact assessments in the "Comments to author(s)" text box provided.

Your written review should begin by summarizing the main ideas of each paper and relating these ideas to previous work at NIPS and elsewhere. While this part of the review may not provide much new information to authors, it is invaluable to members of the program committee, and it demonstrates to the authors that you understand their paper. You should then discuss the strengths and weaknesses of each paper, addressing the criteria described in the Qualitative Evaluation section, below, and in the NIPS Evaluation Criteria document. Please read the review criteria and use them to guide your decisions. It is tempting to include only weaknesses in your review; however, it is important to also mention the strengths, since an informed decision must weigh both. It is particularly useful to include a list of arguments for and against acceptance. If you believe that a paper is out of scope for NIPS, please corroborate this judgment by consulting the list of topics in the call for papers. Finally, please fill in the "Summary of review" -- this should be a short 1-2 sentence summary of your review.

Importantly, reviewer comments should be detailed, specific and polite, avoiding vague complaints and providing appropriate citations if authors are unaware of relevant work. As you write a review, think of the types of reviews that you like to get for your papers. Even negative reviews can be polite and constructive! Remember that you are not assessing your interest in the paper, but its quality as a scientific contribution to the field; again, imagine someone not working on the same topic as you assessing your papers.

If you have information that you wish only the program committee to see, you may fill in the "Confidential comments to PC members" box. The confidential comments to the program committee have many uses. Reviewers can use this section to make recommendations for oral versus poster presentations, to make explicit comparisons of the paper under review to other submitted papers, and to disclose conflicts of interest that may have emerged in the days before the reviewing deadline. You can also use this section to provide criticisms that are more bluntly stated.

Quantitative Evaluation

Reviewers give each paper a score between 1 and 10. The program committee will interpret the numerical score in the following way:

10: Top 5% of accepted NIPS papers, a seminal paper for the ages.

I will consider not reviewing for NIPS again if this is rejected.

9: Top 15% of accepted NIPS papers, an excellent paper, a strong accept.

I will fight for acceptance.

8: Top 50% of accepted NIPS papers, a very good paper, a clear accept.

I vote and argue for acceptance.

7: Good paper, accept.

I vote for acceptance, although would not be upset if it were rejected.

6: Marginally above the acceptance threshold.

I tend to vote for accepting it, but leaving it out of the program would be no great loss.

5: Marginally below the acceptance threshold.

I tend to vote for rejecting it, but having it in the program would not be that bad.

4: An OK paper, but not good enough. A rejection.

I vote for rejecting it, although would not be upset if it were accepted.

3: A clear rejection.

I vote and argue for rejection.

2: A strong rejection. I'm surprised it was submitted to this conference.

I will fight for rejection.

1: Trivial or wrong or known. I'm surprised anybody wrote such a paper.

I will consider not reviewing for NIPS again if this is accepted.

Reviewers should NOT assume that they have received an unbiased sample of papers, nor should they adjust their scores to achieve an artificial balance of high and low scores. Scores should reflect absolute judgments of the contributions made by each paper.

Impact Score

Independently of the Quality Score above, this is your opportunity to identify papers that are very different, original, or otherwise potentially impactful for the NIPS community.

There are two choices:

2: This work is different enough from typical submissions to potentially have a major impact on a subset of the NIPS community.

1: This work is incremental and unlikely to have much impact even though it may be technically correct and well executed.

Examples of situations where the impact and quality scores may point in opposite directions include papers that are technically strong but unlikely to generate much follow-up research, and papers that have some flaw (e.g., insufficient evaluation or missing citations to relevant literature) but could lead to new directions of research.

Confidence Score

Reviewers also give a confidence score between 1 and 5 for each paper. The program committee will interpret the numerical score in the following way:

5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature.

4: The reviewer is confident but not absolutely certain that the evaluation is correct. It is unlikely but conceivable that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature.

3: The reviewer is fairly confident that the evaluation is correct. It is possible that the reviewer did not understand certain parts of the paper, or that the reviewer was unfamiliar with a piece of relevant literature. Mathematics and other details were not carefully checked.

2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper.

1: The reviewer's evaluation is an educated guess. Either the paper is not in the reviewer's area, or it was extremely difficult to understand.

Qualitative Evaluation

All NIPS papers should be good scientific papers, regardless of their specific area. We judge whether a paper is good using four criteria; a reviewer should comment on all of these, if possible:

Quality

Is the paper technically sound? Are claims well-supported by theoretical analysis or experimental results? Is this a complete piece of work, or merely a position paper? Are the authors careful (and honest) about evaluating both the strengths and weaknesses of the work?

Clarity

Is the paper clearly written? Is it well-organized? (If not, feel free to make suggestions to improve the manuscript.) Does it adequately inform the reader? (A superbly written paper provides enough information for the expert reader to reproduce its results.)

Originality

Are the problems or approaches new? Is this a novel combination of familiar techniques? Is it clear how this work differs from previous contributions? Is related work adequately referenced? We recommend that you check the proceedings of recent NIPS conferences to make sure that each paper is significantly different from papers in previous proceedings. Abstracts and links to many of the previous NIPS papers are available from http://books.nips.cc

Significance

Are the results important? Are other people (practitioners or researchers) likely to use these ideas or build on them? Does the paper address a difficult problem in a better way than previous research? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions on existing data, or a unique theoretical or pragmatic approach?

Author feedback and reviewer consensus

Between August 4th and 11th, authors will have a chance to submit feedback on their reviews. This is an opportunity to correct possible misunderstandings about the contents of the paper, or about previous work. Authors may point out aspects of the paper that you missed, or disagree with your review.

In previous years authors have felt that their comments were ignored in the final decision. While it is perfectly legitimate that many author comments will not change the final evaluation of a paper, it is important to convey to the authors that their comments were taken into account. Therefore, please read each rebuttal carefully and keep an open mind. Do the authors' comments make you change your mind about your review? Have you overlooked something? If you disagree with the authors' comments, please update your review to explain why (even if briefly).

From August 11th to 19th the area chairs will, where necessary, lead a discussion via the website and try to come to a consensus amongst the reviewers. The discussion will involve both marginal papers, trying to reach a decision on which side of the bar they should fall, and controversial papers, where the reviewers disagree. Many papers fall into these categories, and therefore this phase is a very important one. While engaging in the discussion, recall that different people have somewhat different points of view, and may come to different conclusions about a paper. It may be helpful to ask yourself "do the other reviewers' comments make sense?", and "should I change my mind given what the others are saying?" Reviewer consensus is valuable, but is not mandatory. If the reviewers do come to a consensus, the program committee takes it very seriously; only rarely is a unanimous recommendation overruled. However, we do not require conformity: if you think the other reviewers are not correct, you are not required to change your mind.

Subgroups of the program committee will then meet electronically and in teleconferences, and the final results will be announced in mid September.

Conflicts of interest

By now you should have entered your conflicts of interest using the tools available at your login page in CMT. As a reminder: you should mark a conflict with anyone who is or was recently your student or mentor, is a current or recent colleague, or is a close collaborator. If in doubt, it is better to mark a conflict, in order to avoid the appearance of impropriety. Your own username should be automatically marked as a conflict, but sometimes the same person may have more than one account, in which case you should definitely mark your other accounts as conflicts as well. If you do not mark a conflict with an author, we will assume by default that no conflict exists.

Subject areas, published papers, and bidding

By now, you will also have entered your subject areas into CMT and uploaded a representative sample of your papers to the Toronto Paper Matching System. This data is used to make the best possible assignments of papers to reviewers, subject to conflict and quota constraints. Additionally, reviewers were asked to rate up to 25 potential assignments by entering "not willing", "willing", or "eager" into CMT. These 'bids' were used to further refine the assignments. For details of how this information was used, please see this blog post.

Again, we thank you for your help so far and in the future. Your carefully considered input is crucial to the success of the conference.

Corinna Cortes and Neil Lawrence

NIPS 2014 Program Chairs
program-chairs@nips.cc