

Reviewer Guidelines

Frequently asked questions

Frequently asked questions can be found here.

Contacting the program chairs

If you encounter a situation that you are unable to resolve with your AC, please contact the program chairs. Please refrain from writing to the program chairs at their personal email addresses.

Introduction

Thank you for agreeing to serve as a reviewer for NeurIPS 2019! The community needs outstanding people like you to make NeurIPS a success, and we will work hard to make your duties as easy as possible. This page provides an overview of reviewer responsibilities and key dates.

Key dates

  • Reviewers enter domain conflicts, subject areas, TPMS information, etc.

    • May 1--May 8

  • Abstract Submission deadline

    • Thu May 16 (4pm ET; 8pm UTC)

  • Paper Submission deadline

    • Thu May 23 (4pm ET; 8pm UTC)

  • Reviewers enter individual conflicts

    • 1 week: Fri May 24--Thu May 30

  • Reviewers bid on submissions

    • 1 week: Fri May 31--Thu Jun 6

  • PCs assign submissions to reviewers

    • 3 days: Fri Jun 7--Mon Jun 10

  • SACs & ACs micro-adjust reviewer assignments

    • 1 week: Tue Jun 11--Mon Jun 17

  • Reviewers write reviews

    • 4 weeks: Tue Jun 18--Mon Jul 15

  • Authors respond to reviews

    • 1 week: Thu Jul 25--Wed Jul 31

  • ACs & reviewers discuss reviews & responses

    • 2 weeks: Thu Aug 1--Wed Aug 14

  • Notification date

    • Wed Sep 4

General information

  • Please respect deadlines and respond to emails as promptly as possible!

  • It is crucial that we are able to reach you in a timely manner. We will send most emails from CMT (i.e., email@msr-cmt.org). Such emails are sometimes accidentally marked as spam. Please check your spam folder regularly. If you find such an email in there, please whitelist the CMT email address so that you will receive future emails from CMT.

  • If you have changed or plan to change your email address, please update CMT accordingly. We have no way of knowing whether an email sent to you from CMT has bounced, so it is crucial that you make sure that CMT has the correct email address for you at all times. You should also make sure that your domain conflicts in CMT are up to date; these are important for preventing conflicts during the review process.

  • The NeurIPS definitions of conflicts of interest (and instructions for entering them) have been updated, so please make sure you read this year’s definitions.

  • NeurIPS uses the Toronto Paper Matching System (TPMS) to assign submissions to ACs and reviewers. Please log into TPMS here and make sure that your profile is up to date.

  • All participants must agree to abide by the NeurIPS code of conduct.

Responsibilities

  • Each reviewer will be assigned around 4-6 submissions. Reviewers are responsible for reviewing submissions, reading author responses, discussing submissions and author responses with other reviewers and area chairs (ACs), and helping make decisions. The reviewing process is double blind at the level of reviewers and ACs. There are no physical meetings; discussions with other reviewers and ACs will take place via CMT. Reviewer identities are visible to other reviewers, ACs, and SACs. After decisions have been made, reviews and meta-reviews will be made public (but reviewer and SAC/AC identities will remain anonymous).

  • This year, as an incentive, ACs will be asked to evaluate the quality of each review using three scores: “exceeded expectations,” “met expectations,” and “failed to meet expectations.” The 400 or so highest-scoring reviewers will be awarded free NeurIPS registrations. The next 2000 or so highest-scoring reviewers will have registrations reserved for them (for a limited time frame). The lowest-scoring reviewers may not be invited to review for future conferences.

Reviewer best practices

  • It is okay to be unavailable for part of the review process (e.g., on vacation for a few days), but if you will be unavailable for more than that -- especially during important windows (e.g., discussion, decision-making) -- you must let your ACs know ASAP.

  • With great power comes great responsibility! Take your job seriously and be fair.

  • Write thoughtful and constructive reviews. Your reviews must accord with the NeurIPS code of conduct. Although the double-blind review process reduces the risk of discrimination, reviews can inadvertently contain subtle discrimination, which should be actively avoided.

    • Example: avoid comments regarding English style or grammar that may be interpreted as implying the author is “foreign” or “non-native”. So, instead of “Please have your submission proof-read by a native English speaker,” use a neutral formulation such as “Please have your submission proof-read for English style and grammar issues.”

  • If you notice a conflict of interest with a submission that is assigned to you, please contact your AC immediately so that the paper can be reassigned. (Note that our definitions have changed a little from last year, so please carefully read this year’s definitions here.)

  • DO NOT talk to other reviewers, ACs, or SACs about submissions that are assigned to you without prior approval from your AC; other reviewers, ACs, and SACs may have conflicts with these submissions. In general, your primary point of contact for any discussions should be the corresponding AC for that submission.

  • DO NOT talk to other reviewers, ACs, or SACs about your own submissions (i.e., submissions you are an author on) or submissions with which you have a conflict.

  • Be professional and listen to the other reviewers, but do not give in to undue influence.

  • Engage actively in the discussion phase for each of the submissions that you are assigned, even if you are not specifically prompted to do so by the corresponding AC.

  • It is not fair to dismiss any submission without having thoroughly read it. Think about the times when you received an unfair, unjustified, short, or dismissive review. Try not to be that reviewer! Always be constructive and help the authors understand your viewpoint, without being dismissive or using inappropriate language. If you need to cite existing work to justify one of your comments, please be as precise as possible and give a complete citation.

  • If you would like the authors to clarify something during the author response phase, please articulate this clearly in your review (e.g., “I would like to see results of experiment X” or “Can you please include details about the parameter settings used for experiment Y”).  You may additionally or alternatively put similar comments in the field “Improvements” where you are asked what the authors would have to do for you to increase your score.

Reviewer Instructions

Online submission system (CMT)

All reviews must be submitted via the NeurIPS 2019 CMT site. You may visit the site multiple times and revise your reviews as often as necessary. If you are both an author and a reviewer, please use the same email address for both roles in CMT. During the reviewing process, you will receive many emails from CMT (i.e., email@msr-cmt.org). Such emails are sometimes accidentally marked as spam. Please check your spam folder regularly and if you find such an email in there, please whitelist the CMT email address so that you will receive future emails from CMT.

Confidentiality

You must keep everything relating to the review process confidential. Do not use ideas and results from submissions in your own work until they become publicly available (e.g., via a technical report or a published paper). Do not talk about or distribute submissions (or the ideas and results described in them) to anyone without prior approval from the program chairs.

Double-blind reviewing

The reviewing process will be double blind at the level of reviewers and ACs (i.e., reviewers and ACs cannot see author identities) but not at the level of SACs and program chairs. Authors are responsible for anonymizing their submissions. In particular, they should not include author names, author affiliations, or acknowledgements in their submissions and they should avoid providing any other identifying information (even in the supplementary material). If you are assigned a submission that is not adequately anonymized (e.g., includes author names, author affiliations, acknowledgements, or other identifying information) then please contact the corresponding AC. Under no circumstances should you attempt to find out the identities of the authors for any of your assigned submissions (e.g., by searching on Google or arXiv). If you accidentally find out, please do not divulge the identities to anyone, but do tell your AC that this has happened. You should not let the authors’ identities influence your decision in any way.

Supplementary material

Authors may submit up to 100MB of supplementary material, such as proofs, derivations, data, or source code; all supplementary material must be in PDF or ZIP format. Your responsibility as a reviewer is to read and review the submission itself; looking at supplementary material is at your discretion. That said, NeurIPS submissions are short, so you may wish to look at supplementary material before criticizing a submission for insufficient details, proofs, or experimental results.

Formatting instructions

Submissions are limited to eight content pages, including all figures and tables, in the NeurIPS “submission” style; additional pages containing only references are allowed. Authors must use the NeurIPS 2019 LaTeX style file. If you are assigned any submissions that violate the NeurIPS style (e.g., by decreasing margins or font size) or page limits, please contact the program chairs.

Dual submissions

Submissions that are identical or substantially similar to papers that are in submission to, have been accepted to, or have been published in other archival conferences, journals, workshops, etc. should be deemed dual submissions. Submissions that are identical or substantially similar to other NeurIPS submissions should also be deemed dual submissions; submissions should be distinct and sufficiently substantial, and slicing contributions too thinly may result in submissions being deemed dual submissions. If you suspect that a submission that has been assigned to you is a dual submission, or if you require further clarification, please contact the corresponding AC and the program chairs. For more information about dual submissions, please see the Call for Papers.

Review content

We know that serving as a reviewer for NeurIPS is time consuming, but the community needs outstanding people like yourself to uphold the scientific quality of NeurIPS. Review content is the primary means by which ACs, SACs, and program chairs make decisions about submissions. Please make your review as detailed and informative as possible; short, superficial reviews that venture uninformed opinions or guesses are worse than no review since they may result in the rejection of a high-quality submission.

Review content is also the primary means by which authors understand their submissions’ decisions. Reviews for rejected submissions help authors understand how to improve their work for other conferences or journals. Reviews for accepted submissions help authors understand how to improve their work for the camera-ready versions.

The review form will ask you for the following:

1. Contributions: Please list three things this paper contributes (e.g., theoretical, methodological, algorithmic, empirical contributions; bridging fields; or providing an important critical analysis). For each contribution, briefly state the level of significance (i.e., how much impact will this work have on researchers and practitioners in the future?). If you cannot think of three things, please explain why. Not all good papers will have three contributions.

There are many examples of contributions that warrant publication at NeurIPS.  These contributions may be theoretical, methodological, algorithmic, empirical, connecting ideas in disparate fields (“bridge papers”), or providing a critical analysis (e.g., principled justifications of why the community is going after the wrong outcome or using the wrong types of approaches).  One measure of the significance of a contribution is (your belief about) the level to which researchers or practitioners will build on or use the proposed ideas. Solid, technical papers that explore new territory or point out new directions for research are preferable to papers that advance the state of the art, but only incrementally.

This year, we are asking reviewers to try to list three contributions and their corresponding levels of significance.  Not all good papers will have three contributions. For example, a ground-breaking theoretical paper might simply contribute the key theorem and proof (Significance: High).  However, for such a paper, hopefully you could also list “Presented a unified and extended view of several existing results. Significance: Medium.” or “Provided a new proof path. Significance: High.”

Please remain polite in this section and avoid writing “This paper did not contribute any new ideas.”  Instead, write something along the lines of “The authors proposed a model that primarily combines the models in [cite A] and [cite B].  Significance: Low.”

For more examples of what is intended for this question, see the Examples of Review Content section of this guide.

2.  Detailed comments: Please provide a thorough review of the submission, including its originality, quality, clarity, and significance.

Your comments should begin by summarizing the main ideas of the submission and relating these ideas to previous work at NeurIPS and in other archival conferences and journals. Although this part of the review may not provide much new information to authors, it is invaluable to ACs, SACs, and program chairs, and it can help the authors determine whether there are misunderstandings that need to be addressed in their author response. You should then summarize the strengths and weaknesses of the submission, focusing on each of the following four criteria: originality, quality, clarity, and significance. Clarification of these terms, as well as example quotes from past NeurIPS reviews, can be found in the Examples of Review Content section of this guide.

Your comments should be detailed, specific, and polite. Please avoid vague, subjective complaints. Think about the times when you received an unfair, unjustified, short, or dismissive review. Try not to be that reviewer! Always be constructive and help the authors understand your viewpoint, without being dismissive or using inappropriate language. Remember that you are not reviewing your level of interest in the submission, but its scientific contribution to the field!

3.  Overall score:

10: Top 5% of accepted NeurIPS papers. Truly groundbreaking work.
     I will consider not reviewing for NeurIPS again if this submission is rejected.

9: Top 15% of accepted NeurIPS papers. An excellent submission; a strong accept.
     I will fight for accepting this submission.

8: Top 50% of accepted NeurIPS papers. A very good submission; a clear accept.
     I vote and argue for accepting this submission.

7: A good submission; an accept.
     I vote for accepting this submission, although I would not be upset if it were rejected.

6: Marginally above the acceptance threshold.
     I tend to vote for accepting this submission, but rejecting it would not be that bad.

5: Marginally below the acceptance threshold.
     I tend to vote for rejecting this submission, but accepting it would not be that bad.

4: An okay submission, but not good enough; a reject.
     I vote for rejecting this submission, although I would not be upset if it were accepted.

3: A clear reject.
     I vote and argue for rejecting this submission.

2: I'm surprised this work was submitted to NeurIPS; a strong reject.
     I will fight for rejecting this submission.

1: Trivial or wrong or already known.
     I will consider not reviewing for NeurIPS again if this submission is accepted.

You should NOT assume that you were assigned a representative sample of submissions, nor should you adjust your scores to match the overall conference acceptance rates. The “Overall Score” for each submission should reflect your assessment of the submission’s contributions.

4.  Confidence score:

5: You are absolutely certain about your assessment.
     You are very familiar with the related work.

4: You are confident in your assessment, but not absolutely certain.
     It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

3: You are fairly confident in your assessment.
     It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

2: You are willing to defend your assessment.
     However, it is quite likely that you did not understand central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

1: Your assessment is an educated guess.
     The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.

5.  Improvements: What would the authors have to do for you to increase your score?

Please provide details on what the authors would have to demonstrate in a rebuttal or change in a revision to have you increase your overall score.  For example, “Comparisons to baselines X, Y, Z would have to be added.” “The authors would have to find an additional dataset to demonstrate the potential impact of the methods.”  “The assumptions of Theorem 1 would have to be relaxed to include X.” “The authors would have to clearly articulate how this work differs in a significant way from the past work of [cite].”  “The organizational structure and clarity of writing would have to be significantly improved.”

6.  Were the Reproducibility Checklist answers useful for evaluating the submission?

The responses to these questions will not be used to determine whether or not a paper is accepted, but could inform future NeurIPS policies.

7.-10.  Whether or not code was submitted, and if so, did it influence your review? If not, do you wish code had been submitted for evaluation?

The responses to these questions will not be used to determine whether or not a paper is accepted, but could inform future NeurIPS policies.

11. Have you previously reviewed or area chaired (a version of) this work for another archival venue?

Yes or no.  This information will be useful to ACs and SACs in putting your review in the context of your having already seen an earlier version of the work.

12. Have you seen this submission online (e.g., arXiv, personal website, social media)?

Yes or no.  This informs the ACs and SACs that you are aware of the authors of this work and that the review was not double blind.

13. Agree to abide by the NeurIPS code of conduct

The NeurIPS code of conduct can be found here: https://neurips.cc/public/CodeOfConduct

14. Confidential comments for the area chair

If you have comments that you wish to be kept confidential from the authors, you can use the “Confidential Comments to Area Chair” text field. Such comments might include explicit comparisons of the submission to other submissions and criticisms that are more bluntly stated. If you accidentally find out the identities of the authors, please do not divulge the identities to anyone, but do tell your AC that this has happened.

Author response

Authors will be given the opportunity to respond to their reviews before decisions are made. This is to enable them to address misunderstandings, point out parts of their submissions that were overlooked, or disagree with the reviewers’ assessments. In previous years, some authors felt that their responses were ignored. As a reviewer, it is your responsibility to read and (if appropriate) respond to each author response. It is not fair to ignore any author response, even for submissions that you think should be rejected. Although it is possible that an author response will not change your assessment of a submission, you must convey to the authors that you have carefully considered their comments. As you read each author response, keep an open mind. Have you overlooked something? Please update each review to indicate that you have read the author response and whether you agree or disagree with it. You should be more specific than “I have read the author response and my opinion remains the same.” If that is the case, you should explain why your opinion remains the same, what the author response failed to address, etc.

Discussion

After the author response phase, the AC for each submission will initiate a discussion via CMT to encourage the reviewers to come to a consensus. If the reviewers do come to a consensus, the program chairs will take it seriously; only rarely are unanimous assessments overruled. The discussion phase is especially important for borderline submissions and submissions where the reviewers’ assessments differ; most submissions fall into one or another of these categories, so please take this phase seriously. When discussing a submission, try to remember that different people have different backgrounds and different points of view. Ask yourself, “Do the other reviewers’ comments make sense?” and do consider changing your mind in light of their comments, if appropriate. That said, if you think the other reviewers are not correct, you are not required to change your mind. Reviewer consensus is valuable, but it is not mandatory.

Examples of Review Content

Contributions:

The following are examples of contributions a paper might make.  This list is not exhaustive.

“The paper provides a thorough experimental validation of the proposed algorithm, demonstrating much faster runtimes without loss in performance compared to strong baselines.”

“The paper proposes an algorithm for [insert] with computational complexity scaling linearly in the observed dimensions; in contrast, existing algorithms scale cubically.”

“The paper presents a method for robustly handling covariate shift in cases where [insert assumptions], and demonstrates its impact on [insert application].”

“The authors provide a framework that unifies [insert field A] and [insert field B], two previously disparate research areas.”

“This paper demonstrates how the previously popular approach of [insert] has serious limitations when applied to [insert].”

“The authors propose a new framework for quantifying fairness of ML algorithms.”

“The authors show how the definition of fairness in [insert citation] fails to capture [insert], which is a critical example of its failure mode.”

Quality: Is the submission technically sound? Are claims well supported by theoretical analysis or experimental results? Is this a complete piece of work or work in progress? Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?

Example from /nips30/reviews/1548.html

“The technical content of the paper appears to be correct albeit some small careless mistakes that I believe are typos instead of technical flaw (see #4 below).

4. The equation in line 125 appears to be wrong. Shouldn't there be a line break before the last equal sign, and shouldn't the last expression be equal to E_q[(\frac{p(z,x)}{q(z)})^2]?”

“The idea of having a sandwich bound for the log-marginal likelihood is certainly good. While the authors did demonstrate that the bound does indeed contain the log-marginal likelihood as expected, it is not entirely clear that the sandwich bound will be useful for model selection. This is not demonstrated in the experiment despite being one of the selling point of the paper. It's important to back up this claim using simulated data in experiment.”

Example from OpenReview

“Technical issues: The move from (1) to (2) is problematic. Yes it is a lower bound, but by igoring H(Z), equation (2) ignores the fact that H(Z) will potentially vary more significantly that H(Z|Y). As a result of removing H(Z), the objective (2) encourages Z that are low entropy as the H(Z) term is ignored, doubly so as low entropy Z results in low entropy Z|Y. Yes the -H(X|Z) mitigates against a complete entropy collapse for H(Z), but it still neglects critical terms. In fact one might wonder if this is the reason that semantic noise addition needs to be done anyway, just to push up the entropy of Z to stop it reducing too much. In (3) arbitrary balancing parameters lamda_1 and lambda_2 are introduced ex-nihilo - they were not there in (2). This is not ever justified. Then in (5), a further choice is made by simply adding L_{NLL} to the objective. But in the supervised case, the targets are known and so turn up in H(Z|Y). Hence now H(Z|Y) should be conditioned on the targets. However instead another objective is added again without justification, and the conditional entropy of Z is left disconnected from the data it is to be conditioned on. One might argue the C(X,Y,Z) simply acts as a prior on the networks (and hence implicitly on the weights) that we consider, which is then combined with a likelihood term, but this case is not made. In fact there is no explicit probabilistic or information theoretic motivation for the chosen objective. Given these issues, it is then not too surprising that some further things need to be done, such as semantic noise addition to actually get things working properly. It may be the form of noise addition is a good idea, but given the troublesome objective being used in the first place, it is very hard to draw conclusions. In summary, substantially better theoretical justification of the chosen model is needed, before any reasonable conclusion on the semantic noise modelling can be made.”

Clarity: Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.)

Example from /nips30/reviews/1548.html

“While the paper is pretty readable, there is certainly room for improvements in the clarity of the paper. I find paragraphs in section 1 and 2 to be repetitive. It is clear enough from the Introduction that the key advantages of CHIVI are the zero avoiding approximations and the sandwich bound. I don't find it necessary to be stressing that much more in section 2. Other than that, many equations in the paper do not have numbers. The references to the appendices are also wrong (There is no Appendix D or F). There is an extra period in line 188.

The Related Work section is well-written. Good job!”

Example from /nips30/reviews/1173.html

“The paper is generally well-written and structured clearly. The notation could be improved in a couple of places. In the inference model (equations between ll. 82-83), I would suggest adding a frame superscript to clarify that inference is occurring within each frame, e.g. q_{\phi}(z_2^{(n)} | x^{(n)}) and q_{\phi}(z_1^{(n)} | x^{(n)}, z_2^{(n)}). In addition, in Section 3 it was not immediately clear that a frame is defined to itself be a sub-sequence.”

Originality: Are the tasks or methods new? Is the work a novel combination of well-known techniques? Is it clear how this work differs from previous contributions? Is related work adequately cited? (Abstracts and links to many previous NeurIPS papers are available here.)

Example from /nips30/reviews/60.html

“The main contribution of this paper is to offer a convergence proof for minimizing sum fi(x) + g(x) where fi(x) is smooth, and g is nonsmooth, in an asynchronous setting. The problem is well-motivated; there is indeed no known proof for this, in my knowledge.

There are two main theoretical results. Theorem 1 gives a convergence rate for proxSAGA, which is incrementally better than a previous result. Theorem 2 gives the rate for an asynchronous setting, which is more groundbreaking.”

Example from /nips30/reviews/1173.html

“The paper is missing a related work section and also does not cite several related works, particularly regarding RNN variants with latent variables (Fraccaro et al. 2016; Chung et al. 2017), hierarchical probabilistic generative models (Johnson et al. 2016; Edwards & Storkey 2017) and disentanglement in generative models (Higgins et al. 2017). The proposed graphical model is similar to that of Edwards & Storkey (2017), though the frame-level Seq2Seq makes the proposed method sufficiently original. The study of disentanglement for sequential data is also fairly novel.”

Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?

Example from /nips30/reviews/688.html

“I liked this article very much. It answers a very natural question: gradient descent is an extremely classical, and very simple algorithm. Although it is known not to be the fastest one in many situations, it is widely used in practice; we need to understand its convergence rate. The proof is also conceptually simple and elegant, and I found its presentation very clear.”

Example from /nips30/reviews/3278.html

“This paper seems to be a useful contribution to the literature on protein docking, showing a modest improvement over the state of the art. As such, I think the paper would be well-suited for publication in a molecular biology venue, or perhaps as an application paper at NIPS. The main weakness of the paper in my view is that it is a fairly straightforward application of an existing technique (GCNs) to a new domain (plus some feature engineering). As such I am leaning towards a rejection for NIPS.”

Please comment on and take into account the strengths of the submission. It can be tempting to only comment on the weaknesses; however, ACs, SACs, and program chairs need to understand both the strengths and the weaknesses in order to make an informed decision. It is useful for the ACs, SACs, and program chairs if you include a list of arguments for and against acceptance. If you believe that a submission is out of scope for NeurIPS, then please justify this judgement appropriately, including, but not limited to, looking at subject areas and previous NeurIPS papers.  If you need to cite existing work, please be as precise as possible and give a complete citation.

Example from /nips30/reviews/587.html

“There are several things to like about this paper:

- The problem of safe RL is very important, of great interest to the community and without too much in the way of high quality solutions.

- The authors make good use of the developed tools in model-based control and provide some bridge between developments across sub-fields.

- The simulations support the insight from the main theoretical analysis, and the algorithm seems to outperform its baseline.

However, I found that there were several shortcomings:

- I found the paper as a whole a little hard to follow and even poorly written as a whole. For a specific example of this see the paragraph beginning 197.

- The treatment of prior work and especially the "exploration/exploitation" problem is inadequate and seems to be treated as an afterthought: but of course it is totally central to the problem! Prior work such as [34] deserve a much more detailed discussion and comparison so that the reader can understand how/why this method is different.

- Something is confusing (or perhaps even wrong) about the way that Figure 1 is presented. In an RL problem you cannot just "sample" state-actions, but instead you may need to plan ahead over multiple timesteps for efficient exploration.

- The main theorems are hard to really internalize in any practical way, would something like a "regret bound" be possible instead? I'm not sure that these types of guarantees are that useful.

- The experiments are really on quite a simple toy domain that didn't really enthuse me.”

Example from https://openreview.net/forum?id=SkkTMpjex&noteId=rkgMSRKrx

“The main contributions of the paper are:

1) Distributed variant of K-FAC that is efficient for optimizing deep neural networks. The authors mitigate the computational bottlenecks of the method (second order statistic computation and Fisher Block inverses) by asynchronous updating.

2) The authors propose a “doubly-factored” Kronecker approximation for layers whose inputs are too large to be handled by the standard Kronecker-factored approximation. They also present (Appendix A) a cheaper Kronecker factored approximation for convolutional layers.

3) Empirically illustrate the performance of the method, and show:

- Asynchronous Fisher Block inversions do not adversely affect the performance of the method (CIFAR-10)

- K-FAC is faster than Synchronous SGD (with and without BN, and with momentum) (ImageNet)

- Doubly-factored K-FAC method does not deteriorate the performance of the method (ImageNet and ResNet)

- Favorable scaling properties of K-FAC with mini-batch size

Pros:

- Paper presents interesting ideas on how to make computationally demanding aspects of K-FAC tractable.

- Experiments are well thought out and highlight the key advantages of the method over Synchronous SGD (with and without BN).

Cons:

- “…it should be possible to scale our implementation to a larger distributed system with hundreds of workers.” The authors mention that this should be possible, but fail to mention the potential issues with respect to communication, load balancing and node (worker) failure. That being said, as a proof-of-concept, the method seems to perform well and this is a good starting point.

- Mini-batch size scaling experiments: the authors do not provide validation curves, which may be interesting for such an experiment. Keskar et. al. 2016 (On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima) provide empirical evidence that large-batch methods do not generalize as well as small batch methods. As a result, even if the method has favorable scaling properties (in terms of mini-batch sizes), this may not be effective.”