

NIPS 2017

Long Beach Convention Center, Long Beach
Monday, December 4 through Saturday, December 9, 2017
 

Call for Papers 

Deadline for Paper Submissions:

Friday, May 19, 2017, 20:00 UTC
 

Submit at: https://cmt.research.microsoft.com/NIPS2017/

 

Submissions are solicited for the Thirty-First Annual Conference on Neural Information Processing Systems (NIPS 2017), an interdisciplinary conference that brings together researchers in all aspects of neural and statistical information processing and computation, and their applications.

 

Submission instructions:

https://nips.cc/Conferences/2017/PaperInformation/AuthorSubmissionInstructions

All submissions must be in PDF format. Papers are limited to eight pages, including figures and tables, in the NIPS style; additional pages containing only cited references are allowed. Camera-ready papers will be due in advance of the conference; however, authors will be allowed to make minor changes, such as fixing typos and adding references, for a certain period of time after the conference.
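As a rough illustration, a minimal LaTeX skeleton for a submission might look as follows. The style-file name and its behavior are assumptions based on the usual NIPS author kit; the official submission instructions linked above are authoritative.

    % Minimal sketch of a NIPS 2017 submission (assumes nips_2017.sty
    % from the official author kit is in the working directory).
    % Without the [final] option the style runs in submission mode,
    % which is intended to keep the paper anonymized for review.
    \documentclass{article}
    \usepackage{nips_2017}   % use \usepackage[final]{nips_2017} for camera-ready
    \usepackage{graphicx}    % figures and tables count toward the eight pages

    \title{Paper Title}
    \author{Anonymous Author(s)} % real names go in only for the camera-ready

    \begin{document}
    \maketitle

    \begin{abstract}
      One-paragraph abstract.
    \end{abstract}

    \section{Introduction}
    Main text, figures, and tables must fit within eight pages;
    pages containing only cited references do not count against the limit.

    \bibliographystyle{plain}
    \bibliography{refs} % refs.bib is a hypothetical bibliography file
    \end{document}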

A list of Frequently Asked Questions is available on the conference website.

 

Supplementary Material: Authors may submit up to 100 MB of supplementary material, such as proofs, additional details, data, or source code. Whether to consult the supplementary material is left to the reviewers' discretion.

 

Reviewing: Reviewing will be double-blind: the reviewers will not know the identities of the authors. It is the responsibility of the authors to ensure the proper anonymization of their paper. The reviews and meta-reviews of accepted papers will be made publicly available.

 

Evaluation Criteria: Submissions that violate the NIPS style guide or page limits, are not within the scope of NIPS (see technical areas below), or have already been published elsewhere (see dual submission policy below) may be rejected by the area chairs without further review. Submissions with fatal flaws revealed by the reviewers, including (without limitation) incorrect proofs or flawed or insufficient wet-lab, hardware, or software experiments, may be rejected on that basis alone, without consideration of other criteria. Submissions that satisfy these requirements will be judged on their technical quality, novelty, potential impact, and clarity. Typical NIPS papers often (but not always) include a mix of algorithmic, theoretical, and experimental results, in varying proportions. While theoretically grounded arguments are certainly welcome, it is counterproductive to add “decorative maths” whose only purpose is to make the paper look more substantial or even intimidating, without adding relevant insight. Algorithmic contributions should include at least an illustration of how the algorithm could eventually materialize into a machine learning application.

 

Technical Areas: Papers are solicited on all aspects of neural and statistical information processing and computation, and their applications, including, but not limited to:

 
  1. Algorithms: Active Learning, Bandit Algorithms, Boosting and Ensemble Methods, Classification, Clustering, Collaborative Filtering, Components Analysis (e.g., CCA, ICA, LDA, PCA), Density Estimation, Dynamical Systems, Hyperparameter Selection, Kernel Methods, Large Margin Methods, Metric Learning, Missing Data, Model Selection and Structure Learning, Multitask and Transfer Learning, Nonlinear Dimensionality Reduction and Manifold Learning, Online Learning, Ranking and Preference Learning, Regression, Reinforcement Learning, Relational Learning, Representation Learning, Semi-Supervised Learning, Similarity and Distance Learning, Sparse Coding and Dimensionality Expansion, Sparsity and Compressed Sensing, Spectral Methods, Sustainability, Stochastic Methods, Structured Prediction, and Unsupervised Learning.

  2. Probabilistic Methods: Bayesian Nonparametrics, Bayesian Theory, Belief Propagation, Causal Inference, Distributed Inference, Gaussian Processes, Graphical Models, Hierarchical Models, Latent Variable Models, MCMC, Topic Models, and Variational Inference.

  3. Optimization: Combinatorial Optimization, Convex Optimization, Non-Convex Optimization, and Submodular Optimization.

  4. Applications: Audio and Speech Processing, Computational Biology and Bioinformatics, Computational Social Science, Computer Vision, Denoising, Dialog- and/or Communication-Based Learning, Fairness Accountability and Transparency, Game Playing, Hardware and Systems, Image Segmentation, Information Retrieval, Matrix and Tensor Factorization, Motor Control, Music Modeling and Analysis, Natural Language Processing, Natural Scene Statistics, Network Analysis, Object Detection, Object Recognition, Privacy Anonymity and Security, Quantitative Finance and Econometrics, Recommender Systems, Robotics, Signal Processing, Source Separation, Speech Recognition, Systems Biology, Text Analysis, Time Series Analysis, Video, Motion and Tracking, Visual Features, Visual Perception, Visual Question Answering, Visual Scene Analysis and Interpretation, and Web Applications and Internet Data.

  5. Reinforcement Learning and Planning: Decision and Control, Exploration, Hierarchical RL, Markov Decision Processes, Model-Based RL, Multi-Agent RL, Navigation, and Planning.

  6. Theory: Competitive Analysis, Computational Complexity, Control Theory, Frequentist Statistics, Game Theory and Computational Economics, Hardness of Learning and Approximations, Information Theory, Large Deviations and Asymptotic Analysis, Learning Theory, Regularization, Spaces of Functions and Kernels, and Statistical Physics of Learning.

  7. Neuroscience and Cognitive Science: Auditory Perception and Modeling, Brain Imaging, Brain Mapping, Brain Segmentation, Brain-Computer Interfaces and Neural Prostheses, Cognitive Science, Connectomics, Human or Animal Learning, Language for Cognitive Science, Memory, Neural Coding, Neuropsychology, Neuroscience, Perception, Plasticity and Adaptation, Problem Solving, Reasoning, Spike Train Generation, and Synaptic Modulation.

  8. Deep Learning: Adversarial Networks, Attention Models, Biologically Plausible Deep Networks, Deep Autoencoders, Efficient Inference Methods, Efficient Training Methods, Embedding Approaches, Generative Models, Interaction-Based Deep Networks, Learning to Learn, Memory-Augmented Neural Networks, Neural Abstract Machines, One-Shot/Low-Shot Learning Approaches, Optimization for Deep Networks, Predictive Models, Program Induction, Recurrent Networks, Supervised Deep Networks, Virtual Environments, and Visualization/Expository Techniques for Deep Networks.

  9. Data, Competitions, Implementations, and Software: Benchmarks, Competitions or Challenges, Data Sets or Data Repositories, and Software Toolkits.

 

Dual Submissions Policy: Submissions that are identical (or substantially similar) to papers that have been previously published, accepted for publication, or submitted in parallel to other conferences or journals are not appropriate for NIPS and violate the dual submission policy. Prior submissions on arXiv.org are permitted. Reviewers will be asked not to actively look for such submissions, but if they are aware of them, this will not constitute a conflict of interest. Previously published papers by the authors on related topics must be cited (with adequate means of preserving anonymity). It is acceptable to submit work to NIPS 2017 that has been made available as a technical report or on arXiv.org without citing it. The dual submissions policy applies for the duration of the NIPS review period (i.e., until authors have been notified of the decision on their paper).

 

Demonstrations, Workshops, and Symposia: There is a separate demonstration track at NIPS. Authors who wish to submit to the demonstration track should consult the call for demonstrations. There is also a separate call for workshops and symposia.