1.) Aim
We propose a two-day workshop on machine learning approaches in neuroscience and neuroimaging. We believe that both machine learning and neuroimaging can learn from each other as the two communities overlap and enter an intense exchange of ideas and research questions: methodological developments in machine learning spur novel paradigms in neuroimaging, while neuroscience motivates methodological advances in computational analysis. In this context, many controversies and open questions exist. The goal of the workshop is to pinpoint these issues, sketch future directions, and tackle open questions in light of novel methodology.
The first workshop of this series, at NIPS 2011, built upon earlier events in 2006 and 2008. Last year's workshop included many invited speakers and was centered around two panel discussions, which addressed two questions: the interpretability of machine learning findings, and the shift of paradigms in the neuroscience community. The discussion was inspiring and made clear that there is a tremendous amount the two communities can learn from each other by communicating across the disciplines.
The aim of the workshop is to offer a forum at the overlap of these communities. Besides interpretation and the shift of paradigms, many open questions remain. Among them:
- How suitable are MVPA and inference methods for brain mapping?
- How can we assess their specificity and sensitivity?
- What is the role of decoding vs. embedded or separate feature selection?
- How can we use these approaches for a flexible and useful representation of neuroimaging data?
- What can we accomplish with generative vs. discriminative modelling?
- Can and should the Machine Learning community provide a standard repertoire of methods for the Neuroimaging community to use (e.g. in choosing a classifier)?
2.) Background
Modern multivariate statistical methods have been increasingly applied to various problems in neuroimaging, including “mind reading”, “brain mapping”, clinical diagnosis, and prognosis. Multivariate pattern analysis (MVPA) methods are designed to examine complex relationships between high-dimensional signals, such as brain MRI images, and an outcome of interest, such as the category of a stimulus, with a limited amount of data. The MVPA approach contrasts with the classical mass-univariate (MUV) approach, which treats each individual imaging measurement in isolation.
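To make the contrast concrete, the following is a minimal sketch of the two analysis styles on simulated data, assuming scikit-learn and NumPy; the toy data, effect sizes, and parameter choices are purely illustrative and not taken from any study discussed here.

```python
# Minimal sketch contrasting mass-univariate (MUV) and multivariate (MVPA)
# analysis on simulated data. All names and the toy setup are illustrative.
import numpy as np
from scipy import stats
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
y = np.repeat([0, 1], n_trials // 2)             # two stimulus conditions
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, :20] += 0.4                            # weak distributed effect in 20 voxels

# MUV: one t-test per voxel, each measurement treated in isolation
t_vals, p_vals = stats.ttest_ind(X[y == 1], X[y == 0], axis=0)

# MVPA: one multivariate model over the whole pattern, scored by cross-validation
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"voxels significant at p < 0.001 (uncorrected): {(p_vals < 1e-3).sum()}")
print(f"cross-validated decoding accuracy: {acc:.2f}")
```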
While MUV is useful for localizing effects characterized by localized activity of individual regions, i.e., brain mapping, it is less suited to constructing models that can make predictions at the subject level. Even more importantly, MUV ignores relationships between disjoint anatomical regions, while a growing body of neuroscientific evidence points to an organization of the brain into large-scale, distributed networks. These networks exhibit coherent functional activity and can be targeted by disease, resulting in correlated atrophy. By examining the entire image pattern of both functional and structural data, rather than voxel-level measurements, MVPA offers a unique opportunity to examine and reveal these network-level associations.
Yet, as a new approach in neuroimaging, MVPA is surrounded by unresolved, controversial issues.
In this workshop, we intend to investigate the implications of adopting machine-learning methods for studying brain function. In particular, this concerns the question of how these methods may be used to represent cognitive states, and what ramifications this has for subsequent theories of cognition. Besides providing a rationale for the use of machine-learning methods in studying brain function, a further goal of this workshop is to identify shortcomings of state-of-the-art approaches and to initiate research efforts that increase the impact of machine learning on cognitive neuroscience.
Moreover, from the machine learning perspective, neuroimaging is a rich source of challenging problems that can facilitate the development of novel approaches. For example, feature extraction and feature selection are particularly important, since the primary objective of machine learning analysis of neuroimaging data is to gain scientific insight rather than simply to learn a "black-box" predictor. However, unlike some other applications where the set of features may be quite well explored and established by now, neuroimaging is a domain where a machine-learning researcher cannot simply "ask a domain expert which features should be used", since this is essentially the question the domain experts themselves are trying to answer. While current neuroscientific knowledge can guide the definition of specialized 'brain areas', more complex patterns of brain activity, such as spatio-temporal patterns, functional network patterns, and other multivariate dependencies, remain to be discovered, mainly via statistical analysis.
3.) Open questions and possible topics for contributions
I. Machine learning and pattern recognition methodology
Statistics. The common approach to quantifying model fit in MVPA is via metrics such as the area under the ROC curve, average accuracy, and mean squared error obtained from cross-validation. However, we are also interested in other statistical quantities, e.g., confidence intervals and the statistical significance of our estimates, of the detected regions, and of their relationship to the experimental conditions. What methods achieve statistical interpretability of observations made via MVPA approaches?
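One common way to attach a significance statement to a cross-validated score is a label-permutation test; below is a hedged sketch of that idea using scikit-learn's permutation_test_score helper on simulated data. The data, number of permutations, and classifier choice are illustrative assumptions, not a prescribed protocol.

```python
# Sketch: build a null distribution of accuracies by re-running the analysis
# with permuted labels, then compare the true score against it.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))
y = np.repeat([0, 1], 30)
X[y == 1, :10] += 0.5                            # inject a small class difference

score, perm_scores, p_value = permutation_test_score(
    LinearSVC(), X, y, cv=5, n_permutations=200, scoring="accuracy"
)
print(f"accuracy = {score:.2f}, permutation p-value = {p_value:.3f}")
```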
Generative versus discriminative modeling. Which approaches are appropriate for which problems, what questions can be posed in each of these frameworks, and which of their characteristics overlap and which are complementary, potentially depending on the specific neuroscientific question?
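As a toy illustration of the distinction, the sketch below compares a generative model (Gaussian naive Bayes, which models p(x | y)) with a discriminative one (logistic regression, which models p(y | x)) on the same simulated data; the models and data are assumptions chosen only for illustration.

```python
# Toy generative vs. discriminative comparison on simulated data.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = np.repeat([0, 1], 50)
X[y == 1, :5] += 0.8                             # small class-dependent shift

for name, model in [("generative (GaussianNB)", GaussianNB()),
                    ("discriminative (logistic regression)", LogisticRegression(max_iter=1000))]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: cross-validated accuracy = {acc:.2f}")
```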
Embedded vs. separate feature selection and decoding. Several recent approaches perform feature selection and decoding or classification jointly. Can, or should, they be decoupled, and which considerations are important for each choice? A sketch of the two strategies follows below.
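The following minimal sketch contrasts separate selection (a univariate filter followed by a classifier, wrapped in a pipeline so the selection is re-fit within each cross-validation fold) with embedded selection (an L1-penalized model whose fitting drives most weights to zero). The particular estimators, k, and C are illustrative assumptions.

```python
# Separate vs. embedded feature selection, sketched on simulated data.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 400))
y = np.repeat([0, 1], 40)
X[y == 1, :15] += 0.5

# Separate: univariate F-test selection, then a decoder.
separate = Pipeline([("select", SelectKBest(f_classif, k=50)),
                     ("clf", LinearSVC())])

# Embedded: the L1 penalty performs selection during fitting itself.
embedded = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

for name, model in [("separate", separate), ("embedded", embedded)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```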
II. Causal inference and interpretability in neuroimaging
Biological interpretability. Multivariate models are, by construction, difficult to interpret and visualize since they are based on patterns that span the image and are not localized. Furthermore, non-linear models, such as those used in kernel-based methods, are even harder to characterize since they cannot be represented with a single “discriminative” map.
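One concrete way to see the difference is that a linear classifier's weights can be folded back into image space as a single map, whereas a non-linear kernel model exposes no such object. The sketch below assumes scikit-learn and a toy 3-D grid; it is an illustration of the point, not a recommended visualization procedure.

```python
# A linear model yields one "discriminative map"; an RBF-kernel SVM does not.
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(0)
shape = (10, 10, 10)                              # toy 3-D image grid
X = rng.normal(size=(60, np.prod(shape)))
y = np.repeat([0, 1], 30)
X[y == 1, :30] += 0.5

linear = LinearSVC().fit(X, y)
weight_map = linear.coef_.reshape(shape)          # a single map that can be visualized

rbf = SVC(kernel="rbf").fit(X, y)                 # no coef_: the decision function
                                                  # lives in the kernel feature space
print(weight_map.shape, hasattr(rbf, "coef_"))
```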
True specificity and sensitivity in the general population. Traditional computational anatomy studies compare cases and controls to characterize effects. MVPA methods offer the ability to examine multivariate effects and make accurate subject-level predictions. The traditional paradigm of cross-validation on case-control data, however, is likely to over-estimate the accuracy of MVPA methods. It is not clear how these models will perform in the general population, where we have heterogeneity among normal subjects and many other “similar” disease conditions to consider, e.g., Alzheimer’s and other types of dementia.
How should we deal with confounding factors such as age, gender, and subject motion? Do we pre-process the data to regress out these effects, or include them in the MVPA model? And how do we combine features with different units in the MVPA model?
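A minimal sketch of the first of these options, regressing confounds out of the imaging features by ordinary least squares and keeping the residuals, is given below; the confound set, effect sizes, and dimensions are illustrative assumptions. In practice such cleaning would be fit on training data only, so that no information leaks across cross-validation folds.

```python
# Regressing confounds (e.g. age, gender) out of imaging features; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 100, 200
confounds = np.column_stack([rng.normal(size=n_subjects),        # e.g. standardized age
                             rng.integers(0, 2, n_subjects)])    # e.g. gender
X = rng.normal(size=(n_subjects, n_features)) + 0.3 * confounds[:, :1]

# Least-squares fit of each feature on the confounds (plus intercept), keep residuals.
C = np.column_stack([np.ones(n_subjects), confounds])
beta, *_ = np.linalg.lstsq(C, X, rcond=None)
X_clean = X - C @ beta
print(X_clean.shape)
```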
III. Linking machine learning, neuroimaging and neuroscience
How suitable are MVPA methods for brain mapping? Brain mapping deals with the problem of localizing regions that are recruited by certain functional tasks, such as viewing a stimulus. Is the use of MVPA methods for brain mapping appropriate? Isn’t it true that a sophisticated MVPA model, given enough data, is likely to be able to discriminate between stimuli with an accuracy significantly better than chance using signals from all brain regions? There is a basic difference between a brain region “encoding” a stimulus and being “tuned” to a stimulus. How far can the MVPA framework go in helping us understand the functional specificity of brain regions?
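One approach from the literature that tries to reconcile MVPA with localization is the "searchlight": a small classifier is evaluated in a local neighbourhood around every voxel, yielding an accuracy map rather than a single score. The sketch below uses a 1-D toy grid and a Euclidean radius as deliberate simplifications; all parameters are illustrative assumptions.

```python
# Toy searchlight: local cross-validated decoding accuracy at every voxel.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, radius = 60, 100, 3
coords = np.arange(n_voxels)                     # toy 1-D voxel coordinates
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[y == 1, 40:50] += 0.7                          # informative cluster around voxel 45

accuracy_map = np.zeros(n_voxels)
for center in range(n_voxels):
    sphere = np.abs(coords - center) <= radius   # local neighbourhood of voxels
    accuracy_map[center] = cross_val_score(LinearSVC(), X[:, sphere], y, cv=5).mean()

print("peak accuracy near voxel", accuracy_map.argmax())
```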
Flexible representation of functional and anatomical neuroimaging data. The possibilities for representing data offered by machine learning approaches such as manifold learning are currently only partially exploited. We would like to understand how machine learning, and the mapping approaches it offers, can be used to better understand neuroimaging data in both exploratory research and the clinical setting.
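As a small illustration of such alternative representations, the sketch below embeds simulated high-dimensional data that lie near a low-dimensional curve using Isomap; the choice of method, neighbourhood size, and simulated structure are illustrative assumptions only.

```python
# Manifold-learning embedding of simulated high-dimensional "imaging" data.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 300))      # latent 1-D variable
curve = np.column_stack([t * np.cos(t), t * np.sin(t), 0.1 * rng.normal(size=300)])
X = curve @ rng.normal(size=(3, 200))            # embed the curve in 200 "voxels"

embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)                           # (300, 2) low-dimensional representation
```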
Model-based methods and their link to neuroscience. Model-based methods are only slowly being adopted in the neuroscience community, with the literature still dominated by standard MUV approaches. There seems to be a reservation regarding the generalization power of these methods and their verification in experiments. What is the reason for this reservation? What are the true vs. the perceived limitations of MVPA? How can we perform model selection in light of bias and generalization performance?
Author Information
Georg Langs (Medical University of Vienna)
Irina Rish (Mila/UdeM/LAION)
Guillermo Cecchi (IBM Research)
Brian Murphy (BrainWaveBank)
Bjoern Menze (ETH Zurich)
Kai-min K Chang (CMU)
Moritz Grosse-Wentrup (Max Planck Institute for Intelligent Systems)