Poster

DeepPINK: reproducible feature selection in deep neural networks

Yang Lu · Yingying Fan · Jinchi Lv · William Stafford Noble

Room 210 #81

Keywords: [ Information Retrieval ] [ Biologically Plausible Deep Networks ] [ Regression ] [ Computational Biology and Bioinformatics ] [ Classification ] [ Sparsity and Compressed Sensing ]


Abstract:

Deep learning has become increasingly popular in both supervised and unsupervised machine learning thanks to its outstanding empirical performance. However, because of their intrinsic complexity, most deep learning methods are largely treated as black box tools with little interpretability. Even though recent attempts have been made to facilitate the interpretability of deep neural networks (DNNs), existing methods are susceptible to noise and lack robustness. Therefore, scientists are justifiably cautious about the reproducibility of discoveries, which is often related to the interpretability of the underlying statistical models. In this paper, we describe a method to increase the interpretability and reproducibility of DNNs by incorporating the idea of feature selection with a controlled error rate. By designing a new DNN architecture and integrating it with the recently proposed knockoffs framework, we perform feature selection with a controlled error rate while maintaining high power. This new method, DeepPINK (Deep feature selection using Paired-Input Nonlinear Knockoffs), is applied to both simulated and real data sets to demonstrate its empirical utility.
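To make the paired-input idea concrete, below is a minimal Python (PyTorch) sketch, not the authors' implementation: each original feature and its knockoff copy enter a per-feature filter with separate scalar weights before a shared MLP, and after training a simple contrast of those filter weights serves as the knockoff statistic. The names `PairedInputNet` and `knockoff_select`, the simplified importance statistic, and the assumption that knockoff features have already been constructed elsewhere are all illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn

class PairedInputNet(nn.Module):
    """Sketch of a paired-input network: feature j and its knockoff share a filter."""
    def __init__(self, p, hidden=64):
        super().__init__()
        # One scalar filter weight per original feature and per knockoff feature.
        self.z = nn.Parameter(0.1 * torch.randn(p))        # weights for originals
        self.z_tilde = nn.Parameter(0.1 * torch.randn(p))   # weights for knockoffs
        self.mlp = nn.Sequential(
            nn.Linear(p, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, x_knockoff):
        # Pairwise filter: combine each feature with its knockoff copy, then feed the MLP.
        filtered = x * self.z + x_knockoff * self.z_tilde   # shape (n, p)
        return self.mlp(filtered).squeeze(-1)

def knockoff_select(W, q=0.1):
    """Knockoff+ selection: find the smallest threshold whose estimated FDP is <= q."""
    W = W.tolist()
    for t in sorted(abs(w) for w in W if w != 0):
        fdp = (1 + sum(w <= -t for w in W)) / max(1, sum(w >= t for w in W))
        if fdp <= q:
            return [j for j, w in enumerate(W) if w >= t]
    return []

# After training the network on (X, X_knockoff, y), a simplified importance
# contrast (the paper's statistic also folds in downstream path weights):
# W = model.z.detach() ** 2 - model.z_tilde.detach() ** 2
# selected = knockoff_select(W, q=0.1)
```

A positive W_j indicates that the original feature j carries more signal than its knockoff copy, and the knockoff+ threshold keeps the false discovery rate below the target level q.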
