

Poster

DeepDRK: Deep Dependency Regularized Knockoff for Feature Selection

Hongyu Shen · Yici Yan · Zhizhen Jane Zhao

East Exhibit Hall A-C #2610
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Model-X knockoff has garnered significant attention among feature selection methods due to its guarantees for controlling the false discovery rate (FDR). Originally introduced for parametric designs, knockoff techniques have since evolved to handle arbitrary data distributions using deep learning-based generative models. However, we have observed limitations in current implementations of the deep Model-X knockoff framework. Notably, the "swap property" that knockoffs require often fails to hold at the sample level, resulting in diminished selection power. To address these issues, we develop "Deep Dependency Regularized Knockoff (DeepDRK)," a distribution-free deep learning method that effectively balances FDR and power. In DeepDRK, we introduce a novel formulation of the knockoff model as a learning problem under multi-source adversarial attacks. By employing an innovative perturbation technique, we achieve lower FDR and higher power. Our model outperforms existing benchmarks across synthetic, semi-synthetic, and real-world datasets, particularly when sample sizes are small and data distributions are non-Gaussian.
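As background for the FDR guarantee mentioned above, the standard Model-X knockoff filter selects features by comparing each feature's importance against that of its knockoff copy and thresholding the resulting statistics. The sketch below shows this generic selection step only (the "knockoff+" threshold of Barber and Candès), not the DeepDRK training procedure itself; the toy statistics `W` and the function names are illustrative assumptions.

```python
import numpy as np

def knockoff_threshold(W, q=0.1, offset=1):
    """Knockoff+ threshold: smallest t whose estimated FDP is at most q.

    W      : per-feature knockoff statistics; a large positive W[j] means
             the real feature j looks more important than its knockoff.
    q      : target FDR level.
    offset : 1 gives the knockoff+ variant with exact FDR control.
    """
    candidates = np.sort(np.abs(W[W != 0]))
    for t in candidates:
        # Estimated false discovery proportion at threshold t.
        fdp = (offset + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp <= q:
            return t
    return np.inf  # no threshold achieves the target level: select nothing

def knockoff_select(W, q=0.1):
    """Indices of features selected at target FDR level q."""
    t = knockoff_threshold(W, q)
    return np.where(W >= t)[0]

# Toy example: 5 strong signals among 50 features.
rng = np.random.default_rng(0)
W = np.concatenate([np.full(5, 3.0), rng.normal(0.0, 0.5, 45)])
print(knockoff_select(W, q=0.2))
```

Deep knockoff methods such as DeepDRK change how the knockoff copies (and hence the statistics `W`) are generated, while this thresholding step and its FDR guarantee are reused unchanged.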
