MET: Masked Encoding for Tabular Data
Kushal Majmundar · Sachin Goyal · Praneeth Netrapalli · Prateek Jain
Event URL: https://openreview.net/forum?id=vMHs3HR7r0A
This paper proposes $\textit{Masked Encoding for Tabular Data (MET)}$ for learning self-supervised representations from $\textit{tabular data}$. Tabular self-supervised learning (tabular-SSL) is more challenging than SSL in structured domains such as images, audio, and text, since each tabular dataset can have a completely different structure among its features (or coordinates) that is hard to identify a priori. MET attempts to circumvent this problem with the following hypothesis: the observed tabular features are generated from a latent graphical model, and downstream tasks are significantly easier to solve in the latent space. Based on this hypothesis, MET uses random-masking-based encoders to learn a positional embedding for each coordinate, which in turn captures the latent structure among coordinates. Extensive experiments on multiple standard tabular benchmarks demonstrate that MET significantly outperforms all current baselines. For example, on the Criteo dataset (a large-scale click-prediction dataset), MET achieves as much as a $5\%$ improvement over the current state of the art (SOTA), whereas purely supervised approaches have advanced SOTA by at most $1\%$ over the last few years. Furthermore, MET can be $>20\%$ more accurate than gradient-boosted decision trees, widely considered SOTA for the tabular setting, on multiple benchmarks.
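Since the abstract only outlines the method, the following is a minimal PyTorch sketch of the core idea: embed each scalar coordinate together with a learned per-coordinate positional embedding, mask a random subset of coordinates, encode the visible ones with a transformer, and reconstruct the full row. All names and hyperparameters here (`METSketch`, `mask_ratio`, layer sizes) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the MET idea, not the paper's exact architecture:
# learned positional embeddings per coordinate + random masking + reconstruction.
import torch
import torch.nn as nn


class METSketch(nn.Module):
    def __init__(self, num_features, d_model=64, n_heads=4, depth=2):
        super().__init__()
        # One learned positional embedding per coordinate -- intended to
        # capture the latent structure among the features.
        self.pos_embed = nn.Parameter(torch.randn(num_features, d_model) * 0.02)
        self.value_proj = nn.Linear(1, d_model)  # embed each scalar feature
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, depth)
        dec_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        self.head = nn.Linear(d_model, 1)  # reconstruct each coordinate

    def forward(self, x, mask_ratio=0.7):
        # x: (batch, num_features) real-valued tabular rows
        b, f = x.shape
        tokens = self.value_proj(x.unsqueeze(-1)) + self.pos_embed  # (b, f, d)
        # Random per-row mask: keep a (1 - mask_ratio) fraction of coordinates.
        n_keep = max(1, int(f * (1 - mask_ratio)))
        perm = torch.rand(b, f, device=x.device).argsort(dim=1)
        keep = perm[:, :n_keep]  # indices of visible coordinates, (b, n_keep)
        visible = torch.gather(
            tokens, 1, keep.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        z = self.encoder(visible)  # representations of visible coordinates
        # Decoder input: mask tokens (plus positional embeddings) everywhere,
        # with encoded visible tokens scattered back into their positions.
        dec_in = (self.mask_token + self.pos_embed).unsqueeze(0)
        dec_in = dec_in.expand(b, -1, -1).clone()
        dec_in.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, z.size(-1)), z)
        recon = self.head(self.decoder(dec_in)).squeeze(-1)  # (b, f)
        return recon, ((recon - x) ** 2).mean()  # reconstruction (MSE) loss


# Usage: self-supervised pretraining on unlabeled rows; the encoder outputs
# would then serve as representations for a downstream classifier.
model = METSketch(num_features=10)
x = torch.randn(32, 10)
recon, loss = model(x)
loss.backward()
```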
Author Information
Kushal Majmundar (Google)
Sachin Goyal (Carnegie Mellon University)
Praneeth Netrapalli (Google Research)
Prateek Jain (Google Research)
More from the Same Authors
- 2021 Spotlight: Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
  Suhas Kowshik · Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli
- 2021 Spotlight: Differentially Private Model Personalization
  Prateek Jain · John Rush · Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2022: Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks
  Sravanti Addepalli · Anshul Nasery · Venkatesh Babu R · Praneeth Netrapalli · Prateek Jain
- 2023 Poster: AdANNS: A Framework for Adaptive Semantic Search
  Aniket Rege · Aditya Kusupati · Sharan Ranjit S · Alan Fan · Qingqing Cao · Sham Kakade · Prateek Jain · Ali Farhadi
- 2023 Poster: Simplicity Bias in 1-Hidden Layer Neural Networks
  Depen Morwani · Jatin Batra · Prateek Jain · Praneeth Netrapalli
- 2023 Poster: Blocked Collaborative Bandits: Online Collaborative Filtering with Per-Item Budget Constraints
  Soumyabrata Pal · Arun Suggala · Karthikeyan Shanmugam · Prateek Jain
- 2023 Poster: Label Robust and Differentially Private Linear Regression: Computational and Statistical Efficiency
  Xiyang Liu · Prateek Jain · Weihao Kong · Sewoong Oh · Arun Suggala
- 2022 Poster: DP-PCA: Statistically Optimal and Differentially Private PCA
  Xiyang Liu · Weihao Kong · Prateek Jain · Sewoong Oh
- 2022 Poster: S3GC: Scalable Self-Supervised Graph Clustering
  Fnu Devvrit · Aditya Sinha · Inderjit Dhillon · Prateek Jain
- 2022 Poster: Test Time Adaptation via Conjugate Pseudo-labels
  Sachin Goyal · Mingjie Sun · Aditi Raghunathan · J. Zico Kolter
- 2022 Poster: Reproducibility in Optimization: Theoretical Framework and Limits
  Kwangjun Ahn · Prateek Jain · Ziwei Ji · Satyen Kale · Praneeth Netrapalli · Gil I Shamir
- 2022 Poster: Matryoshka Representation Learning
  Aditya Kusupati · Gantavya Bhatt · Aniket Rege · Matthew Wallingford · Aditya Sinha · Vivek Ramanujan · William Howard-Snyder · Kaifeng Chen · Sham Kakade · Prateek Jain · Ali Farhadi
- 2021 Poster: Differentially Private Model Personalization
  Prateek Jain · John Rush · Adam Smith · Shuang Song · Abhradeep Guha Thakurta
- 2021 Poster: Streaming Linear System Identification with Reverse Experience Replay
  Suhas Kowshik · Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli
- 2021 Poster: LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes
  Aditya Kusupati · Matthew Wallingford · Vivek Ramanujan · Raghav Somani · Jae Sung Park · Krishna Pillutla · Prateek Jain · Sham Kakade · Ali Farhadi
- 2021 Social: ML in India: A Billion Opportunities
  Praneeth Netrapalli · Srujana Merugu
- 2021 Poster: Do Input Gradients Highlight Discriminative Features?
  Harshay Shah · Prateek Jain · Praneeth Netrapalli
- 2021 Poster: Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems
  Suhas Kowshik · Dheeraj Nagaraj · Prateek Jain · Praneeth Netrapalli
- 2021 Poster: Statistically and Computationally Efficient Linear Meta-representation Learning
  Kiran Thekumparampil · Prateek Jain · Praneeth Netrapalli · Sewoong Oh
- 2020 Poster: Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
  Kiran Thekumparampil · Prateek Jain · Praneeth Netrapalli · Sewoong Oh
- 2020 Spotlight: Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method
  Kiran Thekumparampil · Prateek Jain · Praneeth Netrapalli · Sewoong Oh
- 2020 Poster: RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference
  Oindrila Saha · Aditya Kusupati · Harsha Vardhan Simhadri · Manik Varma · Prateek Jain
- 2020 Spotlight: RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference
  Oindrila Saha · Aditya Kusupati · Harsha Vardhan Simhadri · Manik Varma · Prateek Jain
- 2020 Poster: The Pitfalls of Simplicity Bias in Neural Networks
  Harshay Shah · Kaustav Tamuly · Aditi Raghunathan · Prateek Jain · Praneeth Netrapalli
- 2020 Poster: Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms
  Dheeraj Nagaraj · Xian Wu · Guy Bresler · Prateek Jain · Praneeth Netrapalli
- 2020 Spotlight: Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms
  Dheeraj Nagaraj · Xian Wu · Guy Bresler · Prateek Jain · Praneeth Netrapalli
- 2016 Poster: Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
  Chi Jin · Sham Kakade · Praneeth Netrapalli
- 2015 Poster: Convergence Rates of Active Learning for Maximum Likelihood Estimation
  Kamalika Chaudhuri · Sham Kakade · Praneeth Netrapalli · Sujay Sanghavi
- 2014 Poster: Non-convex Robust PCA
  Praneeth Netrapalli · Niranjan Uma Naresh · Sujay Sanghavi · Animashree Anandkumar · Prateek Jain
- 2014 Spotlight: Non-convex Robust PCA
  Praneeth Netrapalli · Niranjan Uma Naresh · Sujay Sanghavi · Animashree Anandkumar · Prateek Jain
- 2013 Poster: Phase Retrieval using Alternating Minimization
  Praneeth Netrapalli · Prateek Jain · Sujay Sanghavi