

Poster in Workshop: Table Representation Learning Workshop

Scaling Experiments in Self-Supervised Cross-Table Representation Learning

Maximilian Schambach · Dominique Paul · Johannes Otterbach

Keywords: [ Cross-Table ] [ Table ] [ Representation Learning ] [ Self-supervised learning ]


Abstract: To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture specifically tailored to tabular data and cross-table representation learning, utilizing table-specific tokenizers and a shared Transformer backbone. Our training approach encompasses both single-table and cross-table models, trained for missing value imputation via a self-supervised masked cell recovery objective. To understand the scaling behavior of our method, we train models of varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These models are trained on a carefully curated pretraining dataset consisting of 135M training tokens sourced from 76 diverse datasets. We assess the scaling of our architecture in both single-table and cross-table pretraining setups by evaluating the pretrained models using linear probing on a curated set of benchmark datasets and comparing the results with conventional baselines.
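
To make the architectural idea concrete, the following is a minimal PyTorch sketch of a table-specific tokenizer feeding a shared Transformer backbone, pretrained with a masked cell recovery objective. All class names, layer sizes, the masking scheme, and the per-column reconstruction heads are illustrative assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn


class TableTokenizer(nn.Module):
    """Hypothetical per-table tokenizer: embeds each cell of a row into a
    shared d_model-dimensional token space and replaces masked cells with a
    learned mask token."""

    def __init__(self, num_numeric: int, cat_cardinalities: list[int], d_model: int):
        super().__init__()
        # One linear projection per numeric column.
        self.numeric_proj = nn.ModuleList(
            [nn.Linear(1, d_model) for _ in range(num_numeric)]
        )
        # One embedding table per categorical column.
        self.cat_embed = nn.ModuleList(
            [nn.Embedding(card, d_model) for card in cat_cardinalities]
        )
        self.mask_token = nn.Parameter(torch.zeros(d_model))

    def forward(self, x_num, x_cat, mask):
        # x_num: (B, n_num) floats, x_cat: (B, n_cat) ints,
        # mask: (B, n_num + n_cat) booleans marking cells to be recovered.
        num = [p(x_num[:, i : i + 1]) for i, p in enumerate(self.numeric_proj)]
        cat = [e(x_cat[:, i]) for i, e in enumerate(self.cat_embed)]
        tokens = torch.stack(num + cat, dim=1)        # (B, n_cols, d_model)
        return torch.where(mask.unsqueeze(-1), self.mask_token, tokens)


class SharedBackbone(nn.Module):
    """Transformer encoder shared across all tables during cross-table training."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):
        return self.encoder(tokens)                   # (B, n_cols, d_model)


def masked_cell_loss(tokenizer, backbone, heads, x_num, x_cat, mask):
    """One self-supervised step: recover the original values of masked cells
    from the backbone's contextual cell embeddings (illustrative only)."""
    hidden = backbone(tokenizer(x_num, x_cat, mask))  # (B, n_cols, d_model)
    n_num = x_num.shape[1]
    loss = torch.zeros(())
    # Regression on masked numeric cells.
    for i in range(n_num):
        m = mask[:, i]
        if m.any():
            pred = heads["numeric"][i](hidden[m, i]).squeeze(-1)
            loss = loss + nn.functional.mse_loss(pred, x_num[m, i])
    # Classification on masked categorical cells.
    for j in range(x_cat.shape[1]):
        m = mask[:, n_num + j]
        if m.any():
            logits = heads["categorical"][j](hidden[m, n_num + j])
            loss = loss + nn.functional.cross_entropy(logits, x_cat[m, j])
    return loss


# Example wiring for a single table (all sizes are arbitrary placeholders):
tokenizer = TableTokenizer(num_numeric=3, cat_cardinalities=[5, 8], d_model=64)
backbone = SharedBackbone(d_model=64)
heads = {
    "numeric": nn.ModuleList([nn.Linear(64, 1) for _ in range(3)]),
    "categorical": nn.ModuleList([nn.Linear(64, c) for c in [5, 8]]),
}
x_num = torch.randn(16, 3)
x_cat = torch.randint(0, 5, (16, 2))
mask = torch.rand(16, 5) < 0.15
loss = masked_cell_loss(tokenizer, backbone, heads, x_num, x_cat, mask)
```

In this sketch only the backbone parameters are shared across tables, while tokenizers and reconstruction heads remain table-specific, mirroring the cross-table setup described in the abstract; for the single-table setting, the same components are simply trained on one table in isolation.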
