

Poster in Workshop: Table Representation Learning

RegCLR: A Self-Supervised Framework for Tabular Representation Learning in the Wild

Weiyao Wang · Byung-Hak Kim · Varun Ganapathi

Keywords: [ Table Detection ] [ Representation Learning ] [ Self-Supervised Learning ]


Abstract:

Recent advances in self-supervised learning (SSL), using large models to learn visual representations from natural images, are rapidly closing the gap between fully supervised and self-supervised results on downstream vision tasks. Inspired by this advancement, and primarily motivated by the emergence of tabular and structured document image applications, we ask which unsupervised pretraining objectives, architectures, and fine-tuning strategies are most effective. To address these questions, we introduce RegCLR, a new self-supervised framework that combines contrastive and regularized methods and is compatible with the standard Vision Transformer (ViT) architecture (Dosovitskiy et al., 2021). RegCLR is then instantiated by integrating masked autoencoders (MAE) (He et al., 2022) as a representative example of a contrastive method and enhanced Barlow Twins (eBT) as a representative example of a regularized method, with configurable input image augmentations in both branches. Several real-world table recognition scenarios (e.g., extracting tables from document images), ranging from standard Word and LaTeX documents to the more challenging case of electronic health record (EHR) computer screen images, benefit greatly from the representations learned with this framework: on real-world EHR screen images, detection AP improves relatively over a previous fully supervised baseline by 4.8% for tables, 11.8% for table columns, and 11.1% for GUI objects.
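The abstract does not spell out the combined objective, but the described two-branch design (an MAE reconstruction branch plus a Barlow Twins-style redundancy-reduction branch on a shared ViT encoder) can be sketched as below. This is a minimal illustration, assuming a simple weighted sum with coefficient `lam` of an MAE pixel-reconstruction loss over masked patches and the standard Barlow Twins loss; the function names, tensor shapes, weighting, and the specific eBT enhancements are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a RegCLR-style objective: MAE reconstruction on
# masked patches + Barlow Twins redundancy reduction on two augmented views.
# Interfaces and weighting are assumed; they are not the paper's actual code.
import torch
import torch.nn.functional as F


def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      off_diag_weight: float = 5e-3) -> torch.Tensor:
    """Standard Barlow Twins loss on two embedding batches of shape (N, D)."""
    n, _ = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                                  # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1.0).pow(2).sum()       # pull diagonal to 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push rest to 0
    return on_diag + off_diag_weight * off_diag


def regclr_loss(pred_patches: torch.Tensor, target_patches: torch.Tensor,
                patch_mask: torch.Tensor, z_a: torch.Tensor, z_b: torch.Tensor,
                lam: float = 1.0) -> torch.Tensor:
    """Weighted sum of MAE reconstruction (masked patches only) and BT term."""
    mae = F.mse_loss(pred_patches[patch_mask], target_patches[patch_mask])
    bt = barlow_twins_loss(z_a, z_b)
    return mae + lam * bt


# Toy usage: batch of 8 images, 196 patches of 768 pixel values, 256-dim embeddings.
pred = torch.randn(8, 196, 768)
tgt = torch.randn(8, 196, 768)
mask = torch.rand(8, 196) < 0.75          # ~75% of patches masked, as in MAE
za, zb = torch.randn(8, 256), torch.randn(8, 256)
loss = regclr_loss(pred, tgt, mask, za, zb)
```

In this reading, the MAE branch provides the "contrastive" signal the abstract names and the eBT branch the regularized one; how the two branches share the ViT encoder and how `lam` is set are design choices the abstract leaves open.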
