

Talk in Workshop: Table Representation Learning Workshop

Invited talk: Enabling Large Language Models to Reason with Tables

Wenhu Chen

Fri 15 Dec 2 p.m. PST — 2:30 p.m. PST

Abstract:

Large language models (LLMs) are becoming attractive as few-shot reasoners for Natural Language (NL) tasks. However, there is still much to learn about how well LLMs understand structured data, such as tables. While tables can be serialized and fed to LLMs as input, comprehensive studies examining whether LLMs can truly comprehend such data are lacking. In this talk, I will cover different ways to use LLMs to interface with tables. One approach is to feed the whole table as a sequence to the LLM for reasoning. In this direction, we will discuss the recent paper GPT4Table, which summarizes lessons learned across different table linearization strategies, including table input format, content order, role prompting, and partition marks. The other approach is to use tools such as SQL or other languages to access the table's data without feeding in the entire table. The LLM then works as a reasoner, deriving the answer from the results returned through that interface.
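To make the two approaches concrete, here is a minimal sketch in Python. The toy table, the markdown-pipe serialization, and the function and variable names are all illustrative assumptions, not the exact formats or prompts studied in GPT4Table or in the talk; the second half shows the tool-based route, where the model would emit a SQL query and only the query result is handed back for final reasoning.

```python
import sqlite3

# A toy table used throughout this sketch (purely illustrative data).
HEADER = ["city", "country", "population"]
ROWS = [
    ("Toronto", "Canada", 2_794_356),
    ("Waterloo", "Canada", 121_436),
    ("Boston", "USA", 675_647),
]

# Approach 1: linearize the whole table into the prompt.
# Partition marks (pipes and newlines here) and a role instruction are two
# of the serialization choices the abstract mentions; this markdown-style
# format is just one plausible instantiation.
def linearize_markdown(header, rows):
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(c) for c in row) + " |" for row in rows]
    return "\n".join(lines)

prompt = (
    "You are an expert in table reasoning.\n"  # role prompting
    "Answer the question using only the table below.\n\n"
    f"{linearize_markdown(HEADER, ROWS)}\n\n"
    "Question: Which Canadian city has the larger population?"
)
print(prompt)  # this string would be sent to the LLM

# Approach 2: let the LLM emit SQL instead of reading the whole table.
# The query string below stands in for model output; only its result
# (not the full table) is passed back to the LLM for final reasoning.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (city TEXT, country TEXT, population INT)")
conn.executemany("INSERT INTO cities VALUES (?, ?, ?)", ROWS)
llm_generated_sql = (
    "SELECT city FROM cities WHERE country = 'Canada' "
    "ORDER BY population DESC LIMIT 1"
)
print(conn.execute(llm_generated_sql).fetchall())  # [('Toronto',)]
```

The trade-off the sketch surfaces is the one the talk contrasts: the first route spends context-window tokens on the entire table, while the second keeps the table outside the model and only pays for the query and its result.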
