Data efficient learning and generalization with sensors and speech [In-Person Only]
Jaya Narain
Abstract
Learning robust embeddings can help create reliable models in challenging data-scarce situations. Methods that allow reusing embeddings from pre-trained models across tasks and modalities can be particularly impactful in time series domains, where labeled data is often limited. In this talk, I present several recent projects on training and characterizing the use of foundation models for time series signals: selecting signal-rich labels to reuse embeddings from public models for speech characterization and the related fairness considerations for atypical speech, sharing embeddings across speech and wearable sensor signals, and leveraging contextual knowledge from LLMs for multi-modal fusion.
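The abstract does not specify an implementation, but the core idea of reusing pre-trained embeddings for data-scarce downstream tasks might look roughly like the sketch below. It assumes a public wav2vec 2.0 checkpoint and a simple logistic-regression probe; both are illustrative choices, not details from the talk.

```python
# Minimal sketch: reuse embeddings from a public pre-trained speech model
# (wav2vec 2.0 here, chosen only for illustration) to train a lightweight
# probe on a small labeled dataset. Model choice, pooling, and the probe
# are assumptions, not the talk's specific method.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

model_name = "facebook/wav2vec2-base"  # public pre-trained checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name)
encoder = Wav2Vec2Model.from_pretrained(model_name).eval()

def embed(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean-pool the encoder's hidden states into a fixed-size embedding."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()      # (dim,)

def fit_probe(waveforms, labels):
    """Fit a lightweight classifier on frozen embeddings.

    waveforms: list of 1-D numpy arrays (placeholder small labeled set)
    labels: per-utterance labels
    """
    X = np.stack([embed(w) for w in waveforms])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```

Keeping the encoder frozen and training only a small probe is one common way to get reasonable performance from limited labels, which is the data-efficiency setting the abstract describes.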