In the past year, tools such as ChatGPT, Stable Diffusion, and Segment Anything have had an immediate impact on our everyday lives. Many of these tools have been built using foundation models, that is, very large models (having billions or trillions of parameters) trained on vast amounts of data (Bommasani et al., 2021). The excitement around these foundation models and their capabilities might suggest that all the interesting problems have been solved and artificial general intelligence is just around the corner (Wei et al., 2022; Bubeck et al., 2023).
At this year’s I Can’t Believe It’s Not Better workshop we invite papers that coolly reflect on this optimism and demonstrate that there are in fact many difficult and interesting open questions. The workshop will specifically focus on failure modes of foundation models, especially unexpected negative results. In addition, we invite contributions that help us understand current and future disruptions of machine learning subfields, as well as instances where these powerful methods merely remain complementary to another subfield of machine learning.
Contributions on the failure modes of foundation models might consider:
- Domain-specific areas where the application of foundation models did not work as expected.
- Failures in the safety and explainability of foundation models.
- The limits of current foundation model methodologies.
Besides failure modes of foundation models, this workshop also considers their impact on the ML ecosystem and potential problems that remain to be solved by these new systems. In this context, relevant questions include:
- Where do foundation models leave researchers in other areas (e.g., AI for science, recommender systems, Bayesian methods, bioinformatics)?
- Which important problems are not solved by training large models with large amounts of data?
- What unexpected negative results were encountered when applying foundation models to a specific domain?