Demonstrations must show novel technology and must run online during the conference. Unlike poster presentations or slide shows, interaction with the audience is a critical element. Therefore, demonstrators' creativity in proposing new ways for interaction and engagement to fully leverage this year's virtual conference format will be particularly relevant for selection. This session has the following demonstrations:
Tue 8:30 a.m. - 8:35 a.m. | Introduction (Talk) | Douwe Kiela
Tue 8:35 a.m. - 8:50 a.m. | Interactive Exploration for 60 Years of AI Research (Live Demo)
Research in artificial intelligence has been around for over six decades, and interest in the field is still rapidly growing. A diversification of interests has birthed many sub-fields within AI, making it harder for novices and senior researchers alike to orient themselves and their work within the historical context of ML research. We created an interactive demo to investigate an opinionated selection of papers from the last 60 years. The demo not only reflects on the past, but it also allows users to position abstracts of their own novel ideas into the research landscape carved by the last 60 years of AI publications.
Hendrik Strobelt · Benjamin Hoover
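One plausible way to place a new abstract into an existing paper landscape, as the demo above allows, is to embed it with a sentence encoder and look up its nearest neighbors among previously embedded papers. The sketch below is an illustrative approximation, not the demo's actual pipeline; the encoder name and the `paper_embeddings` / `paper_titles` arrays are assumptions.

```python
# Minimal sketch: embed a new abstract and find its nearest neighbors among
# already-embedded papers. `paper_embeddings` (N x d) and `paper_titles`
# are assumed to exist; the encoder name is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer

def nearest_papers(abstract, paper_embeddings, paper_titles, k=5):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    query = model.encode([abstract])[0]
    # Cosine similarity between the new abstract and every known paper.
    sims = (paper_embeddings @ query) / (
        np.linalg.norm(paper_embeddings, axis=1) * np.linalg.norm(query)
    )
    top = np.argsort(-sims)[:k]
    return [(paper_titles[i], float(sims[i])) for i in top]
```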
Tue 8:50 a.m. - 9:05 a.m. | SenSE: A Toolkit for Semantic Change Exploration via Word Embedding Alignment (Live Demo)
Lexical Semantic Change (LSC) detection, also known as Semantic Shift, is the process of identifying and characterizing variations in language usage across different scenarios such as time and domain. It allows us to track the evolution of word senses, as well as to measure the difference between the language used in distinct communities. LSC detection is often done by applying a distance measure over vectors of two aligned word embedding matrices. In this demonstration, we present SenSE, an interactive semantic shift exploration toolkit that provides visualization and explanation of lexical semantic change for an input pair of text sources. Our system focuses on showing how different alignment strategies may affect the output of an LSC model, as well as on explaining semantic change based on the neighbors of a chosen target word, while also extracting examples of sentences where these semantic deviations appear. The system runs as a web application (available at http://sense.mgruppi.me), allowing the audience to interact by configuring the alignment strategies while visualizing the results in a web browser.
Maurício Gruppi · Sibel Adali · Pin-Yu Chen
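The align-then-measure recipe in the abstract above can be sketched as follows, assuming two word-embedding matrices whose rows correspond to a shared vocabulary from the two text sources. Orthogonal Procrustes is one common alignment strategy (SenSE lets users compare several); the function and variable names here are illustrative, not SenSE's API.

```python
# Minimal sketch of "align, then measure distance" for LSC detection.
# `source_emb` and `target_emb` are (V x d) word-embedding matrices whose
# rows correspond to the same shared vocabulary `vocab`.
import numpy as np

def procrustes_align(source_emb, target_emb):
    """Find the orthogonal rotation that maps the source space onto the target space."""
    u, _, vt = np.linalg.svd(source_emb.T @ target_emb)
    return source_emb @ (u @ vt)

def semantic_shift(source_emb, target_emb, vocab):
    """Cosine distance per word after alignment; larger values suggest more change."""
    aligned = procrustes_align(source_emb, target_emb)
    a = aligned / np.linalg.norm(aligned, axis=1, keepdims=True)
    b = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    return dict(zip(vocab, 1.0 - np.sum(a * b, axis=1)))
```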
Tue 9:05 a.m. - 9:20 a.m. | Training Transformers Together (Live Demo)
We invite volunteers to train a large Transformer language model over the Internet. Instead of using supercomputers, we will pool together all available computational resources: desktops, laptops, servers, and even cloud TPUs from around the world. All training artifacts, such as model checkpoints and optimizer states, will be shared online for public use. For this demonstration, we will provide an open-source starter kit that volunteers can use to join the global distributed training run and host similar experiments independently in the future.
Alexander Borzunov · Max Ryabinin · Tim Dettmers · Quentin Lhoest · Lucile Saulnier · Michael Diskin · Yacine Jernite · Thomas Wolf
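At its core, pooling compute across volunteers means each peer computes gradients locally and averages them with everyone else before stepping the optimizer. The sketch below illustrates that idea with torch.distributed, assuming a fixed set of reliable peers and an already-initialized process group; the actual demo relies on a fault-tolerant, internet-scale protocol rather than this simplified setup.

```python
# Minimal sketch of gradient averaging across peers. Assumes each volunteer
# process has already called dist.init_process_group(...) and holds an
# identical copy of the model; this is a simplified stand-in for the demo's
# fault-tolerant collaborative training protocol.
import torch
import torch.distributed as dist

def averaged_gradient_step(model, loss, optimizer, world_size):
    """One training step whose gradients are the mean over all peers."""
    optimizer.zero_grad()
    loss.backward()
    for param in model.parameters():
        if param.grad is not None:
            # Sum this parameter's gradient over every peer, then average.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
    optimizer.step()
```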
Tue 9:20 a.m. - 9:35 a.m. | GANs for All: Supporting Fun and Intuitive Exploration of GAN Latent Spaces (Live Demo)
For design professionals, one of the key markers of expertise is a coherent understanding of the design space in which their work is situated. Novices lack this understanding and, as a result, are likely to suffer from “design fixation.” Our goal is to create a system that supports novices in gaining a more complete, expert understanding of their domain’s design space by making use of deep generative models. Our starting point was the StyleGAN model trained on the Feidegger dataset. We extended the model in a number of ways to better support exploration of the GAN’s latent space. First, we used stochastic gradient descent (SGD) to project images into the latent space, adding a pixel-level loss function that dramatically improved the ability to locate out-of-sample examples. Second, we implemented a method to generate high-quality images from text descriptions: we randomly sample images from the latent space and pass them, along with the text description, through a CLIP model to find the image that most closely matches the text. Third, we performed PCA on the latent space to identify semantically meaningful directions and provide a simple means for the user to interpolate a design along these directions. Finally, we developed an intuitive interface that allows three images to be combined using style mixing. The resulting graphical front-end web application combines these methods from the literature into a single system, offering a fun and intuitive way for novices to meaningfully explore the full design space of a domain through a GAN’s latent space.
Wei Jiang · Richard Davis · Kevin Gonyop Kim · Pierre Dillenbourg
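The text-guided generation step described above (sample latents, render them, keep the best CLIP match for the text) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the `generator` callable, latent dimensionality, and output range are assumptions, and CLIP's exact input normalization is omitted for brevity.

```python
# Minimal sketch of CLIP-guided selection over random GAN samples.
# Assumptions: `generator(z)` maps latents of shape (N, latent_dim) to
# images of shape (N, 3, H, W) roughly in [0, 1].
import torch
import torch.nn.functional as F
import clip  # OpenAI's CLIP package

def best_match_for_text(generator, text, n_samples=64, latent_dim=512, device="cuda"):
    model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        text_feat = model.encode_text(clip.tokenize([text]).to(device))
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

        z = torch.randn(n_samples, latent_dim, device=device)
        images = generator(z)
        # CLIP expects 224x224 inputs.
        resized = F.interpolate(images, size=(224, 224), mode="bilinear", align_corners=False)
        img_feat = model.encode_image(resized)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

        # Cosine similarity between each candidate image and the text prompt.
        scores = (img_feat @ text_feat.T).squeeze(1)
        best = scores.argmax().item()
    return images[best], z[best]
```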