Demonstrations 1
Douwe Kiela · Barbara Caputo · Marco Ciccone

Tue Dec 07 08:30 AM -- 09:50 AM (PST)

Demonstrations must show novel technology and must run online during the conference. Unlike poster presentations or slide shows, interaction with the audience is a critical element. Therefore, the creativity of demonstrators to propose new ways in which interaction and engagement can fully leverage this year’s virtual conference format will be particularly relevant for selection. This session has the following demonstrations:

  • Interactive Exploration for 60 Years of AI Research
  • SenSE: A Toolkit for Semantic Change Exploration via Word Embedding Alignment
  • Training Transformers Together
  • GANs for All: Supporting Fun and Intuitive Exploration of GAN Latent Spaces
  • Lesan - Machine Translation for Low Resource Languages
Tue 8:30 a.m. - 8:35 a.m.
Introduction (Talk)
Douwe Kiela
Tue 8:35 a.m. - 8:50 a.m.
Interactive Exploration for 60 Years of AI Research (Live Demo)

Research in artificial intelligence has been around for over six decades, and interest in the field is still rapidly growing. A diversification of interests has birthed many sub-fields within AI, making it harder for novices and senior researchers alike to orient themselves and their work within the historical context of ML research. We created an interactive demo to investigate an opinionated selection of papers from the last 60 years. The demo not only reflects on the past, but it also allows users to position abstracts of their own novel ideas into the research landscape carved by the last 60 years of AI publications.

Hendrik Strobelt · Ben Hoover
Tue 8:50 a.m. - 9:05 a.m.
SenSE: A Toolkit for Semantic Change Exploration via Word Embedding Alignment (Live Demo)

Lexical Semantic Change (LSC) detection, also known as semantic shift detection, is the process of identifying and characterizing variations in language usage across different scenarios, such as time and domain. It allows us to track the evolution of word senses and to measure the difference between the language used in distinct communities. LSC detection is often done by applying a distance measure to the vectors of two aligned word embedding matrices. In this demonstration, we present SenSE, an interactive semantic shift exploration toolkit that provides visualization and explanation of lexical semantic change for an input pair of text sources. Our system focuses on showing how different alignment strategies affect the output of an LSC model, and on explaining semantic change based on the neighbors of a chosen target word, while also extracting example sentences in which these semantic deviations appear. The system runs as a web application (available at http://sense.mgruppi.me), allowing the audience to interact by configuring the alignment strategies while visualizing the results in a web browser.

Maurício Gruppi · Sibel Adali · Pin-Yu Chen
Tue 9:05 a.m. - 9:20 a.m.
Training Transformers Together (Live Demo)

We invite volunteers to train a large Transformer language model over the Internet. Instead of using supercomputers, we will pool together all available computational resources: desktops, laptops, servers, and even cloud TPUs from around the world. All training artifacts, such as model checkpoints and optimizer states, will be shared online for public use.

For this demonstration, we will provide an open-source starter kit that volunteers can use to join the global distributed training run and host similar experiments independently in the future.
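At its core, pooling compute from many volunteers amounts to averaging their locally computed gradients before each parameter update. The toy simulation below illustrates that idea in plain NumPy; the actual demo performs decentralized averaging over the Internet (e.g., via the hivemind library), which is not shown here, and the model and gradient function are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
params = np.zeros(4)                     # toy "model": a 4-dim parameter vector

def local_gradient(params, batch):
    # hypothetical per-volunteer gradient of a squared-error loss on a local batch
    return 2 * (params - batch.mean(axis=0))

# each volunteer holds a private shard of data
volunteers = [rng.normal(size=(8, 4)) for _ in range(5)]

for step in range(100):
    grads = [local_gradient(params, batch) for batch in volunteers]
    # the "all-reduce": average gradients across volunteers, then update
    params -= 0.1 * np.mean(grads, axis=0)
```

Averaging gradients from all shards makes each update equivalent to a step on the pooled dataset, which is why adding volunteers scales the effective batch size rather than changing the objective.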

Alexander Borzunov · Max Ryabinin · Tim Dettmers · Quentin Lhoest · Lucile Saulnier · Michael Diskin · Yacine Jernite · Thomas Wolf
Tue 9:20 a.m. - 9:35 a.m.
GANs for All: Supporting Fun and Intuitive Exploration of GAN Latent Spaces (Live Demo)

For design professionals, one of the key markers of expertise is a coherent understanding of the design space in which their work is situated. Novices lack this understanding, and as a result, they are likely to suffer from “design fixation.” Our goal is to create a system to support novices in gaining a more complete, expert understanding of their domain’s design space by making use of deep generative models.

Our starting point was a StyleGAN model trained on the Feidegger dataset. We extended the model in several ways to better support exploration of the GAN's latent space. First, we used stochastic gradient descent (SGD) to project images into the latent space, adding a pixel-level loss function that dramatically improved the ability to locate out-of-sample examples. Second, we implemented a method to generate high-quality images from text descriptions: we randomly sample images from the latent space and pass them, along with the text description, through a CLIP model to find the image that most closely matches the text. Third, we performed PCA on the latent space to identify semantically meaningful directions and provide a simple means for the user to interpolate a design along these directions. Finally, we developed an intuitive interface that allows three images to be combined using style mixing.

We developed a graphical front-end web application that can support novices in exploring the full design space of a domain. This interface combines a number of methods from the literature into a single system, which provides a fun and intuitive way for novices to meaningfully explore the latent space of a GAN.
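The PCA step described above can be sketched as follows. In the real system the samples would come from StyleGAN's mapping network and be decoded by its generator; here random vectors stand in for latent codes, and the latent dimension of 512 is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for latent codes sampled from the GAN's mapping network
latents = rng.normal(size=(10000, 512))

# PCA via SVD of the centered samples: rows of Vt are principal directions
mean = latents.mean(axis=0)
_, _, Vt = np.linalg.svd(latents - mean, full_matrices=False)
directions = Vt[:10]                     # keep the top-10 components

def edit(w, k, strength):
    """Move latent code w along principal direction k by the given strength."""
    return w + strength * directions[k]

w = rng.normal(size=512)                 # a design's latent code
edited = edit(w, 0, 2.0)                 # interpolate along the first direction
```

In the interface, a slider per direction sets `strength`, and the generator renders the edited latent code so the user can see how the design changes along each semantic axis.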

Wei Jiang · Richard Davis · Kevin Gonyop Kim · Pierre Dillenbourg
Tue 9:35 a.m. - 9:50 a.m.
Lesan - Machine Translation for Low Resource Languages (Live Demo)

Millions of people around the world cannot access content on the Web because most of it is not readily available in their language. Machine translation (MT) systems have the potential to change this for many languages. Current MT systems provide very accurate results for high-resource language pairs, e.g., German and English. However, for many low-resource languages, MT is still under active research. The key challenge is the lack of datasets for building these systems.

We present Lesan, an MT system for low-resource languages. Our pipeline solves the key bottleneck of low-resource MT by leveraging online and offline sources, a custom OCR system for Ethiopic script, and an automatic alignment module. The final step in the pipeline is a sequence-to-sequence model that takes a parallel corpus as input and produces a translation model. Lesan's translation model is based on the Transformer architecture. After constructing a base model, back-translation is used to leverage monolingual corpora.
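Back-translation as described here turns target-side monolingual text into synthetic parallel data by running it through a reverse-direction model. A minimal sketch, with function names that are hypothetical stand-ins rather than Lesan's actual API:

```python
def back_translate(monolingual_tgt, tgt_to_src_model, parallel):
    """Augment a parallel corpus with synthetic (source, target) pairs.

    The target side is real monolingual text; the source side is produced
    by a reverse-direction translation model.
    """
    augmented = list(parallel)
    for tgt_sentence in monolingual_tgt:
        synthetic_src = tgt_to_src_model(tgt_sentence)   # reverse-direction MT
        augmented.append((synthetic_src, tgt_sentence))  # keep the real target
    return augmented

# toy usage with a stub reverse model standing in for a trained base model
stub_model = lambda s: "<src> " + s
pairs = back_translate(["selam", "kemey aleka"], stub_model,
                       parallel=[("hello", "selam")])
```

The forward model is then retrained on the augmented corpus; because the target side is genuine text, the model learns fluent output even though the synthetic source side is noisy.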

Currently Lesan supports translation to and from Tigrinya, Amharic and English. We perform extensive human evaluation and show that Lesan outperforms state-of-the-art systems such as Google Translate and Microsoft Translator across all six pairs. Lesan is freely available and has served more than 10 million translations so far. At the moment, there are only 213 Tigrinya and 14,964 Amharic Wikipedia articles. We believe that Lesan will contribute towards democratizing access to the Web through MT for millions of people.

Asmelash Teka Hadgu · Abel Aregawi · Adam D Beaudoin

Author Information

Douwe Kiela (Facebook AI Research)
Barbara Caputo (Politecnico di Torino)
Marco Ciccone (Politecnico di Torino)
