

Booth Presentations

Creative AI Session 2

Hall D1 (level 1)

Jean Oh · Isabelle Guyon

Wed 13 Dec 8:45 a.m. PST — 12:15 p.m. PST



Table 1
Kiss/Crash

Adam Cole

Kiss/Crash is a multi-screen work exploring AI imagery and representation as well as the autobiographical themes of loneliness, desire, and intimacy in the digital age. The installation consists of three individual works in a shared space: Kiss/Crash, Me Kissing Me, and Crash Me, Gently. All three play with augmenting, inverting, and negating the iconic image of the kiss using AI image translation. Repurposing a classic Hollywood aesthetic through a queer lens, the piece reflects on the nature of images and places AI models within a history of image-production technologies meant to arouse and homogenize our desires. In the process, it reveals the logic of AI imagery and hints at how our relationship to reality will continue to be stretched and shaped by artificial representations at an accelerating pace. The piece celebrates diversity by bringing a unique queer perspective to generative AI, questioning how homogeneous representations of love might haunt our AI-mediated future and how LGBT artists can playfully resist and invert that dominant narrative.


Table 10
Blabrecs: An AI-Based Game of Nonsense Word Creation

Isaac Karth · Max Kreminski

Blabrecs is a hybrid digital/physical board game and a rules modification to the popular word game Scrabble. In Blabrecs, as in Scrabble, players take turns drawing letter tiles from a bag and placing these tiles on a grid to form words, which are then scored based on letter frequencies and tile score multipliers to award points. Unlike Scrabble, however, Blabrecs does not use an English dictionary to determine which letter sequences constitute valid words. Instead, it uses a classifier trained on the dictionary to accept or reject letter sequences. Actual dictionary words are disallowed; only nonsense sequences that the classifier misclassifies as words may be played.

Physically, Blabrecs consists of a standard Scrabble set plus a computer running the Blabrecs web interface, a client-side-only web app written in HTML, CSS, and ClojureScript. The AI component of Blabrecs consists of two classifiers: a Markov chain-based classifier and a more sophisticated classifier based on a separable convolutional neural network, both of which run directly in the web browser. Players can freely switch between the two classifiers as they play.
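For illustration, the Markov-chain side of this idea might look like the following minimal Python sketch (the actual game is implemented in ClojureScript, and the n-gram order, smoothing, and acceptance threshold here are assumptions):

```python
import math
from collections import defaultdict

ORDER = 3  # character n-gram order; the game's actual order is an assumption here

def train(words):
    """Count which characters follow each ORDER-length context in the dictionary."""
    counts = defaultdict(lambda: defaultdict(int))
    for w in words:
        padded = "^" * ORDER + w.lower() + "$"
        for i in range(len(padded) - ORDER):
            counts[padded[i:i + ORDER]][padded[i + ORDER]] += 1
    return counts

def avg_log_prob(counts, word):
    """Average per-character log-likelihood of a word under the Markov model."""
    padded = "^" * ORDER + word.lower() + "$"
    total = 0.0
    for i in range(len(padded) - ORDER):
        ctx, nxt = padded[i:i + ORDER], padded[i + ORDER]
        ctx_total = sum(counts[ctx].values())
        # Laplace smoothing (26 letters + end marker) so unseen transitions
        # don't zero out the score
        total += math.log((counts[ctx][nxt] + 1) / (ctx_total + 27))
    return total / (len(padded) - ORDER)

def is_playable(counts, dictionary, word, threshold=-3.0):
    """Blabrecs rule: word-like enough to fool the model, but not a real word."""
    return word.lower() not in dictionary and avg_log_prob(counts, word) >= threshold

dictionary = {"cat", "cart", "care", "rate", "tare", "crate"}  # toy word list
counts = train(dictionary)
print(is_playable(counts, dictionary, "crate"))  # False: it's a real word
print(is_playable(counts, dictionary, "carte"))  # True: plausible nonsense
```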

One of our primary goals in creating Blabrecs was to use AI to promote greater diversity in language use. Scrabble as a game is notorious for its imposition of an external authority (the dictionary) between players and their own language, and in this way it bears some resemblance to AI-based tools that try to standardize the use of language – sometimes via basic normalization of spelling and grammar, and sometimes even by fully rewriting the user's words into a different and more "correct" writing style, as in some recent LLM-based approaches to helping people write. Blabrecs was originally conceived in part as a protest against this standardization of language in all of its many forms.

By imposing deliberately absurdist constraints on language usage, Blabrecs forces players to create entirely new words. The web interface also provides space for players to write in their own definitions for these words, giving players even further support for generating a diverse vocabulary of new words. Based on our observations of past playtests of Blabrecs, the kinds of words that players invent are often influenced by their own diverse experiences and backgrounds, and no two player groups are likely to end up repeating the same word – a far cry from the standardization of language seen in high-level Scrabble play, where certain key words are memorized and employed in almost every game.

More generally, we hope that the way AI is used in Blabrecs (to push players away from the "typical", rather than pushing them towards it) provides a vision for how AI might be used similarly in other creative contexts. In a time of increasing AI-driven standardization of creative form, we feel that the use of AI to promote creative diversity is an important and underinvestigated direction for research in AI-based creativity support tools.

Blabrecs can be played online here: https://mkremins.github.io/blabrecs


Table 2
Androgynous and Mixed Race Human Face

Eunsu Kang

This work is composed of several hexagonal wooden tiles and a book. The tiles bear engraved drawings of androgynous and mixed-race human faces generated by AI, and can be installed in various arrangements at the conference exhibition site as needed. The audience will be able to appreciate the details of the work installed on site.

My early experiments in 2017 involved delving into the realm of machine learning algorithms to generate portraits. I began with the renowned Celeb-A dataset, which, although limited in how accurately it represents the diverse spectrum of human beings on Earth, served as a starting point for my exploration. Employing the MMD-GAN algorithm, I discovered a fascinating outcome: the constant generation of human faces that exhibited an intriguing ambiguity in terms of gender, age, and race. To the algorithm, they were merely human faces, stripped of the societal constructs that often define our perceptions.

This experiment fueled my inspiration to embark upon a new creative journey using one of the latest generative AI advancements, DALL-E 2, to envision and depict androgynous and mixed-race human faces. I have etched these captivating visages onto hexagonal wooden tiles using the precision of laser-cutting technology. The natural colors and the lingering warm burnt aroma of the wooden tiles evoke a sense of connection to our own skin, devoid of any judgment that may be superimposed upon it. These hexagonal panels can be arranged in various configurations, expanding their boundaries in all directions.


Table 3
Voice Scroll

David R Rokeby

Voice Scroll is a real-time voice-to-panorama generator. It can be used either in performance or as an interactive installation in which the audience generates a continuously unfolding panorama by speaking.


Table 4
The WHOOPS! Gallery: An Intersection of AI, Creativity, and the Unusual

Jack Hessel · Yonatan Bitton · Nitzan Bitton Guetta · Yuval Elovici

The WHOOPS! art gallery presents 500 AI-generated images that challenge common sense perceptions. Resulting from a collaboration between AI researchers and human designers, the collection underscores disparities in visual commonsense reasoning between machines and humans. While humans readily identify the anomalies, contemporary AI models struggle, highlighting gaps in AI understanding. This study offers insights into the evolving interplay between human cognition, art, and artificial intelligence.


Table 5
Visions of Resilience: Augmented Diversity

Ninon Lizé Masclef

The artwork "Visions of Resilience: Augmented Diversity" invites the public to embark on an immersive journey celebrating cultural diversity through the power of augmented reality. In this interactive experience, individuals will have the opportunity to create unique AI-generated AR Mardi Gras masks that shape a vibrant tapestry of colors and patterns representative of the diverse cultures of New Orleans. Through real-time modification and customization, participants become active co-creators, infusing their masks with elements that resonate with their heritage. This interactive fresco becomes a living testament to the resilience of the Black Masking Indians and the mosaic of traditions within the Mardi Gras community.


Table 6
salad bowl

salad bowl is an interactive neural sound installation where audiences are invited to co-create and co-mingle with “the salad” — a neural network trained on a diverse, eclectic collection of sounds. the salad is a heterogeneous mix of sound elements, each with its unique character, all contributing to a vibrant whole. The salad is a collective memory of the past sonic experiences of people, places, and things throughout the world, all encoded in a fuzzy possibility space.

In salad bowl, you can sit down at the dinner table. There’s a salad bowl and a microphone in front of you. You pick a piece of paper from the salad bowl. The piece of paper prompts you to make a sound with your voice. You make the sound into the microphone. The salad picks up the sound. The sound becomes part of the salad. The salad becomes part of the sound. The sound comes out warped. Its perceptual identity has been transformed. The sound is no longer just your voice, but rather a view into the infinite possibilities of what your sound could be, in the context of the salad.

To wildly transform the sounds put into the salad, the neural network takes the participant’s sound as input and destroys around 80-90% of it. It then looks at the missing pieces and creates its best guess of what was missing.
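A minimal numpy sketch of that mask-and-reconstruct scheme follows (illustrative only: the destroy ratio matches the description above, and simple linear interpolation stands in for the trained neural network):

```python
import numpy as np

def mask_and_reconstruct(sound, destroy_ratio=0.85, rng=None):
    """Destroy most of the input, then guess what the missing pieces were.

    In salad bowl the guessing is done by a neural network trained on a
    diverse collection of sounds; linear interpolation is a stand-in here.
    """
    rng = rng or np.random.default_rng()
    keep = rng.random(len(sound)) >= destroy_ratio   # ~15% of samples survive
    kept_idx = np.flatnonzero(keep)
    # Rebuild the destroyed samples from the surviving ones.
    return np.interp(np.arange(len(sound)), kept_idx, sound[kept_idx])

# A 440 Hz tone as a stand-in for a participant's recorded voice.
t = np.linspace(0.0, 1.0, 16000)
voice = np.sin(2.0 * np.pi * 440.0 * t)
warped = mask_and_reconstruct(voice)  # same length, but the model's "best guess"
```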

More than a mere exploration of a generative model's internal representations, salad bowl is a celebration of spontaneous, shared, and diverse human interactions with sound. It encourages multiple people to sit down together at the dinner table and engage in colorful sonic conversation.

salad bowl encourages people to think of, and design, generative AI systems as "salad bowls", not "melting pots". Unlike in a melting pot, where the identities of individual members are lost in favor of a uniform whole, a salad bowl's ingredients shine together while preserving their identities, creating a shared collective entity that embraces diversity, showcases unique beauty, and fosters a richer, multifaceted experience for all who engage with it.


Table 8
AI Applications to Illustrate Native American Arts: Birdsongs: Using Transfer Learning to Augment Image Generation Models

Kimberly Mann Bruch

Background: Situated approximately 40 miles northeast of San Diego and 30 miles inland from the Pacific Ocean, the Reservation of the Pala Band of Mission Indians is home to 1,250 enrolled members, consisting of Cupeños and Luiseños. A vast 20 square miles of valley surrounded by mountains along the San Luis Rey River, the Pala Reservation comprises residential and agricultural areas as well as unused wildlands.

The San Diego Supercomputer Center (SDSC), which often collaborates with the Pala Band of Mission Indians on educational projects, is located on the University of California San Diego campus. Most recently, Senior Science Writer Kimberly Mann Bruch of SDSC and Diana Duro of Pala led a team that used artificial intelligence (AI) tools to augment image generation models so that they appropriately represent Native American birdsinging.

Summary: Native American birdsong is a sacred, traditional performing art consisting of a rhythmic song about an essential life lesson, accompanied by handmade gourd rattles. Unfortunately, the word “birdsong” is grossly misrepresented across an array of technology tools, ranging from search engines to artificial intelligence (AI) imagery models. To augment these models, the team used transfer learning to “teach” an example model to better represent the terms “birdsong”, “birdsinging”, “gourd rattle”, and “rattle”. First, the team gathered images representing “gourd rattle” into a dataset. Next, the dataset was fed into an existing image generation model. Unfortunately, even with proper images and descriptions provided to the model, queries repeatedly failed to yield “gourd rattle” or “rattle”; instead, the model most often described a gourd rattle as a “maraca”, which is similar but incorrect. The team then repeated the process with “birdsong” and “birdsinging” with the same results: the model was unable to “learn” the terms.
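For reference, the general transfer-learning recipe looks something like the following PyTorch sketch. This is not the team's actual pipeline (which targeted an image generation model rather than a classifier); the data paths and class names are hypothetical:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: data/train/gourd_rattle/*.jpg, data/train/maraca/*.jpg, ...
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Transfer learning: keep the pretrained features, retrain only the final layer
# to distinguish terms like "gourd rattle" from near-neighbors like "maraca".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                       # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```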

Future Work: The team plans to continue working on modifying the models to remedy the issue and then use lessons learned for additional terms.

Funding: The project was funded by the National Science Foundation West Big Data Innovation Hub (1916481) with support also provided by the San Diego Supercomputer Center at UC San Diego and the Pala Band of Mission Indians.


Table 9
Visualising AI

Emma Yousif

Visualising AI is an open-source initiative by Google DeepMind. We commission artists from around the world to create more diverse and accessible representations of AI, inspired by conversations with scientists, engineers, and ethicists. The collaboration between artists and technical specialists is key, as it encourages cross-disciplinary dialogue and brings unique perspectives to the table.

With this project, we want to mitigate misconceptions and archaic representations of AI, and we want to help enhance the public’s understanding of these technologies by offering accessible tools that cater to different learning styles and cultural contexts.

As the collection grows, we hope to invite more and more artists to tackle new and emerging themes. In the meantime, we’ve open-sourced the entire collection so that the artwork can begin to move the needle on public perception and understanding of AI.