

Video Presentation in Session: Creative AI Performances 2

Entanglement

Screen
Thu 14 Dec 5 p.m. PST — 7 p.m. PST

Abstract:

How much work must the universe do, and how many dreams does it have to nurture, in order to grow a single tree? Then, how much of the universe does a forest harbor?

Entanglement, inspired by the motif of the forest, is a large-scale (16 × 16 × 4 m) immersive artwork that invites spectators into a multi-sensory environment where visible and invisible worlds are interconnected and symbiotic. The artwork consists of three elements: the growth of trees through procedural modeling, generative AI that dreams images of trees and forests, and the operation of dynamic systems that connect tree roots with the mechanisms of fungi and bacteria, or of neural networks within a brain. Through the entanglement of microcosmic and simultaneous connections, it offers a sensory opportunity for contemplation and inspiration regarding ways of connecting with the world beyond ourselves, and a vision of an AI future that is fully present in its environment, as a diverse, living system in ecosystemic balance with the world. To borrow a phrase from Ursula Le Guin, our word for world is forest.
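
As a loose illustration of the first element, procedural tree growth can be sketched with an L-system: a string-rewriting grammar whose branches are drawn recursively. The grammar, angle, and turtle-graphics rendering below are arbitrary choices for the sketch, not the Houdini-based growth system used in the artwork.

```python
# Minimal L-system sketch of procedural tree growth (illustrative only;
# the axiom, rules, and angle are arbitrary, not the artwork's system).
import turtle

AXIOM = "X"
RULES = {"X": "F[+X][-X]FX", "F": "FF"}  # simple branching grammar
ANGLE = 25.0

def rewrite(s: str, iterations: int) -> str:
    """Grow the string by applying the production rules repeatedly."""
    for _ in range(iterations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def draw(s: str, step: float = 5.0) -> None:
    """Interpret the string: F = forward, +/- = turn, [ ] = push/pop a branch."""
    t = turtle.Turtle(visible=False)
    t.speed(0)
    t.left(90)
    t.penup(); t.goto(0, -300); t.pendown()
    stack = []
    for ch in s:
        if ch == "F":
            t.forward(step)
        elif ch == "+":
            t.left(ANGLE)
        elif ch == "-":
            t.right(ANGLE)
        elif ch == "[":
            stack.append((t.position(), t.heading()))
        elif ch == "]":
            pos, heading = stack.pop()
            t.penup(); t.goto(pos); t.setheading(heading); t.pendown()

draw(rewrite(AXIOM, 5))
turtle.done()
```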

The artwork was produced using extensive custom software authored by the artists as well as SideFX Houdini and Stable Diffusion/ControlNet. Here we use generative AI unconventionally, as part of an ecosystem: neither as an alternative artist nor as a mere tool. As artists we are both thrilled by and apprehensive of the transformative power of generative AI, particularly regarding its role in artistic creativity. Diversity is crucial in our interconnected society; however, we extend the celebration of diversity beyond human-centered society to the whole ecosystem, as we believe this is vital for a vigorous future, especially in awareness of the Anthropocene.
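
As a rough sketch of how a procedural render might condition the image generation (using the Hugging Face diffusers library), a depth map of a grown tree can guide Stable Diffusion through ControlNet. The checkpoints, depth conditioning, prompt, and file names below are assumptions for illustration, not the artists' actual pipeline.

```python
# Illustrative sketch only: conditioning Stable Diffusion on a depth render
# of a procedurally grown tree via ControlNet (Hugging Face diffusers).
# Checkpoints, file names, and parameters are assumptions, not the
# artwork's actual configuration.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Hypothetical depth map exported from the procedural modeling stage.
depth_map = load_image("tree_depth_render.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The depth render constrains composition; the prompt supplies texture.
image = pipe(
    prompt="a dense forest of entangled roots and branches, misty light",
    image=depth_map,
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("dreamed_forest.png")
```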

The central AI paradigm of today is the optimization of prediction based on latent compression of vast stores of mostly human-created data. This means an all-too-human partiality is embedded in the AI, which runs the risk of finding only what it is trained to seek, and of further blinding us to the ecosystem we inhabit. For example, we found that running image generation in a feedback loop quickly reveals the biased tendencies of the trained model. This required countermeasures, including additional image processing to suppress crowd-pleasing over-saturation and contrast, as well as latent adjustment to prevent the production of predicted preferences (such as human subjects, text titles, cuteness, inappropriate content, and so on). The loop also sometimes collapses into symmetric pattern-making or a complete loss of depth. This reminds us of the very real risk of positive feedback narrowing our world and our future, much like an echo chamber.
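
A minimal sketch of such a feedback loop with simple countermeasures is given below, assuming an img2img Stable Diffusion pipeline; the color tempering, negative prompt, and parameter values are illustrative stand-ins for the image processing and latent adjustment described above, not the installation's actual countermeasures.

```python
# Illustrative sketch of an image-generation feedback loop with simple
# countermeasures against drift toward over-saturation and "predicted
# preferences". Model name, correction strengths, prompts, and file names
# are assumptions for illustration only.
import torch
from PIL import Image, ImageEnhance
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def temper(img: Image.Image) -> Image.Image:
    """Pull saturation and contrast back toward neutral each iteration,
    counteracting the model's tendency to amplify them in a feedback loop."""
    img = ImageEnhance.Color(img).enhance(0.85)    # reduce saturation
    img = ImageEnhance.Contrast(img).enhance(0.9)  # reduce contrast
    return img

image = Image.open("seed_forest.png").convert("RGB")
for step in range(20):
    image = pipe(
        prompt="roots, mycelium, branching forest forms",
        negative_prompt="people, faces, text, watermark, cartoon, cute",
        image=image,
        strength=0.45,        # how far each pass departs from its input
        guidance_scale=6.0,
    ).images[0]
    image = temper(image)     # apply countermeasures before feeding back
    image.save(f"feedback_{step:02d}.png")
```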

This artwork was completed through a collaboration between the artists of Artificial Nature (Haru Ji & Graham Wakefield) and researchers from the Yonsei University Intelligence Networking Lab (Chanbyoung Chae & Dongha Choi). We thank Digital Silence and the Ulsan Art Museum for organizing the exhibition, and acknowledge workspace and hardware support from UCSB (MAT and the AlloSphere Research Group), SBCAST, and York University (Alice Lab).
