Workshop
Sat Dec 08 05:00 AM -- 03:30 PM (PST) @ Room 518
Second Workshop on Machine Learning for Creativity and Design
Luba Elliott · Sander Dieleman · Rebecca Fiebrink · Jesse Engel · Adam Roberts · Tom White

Over the past few years, generative machine learning and machine creativity have continued to grow and attract a wider audience to machine learning. Generative models enable new types of media creation across images, music, and text, including recent advances such as sketch-rnn and the Universal Music Translation Network. This one-day workshop broadly explores issues in the application of machine learning to creativity and design. We will look at algorithms for the generation and creation of new media and new designs, engaging researchers building the next generation of generative models (GANs, RL, etc.). We will investigate the social and cultural impact of these new models, engaging researchers from HCI/UX communities and those using machine learning to develop new creative tools. In addition to covering the technical advances, we will also address ethical concerns, ranging from the use of biased datasets to the building of tools for ever more convincing “DeepFakes”. Finally, we will hear from some of the artists and musicians who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process. We aim to balance the technical issues and challenges of applying the latest generative models to creativity and design with the philosophical and cultural issues that surround this area of research.

Background
In 2016, DeepMind’s AlphaGo made two moves against Lee Sedol that were described by the Go community as “brilliant,” “surprising,” and “beautiful.” Moreover, there was little discussion of the fact that these very creative moves were actually made by a machine; it was enough that they were great examples of Go playing. At the same time, the general public has shown more concern about other applications of generative models. Algorithms that allow for convincing voice style transfer (Lyrebird) or puppet-like video face control (Face2Face) have raised ethical concerns that generative ML will be used to make convincing forms of fake news.

Balancing this, the arts and music worlds have positively embraced generative models. Starting with DeepDream and expanding with advances in image and video generation (e.g., GANs), we have seen a wealth of new and interesting art and music technologies emerge from the machine learning community. Research projects such as Google Brain’s Magenta, Sony CSL’s FlowMachines and IBM’s Watson have undertaken collaborations and attempted to build tools and ML models for use by these communities.

Research
Recent advances in generative models enable new possibilities in art and music production. Language models can be used to write science fiction film scripts (Sunspring), theatre plays (Beyond the Fence), and even to replicate the style of individual authors (Deep Tingle). Generative models for image and video allow us to create visions of people, places and things that resemble the distribution of actual images (GANs, etc.). Sequence modelling techniques have opened up the possibility of generating realistic musical scores (MIDI generation, etc.) and even raw audio that resembles human speech and physical instruments (DeepMind’s WaveNet, MILA’s Char2Wav and Google’s NSynth). Sequence modelling also allows us to model vector images, constructing stroke-based drawings of common objects from human doodles (sketch-rnn). Lately, domain transfer techniques (FAIR’s Universal Music Translation Network) have enabled the translation of music across musical instruments, genres and styles.

In addition to field-specific research, a number of papers have appeared that are directly applicable to the challenges of generation and evaluation, such as learning from human preferences (Christiano et al., 2017) and CycleGAN. The application of novelty search (Stanley), evolutionary complexification (Stanley: CPPN, NEAT; Nguyen et al.: Plug & Play GANs, Innovation Engine) and intrinsic motivation (Oudeyer et al., 2007; Schmidhuber on fun and creativity), where objective functions are constantly evolving, is still not common practice in art and music generation using machine learning.
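For readers unfamiliar with the idea, the core of novelty search is simple to state: individuals are selected for how different their behaviour is from anything seen so far, rather than for progress on a fixed objective. Below is a minimal, illustrative Python sketch of that loop; the genome representation and the `mutate` and `behavior` functions are hypothetical placeholders that a real application (e.g. an image or melody generator) would have to supply.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two behaviour descriptors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def novelty(b, others, k=15):
    """Novelty score: mean distance from b to its k nearest neighbours."""
    nearest = sorted(dist(b, o) for o in others if o is not b)[:k]
    return sum(nearest) / len(nearest)

def novelty_search(init_pop, mutate, behavior, generations=100, p_archive=0.1):
    """Evolve a population by rewarding behavioural novelty, not fitness.

    `mutate` perturbs a genome; `behavior` maps a genome to a descriptor
    (e.g. a feature vector summarising a generated image or melody).
    Assumes an even population size.
    """
    pop, archive = list(init_pop), []
    for _ in range(generations):
        descs = [behavior(g) for g in pop]
        scores = [novelty(d, archive + descs) for d in descs]
        # Stochastically archive descriptors so that novelty is always
        # measured relative to everything discovered so far.
        archive += [d for d in descs if random.random() < p_archive]
        # Keep the most novel half of the population, refill by mutation.
        ranked = [g for _, g in sorted(zip(scores, pop), key=lambda t: -t[0])]
        parents = ranked[:len(pop) // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return pop, archive
```

The archive is what makes the objective “constantly evolving”: once a region of behaviour space has been archived, genomes landing there stop scoring well, pushing the search toward ever-new behaviours.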

Another focus of the workshop is how to better enable human influence over generative models. This could include learning from human preferences, exposing model parameters in ways that are understandable and relevant to users in a given application domain (e.g., similar to Morris et al., 2008), enabling users to manipulate models through changes to training data (Fiebrink et al., 2011), allowing users to dynamically mix between multiple generative models (Akten & Grierson, 2016), or other techniques. Although questions of how to make learning algorithms controllable and understandable to users are relatively nascent in the modern context of deep learning and reinforcement learning, such questions have been a growing focus of work within the human-computer interaction community (e.g., examined in a CHI 2016 workshop on Human-Centred Machine Learning) and the AI safety community (e.g., Christiano et al., 2017, using human preferences to train deep reinforcement learning systems). Such considerations also underpin the new Google “People + AI Research” (PAIR) initiative.
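To make the learning-from-preferences idea concrete, here is a minimal PyTorch sketch of the pairwise preference loss used in Christiano et al. (2017): a reward model is fit so that whichever segment the human preferred receives a higher total predicted reward. The `reward_model` and segment tensors are illustrative assumptions, not an API from the paper.

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_model, seg_a, seg_b, human_prefers_a):
    """Pairwise preference loss (after Christiano et al., 2017).

    The probability that a human prefers segment A over segment B is
    modelled as a softmax over the segments' summed predicted rewards,
    and the reward model is trained by cross-entropy against the
    human's actual choice.
    """
    r_a = reward_model(seg_a).sum()   # total predicted reward, segment A
    r_b = reward_model(seg_b).sum()   # total predicted reward, segment B
    logits = torch.stack([r_a, r_b]).unsqueeze(0)         # shape (1, 2)
    target = torch.tensor([0 if human_prefers_a else 1])  # preferred index
    return F.cross_entropy(logits, target)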

Artists and Musicians
All of the above techniques improve our ability to produce text, sound and images, and have helped popularise the themes of machine learning and artificial intelligence in the art world, with a number of art exhibitions (ZKM’s Open Codes, Frankfurter Kunstverein’s I am here to learn, NRW Forum’s Pendoran Vinci) and media art festivals (Impakt Festival 2018: Algorithmic Superstructures, Retune 2016) dedicated to the topic.

Art and music that stand the test of time, however, require more than generative capabilities. Recent research includes a focus on novelty in creative adversarial networks (Elgammal et al., 2017) and considers how generative algorithms can integrate into human creative processes, supporting exploration of new ideas as well as human influence over generated content (Akten & Grierson, 2016a, 2016b). Artists including Mario Klingemann, Roman Lipski, Mike Tyka, and Memo Akten have further contributed to this space by creating artwork that compellingly demonstrates the capabilities of generative algorithms, and by publicly reflecting on the artistic affordances of these new tools. Other artists such as Mimi Onuoha, Caroline Sinders, and Adam Harvey have explored the ethical dimensions of machine learning technologies, reflecting on the issues of biased datasets and facial recognition.

The goal of this workshop is to bring together researchers interested in advancing art and music generation to present new work, foster collaborations and build networks.

In this workshop, we are particularly interested in how the following can be used in art and music generation: reinforcement learning, generative adversarial networks, novelty search and evaluation, as well as learning from user preferences. We welcome submissions of short papers, demos and extended abstracts related to the above.

Like last year, there will be an open call for a display of artworks incorporating machine learning techniques. The exhibited works will serve as a separate and more personal forum for collecting and sharing some of the latest creative works in this space with the NIPS community.

Introduction (Talk)
Kenneth Stanley (Talk)
Yaroslav Ganin (Talk)
David Ha (Talk)
AI art gallery overview (Talk)
Yaniv Taigman (Talk)
Performing Structured Improvisations with Pre-existing Generative Musical Models (Contributed Talk)
Legend of Wrong Mountain: Full Generation of Traditional Chinese Opera Using Multiple Machine Learning Algorithms (Contributed Talk)
Lunch
Poster Session 1 (Posters)
Allison Parrish (Talk)
Break
TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer (Contributed Talk)
Infilling Piano performances (Contributed Talk)
Improvised Robotic Design with Found Objects (Contributed Talk)
SpaceSheets: Interactive Latent Space Exploration through a Spreadsheet Interface (Contributed Talk)
Runway: Adding artificial intelligence capabilities to design and creative platforms (Contributed Talk)
Open Discussion (Discussion)
Poster Session 2 (Posters)
AI art show (Artwork)