Over the past few years, generative machine learning and machine creativity have continued to grow and attract a wider audience to machine learning. Generative models enable new types of media creation across images, music, and text, including recent advances such as sketch-rnn and the Universal Music Translation Network. This one-day workshop broadly explores issues in the application of machine learning to creativity and design. We will look at algorithms for the generation and creation of new media and new designs, engaging researchers building the next generation of generative models (GANs, RL, etc.). We will investigate the social and cultural impact of these new models, engaging researchers from HCI/UX communities and those using machine learning to develop new creative tools. In addition to covering the technical advances, we will also address ethical concerns, ranging from the use of biased datasets to the building of tools for better “DeepFakes”. Finally, we will hear from some of the artists and musicians who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process. We aim to balance the technical issues and challenges of applying the latest generative models to creativity and design with the philosophical and cultural issues that surround this area of research.
Background
In 2016, DeepMind’s AlphaGo made two moves against Lee Sedol that were described by the Go community as “brilliant,” “surprising,” “beautiful,” and so forth. Moreover, there was little discussion surrounding the fact that these very creative moves were actually made by a machine; it was enough that they were great examples of Go playing. At the same time, the general public showed more concern for other applications of generative models. Algorithms that allow for convincing voice style transfer (Lyrebird) or puppet-like video face control (Face2Face) have raised ethical concerns that generative ML will be used to make convincing forms of fake news.
Balancing this, the arts and music worlds have positively embraced generative models. Starting with DeepDream and expanding with advances in image and video generation (e.g. GANs), we have seen many new and interesting art and music technologies emerge from the machine learning community. Research projects such as Google Brain’s Magenta, Sony CSL’s FlowMachines and IBM’s Watson have undertaken collaborations and attempted to build tools and ML models for use by these communities.
Research
Recent advances in generative models enable new possibilities in art and music production. Language models can be used to write science fiction film scripts (Sunspring), theatre plays (Beyond the Fence) and even replicate the style of individual authors (Deep Tingle). Generative models for image and video allow us to create visions of people, places and things that resemble the distribution of actual images (GANs etc). Sequence modelling techniques have opened up the possibility of generating realistic musical scores (MIDI generation etc) and even raw audio that resembles human speech and physical instruments (DeepMind’s WaveNet, MILA’s Char2Wav and Google’s NSynth). In addition, sequence modelling allows us to model vector images, constructing stroke-based drawings of common objects from human doodles (sketch-rnn). Lately, domain transfer techniques (FAIR’s Universal Music Translation Network) have enabled the translation of music across musical instruments, genres, and styles.
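The sequence-modelling idea above can be illustrated at toy scale. The following sketch uses a first-order Markov chain over MIDI pitch numbers as a stand-in for the neural sequence models named here; the note corpus is invented for illustration, and real systems model far longer contexts and richer musical events:

```python
import random
from collections import defaultdict

# Invented toy corpus of MIDI pitch numbers (60 = middle C).
corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

# Count which pitches follow which: a first-order transition model.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def sample_melody(start, length, seed=0):
    """Sample a pitch sequence one step at a time from the transition model."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1])
        if not choices:  # dead end in the chain: restart from the opening pitch
            choices = [start]
        melody.append(rng.choice(choices))
    return melody

melody = sample_melody(60, 8)
print(melody)
```

The same generate-one-step-given-context loop is what the cited neural models perform, only with a learned conditional distribution in place of the transition counts.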
In addition to field-specific research, a number of papers have come out that are directly applicable to the challenges of generation and evaluation such as learning from human preferences (Christiano et al., 2017) and CycleGAN. The application of Novelty Search (Stanley), evolutionary complexification (Stanley - CPPN, NEAT, Nguyen et al - Plug&Play GANs, Innovation Engine) and intrinsic motivation (Oudeyer et al 2007, Schmidhuber on Fun and Creativity) techniques, where objective functions are constantly evolving, is still not common practice in art and music generation using machine learning.
Another focus of the workshop is how to better enable human influence over generative models. This could include learning from human preferences, exposing model parameters in ways that are understandable and relevant to users in a given application domain (e.g., similar to Morris et al. 2008), enabling users to manipulate models through changes to training data (Fiebrink et al. 2011), allowing users to dynamically mix between multiple generative models (Akten & Grierson 2016), or other techniques. Although questions of how to make learning algorithms controllable and understandable to users are relatively nascent in the modern context of deep learning and reinforcement learning, such questions have been a growing focus of work within the human-computer interaction community (e.g., examined in a CHI 2016 workshop on Human-Centred Machine Learning) and the AI Safety community (e.g. Christiano et al. 2017, using human preferences to train deep reinforcement learning systems). Such considerations also underpin the new Google “People + AI Research” (PAIR) initiative.
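To make the preference-learning direction concrete, here is a minimal sketch of fitting a reward function from pairwise comparisons with a Bradley-Terry model. This is not the method of Christiano et al. (which trains a neural reward model alongside deep RL); the linear reward, synthetic data, and all variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "outcome" is a feature vector; a hidden linear reward generates
# synthetic human preferences between pairs of outcomes.
true_w = np.array([2.0, -1.0, 0.5])
X_a = rng.normal(size=(500, 3))  # first item in each comparison
X_b = rng.normal(size=(500, 3))  # second item
prefs = (X_a @ true_w > X_b @ true_w).astype(float)  # 1 if item a preferred

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bradley-Terry model: P(a preferred over b) = sigmoid(r(a) - r(b)).
# Fit the reward weights by gradient ascent on the log-likelihood.
w = np.zeros(3)
for _ in range(2000):
    p = sigmoid((X_a - X_b) @ w)
    grad = (X_a - X_b).T @ (prefs - p) / len(prefs)
    w += 0.5 * grad

# The learned reward should rank outcomes the way the hidden one does.
agreement = np.mean((X_a @ w > X_b @ w) == prefs.astype(bool))
print(round(agreement, 2))
```

Once such a reward model agrees with the human's rankings, it can stand in for the human when scoring or optimising generated outputs, which is the core loop behind preference-based training of generative systems.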
Artists and Musicians
All the above techniques improve our capabilities of producing text, sound and images and have helped popularise the themes of machine learning and artificial intelligence in the art world with a number of art exhibitions (ZKM’s Open Codes, Frankfurter Kunstverein’s I am here to learn, NRW Forum’s Pendoran Vinci) and media art festivals (Impakt Festival 2018 Algorithmic Superstructures, Retune 2016) dedicated to the topic.
Art and music that stands the test of time, however, requires more than generative capabilities. Recent research includes a focus on novelty in creative adversarial networks (Elgammal et al., 2017) and considers how generative algorithms can integrate into human creative processes, supporting exploration of new ideas as well as human influence over generated content (Akten & Grierson 2016a, 2016b). Artists including Mario Klingemann, Roman Lipski, Mike Tyka, and Memo Akten have further contributed to this space of work by creating artwork that compellingly demonstrates capabilities of generative algorithms, and by publicly reflecting on the artistic affordances of these new tools. Other artists such as Mimi Onuoha, Caroline Sinders, and Adam Harvey have explored the ethical dimensions of machine learning technologies, reflecting on the issues of biased datasets and facial recognition.
The goal of this workshop is to bring together researchers interested in advancing art and music generation to present new work, foster collaborations and build networks.
In this workshop, we are particularly interested in how the following can be used in art and music generation: reinforcement learning, generative adversarial networks, novelty search and evaluation, as well as learning from user preferences. We welcome submissions of short papers, demos and extended abstracts related to the above.
Like last year, there will be an open call for a display of artworks incorporating machine learning techniques. The exhibited works serve as a separate and more personal forum for collecting and sharing some of the latest creative works incorporating machine learning techniques with the NIPS community.
Schedule
Sat 5:30 a.m. - 5:45 a.m. | Introduction (Talk)
Sat 5:45 a.m. - 6:15 a.m. | Kenneth Stanley (Talk)
Sat 6:15 a.m. - 6:45 a.m. | Yaroslav Ganin (Talk)
Sat 6:45 a.m. - 7:15 a.m. | David Ha (Talk)
Sat 7:15 a.m. - 7:30 a.m. | AI art gallery overview (Talk) | Luba Elliott
Sat 8:00 a.m. - 8:30 a.m. | Yaniv Taigman (Talk)
Sat 8:30 a.m. - 8:45 a.m. | Performing Structured Improvisations with Pre-existing Generative Musical Models (Contributed Talk)
Sat 8:45 a.m. - 9:00 a.m. | Legend of Wrong Mountain: Full Generation of Traditional Chinese Opera Using Multiple Machine Learning Algorithms (Contributed Talk) | Lingdong Huang · Syuan-Cheng Sun · Zheng Jiang
Sat 9:00 a.m. - 10:30 a.m. | Lunch
Sat 10:30 a.m. - 11:30 a.m. | Poster Session 1 (Posters) | Evan Casey · Colin A Raffel · Jonathan Simon · Juncheng Li · Robert Saunders · Petra Gemeinboeck · Eunsu Kang · Songwei Ge · Curtis Hawthorne · Anna Huang · Ting-Wei Su · Eric Chu · Memo Akten · Sonam Damani · Khyatti Gupta · Dilpreet Singh · Patrick Hutchings
Sat 11:30 a.m. - 12:00 p.m. | Allison Parrish (Talk)
Sat 12:00 p.m. - 12:30 p.m. | Break
Sat 12:30 p.m. - 12:45 p.m. | TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer (Contributed Talk) | Sicong (Sheldon) Huang · Cem Anil · Xuchan Bao
Sat 12:45 p.m. - 1:00 p.m. | Infilling Piano performances (Contributed Talk) | Daphne Ippolito
Sat 1:00 p.m. - 1:15 p.m. | Improvised Robotic Design with Found Objects (Contributed Talk) | Azumi Maekawa
Sat 1:15 p.m. - 1:30 p.m. | SpaceSheets: Interactive Latent Space Exploration through a Spreadsheet Interface (Contributed Talk) | Tom White
Sat 1:30 p.m. - 1:45 p.m. | Runway: Adding artificial intelligence capabilities to design and creative platforms (Contributed Talk) | Cristobal Valenzuela · Anastasios Germanidis · Alejandro Matamala
Sat 1:45 p.m. - 2:15 p.m. | Open Discussion (Discussion)
Sat 2:15 p.m. - 3:15 p.m. | Poster Session 2 (Posters) | Katy Gero · Le Zhou · Simiao Yu · Zhengyan Gao · Chris Donahue · Juncheng Li · TAEGYUN KWON · Patrick Hutchings · Charles Martin · Eunsu Kang · Asanobu Kitamoto · Zheng Jiang · Syuan-Cheng Sun · Philipp Roland Schmitt · Maria Attarian · Alex Lamb · Tarin CLANUWAT · Mauro Martino · Holly Grimm · Nikolay Jetchev
Sat 2:15 p.m. - 3:15 p.m. | AI art show (Artwork) | Ziv Epstein · Anna Chaney · Alex Champandard · Gene Kogan · Josh Davis
Author Information
Luba Elliott (independent AI Curator)
Luba Elliott is a curator, artist and researcher specialising in artificial intelligence in the creative industries. She is currently working to educate and engage the broader public about the latest developments in creative AI through monthly meetups, talks and tech demonstrations. As curator, she organised workshops and exhibitions on art and AI for The Photographers’ Gallery, the Leverhulme Centre for the Future of Intelligence and Google. Prior to that, she worked in start-ups, including the art collector database Larry’s List. She obtained her undergraduate degree in Modern Languages at the University of Cambridge and has a certificate in Design Thinking from the Hasso-Plattner-Institute D-school in Potsdam.
Sander Dieleman (DeepMind)
Rebecca Fiebrink (Goldsmiths University of London)
Jesse Engel (Google Brain)
Adam Roberts (Google Brain)
Tom White (University of Wellington School of Design)
Tom is a New Zealand based artist investigating machine perception. His current work focuses on creating physical artworks that highlight how machines “see” and thus how they think, suggesting that these systems are capable of abstraction and conceptual thinking. He has exhibited computer based artwork internationally over the past 25 years with themes of artificial intelligence, interactivity, and computational creativity. He is currently a lecturer and researcher at University of Wellington School of Design where he teaches students the creative potential of computer programming and artificial intelligence.
More from the Same Authors
- 2021: MIDI-DDSP: Hierarchical Modeling of Music for Detailed Control (Yusong Wu · Ethan Manilow · Kyle Kastner · Tim Cooijmans · Aaron Courville · Cheng-Zhi Anna Huang · Jesse Engel)
- 2023 Workshop: Machine Learning for Audio (Brian Kulis · Sadie Allen · Sander Dieleman · Shrikanth Narayanan · Rachel Manzelli · Alice Baird · Alan Cowen)
- 2023 Workshop: NeurIPS 2023 Workshop on Machine Learning for Creativity and Design (Yingtao Tian · Tom White · Lia Coleman · Hannah Johnston)
- 2022: PORTAGING (live AI Performance) (Kory Mathewson · Piotr Mirowski · Hannah Johnston · Tom White · Jason Baldridge)
- 2022 Workshop: Workshop on Machine Learning for Creativity and Design (Tom White · Yingtao Tian · Lia Coleman · Samaneh Azadi)
- 2022: Tangible Abstractions (Tom White)
- 2021 Workshop: Machine Learning for Creativity and Design (Tom White · Mattie Tesfaldet · Samaneh Azadi · Daphne Ippolito · Lia Coleman · David Ha)
- 2021: HEAR 2021: Holistic Evaluation of Audio Representations + Q&A (Joseph Turian · Jordan Shier · Bhiksha Raj · Bjoern Schuller · Christian Steinmetz · George Tzanetakis · Gissel Velarde · Kirk McNally · Max Henry · Nicolas Pinto · Yonatan Bisk · George Tzanetakis · Camille Noufi · Dorien Herremans · Jesse Engel · Justin Salamon · Prany Manocha · Philippe Esling · Shinji Watanabe)
- 2020: Panel Discussion 2 (Tom White · Jesse Engel · Aaron Hertzmann · Stephanie Dinkins · Holly Grimm)
- 2020: Art Showcase 1 (Luba Elliott)
- 2020: magenta: Empowering Creative Agency with Machine Learning (Jesse Engel)
- 2020: Panel Discussion 1 (Luba Elliott · Janelle Shane · Sofia Crespo · Scott Eaton · Adam Roberts · Angela Fan)
- 2020: Art Showcase 1 (Luba Elliott)
- 2020 Workshop: Machine Learning for Creativity and Design 4.0 (Luba Elliott · Sander Dieleman · Adam Roberts · Tom White · Daphne Ippolito · Holly Grimm · Mattie Tesfaldet · Samaneh Azadi)
- 2020: Introduction and Art Gallery Overview (Luba Elliott)
- 2019: AI Art Gallery Overview (Luba Elliott)
- 2019 Workshop: NeurIPS Workshop on Machine Learning for Creativity and Design 3.0 (Luba Elliott · Sander Dieleman · Adam Roberts · Jesse Engel · Tom White · Rebecca Fiebrink · Parag Mital · Christine McLeavey · Nao Tokui)
- 2019: Poster Session (Ethan Harris · Tom White · Oh Hyeon Choung · Takashi Shinozaki · Dipan Pal · Katherine L. Hermann · Judy Borowski · Camilo Fosco · Chaz Firestone · Vijay Veerabadran · Benjamin Lahner · Chaitanya Ryali · Fenil Doshi · Pulkit Singh · Sharon Zhou · Michel Besserve · Michael Chang · Anelise Newman · Mahesan Niranjan · Jonathon Hare · Daniela Mihai · Marios Savvides · Simon Kornblith · Christina M Funke · Aude Oliva · Virginia de Sa · Dmitry Krotov · Colin Conwell · George Alvarez · Alex Kolchinski · Shengjia Zhao · Mitchell Gordon · Michael Bernstein · Stefano Ermon · Arash Mehrjou · Bernhard Schölkopf · John Co-Reyes · Michael Janner · Jiajun Wu · Josh Tenenbaum · Sergey Levine · Yalda Mohsenzadeh · Zhenglong Zhou)
- 2018: SpaceSheets: Interactive Latent Space Exploration through a Spreadsheet Interface (Tom White)
- 2018: AI art gallery overview (Luba Elliott)
- 2017: Deep learning for music recommendation and generation (Sander Dieleman)
- 2017: Invited Talk (Rebecca Fiebrink)
- 2017 Workshop: Machine Learning for Creativity and Design (Douglas Eck · David Ha · S. M. Ali Eslami · Sander Dieleman · Rebecca Fiebrink · Luba Elliott)
- 2017 Demonstration: Magenta and deeplearn.js: Real-time Control of Deep Generative Music Models in the Browser (Curtis Hawthorne · Ian Simon · Adam Roberts · Jesse Engel · Daniel Smilkov · Nikhil Thorat · Douglas Eck)
- 2016 Demonstration: Interactive musical improvisation with Magenta (Adam Roberts · Jesse Engel · Curtis Hawthorne · Ian Simon · Elliot Waite · Sageev Oore · Natasha Jaques · Cinjon Resnick · Douglas Eck)
- 2016 Demonstration: Neural Puppet (Tom White)