NeurIPS Workshop on Machine Learning for Creativity and Design 3.0
Luba Elliott · Sander Dieleman · Adam Roberts · Jesse Engel · Tom White · Rebecca Fiebrink · Parag Mital · Christine Payne · Nao Tokui

Sat Dec 14 08:00 AM -- 06:00 PM (PST) @ West 223 + 224
Event URL: https://neurips2019creativity.github.io/

Generative machine learning and machine creativity have continued to grow and attract a wider audience to machine learning. Generative models enable new types of media creation across images, music, and text, including recent advances such as StyleGAN, MuseNet and GPT-2. This one-day workshop broadly explores issues in the application of machine learning to creativity and design. We will look at algorithms for the generation and creation of new media, engaging researchers building the next generation of generative models (GANs, RL, etc.). We will investigate the social and cultural impact of these new models, engaging researchers from HCI/UX communities and those using machine learning to develop new creative tools. In addition to covering the technical advances, we will also address ethical concerns, ranging from the use of biased datasets to the use of synthetic media such as “DeepFakes”. Finally, we will hear from some of the artists and musicians who are adopting machine learning, including deep learning and reinforcement learning, as part of their own artistic process. We aim to balance the technical issues and challenges of applying the latest generative models to creativity and design with the philosophical and cultural issues that surround this area of research.

Sat 8:15 a.m. - 8:30 a.m.
Welcome and Introduction (Introduction)
Sat 8:30 a.m. - 9:00 a.m.
Transfer Learning for Text Generation (Invited Talk)
Alec Radford
Sat 9:00 a.m. - 9:30 a.m.
Deepfakes: commodification, consequences and countermeasures (Invited Talk)
Giorgio Patrini
Sat 9:30 a.m. - 9:45 a.m.
AI Art Gallery Overview (Introduction)
Luba Elliott
Sat 9:45 a.m. - 10:30 a.m.
Coffee Break (Break)
Sat 10:30 a.m. - 11:00 a.m.
Artist Lightning Talks (Spotlight)
Joanne Hastie, Maja Petric, Vibert Thio
Sat 10:30 a.m. - 11:00 a.m.
Yann LeCun (?) (Invited Talk)
Sat 11:00 a.m. - 11:10 a.m.
Neural Painters: A learned differentiable constraint for generating brushstroke paintings (Contributed Talk)
Rei Nakano
Sat 11:10 a.m. - 11:20 a.m.
Transform the Set: Memory Attentive Generation of Guided and Unguided Image Collages (Contributed Talk)
Nikolay Jetchev, Roland Vollgraf
Sat 11:20 a.m. - 11:30 a.m.
Paper Dreams: An Interactive Interface for Generative Visual Expression (Contributed Talk)
Guillermo Bernal, Lily Zhou
Sat 11:30 a.m. - 11:40 a.m.
Deep reinforcement learning for 2D soft body locomotion (Contributed Talk)
Junior Rojas
Sat 11:40 a.m. - 11:50 a.m.
Towards Sustainable Architecture: 3D Convolutional Neural Networks for Computational Fluid Dynamics Simulation and Reverse Design Workflow (Contributed Talk)
Josef Musil
Sat 11:50 a.m. - 12:00 p.m.
Human and GAN collaboration to create haute couture dress (Contributed Talk)
Jun Seita, Tatsuki Koga
Sat 12:00 p.m. - 1:30 p.m.
Lunch Break (Break)
Sat 1:30 p.m. - 2:30 p.m.
Poster Session 1 (Poster Session)
Han-Hung Lee, Asir Saeed, Terence Broad, Jon Gillick, Aaron Hertzmann, Gunjan Aggarwal, Eun Jee Sung, Alex Champandard, Junghyun Park, John Mellor, Vincent Herrmann, Da Gin Wu, Seri Lee, Park Jieun, TaeJae Han, Wonseok Jung, Seungil Kim
Sat 2:30 p.m. - 3:00 p.m.
How to Chain Trip (Invited Talk)
Claire Evans, Jona Bechtolt, Rob Kieswetter
Sat 3:00 p.m. - 3:30 p.m.
Sougwen Chung (Invited Talk)
Sougwen Chung
Sat 3:30 p.m. - 4:15 p.m.
Coffee Break (Break)
Sat 4:15 p.m. - 4:25 p.m.
MidiMe: Personalizing a MusicVAE model with user data (Contributed Talk)
Monica Dinculescu
Sat 4:25 p.m. - 4:35 p.m.
First Steps Towards Collaborative Poetry Generation (Contributed Talk)
Dave Uthus, Maria Voitovich
Sat 4:35 p.m. - 5:00 p.m.
Panel Discussion (Panel)
Sat 5:00 p.m. - 6:00 p.m.
Poster Session 2 (Poster Session)
Mayur Saxena, Nicholas Frosst, Vivien Cabannes, Gene Kogan, Austin Dill, Anurag Sarkar, Joel Ruben Antony Moniz, Vibert Thio, Scott Sievert, Lia Coleman, Frederik De Bleser, Brian Quanz, Jonathon Kereliuk, Panos Achlioptas, Mohamed Elhoseiny, Songwei Ge, Aidan Gomez, Jamie Brew
Sat 5:05 p.m. - 6:00 p.m.
Artwork (Demonstration)
Helena Sarin, Anthony Bourached, CJ Carr, Zack Zukowski, Aven Le Zhou, Katerina Malakhova, Maja Petric, Tom Laurenzo, Elle O'Brien, Matthew Wegner, Yuma Kishi, Sehmon Burnam

Author Information

Luba Elliott (elluba.com)

Luba Elliott is a curator, artist and researcher specialising in artificial intelligence in the creative industries. She is currently working to educate and engage the broader public about the latest developments in creative AI through monthly meetups, talks and tech demonstrations. As curator, she organised workshops and exhibitions on art and AI for The Photographers’ Gallery, the Leverhulme Centre for the Future of Intelligence and Google. Prior to that, she worked in start-ups, including the art collector database Larry’s List. She obtained her undergraduate degree in Modern Languages at the University of Cambridge and has a certificate in Design Thinking from the Hasso-Plattner-Institute D-school in Potsdam.

Sander Dieleman (DeepMind)
Adam Roberts (Google Brain)
Jesse Engel (Google Brain)
Tom White (Victoria University of Wellington School of Design)

Tom is a New Zealand-based artist investigating machine perception. His current work focuses on creating physical artworks that highlight how machines “see” and thus how they think, suggesting that these systems are capable of abstraction and conceptual thinking. He has exhibited computer-based artwork internationally over the past 25 years, with themes of artificial intelligence, interactivity, and computational creativity. He is currently a lecturer and researcher at the Victoria University of Wellington School of Design, where he teaches students the creative potential of computer programming and artificial intelligence.

Rebecca Fiebrink (Goldsmiths University of London)
Parag Mital (HyperSurfaces, LTD)

Parag K. Mital (US) is an artist and interdisciplinary researcher obsessed with the nature of information, representation, and attention. Using film, eye-tracking, EEG, and fMRI recordings, he has worked on computational models of audiovisual perception from the perspective of both robots and humans, often revealing the disjunction between the two through generative film experiences, augmented reality hallucinations, and expressive control of large audiovisual corpora. Through this process, he balances his scientific and artistic practices, with each reflecting on the other: the science driving the theories, and the artwork redefining the questions asked within the research. His work has been exhibited internationally, including at the Prix Ars Electronica, ACM Multimedia, the Victoria & Albert Museum, London’s Science Museum, the Oberhausen Short Film Festival, and the British Film Institute, and featured in Fast Company, BBC, The New York Times, CreativeApplications.Net, and CreateDigitalMotion.

Christine Payne (OpenAI)
Nao Tokui (Keio University)

More from the Same Authors