Workshop
Fri Dec 08 08:00 AM -- 06:30 PM (PST) @ 202
Machine Deception
Ian Goodfellow · Tim Hwang · Bryce Goodman · Mikel Rodriguez
Machine deception refers to the capacity of machine learning systems to manipulate human and machine agents into believing, acting upon, or otherwise accepting false information. The development of machine deception has had a long, foundational, and under-appreciated role in shaping research in the field of artificial intelligence. Thought experiments such as Alan Turing’s eponymous “Turing test” (in which an automated system attempts to deceive a human judge into believing it is a human interlocutor) and Searle’s “Chinese room” (in which a human operator creates the false impression of understanding in a machine) are simultaneously exemplars of machine deception and among the most famous and influential concepts in the field of AI.
As the field of machine learning advances, machine deception seems poised to give rise to a host of practical opportunities and concerns. Machine deception has many benign and beneficial applications: chatbots designed to mimic human agents offer technical support and even provide therapy at a cost and scale that may not otherwise be achievable. On the other hand, the rise of techniques that leverage bots and other autonomous agents to manipulate and shape political speech online has put machine deception in the political spotlight and raised fundamental questions about our ability to preserve truth in the digital domain. These concerns are amplified by recent demonstrations of machine learning techniques that synthesize hyper-realistic manipulations of audio and video.
The workshop will bring together research at the forefront of machine deception, including:
Machine-machine deception: Where a machine agent deceives another machine agent, e.g., the use of “bot farms” that automate posting on social media platforms to manipulate content ranking algorithms, or of evolutionary algorithms that generate images that “fool” deep neural networks.
Human-machine deception: Where a human agent deceives a machine agent, e.g., the use of human “troll farms” to manipulate content ranking algorithms, or the use of adversarial examples to exploit the fragility of autonomous systems (e.g., stickers on stop signs that mislead self-driving cars, or printed eyeglass frames that defeat facial recognition); a minimal illustrative sketch of such an attack follows this list.
Machine-human deception: Where a machine agent is leveraged to deceive a human agent, e.g., the use of GANs to produce realistic manipulations of audio and video content.
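As a concrete illustration of the adversarial-example attacks mentioned above, here is a minimal sketch of the fast gradient sign method (FGSM; Goodfellow et al., 2015), which nudges an image’s pixels in the direction that increases a classifier’s loss. The victim model (torchvision’s resnet18), the perturbation budget epsilon, and the assumption that pixel values lie in [0, 1] are illustrative choices, not part of the workshop material.

```python
# Minimal FGSM sketch: craft a small perturbation that pushes a
# classifier toward misreading an image. The model and epsilon below
# are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True).eval()  # assumed victim model

def fgsm_attack(image: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of `image` (shape (1, 3, H, W),
    pixels in [0, 1]) that raises the loss on the true `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Take one signed-gradient step that increases the loss, then
    # clip back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

With a small epsilon the perturbation is typically imperceptible to a human viewer, yet the model’s prediction can flip; that asymmetry between human and machine perception is what makes such attacks a form of deception.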
Although the workshop will primarily focus on the technical aspects of machine deception, submissions from the fields of law, policy, the social sciences, and psychology are also encouraged. It is envisaged that this interdisciplinary forum will both shine a light on what is possible with today’s state-of-the-art tools and provide instructive guidance for technologists and policy-makers going forward.