Workshop: Vision Transformers: Theory and applications Fri 9 Dec 01:00 a.m.
Transformer models have demonstrated excellent performance on a diverse set of computer vision applications ranging from classification to segmentation on various data modalities such as images, videos, and 3D data. The goal of this workshop is to bring together computer vision and machine learning researchers working towards advancing the theory, architecture, and algorithmic design for vision transformer models, as well as the practitioners utilizing transformer models for novel applications and use cases.
The workshop’s motivation is to narrow the gap between research advancements in transformer design and the applications that utilize transformers for computer vision. The workshop also aims to widen the adoption of transformer models in vision-related industrial applications. We are interested in papers reporting experimental results on the use of transformers for any computer vision application, the challenges faced, and the corresponding mitigation strategies, on topics including, but not limited to, image classification, object detection, segmentation, human-object interaction detection, and scene understanding from 3D, video, and multimodal inputs.
Workshop: Challenges in Deploying and Monitoring Machine Learning Systems Fri 9 Dec 03:00 a.m.
The goal of this event is to bring together people from different communities who share a common interest in the deployment of machine learning systems.
With the dramatic rise of companies dedicated to providing machine learning software-as-a-service tools, machine learning has become an increasingly accessible tool for solving real-world problems in many industrial and social sectors. As the number of deployments grows, so does the number of known challenges and hurdles that practitioners face in ensuring that deployed machine learning systems continually deliver good performance. Such challenges can lie in adapting ML algorithms to concrete use cases, data discovery and quality, maintenance of production ML systems, as well as ethics.
Workshop on neuro Causal and Symbolic AI (nCSI) Fri 9 Dec 06:00 a.m.
Understanding causal interactions is central to human cognition and thereby a central quest in science, engineering, business, and law. Developmental psychology has shown that children explore the world in a similar way to how scientists do, asking questions such as “What if?” and “Why?” AI research aims to replicate these capabilities in machines. Deep learning in particular has brought about powerful tools for function approximation by means of end-to-end trainable deep neural networks. This capability has been corroborated by tremendous success in countless applications. However, their lack of interpretability and reasoning capabilities proves to be a hindrance to building systems of human-like ability. Therefore, enabling causal reasoning capabilities in deep learning is of critical importance for research on the path towards human-level intelligence. First steps towards neural-causal models exist and promise a vision of AI systems that perform causal inferences as efficiently as modern-day neural models. Similarly, classical symbolic methods are being revisited and reintegrated into current systems to allow for reasoning capabilities beyond pure pattern recognition. The Pearlian formalization of causality has revealed a theoretically sound and practically strict hierarchy of reasoning that serves as a helpful benchmark for evaluating the reasoning capabilities of neuro-symbolic systems.
Our aim is to bring together researchers interested in the integration of research areas in artificial intelligence (general machine and deep learning, symbolic and object-centric methods, and logic) with rigorous formalizations of causality with the goal of developing next-generation AI systems.
Workshop: The Symbiosis of Deep Learning and Differential Equations II Fri 9 Dec 06:00 a.m.
In recent years, there has been a rapid increase of machine learning applications in computational sciences, with some of the most impressive results at the interface of deep learning (DL) and differential equations (DEs). DL techniques have been used in a variety of ways to dramatically enhance the effectiveness of DE solvers and computer simulations. These successes have widespread implications, as DEs are among the most well-understood tools for the mathematical analysis of scientific knowledge, and they are fundamental building blocks for mathematical models in engineering, finance, and the natural sciences. Conversely, algorithms based on DEs--such as neural differential equations and continuous-time diffusion models--have themselves proven to be successful deep learning models. Moreover, theoretical tools from DE analysis have been used to glean insights into the expressivity and training dynamics of mainstream deep learning algorithms.
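As a toy illustration of the neural-differential-equation idea mentioned above, the sketch below treats a small neural network as the vector field of an ODE and integrates the hidden state with a fixed-step Euler solver. All names, sizes, and the choice of solver are assumptions made for illustration; practical implementations use adaptive solvers and backpropagate through (or adjoint-differentiate) the integration.

```python
import numpy as np

# Minimal neural ODE sketch (illustrative only; weights are random, not trained).
# The hidden state h(t) evolves as dh/dt = f_theta(h, t), where f_theta is a
# small two-layer MLP; we integrate with forward Euler for simplicity.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(4, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.1, size=(4, 4)), np.zeros(4)

def f_theta(h, t):
    """Learned vector field: a two-layer MLP (t unused in this toy example)."""
    return np.tanh(h @ W1 + b1) @ W2 + b2

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=100):
    """Integrate dh/dt = f(h, t) from t0 to t1 with fixed-step forward Euler."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * f(h, t0 + i * dt)
    return h

h0 = np.ones(4)                      # input, embedded as the initial state
h1 = odeint_euler(f_theta, h0)       # "output" of the continuous-depth model
```

The continuous-depth view makes the connection to DEs explicit: a residual network step h + dt * f(h) is one Euler step, and taking dt to zero yields the ODE the solver approximates.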
This workshop will aim to bring together researchers with backgrounds in computational science and deep learning to encourage intellectual exchanges, cultivate relationships and accelerate research in this area. The scope of the workshop spans topics at the intersection of DL and DEs, including theory of DL and DEs, neural differential equations, solving DEs with neural networks, and more.
Workshop: Learning Meaningful Representations of Life Fri 9 Dec 06:30 a.m.
All events will be in a non-NeurIPS Zoom and on Gather.Town, without embedded streaming. Links below.
Workshop: HCAI@NeurIPS 2022, Human Centered AI Fri 9 Dec 07:00 a.m.
Workshop: Empowering Communities: A Participatory Approach to AI for Mental Health Fri 9 Dec 08:40 a.m.
Mental illness is the complex product of biological, psychological and social factors that foreground issues of under-representation, institutional and societal inequalities, bias and intersectionality in determining the outcomes for people affected by these disorders – the very same priorities that AI/ML fairness has begun to attend to in the past few years.
Despite the history of impoverished material investment in mental health globally, in the past decade, research practices in mental health have begun to embrace patient and citizen activism and the field has emphasised stakeholder (patients and public) participation as a central and absolutely necessary component of basic, translational and implementation science. This positions mental healthcare as something of an exemplar of participatory practices in healthcare from which technologists, engineers and scientists can learn.
The aim of the workshop is to address sociotechnical issues in healthcare AI/ML that are idiosyncratic to mental health.
Uniquely, this workshop will invite and bring together practitioners and researchers rarely found together “in the same room”, including:
- Under-represented groups with special interest in mental health and illness
- Clinical psychiatry, psychology and allied mental health professions
- Technologists, scientists and engineers from the machine learning communities
We will create an open, dialogue-focused exchange of expertise to advance mental health using data science and AI/ML with the expected impact of addressing the aforementioned issues and attempting to develop consensus on the open challenges.
Workshop: Trustworthy and Socially Responsible Machine Learning Fri 9 Dec 08:45 a.m.
To address the negative societal impacts of ML, researchers have looked into different principles and constraints to ensure trustworthy and socially responsible machine learning systems. This workshop makes a first attempt at bridging the gap between the security, privacy, fairness, ethics, game theory, and machine learning communities and aims to discuss the principles and experiences of developing trustworthy and socially responsible machine learning systems. The workshop also focuses on how future researchers and practitioners should prepare themselves for reducing the risks of unintended behaviors of sophisticated ML models.
This workshop aims to bring together researchers interested in the emerging and interdisciplinary field of trustworthy and socially responsible machine learning from a broad range of disciplines, each bringing a different perspective on the problem. We attempt to highlight recent related work from different communities, clarify the foundations of trustworthy machine learning, and chart out important directions for future work and cross-community collaborations.
Workshop: Graph Learning for Industrial Applications: Finance, Crime Detection, Medicine and Social Media Fri 9 Dec 08:50 a.m.
Graph structures provide unique opportunities for representing complex systems that are challenging to model otherwise, owing to complexities such as a large number of entities, multiple entity types, different relationship types, and diverse patterns.
This makes graphs and graph-based solutions applicable to a wide array of industrial applications. In financial services, graph representations are used to model markets’ transactional systems and detect financial crime. In the healthcare field, knowledge graphs have gained traction as the best way of representing the interdisciplinary scientific knowledge across biology, chemistry, pharmacology, toxicology, and medicine. By mining scientific literature and combining it with various data sources, these knowledge graphs provide an up-to-date framework for both human and computer intelligence to generate new scientific hypotheses, drug strategies, and ideas.
In addition to the benefits of graph representation, graph-native machine-learning solutions such as graph neural networks and graph convolutional networks have been implemented effectively in many industrial systems. In finance, graph dynamics have been studied to capture emerging phenomena in volatile markets. In healthcare, these techniques have extended traditional network analysis approaches to enable link prediction. A recent example was BenevolentAI’s knowledge-graph-based prediction that baricitinib (now in clinical trials), a rheumatoid arthritis drug by Eli Lilly, could mitigate COVID-19’s “cytokine storm”.
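As a minimal illustration of the message-passing idea behind the graph neural networks mentioned above, the sketch below applies one graph-convolution layer in the common symmetric-normalization form H' = σ(D^{-1/2}(A + I)D^{-1/2} H W). The toy graph, feature matrix, and weight values are assumptions for illustration, not a trained model.

```python
import numpy as np

# One graph-convolution (message-passing) layer on a toy 4-node graph.
# Each node aggregates degree-normalized features from its neighbors
# (plus itself, via a self-loop) and applies a shared linear map and ReLU.

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # adjacency: node 1 is a hub
H = np.eye(4)                                # one-hot node features
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))       # layer weights (random, untrained)

A_hat = A + np.eye(4)                        # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU
```

Stacking such layers lets information propagate over multi-hop neighborhoods, which is what enables tasks like the link prediction mentioned above: a score for a candidate edge can be computed from the learned embeddings of its two endpoints.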
Graph representations allow researchers to model inductive biases, encode domain expertise, combine explicit knowledge with latent semantics, and mine patterns at scale. This facilitates explainability, robustness, transparency, and adaptability—aspects which are all uniquely important to the financial services industry as well as the (bio)medical domain. Recent work on numeracy, tabular data modeling, multimodal reasoning, and differential analysis increasingly relies on graph-based learning to improve performance and generalizability. Additionally, many financial datasets naturally lend themselves to graph representation—from supply chains and shipping routes to investment networks and business hierarchies. Similarly, much of the healthcare space is best described by complex networks, from the micro level of chemical synthesis protocols and biological pathways to the macro level of public health.
In recent years, knowledge graphs have shown promise in furthering the capabilities of graph representations and learning techniques with unique opportunities such as reasoning. Reasoning over knowledge graphs enables exciting possibilities in complementing the pattern detection capabilities of the traditional machine learning solutions with interpretability and reasoning potential.
This path forward highlights the importance of graphs in the future of AI and machine learning systems. This workshop highlights the current and emerging opportunities from the perspective of industrial applications such as financial services, healthcare, (bio)medicine, and crime detection. The workshop is an opportunity for academic and industrial AI researchers to come together and explore shared challenges, new topics, and emerging opportunities.
Workshop on Machine Learning Safety Fri 9 Dec 09:00 a.m.
Designing systems to operate safely in real-world settings is a topic of growing interest in machine learning. As ML becomes more capable and widespread, long-term and long-tail safety risks will grow in importance. To make the adoption of ML more beneficial, various aspects of safety engineering and oversight need to be proactively addressed by the research community. This workshop will bring together researchers from machine learning communities to focus on research topics in Robustness, Monitoring, Alignment, and Systemic Safety.
* Robustness is designing systems to be reliable in the face of adversaries and highly unusual situations.
* Monitoring is detecting anomalies, malicious use, and discovering unintended model functionality.
* Alignment is building models that represent and safely optimize difficult-to-specify human values.
* Systemic Safety is using ML to address broader risks related to how ML systems are handled, such as defending against cyberattacks, facilitating cooperation, and improving the decision-making of public servants.
5th Robot Learning Workshop: Trustworthy Robotics Fri 9 Dec 09:00 a.m.
Machine learning (ML) has been one of the premier drivers of recent advances in robotics research and has begun to impact real-world robotic applications in unstructured and human-centric environments, such as transportation, healthcare, and manufacturing. At the same time, robotics has been a key motivation for numerous research problems in artificial intelligence, from efficient algorithms to robust generalization of decision models. However, considerable obstacles remain to fully leveraging state-of-the-art ML in real-world robotics applications. Before capable robots equipped with ML models can be deployed in real-world domains that interface with humans (e.g., autonomous vehicles and tele-operated or assistive robots), guarantees on the robustness of these models and analysis of their social implications are required.
To support the development of robots that are safely deployable among humans, the field must consider trustworthiness as a central aspect in the development of real-world robot learning systems. Unlike many other applications of ML, the combined complexity of physical robotic platforms and learning-based perception-action loops presents unique technical challenges. These challenges include concrete technical problems such as very high performance requirements, explainability, predictability, verification, uncertainty quantification, and robust operation in dynamically distributed, open-set domains. Since robots are developed for use in human environments, in addition to these technical challenges, we must also consider the social aspects of robotics such as privacy, transparency, fairness, and algorithmic bias. Both technical and social challenges also present opportunities for robotics and ML researchers alike. Contributing to advances in the aforementioned sub-fields promises to have an important impact on real-world robot deployment in human environments, building towards robots that use human feedback, indicate when their model is uncertain, and are safe to operate autonomously in safety-critical settings such as healthcare and transportation.
This year’s robot learning workshop aims at discussing unique research challenges from the lens of trustworthy robotics. We adopt a broad definition of trustworthiness that highlights different application domains and the responsibility of the robotics and ML research communities to develop “robots for social good.” Bringing together experts with diverse backgrounds from the ML and robotics communities, the workshop will offer new perspectives on trust in the context of ML-driven robot systems.
Scope of contributions:
Specific areas of interest include but are not limited to:
* epistemic uncertainty estimation in robotics;
* explainable robot learning;
* domain adaptation and distribution shift in robot learning;
* multi-modal trustworthy sensing and sensor fusion;
* safe deployment for applications such as agriculture, space, science, and healthcare;
* privacy-aware robotic perception;
* information system security in robot learning;
* learning from offline data and safe on-line learning;
* simulation-to-reality transfer for safe deployment;
* robustness and safety evaluation;
* certifiability and performance guarantees;
* robotics for social good;
* safe robot learning with humans in the loop;
* algorithmic bias in robot learning;
* ethical robotics.
Workshop: Tackling Climate Change with Machine Learning Fri 9 Dec 09:00 a.m.
The focus of this workshop is the use of machine learning to help address climate change, encompassing mitigation efforts (reducing greenhouse gas emissions), adaptation measures (preparing for unavoidable consequences), and climate science (our understanding of the climate and future climate predictions). Specifically, we aim to: (1) showcase high-impact applications of ML to climate change mitigation, adaptation, and climate science, (2) discuss related research directions to which the ML community can contribute, (3) brainstorm mechanisms to scale early academic research to successful, viable deployments, and (4) encourage fruitful collaboration between the ML community and a diverse set of researchers and practitioners from climate change-related fields. Building on our past workshops on this topic, this workshop particularly aims to explore the theme of climate change-informed metrics for AI, focusing both on (a) the domain-specific metrics by which AI systems should be evaluated when used as a tool for climate action, and (b) the climate change-related implications of using AI more broadly.
Workshop on Machine Learning for Creativity and Design Fri 9 Dec 09:15 a.m.
Deep Reinforcement Learning Workshop Fri 9 Dec 10:25 a.m.
In recent years, the use of deep neural networks as function approximators has enabled researchers to extend reinforcement learning techniques to solve increasingly complex control tasks. The emerging field of deep reinforcement learning has led to remarkable empirical results in rich and varied domains like robotics, strategy games, and multi-agent interactions. This workshop will bring together researchers working at the intersection of deep learning and reinforcement learning, and it will help interested researchers outside of the field gain a high-level view about the current state of the art and potential directions for future contributions.
Workshop: Cultures of AI and AI for Culture Fri 9 Dec 11:00 a.m.
Panels 1b and 2b will be hosted in a separate Zoom room; the links and passwords are listed below.
Contributed Panel 1b: Frameworks of AI/Culture entanglement
Panel Zoom Link:
https://us06web.zoom.us/j/85234340757?pwd=TEw1UkpYbmZWQktLSjc5M241WHd6QT09
Password: fishvale
Contributed Panel 2b: Theorizing AI/Culture entanglement
Panel Zoom Link:
https://us06web.zoom.us/j/85234340757?pwd=TEw1UkpYbmZWQktLSjc5M241WHd6QT09
Password: fishvale