The workshop is a one-day event with invited speakers, oral presentations, and posters. The event brings together faculty, graduate students, research scientists, and engineers for an opportunity to connect and exchange ideas. There will be a panel discussion and a mentoring session to discuss current research trends and career choices in artificial intelligence and machine learning. While all presenters identify primarily as Latinx, everyone is invited to attend.
Tue 10:00 a.m. - 10:30 a.m.
|
Welcome Session
(
Initial Remarks
)
SlidesLive Video » |
Maria Luisa Santiago 🔗 |
Tue 10:31 a.m. - 11:00 a.m.
|
Meditations on First Deployment: A Practical Guide to Responsible Machine Learning
(
Keynote Presentation
)
SlidesLive Video » As the impact of data science & engineering reaches ever farther and wider, our professional responsibility as researchers and practitioners becomes more critical to society. The production data-driven systems we research, design, build and operate often bring inherent adversities with complex technical, societal and even ethical challenges. Tackling these challenges requires us to go beyond the algorithms and leverage diverse, cross-functional collaboration that often extends beyond a single data scientist or developer. In this talk we will cover practical insights from core ethics themes in data science & engineering, including Privacy, Equity, Trust and Transparency. We will dive into the importance of these themes, the growing societal challenges, and how organisations such as The Institute for Ethical AI, The Linux Foundation, the Association for Computing Machinery, NumFOCUS, the IEEE and other relevant academic and industry organisations are contributing to these through standards, policy advice and open-source software initiatives. |
Alejandro Saucedo · Seyed Ali Alavi Bajstan 🔗 |
Tue 11:01 a.m. - 11:15 a.m.
|
Question & Answer Session
(
Q & A
)
|
Alejandro Saucedo 🔗 |
Tue 11:16 a.m. - 11:30 a.m.
|
Lifting the veil on hyper-parameters for value-based deep reinforcement learning
(
Oral
)
SlidesLive Video » Successful applications of deep reinforcement learning (deep RL) combine algorithmic design and careful hyper-parameter selection. The former often comes from iterative improvements over existing algorithms, while the latter is either inherited from prior methods or tuned for the specific method being introduced. Although critical to a method’s performance, the effects of the various hyper-parameter choices are often overlooked in favour of algorithmic advances. In this paper, we perform an initial empirical investigation into a number of often-overlooked hyper-parameters for value-based deep RL agents, demonstrating their varying levels of importance. We conduct this study on a varied set of classic control environments, which helps highlight the effect each environment has on an algorithm’s hyper-parameter sensitivity. |
João Madeira Araújo · Johan Obando Ceron · Pablo Samuel Castro 🔗 |
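The kind of hyper-parameter study described in the entry above can be sketched as a simple grid sweep. This is an illustrative outline only, not the authors' code: train_and_evaluate is a hypothetical placeholder for training a value-based agent (e.g., DQN) on a classic control task and reporting its return, and the grid values are assumptions.

import itertools

def train_and_evaluate(env_id, gamma, learning_rate, target_update_period):
    # Placeholder: a real study would train a DQN-style agent with these
    # hyper-parameters and return its mean evaluation return.
    return 0.0

# Often-overlooked hyper-parameters, swept over a small (assumed) grid.
grid = {
    "gamma": [0.9, 0.99, 0.995],
    "learning_rate": [1e-4, 5e-4, 1e-3],
    "target_update_period": [100, 500, 1000],
}

results = {}
for env_id in ["CartPole-v1", "Acrobot-v1", "MountainCar-v0"]:
    for values in itertools.product(*grid.values()):
        hparams = dict(zip(grid.keys(), values))
        results[(env_id, values)] = train_and_evaluate(env_id, **hparams)
# Comparing `results` across environments exposes per-environment sensitivity.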
Tue 11:30 a.m. - 11:45 a.m.
|
On the Pitfalls of Label Differential Privacy
(
Oral
)
SlidesLive Video » We study the privacy limitations of label differential privacy, which has emerged as an intermediate trust model between local and central differential privacy, where only the label of each training example is protected (and the features are assumed to be public). We show that the guarantees provided by label DP are significantly weaker than they appear, as an adversary can "un-noise" the perturbed labels. Formally, we show that the privacy loss has a close connection with the Jeffreys' divergence between the conditional distributions for positive and negative labels, which allows explicit formulation of the trade-off between utility and privacy in this setting. Our results suggest how to select public features that optimize this trade-off. But we also show that there is no free lunch---instances where label differential privacy guarantees are strong are exactly those where a good classifier does not exist. We complement the negative results with a non-parametric estimator for the true privacy loss, and apply our techniques on large-scale benchmark data to demonstrate how to achieve a desired privacy protection. |
Andres Munoz Medina · Róbert Busa-Fekete · Umar Syed · Sergei Vassilvitskii 🔗 |
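For reference, the Jeffreys' divergence mentioned in the abstract above is the symmetrized KL divergence. Writing $P$ and $Q$ (with densities $p$, $q$) for the conditional feature distributions given positive and negative labels, a standard form, stated here as general background rather than the paper's exact notation, is

\[
J(P, Q) \;=\; \mathrm{KL}(P \,\|\, Q) + \mathrm{KL}(Q \,\|\, P)
\;=\; \int \bigl(p(x) - q(x)\bigr) \log \frac{p(x)}{q(x)} \, dx .
\]

The abstract's point is that the achievable privacy loss of label DP is governed by this quantity: features under which $P$ and $Q$ are nearly indistinguishable (small $J$) give stronger effective privacy, but those are exactly the cases where a good classifier cannot exist.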
Tue 11:45 a.m. - 12:00 p.m.
|
Exploring the Limits of Epistemic Uncertainty Quantification in Low-Shot Settings
(
Oral
)
SlidesLive Video » Uncertainty quantification in neural networks promises to increase the safety of AI systems, but it is not clear how performance might vary with the training set size. In this paper we evaluate seven uncertainty methods on Fashion MNIST and CIFAR10, as we sub-sample and produce varied training set sizes. We find that calibration error and out-of-distribution detection performance strongly depend on the training set size, with most methods being miscalibrated on the test set with small training sets. Gradient-based methods seem to poorly estimate epistemic uncertainty and are the most affected by training set size. We expect our results can guide future research into uncertainty quantification and help practitioners select methods based on their particular available data. |
Matias Valdenegro-Toro 🔗 |
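The calibration error referred to in the abstract above is commonly measured with the expected calibration error (ECE); this standard definition is given here as background and may not be the exact metric used in the paper. Predictions over $n$ test points are binned by confidence into bins $B_1, \dots, B_M$:

\[
\mathrm{ECE} \;=\; \sum_{m=1}^{M} \frac{|B_m|}{n} \,\bigl|\, \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \,\bigr| ,
\]

where $\mathrm{acc}(B_m)$ is the empirical accuracy and $\mathrm{conf}(B_m)$ the mean predicted confidence within bin $B_m$. A well-calibrated model keeps the two close in every bin, so miscalibration on small training sets shows up directly as a large ECE.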
Tue 12:00 p.m. - 12:15 p.m.
|
Q&A Oral presentations
(
Q&A
)
|
Matias Valdenegro-Toro · Andres Munoz Medina · Johan Obando Ceron · Anil Batra 🔗 |
Tue 12:15 p.m. - 12:30 p.m.
|
Lunch Break
|
🔗 |
Tue 12:31 p.m. - 1:01 p.m.
|
Connecting the dots between industry and research
(
Keynote Presentation
)
SlidesLive Video » |
Ana Paula Appel 🔗 |
Tue 1:00 p.m. - 1:15 p.m.
|
Q&A Ana Paula Appel
(
Q&A
)
|
Ana Paula Appel 🔗 |
Tue 1:16 p.m. - 1:21 p.m.
|
Flexible Learning of Sparse Neural Networks via Constrained $L_0$ Regularization
(
Spotlight
)
SlidesLive Video »
We propose to approach the problem of learning $L_0$-sparse networks using a constrained formulation of the optimization problem. This is in contrast to commonly used penalized approaches, which combine the regularization terms additively with the (surrogate) empirical risk. Our experiments demonstrate that we can obtain approximate solutions to the constrained optimization problem with comparable performance to state-of-the-art methods for $L_0$-sparse training. Finally, we discuss how this constrained approach provides greater (hyper-)parameter interpretability and accountability from a practitioner's point of view.
|
Jose Gallego-Posada · Juan Ramirez · Akram Erraqabi 🔗 |
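The contrast between penalized and constrained training described in the abstract above can be written generically as follows; this is a schematic formulation, not necessarily the paper's exact objective. With empirical risk $L(\theta)$ and an expected $L_0$ surrogate $\mathbb{E}\,\|\theta\|_0$, the penalized approach solves

\[
\min_{\theta} \; L(\theta) + \lambda \, \mathbb{E}\,\|\theta\|_0 ,
\]

while the constrained approach solves

\[
\min_{\theta} \; L(\theta) \quad \text{s.t.} \quad \mathbb{E}\,\|\theta\|_0 \le \epsilon ,
\]

typically via a Lagrangian $\min_{\theta} \max_{\lambda \ge 0} \, L(\theta) + \lambda \bigl(\mathbb{E}\,\|\theta\|_0 - \epsilon\bigr)$. The constraint level $\epsilon$ is a target sparsity, which is what makes it more interpretable to a practitioner than a penalty weight $\lambda$.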
Tue 1:21 p.m. - 1:26 p.m.
|
Performance Analysis of Quantum Machine Learning Classifiers
(
Spotlight
)
SlidesLive Video » In recent years, researchers have started looking into data transformations in quantum computation, aiming to see how quantum computing affects the robustness and performance of machine learning methods. Quantum mechanics succeeds in explaining some phenomena where classical formulas failed in the past, and has thus expanded into analytical research fields such as Quantum Machine Learning (QML) over the years. The developing QML discipline has provided solutions to problems that are equivalent (or comparable) to those addressed by classical machine learning, including classification and prediction problems using quantum classifiers. As a result of these factors, quantum classifier analysis has become one of the most important topics in QML. This paper studies four quantum classifiers: Support Vector Classification with Quantum Kernel (SVCQK), Quantum Support Vector Classifier (QSVC), Variational Quantum Classifier (VQC), and Circuit Quantum Neural Network Classifier (CQNNC). We also report case study outcomes and results analysis using generated linearly and non-linearly separable datasets. Our research explores whether quantum information may aid learning or convergence. |
Pablo Rivas · Javier Orduz 🔗 |
Tue 1:26 p.m. - 1:31 p.m.
|
Curating the Twitter Election Integrity Datasets for Better Online Troll Characterization
(
Spotlight
)
SlidesLive Video » In modern days, social media platforms provide accessible channels for the interaction with and immediate reflection of the most important events happening around the world. In this paper, we first present a curated set of datasets that stem from Twitter's Information Operations efforts. More notably, these accounts, which have already been suspended, provide a notion of how state-backed human trolls operate. Second, we present detailed analyses of how these behaviours vary over time, and motivate their use and abstraction in the context of deep representation learning: for instance, to learn and potentially track troll behaviour. We present baselines for such tasks and highlight the differences that may exist within the literature. Finally, we utilize the representations learned for behaviour prediction to classify trolls from "real" users, using a sample of non-suspended active accounts. |
Albert Orozco Camacho · Reihaneh Rabbany 🔗 |
Tue 1:31 p.m. - 1:36 p.m.
|
Investigating generative neural-network models for building pest insect detectors in sticky trap images for the Peruvian horticulture
(
Spotlight
)
SlidesLive Video » Pest insects are a problem in horticulture, so early detection is key for their control. Sticky traps are an inexpensive way to obtain insect samples in crops, but identifying them manually is a time-consuming task. Building computational models to identify insect species in sticky trap images is therefore highly desirable. However, this is a challenging task due to the difficulty in getting sizeable sets of training images. In this paper, we studied the usefulness of three neural network generative models to synthesize pest insect images (DCGAN, WGAN, and VAE) for augmenting the training set and thus facilitate the induction of insect detector models. Experiments with images of seven species of pest insects of the Peruvian horticulture showed that the WGAN and VAE models are able to learn to generate images of such species. It was also found that the synthesized images can help to induce YOLOv5 detectors with significant gains in detection performance compared to not using synthesized data. A demo app that integrates the detector models can be accessed through the URL: https://bit.ly/3uXW0Ee |
Joel Cabrera Rios · Edwin Villanueva 🔗 |
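A minimal sketch of the kind of generator used in DCGAN-style image synthesis is shown below, purely to illustrate the augmentation idea in the entry above; the architecture, image size (64x64 RGB) and latent dimension are assumptions, not the authors' model.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # DCGAN-style generator: latent vector -> 64x64 RGB image (illustrative).
    def __init__(self, latent_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),            # 4x4
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),            # 8x8
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),            # 16x16
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),                # 32x32
            nn.ConvTranspose2d(feat, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                          # 64x64, range [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Sampling synthetic insect images to add to a detector's training set:
gen = Generator()
z = torch.randn(16, 100, 1, 1)
fake_images = gen(z)   # shape: (16, 3, 64, 64)

Synthesized images such as these, once annotated, would be mixed with the real sticky-trap images before training the YOLOv5 detector.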
Tue 1:36 p.m. - 1:41 p.m.
|
A Pharmacovigilance Application of Social Media Mining: An Ensemble Approach for Automated Classification and Extraction of Drug Mentions in Tweets
(
Spotlight
)
SlidesLive Video » Researchers have extensively used social media platforms like Twitter for knowledge discovery purposes, as tweets are considered a wealth of information that provides unique insights. Recent developments have further enabled social media mining for various biomedical tasks such as pharmacovigilance. A first step towards identifying a use-case of Twitter for the pharmacovigilance domain is to extract medication/drug terminologies mentioned in the tweets, which is a challenging task for several reasons. For example, drug mentions in tweets may be incorrectly written, making the identification of these mentions more difficult. In this work, we propose a two-step approach: first, we classify tweets with drug mentions via an ensemble model (containing transformer models); second, we extract drug mentions (along with their span positions) using a text-tagging/dictionary-based approach and a Named Entity Recognition (NER) approach. By comparing these two entity identification approaches, we demonstrate that using only a dictionary-based approach is not enough. |
Luis Alberto Robles Hernandez · Juan Banda 🔗 |
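The dictionary-based extraction step described in the entry above (finding drug mentions along with their span positions) can be illustrated with a simple regex lookup; this is a generic sketch, not the authors' pipeline, and the tiny drug lexicon is made up for the example.

import re

# Toy lexicon; a real system would use a large drug/medication dictionary.
DRUG_LEXICON = ["ibuprofen", "metformin", "acetaminophen", "lisinopril"]

# Word-boundary pattern over the lexicon, longest terms first, case-insensitive.
_pattern = re.compile(
    r"\b(" + "|".join(sorted(map(re.escape, DRUG_LEXICON), key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def extract_drug_mentions(tweet):
    # Return (mention, start, end) spans for dictionary hits in a tweet.
    return [(m.group(0), m.start(), m.end()) for m in _pattern.finditer(tweet)]

print(extract_drug_mentions("Took ibuprofen and Metformin today, still feel off"))
# [('ibuprofen', 5, 14), ('Metformin', 19, 28)]

As the abstract notes, a pure dictionary approach misses misspelled or colloquial drug names, which is why the authors compare it against a learned NER model.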
Tue 1:41 p.m. - 1:46 p.m.
|
A novel stochastic model based on echo state networks for hydrological time series forecasting
(
Spotlight
)
SlidesLive Video » Stochastic Streamflow Models (SSMs) are time series models for precise prediction of hydrological data, useful in hydrologic risk management. Nowadays, deep learning networks receive much consideration in time series forecasting. However, despite their theoretical benefits, they can fail due to drawbacks such as complex architectures, slow convergence and the vanishing gradient problem. In order to alleviate these drawbacks, we propose a new stochastic model for problems that involve stochastic behavior and periodic characteristics. The new model has two components: the first is a type of recurrent neural network embedding the echo state network (ESN) learning mechanism instead of conventional backpropagation; the second adds the uncertainty associated with stationary processes. This model is called the Stochastic Streamflow Model ESN (SSMESN). It was calibrated with time series of monthly discharge data from the MOPEX data set. Preliminary results show that the SSMESN can achieve significant prediction performance and learning speed. This model can be considered a first attempt at applying the echo state network methodology to stochastic processes. |
Edson F. Luque 🔗 |
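Since the core of the model in the entry above is an echo state network with a trained readout, a minimal ESN in NumPy is sketched below for orientation; the reservoir size, spectral radius and ridge readout are generic defaults, not the paper's configuration.

import numpy as np

def train_esn(inputs, targets, n_reservoir=200, spectral_radius=0.9,
              ridge=1e-6, seed=0):
    # Drive a fixed random reservoir with `inputs` and fit a ridge-regression readout.
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling

    # Collect reservoir states (no backpropagation; the reservoir stays fixed).
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    S = np.array(states)

    # Ridge-regression readout: targets ~ S @ W_out
    W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_reservoir), S.T @ targets)
    return W_in, W, W_out

# Example with a toy monthly-discharge-like series: predict the next value.
t = np.arange(240)
series = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.default_rng(1).standard_normal(240)
X, y = series[:-1, None], series[1:, None]
W_in, W, W_out = train_esn(X, y)

The stochastic component mentioned in the abstract (uncertainty from the stationary process) would be added on top of this deterministic ESN prediction.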
Tue 1:41 p.m. - 1:46 p.m.
|
Vehicle Speed Estimation Using Computer Vision And Evolutionary Camera Calibration
(
Spotlight
)
SlidesLive Video » Currently, the standard for vehicle speed estimation in urban areas and on highways is radar or lidar speed signs, which can be costly to buy, install and maintain. In addition, most major cities already operate networks of traffic surveillance cameras that can be utilized for vehicle speed estimation using computer vision. This work designs and implements such a system. Specifically, the proposed method is composed of three main components: first, a camera calibration using a homography computed from point correspondences between the image and world plane selected by the user; second, a YOLOv4 object detector to locate the vehicles; and third, a modified object tracker that estimates vehicle speed. Moreover, for the calibration, a new method is developed specifically for this use case, using an estimation-of-density evolutionary algorithm. This methodology aims at correcting the misalignment between a point in the image plane and the world plane produced by the human operator. In addition, a basic direct linear transformation (DLT) and a random sample consensus robust version of DLT are implemented for comparison. Finally, the results show that the workflow using the proposed homography estimation method reduces the projection error from world to image points by 97%, when compared to the other two methods, and the complete workflow can successfully register speed distributions expected from vehicles in urban traffic and handle dynamic changes in vehicle speed. |
Hector Mejia 🔗 |
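The calibration-and-projection step described in the entry above can be sketched with OpenCV; the point correspondences, frame rate and pixel coordinates below are invented for illustration, and the paper's evolutionary refinement of the correspondences is not reproduced here.

import numpy as np
import cv2

# User-selected correspondences: image pixels -> world-plane metres (example values).
image_pts = np.array([[120, 400], [510, 395], [530, 150], [140, 145]], dtype=np.float32)
world_pts = np.array([[0, 0], [7, 0], [7, 30], [0, 30]], dtype=np.float32)

# Robust homography from image plane to road plane (RANSAC variant of DLT).
H, _ = cv2.findHomography(image_pts, world_pts, cv2.RANSAC, 3.0)

def to_world(pixel_xy):
    # Project a tracked pixel position onto the world (road) plane.
    p = np.array([[pixel_xy]], dtype=np.float32)          # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]           # (x, y) in metres

# Speed from two tracked detections of the same vehicle, a few frames apart.
fps = 30.0
p1, p2 = to_world((300, 380)), to_world((305, 300))      # consecutive track points
dt = 10 / fps                                             # 10 frames apart
speed_kmh = np.linalg.norm(p2 - p1) / dt * 3.6
print(f"estimated speed: {speed_kmh:.1f} km/h")

In the full system these world-plane displacements come from the object tracker attached to the YOLOv4 detections.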
Tue 1:46 p.m. - 2:00 p.m.
|
Q & A Spotlight Presentations
(
Q&A
)
|
Hector Mejia · Juan Banda · Javier Orduz · Joel Cabrera Rios · Reihaneh Rabbany · Jose Gallego-Posada · Edson F. Luque 🔗 |
Tue 2:00 p.m. - 2:01 p.m.
|
Introduction Joaquin Salas
(
Intro
)
|
🔗 |
Tue 2:01 p.m. - 2:31 p.m.
|
Climate Change, Biodiversity Loss, Human Vulnerability: The Role of AI in Challenging Times
(
Keynote Presentation
)
SlidesLive Video » |
Joaquin Salas 🔗 |
Tue 2:31 p.m. - 2:45 p.m.
|
Q&A Joaquin Salas
(
Q&A
)
|
Joaquin Salas 🔗 |
Tue 2:45 p.m. - 3:00 p.m.
|
Closing Remarks
(
Closing
)
SlidesLive Video » |
Andres Munoz Medina 🔗 |
-
|
LatinX Social, Wed 2 a.m. UTC (Tue 6 p.m. Pacific) ( Social - Gathertown ) link » | 🔗 |
-
|
Lifting the veil on hyper-parameters for value-based deep reinforcement learning
(
Poster
)
Successful applications of deep reinforcement learning (deep RL) combine algorithmic design and careful hyper-parameter selection. The former often comes from iterative improvements over existing algorithms, while the latter is either inherited from prior methods or tuned for the specific method being introduced. Although critical to a method’s performance, the effects of the various hyper-parameter choices are often overlooked in favour of algorithmic advances. In this paper, we perform an initial empirical investigation into a number of often-overlooked hyper-parameters for value-based deep RL agents, demonstrating their varying levels of importance. We conduct this study on a varied set of classic control environments, which helps highlight the effect each environment has on an algorithm’s hyper-parameter sensitivity. |
João Madeira Araújo · Johan Obando Ceron · Pablo Samuel Castro 🔗 |
-
|
On the Pitfalls of Label Differential Privacy
(
Poster
)
We study the privacy limitations of label differential privacy, which has emerged as an intermediate trust model between local and central differential privacy, where only the label of each training example is protected (and the features are assumed to be public). We show that the guarantees provided by label DP are significantly weaker than they appear, as an adversary can "un-noise" the perturbed labels. Formally, we show that the privacy loss has a close connection with the Jeffreys' divergence between the conditional distributions for positive and negative labels, which allows explicit formulation of the trade-off between utility and privacy in this setting. Our results suggest how to select public features that optimize this trade-off. But we also show that there is no free lunch---instances where label differential privacy guarantees are strong are exactly those where a good classifier does not exist. We complement the negative results with a non-parametric estimator for the true privacy loss, and apply our techniques on large-scale benchmark data to demonstrate how to achieve a desired privacy protection. |
Andres Munoz Medina · Róbert Busa-Fekete · Umar Syed · Sergei Vassilvitskii 🔗 |
-
|
Exploring the Limits of Epistemic Uncertainty Quantification in Low-Shot Settings
(
Poster
)
Uncertainty quantification in neural networks promises to increase the safety of AI systems, but it is not clear how performance might vary with the training set size. In this paper we evaluate seven uncertainty methods on Fashion MNIST and CIFAR10, as we sub-sample and produce varied training set sizes. We find that calibration error and out-of-distribution detection performance strongly depend on the training set size, with most methods being miscalibrated on the test set with small training sets. Gradient-based methods seem to poorly estimate epistemic uncertainty and are the most affected by training set size. We expect our results can guide future research into uncertainty quantification and help practitioners select methods based on their particular available data. |
Matias Valdenegro-Toro 🔗 |
-
|
Flexible Learning of Sparse Neural Networks via Constrained $L_0$ Regularization
(
Poster
)
link »
We propose to approach the problem of learning $L_0$-sparse networks using a constrained formulation of the optimization problem. This is in contrast to commonly used penalized approaches, which combine the regularization terms additively with the (surrogate) empirical risk. Our experiments demonstrate that we can obtain approximate solutions to the constrained optimization problem with comparable performance to state-of-the-art methods for $L_0$-sparse training. Finally, we discuss how this constrained approach provides greater (hyper-)parameter interpretability and accountability from a practitioner's point of view.
|
Jose Gallego-Posada · Juan Ramirez · Akram Erraqabi 🔗 |
-
|
A novel stochastic model based on echo state networks for hydrological time series forecasting
(
Poster
)
link »
Stochastic Streamflow Models (SSMs) are time series models for precise prediction of hydrological data, useful in hydrologic risk management. Nowadays, deep learning networks receive much consideration in time series forecasting. However, despite their theoretical benefits, they can fail due to drawbacks such as complex architectures, slow convergence and the vanishing gradient problem. In order to alleviate these drawbacks, we propose a new stochastic model for problems that involve stochastic behavior and periodic characteristics. The new model has two components: the first is a type of recurrent neural network embedding the echo state network (ESN) learning mechanism instead of conventional backpropagation; the second adds the uncertainty associated with stationary processes. This model is called the Stochastic Streamflow Model ESN (SSMESN). It was calibrated with time series of monthly discharge data from the MOPEX data set. Preliminary results show that the SSMESN can achieve significant prediction performance and learning speed. This model can be considered a first attempt at applying the echo state network methodology to stochastic processes. |
Edson F. Luque 🔗 |
-
|
Curating the Twitter Election Integrity Datasets for Better Online Troll Characterization
(
Poster
)
link »
In modern days, social media platforms provide accessible channels for the interaction with and immediate reflection of the most important events happening around the world. In this paper, we first present a curated set of datasets that stem from Twitter's Information Operations efforts. More notably, these accounts, which have already been suspended, provide a notion of how state-backed human trolls operate. Second, we present detailed analyses of how these behaviours vary over time, and motivate their use and abstraction in the context of deep representation learning: for instance, to learn and potentially track troll behaviour. We present baselines for such tasks and highlight the differences that may exist within the literature. Finally, we utilize the representations learned for behaviour prediction to classify trolls from "real" users, using a sample of non-suspended active accounts. |
Albert Orozco Camacho · Reihaneh Rabbany 🔗 |
-
|
Investigating generative neural-network models for building pest insect detectors in sticky trap images for the Peruvian horticulture
(
Poster
)
link »
Pest insects are a problem in horticulture, so early detection is key for their control. Sticky traps are an inexpensive way to obtain insect samples in crops, but identifying them manually is a time-consuming task. Building computational models to identify insect species in sticky trap images is therefore highly desirable. However, this is a challenging task due to the difficulty in getting sizeable sets of training images. In this paper, we studied the usefulness of three neural network generative models to synthesize pest insect images (DCGAN, WGAN, and VAE) for augmenting the training set and thus facilitate the induction of insect detector models. Experiments with images of seven species of pest insects of the Peruvian horticulture showed that the WGAN and VAE models are able to learn to generate images of such species. It was also found that the synthesized images can help to induce YOLOv5 detectors with significant gains in detection performance compared to not using synthesized data. A demo app that integrates the detector models can be accessed through the URL: https://bit.ly/3uXW0Ee |
Joel Cabrera Rios · Edwin Villanueva 🔗 |
-
|
Performance Analysis of Quantum Machine Learning Classifiers
(
Poster
)
link »
In recent years, researchers have started looking into data transformations in quantum computation, aiming to see how quantum computing affects the robustness and performance of machine learning methods. Quantum mechanics succeeds in explaining some phenomena where classical formulas failed in the past, and has thus expanded into analytical research fields such as Quantum Machine Learning (QML) over the years. The developing QML discipline has provided solutions to problems that are equivalent (or comparable) to those addressed by classical machine learning, including classification and prediction problems using quantum classifiers. As a result of these factors, quantum classifier analysis has become one of the most important topics in QML. This paper studies four quantum classifiers: Support Vector Classification with Quantum Kernel (SVCQK), Quantum Support Vector Classifier (QSVC), Variational Quantum Classifier (VQC), and Circuit Quantum Neural Network Classifier (CQNNC). We also report case study outcomes and results analysis using generated linearly and non-linearly separable datasets. Our research explores whether quantum information may aid learning or convergence. |
Pablo Rivas · Javier Orduz 🔗 |
-
|
A Pharmacovigilance Application of Social Media Mining: An Ensemble Approach for Automated Classification and Extraction of Drug Mentions in Tweets
(
Poster
)
link »
Researchers have extensively used social media platforms like Twitter for knowledge discovery purposes, as tweets are considered a wealth of information that provides unique insights. Recent developments have further enabled social media mining for various biomedical tasks such as pharmacovigilance. A first step towards identifying a use-case of Twitter for the pharmacovigilance domain is to extract medication/drug terminologies mentioned in the tweets, which is a challenging task for several reasons. For example, drug mentions in tweets may be incorrectly written, making the identification of these mentions more difficult. In this work, we propose a two-step approach: first, we classify tweets with drug mentions via an ensemble model (containing transformer models); second, we extract drug mentions (along with their span positions) using a text-tagging/dictionary-based approach and a Named Entity Recognition (NER) approach. By comparing these two entity identification approaches, we demonstrate that using only a dictionary-based approach is not enough. |
Luis Alberto Robles Hernandez · Juan Banda 🔗 |
-
|
Vehicle Speed Estimation Using Computer Vision And Evolutionary Camera Calibration
(
Poster
)
link »
Currently, the standard for vehicle speed estimation in urban areas and on highways is radar or lidar speed signs, which can be costly to buy, install and maintain. In addition, most major cities already operate networks of traffic surveillance cameras that can be utilized for vehicle speed estimation using computer vision. This work designs and implements such a system. Specifically, the proposed method is composed of three main components: first, a camera calibration using a homography computed from point correspondences between the image and world plane selected by the user; second, a YOLOv4 object detector to locate the vehicles; and third, a modified object tracker that estimates vehicle speed. Moreover, for the calibration, a new method is developed specifically for this use case, using an estimation-of-density evolutionary algorithm. This methodology aims at correcting the misalignment between a point in the image plane and the world plane produced by the human operator. In addition, a basic direct linear transformation (DLT) and a random sample consensus robust version of DLT are implemented for comparison. Finally, the results show that the workflow using the proposed homography estimation method reduces the projection error from world to image points by 97%, when compared to the other two methods, and the complete workflow can successfully register speed distributions expected from vehicles in urban traffic and handle dynamic changes in vehicle speed. |
Hector Mejia 🔗 |
Author Information
Maria Luisa Santiago (Accel.ai)
Andres Munoz Medina (Google)
Laura Montoya (Accel AI)
Karla Caballero Barajas (LatinX)
Isabel Metzger (Latinx In AI)
Jose Gallego-Posada (Mila, Université de Montréal)
Juan Banda (Georgia State University)
Gabriela Vega (Universidad Peruana de Ciencias Aplicadas)
Amanda Duarte (Universitat Politècnica de Catalunya (UPC))
Patrick Feeney (Tufts University)
Lourdes Ramírez Cerna (National University of Trujillo)
I graduated in Computer Science from the National University of Trujillo (2007-2012) and hold a master's degree in Computer Science from the Federal University of Ouro Preto (2012-2014). I have experience in Computer Science, focusing on Image Processing, Computer Vision, and Pattern Recognition.
Walter M Mayor (Universidad Autónoma de Occidente)
Omar U. Florez (Capital One)
Senior Research Manager in Conversational AI at Capital One
Rosina Weber (Drexel University)
Rocio Zorrilla (National Laboratory of Scientific Computing)
More from the Same Authors
-
2021 : A Joint Exponential Mechanism for Differentially Private Top-k Set »
Andres Munoz Medina · Matthew Joseph · Jennifer Gillenwater · Monica Ribero Diaz -
2021 : Population Level Privacy Leakage in Binary Classification with Label Noise »
Róbert Busa-Fekete · Andres Munoz Medina · Umar Syed · Sergei Vassilvitskii -
2021 : On the Pitfalls of Label Differential Privacy »
Andres Munoz Medina · Róbert Busa-Fekete · Umar Syed · Sergei Vassilvitskii -
2021 : Flexible Learning of Sparse Neural Networks via Constrained $L_0$ Regularization »
Jose Gallego-Posada · Juan Ramirez · Akram Erraqabi -
2021 : A Pharmacovigilance Application of Social Media Mining: An Ensemble Approach for Automated Classification and Extraction of Drug Mentions in Tweets »
Luis Alberto Robles Hernandez · Juan Banda -
2022 Poster: Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints »
Jose Gallego-Posada · Juan Ramirez · Akram Erraqabi · Yoshua Bengio · Simon Lacoste-Julien -
2022 Poster: Private and Communication-Efficient Algorithms for Entropy Estimation »
Gecia Bravo-Hermsdorff · Róbert Busa-Fekete · Mohammad Ghavamzadeh · Andres Munoz Medina · Umar Syed -
2022 Affinity Workshop: LatinX in AI »
Maria Luisa Santiago · Juan Banda · CJ Barberan · MIGUEL GONZALEZ-MENDOZA · Caio Davi · Sara Garcia · Jorge Diaz · Fanny Nina Paravecino · Carlos Miranda · Gissella Bejarano Nicho · Fabian Latorre · Andres Munoz Medina · Abraham Ramos · Laura Montoya · Isabel Metzger · Andres Marquez · Miguel Felipe Arevalo-Castiblanco · Jorge Mendez · Karla Caballero · Atnafu Lambebo Tonja · Germán Olivo · Karla Caballero Barajas · Francisco Zabala -
2021 Affinity Workshop: Indigenous in AI Workshop »
Mason Grimshaw · Michael Running Wolf · Patrick Feeney -
2021 Affinity Workshop: Queer in AI Workshop 2 »
Claas Voelcker · Arjun Subramonian · Vishakha Agrawal · Luca Soldaini · Pan Xu · Pranav A · William Agnew · Juan Pajaro Velasquez · Yanan Long · Sharvani Jha · Ashwin S · Mary Anne Smart · Patrick Feeney · Ruchira Ray -
2021 Social: Latinx in AI Social »
Andres Munoz Medina · Maria Luisa Santiago -
2021 : Closing Remarks »
Andres Munoz Medina -
2021 : Q & A Spotlight Presentations »
Hector Mejia · Juan Banda · Javier Orduz · Joel Cabrera Rios · Reihaneh Rabbany · Jose Gallego-Posada · Edson F. Luque -
2021 : Q&A Oral presentations »
Matias Valdenegro-Toro · Andres Munoz Medina · Johan Obando Ceron · Anil Batra -
2021 : Introduction to keynote »
Maria Luisa Santiago -
2021 : Welcome Session »
Maria Luisa Santiago -
2021 Affinity Workshop: Queer in AI Workshop 1 »
Claas Voelcker · Arjun Subramonian · Vishakha Agrawal · Luca Soldaini · Pan Xu · Pranav A · William Agnew · Juan Pajaro Velasquez · Yanan Long · Sharvani Jha · Ashwin S · Mary Anne Smart · Patrick Feeney · Ruchira Ray -
2020 : Laura Montoya Q&A »
Laura Montoya · William Agnew -
2020 : Laura Montoya: "Beyond the Binary" »
Laura Montoya -
2020 Affinity Workshop: LXAI Research @ NeurIPS 2020 »
Maria Luisa Santiago · Laura Montoya · Pedro Braga · Karla Caballero Barajas · Sergio H Garrido Mejia · Eduardo Moya · Vinicius Caridá · Ariel Ruiz-Garcia · Ivan Arraut · Juan Banda · Josue Caro · Gissella Bejarano Nicho · Fabian Latorre · Carlos Miranda · Ignacio Lopez-Francos -
2019 : Poster Session + Lunch »
Maxwell Nye · Robert Kim · Toby St Clere Smithe · Takeshi D. Itoh · Omar U. Florez · Vesna G. Djokic · Sneha Aenugu · Mariya Toneva · Imanol Schlag · Dan Schwartz · Max Raphael Sobroza Marques · Pravish Sainath · Peng-Hsuan Li · Rishi Bommasani · Najoung Kim · Paul Soulos · Steven Frankland · Nadezhda Chirkova · Dongqi Han · Adam Kortylewski · Rich Pang · Milena Rabovsky · Jonathan Mamou · Vaibhav Kumar · Tales Marra -
2019 : Poster lighting round »
Yinhe Zheng · Anders Søgaard · Abdelrhman Saleh · Youngsoo Jang · Hongyu Gong · Omar U. Florez · Margaret Li · Andrea Madotto · The Tung Nguyen · Ilia Kulikov · Arash einolghozati · Yiru Wang · Mihail Eric · Victor Petrén Bach Hansen · Nurul Lubis · Yen-Chen Wu -
2019 : Poster session »
Jindong Gu · Alice Xiang · Atoosa Kasirzadeh · Zhiwei Han · Omar U. Florez · Frederik Harder · An-phi Nguyen · Amir Hossein Akhavan Rahnama · Michele Donini · Dylan Slack · Junaid Ali · Paramita Koley · Michiel Bakker · Anna Hilgard · Hailey James · Gonzalo Ramos · Jialin Lu · Jingying Yang · Margarita Boyarskaya · Martin Pawelczyk · Kacper Sokol · Mimansa Jaiswal · Umang Bhatt · David Alvarez-Melis · Aditya Grover · Charles Marx · Mengjiao (Sherry) Yang · Jingyan Wang · Gökhan Çapan · Hanchen Wang · Steffen Grünewälder · Moein Khajehnejad · Gourab Patro · Russell Kunes · Samuel Deng · Yuanting Liu · Luca Oneto · Mengze Li · Thomas Weber · Stefan Matthes · Duy Patrick Tu -
2019 : Poster Session »
Gergely Flamich · Shashanka Ubaru · Charles Zheng · Josip Djolonga · Kristoffer Wickstrøm · Diego Granziol · Konstantinos Pitas · Jun Li · Robert Williamson · Sangwoong Yoon · Kwot Sin Lee · Julian Zilly · Linda Petrini · Ian Fischer · Zhe Dong · Alexander Alemi · Bao-Ngoc Nguyen · Rob Brekelmans · Tailin Wu · Aditya Mahajan · Alexander Li · Kirankumar Shiragur · Yair Carmon · Linara Adilova · SHIYU LIU · Bang An · Sanjeeb Dash · Oktay Gunluk · Arya Mazumdar · Mehul Motani · Julia Rosenzweig · Michael Kamp · Marton Havasi · Leighton P Barnes · Zhengqing Zhou · Yi Hao · Dylan Foster · Yuval Benjamini · Nati Srebro · Michael Tschannen · Paul Rubenstein · Sylvain Gelly · John Duchi · Aaron Sidford · Robin Ru · Stefan Zohren · Murtaza Dalal · Michael A Osborne · Stephen J Roberts · Moses Charikar · Jayakumar Subramanian · Xiaodi Fan · Max Schwarzer · Nicholas Roberts · Simon Lacoste-Julien · Vinay Prabhu · Aram Galstyan · Greg Ver Steeg · Lalitha Sankar · Yung-Kyun Noh · Gautam Dasarathy · Frank Park · Ngai-Man (Man) Cheung · Ngoc-Trung Tran · Linxiao Yang · Ben Poole · Andrea Censi · Tristan Sylvain · R Devon Hjelm · Bangjie Liu · Jose Gallego-Posada · Tyler Sypherd · Kai Yang · Jan Nikolas Morshuis -
2019 Poster: Differentially Private Covariance Estimation »
Kareem Amin · Travis Dick · Alex Kulesza · Andres Munoz Medina · Sergei Vassilvitskii -
2017 Poster: Revenue Optimization with Approximate Bid Predictions »
Andres Munoz Medina · Sergei Vassilvitskii