

Invited Talk in Workshop: Interpretable Machine Learning for Complex Systems

The Power of Monotonicity for Practical Machine Learning (Maya Gupta)


Abstract:

What prior knowledge do humans have about machine learning problems that we can take advantage of as regularizers? One common intuition is that certain inputs should have a positive-only effect on the output: for example, all else being equal, the price of a house should only increase as its size goes up. Incorporating such monotonic priors into our machine learning algorithms can dramatically increase their interpretability and debuggability. We'll discuss state-of-the-art algorithms for learning flexible monotonic functions, and share some stories about why monotonicity is such an important regularizer for practical problems where train and test samples are not IID, especially when learning from clicks.
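
As a minimal illustration of the idea (a sketch, not the algorithms from the talk): in one dimension, the simplest way to enforce such a prior is isotonic regression. The example below applies scikit-learn's IsotonicRegression to the house-price scenario; the synthetic data, numbers, and model choice are purely illustrative assumptions.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Hypothetical data: house size (sq. ft.) vs. noisy sale price (USD).
size = rng.uniform(500, 3500, 200)
price = 100_000 + 150 * size + rng.normal(0, 40_000, 200)

# increasing=True encodes the monotonic prior: the fitted function can
# never decrease as size grows, regardless of noise in the training data.
model = IsotonicRegression(increasing=True, out_of_bounds="clip")
model.fit(size, price)

# Predictions respect the constraint between and beyond training points.
print(model.predict([800, 1600, 2400]))

The talk concerns flexible multi-dimensional monotonic models rather than this one-dimensional case, but the sketch shows how the constraint acts simultaneously as a regularizer and an interpretability guarantee.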
