

Demonstration

Matrix Calculus -- The Power of Symbolic Differentiation

Sören Laue · Matthias Mitterreiter · Joachim Giesen

Pacific Ballroom Concourse #D7

Abstract:

Numerical optimization is a workhorse of machine learning that often requires the derivation and computation of gradients and Hessians. For learning problems that are modeled by some loss or likelihood function, the gradients and Hessians are typically derived manually, which is a time-consuming and error-prone process. Computing gradients (and Hessians) is also an integral part of deep learning frameworks, which mostly employ automatic differentiation, also known as algorithmic differentiation (typically in reverse mode). At www.MatrixCalculus.org we provide a tool for symbolically computing gradients and Hessians that can be used in the classical setting of loss and likelihood functions, for constrained optimization, and also for deep learning.
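
As a small illustration of the kind of matrix-calculus derivation such a tool automates (an example chosen here for exposition, not one taken from the demonstration itself), consider the least-squares objective and its symbolically derived gradient and Hessian:

\[
  f(x) = \tfrac{1}{2}\,\lVert A x - b \rVert_2^2, \qquad
  \nabla f(x) = A^{\top}(A x - b), \qquad
  \nabla^2 f(x) = A^{\top} A .
\]

Deriving such expressions by hand for larger models is exactly the step the abstract describes as time-consuming and error-prone.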
