Equivariance in machine learning

Abstract

This talk will be about the interface of representation theory and machine learning. In machine learning, one sometimes wants to learn quantities that are invariant or equivariant with respect to a group. For example, the decision as to whether there is a tiger nearby should not depend on the precise position of your head, and thus this decision should be rotation invariant. Another example: quantities that appear in the analysis of point clouds often do not depend on the labelling of the points, and are therefore invariant under a large symmetric group. I will explain how to build networks which are equivariant with respect to a group action. What ensues is a fascinating interplay between group theory, representation theory and deep learning. Examples based on translations or rotations recover familiar convolutional neural nets; more generally, the theory gives a blueprint for learning in the presence of complicated symmetry. These architectures appear potentially very useful to mathematicians, although I am not aware of any major applications in mathematics as yet. Most of this talk will be a review of ideas and techniques well known to the geometric deep learning community. New material is joint work with Joel Gibson (Sydney) and Sebastien Racaniere (DeepMind).
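
As an illustration (not part of the talk itself), here is a minimal NumPy sketch of the two symmetries mentioned above: a Deep Sets-style sum over points, which is invariant under relabelling, and a circular convolution, which commutes with cyclic shifts. All function names and numerical values are illustrative assumptions, not an API from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Permutation invariance (point clouds) --------------------------
# Apply the same map phi to every point, then sum. Summation commutes
# with relabelling the points, so the output is invariant under the
# symmetric group S_n acting on the point labels.
W = rng.normal(size=(3, 8))         # shared per-point weights (toy values)

def phi(x):
    return np.tanh(x @ W)           # same map applied to each point

def point_cloud_readout(points):    # points: (n, 3) array
    return phi(points).sum(axis=0)  # sum over points => order-independent

cloud = rng.normal(size=(5, 3))
perm = rng.permutation(5)
assert np.allclose(point_cloud_readout(cloud),
                   point_cloud_readout(cloud[perm]))

# --- Translation equivariance (convolution) -------------------------
# Circular convolution satisfies conv(shift(f)) == shift(conv(f)):
# this is the symmetry underlying convolutional neural nets.
def circular_conv(signal, kernel):
    n = len(signal)
    return np.array([sum(kernel[j] * signal[(i - j) % n]
                         for j in range(len(kernel)))
                     for i in range(n)])

signal = rng.normal(size=12)
kernel = rng.normal(size=3)
shifted = np.roll(signal, 4)        # translate the input by 4 steps
assert np.allclose(circular_conv(shifted, kernel),
                   np.roll(circular_conv(signal, kernel), 4))
```

In both checks the symmetry holds exactly, not approximately: it is built into the architecture (a sum over points, a shared convolution kernel) rather than learned from data, which is the point of equivariant network design.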