Recently we proposed a new class of deep learning model known as deep declarative networks (DDNs), in which processing nodes solve an optimization problem in the forward pass. End-to-end learning requires that these optimization problems be differentiable, which has been shown possible for problems with continuous objectives and constraints by applying the implicit function theorem to the optimality conditions. This project studies various extensions to DDNs and may involve collaboration with other students working on the project. Students are expected to have read and understood the paper “Deep Declarative Networks: A New Hope” (https://arxiv.org/abs/1909.04866) and to have explored the tutorials in the associated GitHub repository (https://github.com/anucvml/ddn/tree/master/tutorials).
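To give a feel for the core idea, the following is a minimal sketch of a declarative node on a toy scalar problem (the objective and all function names here are illustrative assumptions, not taken from the DDN paper or repository): the forward pass solves an inner optimization problem numerically, and the backward pass obtains the exact gradient by applying the implicit function theorem to the stationarity condition, without unrolling the solver.

```python
# Toy declarative node: forward pass solves
#     y(x) = argmin_y f(x, y),   with f(x, y) = (y - x)^2 + 0.1 * y^4.
# The minimizer satisfies the stationarity (optimality) condition
#     h(x, y) = df/dy = 2*(y - x) + 0.4*y^3 = 0.
# Differentiating h(x, y(x)) = 0 in x gives, by the implicit function theorem,
#     dy/dx = -(dh/dx) / (dh/dy).

def solve_forward(x, iters=50):
    """Forward pass: Newton's method on h(x, y) = 0 to find y(x).

    h is strictly increasing in y (dh/dy = 2 + 1.2*y^2 > 0),
    so the stationary point is the unique global minimizer."""
    y = x  # reasonable initial guess
    for _ in range(iters):
        h = 2.0 * (y - x) + 0.4 * y ** 3   # stationarity residual
        dh_dy = 2.0 + 1.2 * y ** 2
        y -= h / dh_dy
    return y

def gradient_backward(x, y):
    """Backward pass: dy/dx = -(dh/dx) / (dh/dy), no solver unrolling."""
    dh_dx = -2.0
    dh_dy = 2.0 + 1.2 * y ** 2
    return -dh_dx / dh_dy

if __name__ == "__main__":
    x = 1.5
    y = solve_forward(x)
    dydx = gradient_backward(x, y)

    # Sanity check against a central finite difference of the solver.
    eps = 1e-6
    fd = (solve_forward(x + eps) - solve_forward(x - eps)) / (2 * eps)
    print(y, dydx, fd)
```

Note that the gradient depends only on the solution y(x), not on how the forward solver found it; this decoupling of the solver from differentiation is what makes declarative nodes practical inside larger networks.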
Duration: 6-24 units (1 or 2 semesters)