DIMACS / TRIPODS / MOPTA


Summer School

A summer school will be held August 10-12, 2018 on the campus of Lehigh University. It will be taught by:

  • Frank E. Curtis
  • Francesco Orabona
  • Martin Takac

Student nomination

Student nomination for the summer school is now closed!

Outline

The summer school will cover three topics:

  1. Python & Pytorch tutorial
    We will discuss the basics of Python (needed for this summer school as every lecture will have a coding component) and the Pytorch framework. During this segment, students will implement various algorithms, compare their performance, etc. We will also explain the benefits of using GPUs for deep learning and use a cloud platform (e.g., AWS) to run the code.
  2. Online learning and stochastic gradient descent
    Online learning is a popular framework for designing and analyzing iterative optimization algorithms, including stochastic optimization algorithms and algorithms operating on large data streams. The emphasis in online learning is on adapting to the unknown characteristics of the data stream, with the goal of designing algorithms that have optimal guarantees and no hyperparameters to tune. In this lecture, we will review the basics of online learning, its connection with stochastic optimization, and the latest advances. In particular, we will show how to design first-order stochastic methods that require no step-size tuning yet achieve optimal performance both in theory and in practice. A simple adaptive-step-size sketch follows the list below.
  3. Beyond SG: Second-order methods for nonconvex optimization
    Users of optimization methods for machine learning have been fascinated by the success of stochastic gradient (SG) algorithms for solving large-scale problems. This interest extends even into settings in which first-order methods are known to falter in deterministic optimization, namely, when the objective function is nonconvex and negative curvature is present. While interesting theoretical results can be proved about SG for nonconvex optimization, there remain various promising ways to move beyond SG that are worth exploring for the next generation of optimization methods for machine learning. In this segment, we discuss these opportunities along with new second-order-type techniques for solving stochastic nonconvex optimization problems, including inexact Newton, trust-region, cubic regularization, and related methods. A toy modified-Newton sketch appears below.
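
Illustrative code sketches

The short sketches below are rough illustrations, not course material, of the kind of code each segment involves. The first is a minimal PyTorch training loop: a small linear model fit to synthetic data with SGD, moved to a GPU when one is available. The data sizes and learning rate are arbitrary choices made for illustration.

    import torch
    import torch.nn as nn

    # Use a GPU if one is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Synthetic regression data: y = X w_true + noise.
    torch.manual_seed(0)
    X = torch.randn(512, 10, device=device)
    w_true = torch.randn(10, 1, device=device)
    y = X @ w_true + 0.01 * torch.randn(512, 1, device=device)

    model = nn.Linear(10, 1).to(device)
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(100):
        optimizer.zero_grad()          # clear gradients from the previous step
        loss = loss_fn(model(X), y)    # forward pass
        loss.backward()                # backpropagation
        optimizer.step()               # SGD parameter update

    print(f"final training loss: {loss.item():.6f}")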
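
The second sketch, assuming a stream of (x_t, y_t) pairs and squared loss, shows online gradient descent with AdaGrad-style per-coordinate step sizes. It only illustrates the idea of adapting to the data stream; the parameter-free methods discussed in the lecture go further and remove the remaining scale constants as well.

    import numpy as np

    rng = np.random.default_rng(0)
    d, T = 10, 2000
    w_true = rng.normal(size=d)

    w = np.zeros(d)               # online iterate
    grad_sq_sum = np.zeros(d)     # running sum of squared gradients (AdaGrad accumulator)
    eps = 1e-8
    losses = []

    for t in range(T):
        x = rng.normal(size=d)                 # next example in the stream
        y = x @ w_true + 0.01 * rng.normal()
        pred = x @ w
        losses.append(0.5 * (pred - y) ** 2)   # per-round loss suffered before updating

        g = (pred - y) * x                     # gradient of the per-round loss
        grad_sq_sum += g ** 2
        w -= g / (np.sqrt(grad_sq_sum) + eps)  # per-coordinate adaptive step

    print("average loss over the stream:", np.mean(losses))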
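
The last sketch, again only a toy example rather than any specific algorithm from the lecture, applies a modified (regularized) Newton iteration with a backtracking line search to a small nonconvex function with a saddle point. When the Hessian is indefinite, a multiple of the identity is added so curvature information can still be used safely; subsampled Hessians, trust regions, and cubic regularization refine this basic template.

    import numpy as np

    def f(x):
        # Nonconvex test function with a saddle point at the origin.
        return x[0] ** 2 - x[1] ** 2 + 0.25 * x[1] ** 4

    def grad(x):
        return np.array([2.0 * x[0], -2.0 * x[1] + x[1] ** 3])

    def hess(x):
        return np.array([[2.0, 0.0],
                         [0.0, -2.0 + 3.0 * x[1] ** 2]])

    x = np.array([1.0, 0.5])                   # start where the Hessian is indefinite
    for k in range(50):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) < 1e-8:
            break
        lam_min = np.linalg.eigvalsh(H).min()
        if lam_min < 1.0:                      # shift the Hessian if (nearly) indefinite
            H = H + (1.0 - lam_min) * np.eye(2)
        d = -np.linalg.solve(H, g)             # modified Newton direction
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5                           # backtracking (Armijo) line search
        x = x + t * d

    print("final point:", x, "f(x) =", f(x))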