DIMACS / TRIPODS / MOPTA
Summer School

A summer school will be held August 10-12, 2018, on the campus of Lehigh University. It will be taught by:

  • Frank E. Curtis
  • Francesco Orabona
  • Martin Takac

Student nomination

There is no registration fee to attend the summer school. However, due to limited space, only selected students will be able to participate. To nominate a student for the summer school, please complete this form.
Funds for travel will not be provided; however, breakfast and lunch will be provided, and inexpensive shared hotel accommodations will be available. More information will be forthcoming.

Outline

The summer school will cover three topics:

  1. Python & TensorFlow tutorial
    We will discuss the basics of Python (needed for this summer school as every lecture will have a coding component) and the TensorFlow framework. During this segment, students will implement various algorithms, compare their performance, etc. We will also explain the benefits of using GPUs for deep learning and use a cloud platform (e.g., AWS) to run the code.
  2. Online learning and Stochastic Gradient Descent
    Online learning is a popular framework for designing and analyzing iterative optimization algorithms, including stochastic optimization algorithms and algorithms operating on large data streams. The emphasis in online learning is on adapting to the unknown characteristics of the data stream, with the aim of designing algorithms that have optimal guarantees and no hyperparameters to tune. In this lecture, we will review the basics of online learning, its connection with stochastic optimization, and the latest advances. In particular, we will show how one can easily design first-order stochastic methods that do not require tuning of step sizes, yet achieve optimal performance both in theory and in practice.
  3. Beyond SG: Second-order methods for nonconvex optimization
    Users of optimization methods for machine learning have been fascinated by the success of stochastic gradient (SG) algorithms for solving large-scale problems. This interest extends even into settings in which first-order methods have been known to falter in the context of deterministic optimization, namely, when the objective function is nonconvex and negative curvature is present. While interesting theoretical results can be proved about SG for nonconvex optimization, there remain various promising ways to move beyond SG that are worth exploring for the next generation of optimization methods for machine learning. In this segment, we discuss these opportunities along with new second-order-type techniques for solving stochastic nonconvex optimization problems, including inexact Newton, trust region, cubic regularization, and related techniques.
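To give a flavor of the coding component, here is a minimal sketch of the stochastic gradient descent method central to topic 2, written in plain Python (rather than TensorFlow) so it runs without dependencies. The toy one-dimensional least-squares problem, learning rate, and epoch count are illustrative choices, not material from the lectures themselves.

```python
import random

def sgd_least_squares(data, lr=0.1, epochs=50, seed=0):
    """Minimize (1/n) * sum_i (w*x_i - y_i)^2 over a scalar w
    using stochastic gradient descent (one sample per step)."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)          # visit samples in random order each epoch
        for x, y in data:
            grad = 2.0 * (w * x - y) * x   # gradient of the single-sample loss
            w -= lr * grad
    return w

# Toy data generated from y = 3*x (no noise), so SGD should recover w near 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd_least_squares(data)
```

Note the hallmark of SG methods visible even here: each update touches a single data point, so the per-step cost is independent of the dataset size; the step size `lr`, by contrast, must be tuned, which is exactly the pain point the parameter-free online-learning methods of topic 2 aim to remove.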
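Topic 3 concerns second-order methods that remain well behaved when negative curvature is present. As one hedged, simplified illustration (not one of the specific methods covered in the segment), a one-dimensional Newton iteration can be safeguarded by replacing the second derivative with max(|f''(x)|, delta), so every step is a descent direction even in nonconvex regions; the constant `delta` and the test function below are illustrative assumptions.

```python
def modified_newton(grad, hess, x0, delta=1e-3, tol=1e-8, max_iter=100):
    """One-dimensional Newton iteration with a Hessian modification:
    the second derivative is replaced by max(|f''(x)|, delta), so the
    step -g/h is always a descent direction, even where curvature is
    negative (a plain Newton step there would head toward a maximizer
    or saddle)."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        h = max(abs(hess(x)), delta)   # curvature safeguard
        x -= g / h
    return x

# Nonconvex example: f(x) = x**4 - 3*x**2 has negative curvature for
# |x| < sqrt(1/2) and minimizers at x = +/- sqrt(3/2).
grad = lambda x: 4 * x**3 - 6 * x
hess = lambda x: 12 * x**2 - 6
x_star = modified_newton(grad, hess, x0=0.5)   # starts in the negative-curvature region
```

Started at x0 = 0.5, where f''(0.5) = -3, the unmodified Newton step would move toward the local maximizer at 0; the safeguard instead flips the sign of the step, and once the iterate reaches the convex region the method reduces to plain Newton with its fast local convergence.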