Metalearning Symposium

NIPS 2017

Speakers

Pieter Abbeel

Embodied Intelligence and UC Berkeley

https://people.eecs.berkeley.edu/~pabbeel/

Chrisantha Fernando

DeepMind

http://www.sussex.ac.uk/profiles/127298

Roman Garnett

Washington Univ. in St. Louis

http://www.cse.wustl.edu/~garnett/

Frank Hutter

Freiburg Univ.

http://www2.informatik.uni-freiburg.de/~hutter/

Max Jaderberg

DeepMind

http://www.maxjaderberg.com

Quoc Le

Google Brain

https://research.google.com/pubs/QuocLe.html

Risto Miikkulainen

Sentient and UT Austin

http://www.cs.utexas.edu/~risto/

Juergen Schmidhuber

Nnaisense and IDSIA

http://people.idsia.ch/~juergen/

Satinder Singh

Cogitai and Univ. of Michigan

http://web.eecs.umich.edu/~baveja/

Ilya Sutskever

OpenAI

http://www.cs.toronto.edu/~ilya/

Kenneth Stanley

Uber and UCF

https://www.cs.ucf.edu/~kstanley/

Oriol Vinyals

DeepMind

https://research.google.com/pubs/OriolVinyals.html

Jane Wang

DeepMind

http://www.janexwang.com

Organizers

Risto Miikkulainen

Sentient and UT Austin

http://www.cs.utexas.edu/~risto/

Quoc Le

Google Brain

https://research.google.com/pubs/QuocLe.html

Kenneth Stanley

Uber and UCF

https://www.cs.ucf.edu/~kstanley/

Chrisantha Fernando

DeepMind

http://www.sussex.ac.uk/profiles/127298

Synopsis

Modern learning systems, such as recent deep learning, reinforcement learning, and probabilistic inference architectures, have become increasingly complex, often beyond human ability to comprehend. Such complexity matters: the more complex these systems are, the more powerful they often are. A new research problem has therefore emerged: how can this complexity (i.e., the design, components, and hyperparameters) be configured automatically so that these systems perform as well as possible? This is the problem of metalearning. Several approaches have emerged, including those based on Bayesian optimization, gradient descent, reinforcement learning, and evolutionary computation. The symposium presents an overview of these approaches, given by the researchers who developed them. A panel discussion compares the strengths of the different approaches and their potential for future developments and applications. The audience will thus gain a practical understanding of how to use metalearning to improve the learning systems they work with, as well as of opportunities for future research on metalearning.
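To make the problem concrete, here is a minimal sketch (not drawn from any of the talks) of the simplest baseline against which the approaches above are measured: random search over a hyperparameter space, treating the learning system as a black box. The objective `validation_score` and the search space are hypothetical stand-ins; in practice the score would come from training and evaluating a real model.

```python
import random

def validation_score(learning_rate, num_layers):
    """Hypothetical validation score for a learner with the given
    hyperparameters. Toy surrogate that peaks near learning_rate=0.01
    and num_layers=4; a real system would train and evaluate a model."""
    return -((learning_rate - 0.01) ** 2) * 1e4 - (num_layers - 4) ** 2

def random_search(num_trials, seed=0):
    """Sample configurations at random and keep the best-scoring one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(num_trials):
        config = {
            # Log-uniform sampling is standard for learning rates.
            "learning_rate": 10 ** rng.uniform(-4, -1),
            "num_layers": rng.randint(1, 8),
        }
        score = validation_score(**config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(num_trials=200)
print(best, score)
```

The metalearning methods discussed in the symposium replace this blind sampling with a smarter outer loop: a probabilistic model of the objective (Bayesian optimization), gradients through the configuration (gradient descent), a learned controller (reinforcement learning), or a population of candidates under selection (evolutionary computation).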

Schedule

Thursday, December 7

2:00 – 9:30 PM @ The Grand Ballroom

2:00 – 2:10     Opening remarks
Quoc Le (slides)

Topic I: Evolutionary Optimization

Session Chair: Quoc Le

2:10 – 2:30     Evolving Multitask Neural Network Structure

Risto Miikkulainen (slides)

2:30 – 2:50    Evolving to Learn through Synaptic Plasticity

Ken Stanley (slides)

2:50 – 3:10     PathNet and Beyond

Chrisantha Fernando (slides)

Topic II: Bayesian Optimization

3:10 – 3:30     Bayesian Optimization for Automated Model Selection

Roman Garnett (slides)

3:30 – 3:50    Automatic Machine Learning (AutoML) and How To Speed It Up

Frank Hutter (slides)

4:00 – 4:30    COFFEE BREAK

Topic III: Gradient Descent

Session Chair: Chrisantha Fernando

4:30 – 4:50    Contrasting Model- and Optimization-based Metalearning

Oriol Vinyals (slides)

4:50 – 5:10     Population-based Training for Neural Network Meta-Optimization

Max Jaderberg (slides)

5:10 – 5:30     Learning to Learn for Robotic Control

Pieter Abbeel (slides)

5:30 – 5:50    On Learning How to Learn Learning Strategies

Juergen Schmidhuber (slides)

Topic IV: Reinforcement Learning

5:50 – 6:10     Intrinsically Motivated Reinforcement Learning

Satinder Singh

6:10 – 6:30     Self-Play

Ilya Sutskever

6:30 – 7:30    DINNER BREAK

Session Chair: Ken Stanley

7:30 – 7:50     Neural Architecture Search

Quoc Le (slides)

7:50 – 8:10     Multiple scales of reward and task learning

Jane Wang (slides)

8:10 – 9:30     Panel discussion

Moderator: Risto Miikkulainen

Panelists: Frank Hutter, Juergen Schmidhuber, Ken Stanley, Ilya Sutskever

Sponsors

© 2017 Sentient Technologies, Inc.
