Meta-SGD: Learning to Learn Quickly for Few-Shot Learning
Few-shot learning is challenging for algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task faster and more accurately from fewer examples, so the choice of meta-learner is crucial. In this talk, I will present Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, for both supervised learning and reinforcement learning. Compared to the popular LSTM meta-learner, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the recent meta-learner MAML, Meta-SGD has much higher capacity: it learns not just the learner's initialization, but also the learner's update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive few-shot learning performance on regression, classification, and reinforcement learning.
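The one-step adaptation described above can be sketched as follows. This is a minimal, illustrative NumPy example, not the authors' implementation: it assumes a linear regression learner, and both `theta` (the learned initialization) and `alpha` (a learned per-parameter step vector, which encodes update direction and learning rate together) are meta-parameters that the outer meta-learning loop would optimize; here they are set to fixed demo values.

```python
import numpy as np

def mse_grad(theta, X, y):
    """Gradient of the mean-squared-error loss 0.5*mean((X@theta - y)^2)."""
    return X.T @ (X @ theta - y) / len(y)

def meta_sgd_adapt(theta, alpha, X, y):
    """One-step Meta-SGD adaptation: theta' = theta - alpha * grad (elementwise).

    Unlike plain SGD's scalar learning rate, `alpha` is a learned vector
    with one entry per parameter, so it can rescale and even flip the sign
    of individual gradient components.
    """
    return theta - alpha * mse_grad(theta, X, y)

# Toy few-shot regression task: recover a linear map from 20 examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

theta = np.zeros(3)       # meta-learned initialization (zeros for this demo)
alpha = np.full(3, 0.1)   # meta-learned per-parameter step vector (demo value)

theta_adapted = meta_sgd_adapt(theta, alpha, X, y)
loss_before = 0.5 * np.mean((X @ theta - y) ** 2)
loss_after = 0.5 * np.mean((X @ theta_adapted - y) ** 2)
```

In the full method, the meta-training loop would backpropagate the post-adaptation loss on each task's query set through this one-step update to learn `theta` and `alpha` jointly.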
About Dr. Zhenguo LI
Zhenguo Li is director of the AI Theory Lab and a principal researcher at Huawei Noah's Ark Lab. He received the B.S. and M.S. degrees from the Department of Mathematics at Peking University, in 2002 and 2005, respectively, and the Ph.D. degree from the Department of Information Engineering at the Chinese University of Hong Kong, in 2008. Before joining Huawei, he was an associate research scientist in the Department of Electrical Engineering at Columbia University. His research interests include machine learning and artificial intelligence.