Deep Learning Summit

Deep learning is having a profound impact on AI applications. With the future of neural network-inspired computing in mind, re:Invent is hosting the first-ever Deep Learning Summit. Designed for developers who want to learn about the latest deep learning research and emerging trends, the Summit features industry thought leaders, including members of the academic and venture capital communities, who will share their perspectives in 30-minute Lightning Talks.

The Summit will be held on Thursday, November 30th at The Venetian from 1:00 PM to 5:00 PM.

View the session catalog to register for the following Lightning Talks:

The Deep Learning Revolution
Terrence Sejnowski, The Salk Institute for Biological Studies

The recent rise of deep learning has its roots in learning algorithms that go back to the 1950s, which have been scaled up by a factor of a billion with high performance computing and big data. In this talk, Terrence Sejnowski will explore how recent advances in deep learning have impacted previously intractable problems in speech recognition, image understanding and natural language processing, opening up many new commercial applications. But just as we couldn’t predict the impact of the Internet when it was commercialized in the 1990s, we may not be able to imagine the impact of deep learning for the future.

Eye, Robot: Computer Vision and Autonomous Robotics
Aaron Ames & Pietro Perona, California Institute of Technology

Like mind and body, AI and robotics are increasingly connected. In this talk, Pietro Perona and Aaron Ames will present their latest work on bipedal locomotion and discuss how deep learning approaches combined with traditional controls and dynamic systems theory are used to design and train walking robots that can tackle any surface: from polished pavements to slippery snow. They will also share deep learning and computer vision approaches to the analysis of behavior toward the design of machines that can interact naturally with people.

Exploiting the Power of Language
Alexander Smola, Amazon Web Services

Deep learning is vital for natural language processing, whether for language understanding, speech recognition, machine translation, or question answering. In this talk, Alex Smola will share simple guiding principles for building a wide variety of services, ranging from sequence annotation to sequence generation. He will also discuss how these designs can be implemented efficiently using modern deep learning capabilities.

Reducing Supervision: Making More with Less
Martial Hebert, Carnegie Mellon University

A key limitation of machine learning, in particular for computer vision tasks, is its reliance on vast amounts of strongly supervised data. This reliance limits scalability, prevents rapid acquisition of new concepts, and hinders adaptability to new tasks or conditions. To address this limitation, Martial Hebert will explore ideas for learning visual models from limited data. The basic insight behind these ideas is that it is possible to learn, from a large corpus of vision tasks, how to learn models for new tasks with limited data by representing the way visual models vary across tasks, also called model dynamics. The talk will also show examples from common visual classification tasks.

Learning Where to Look in Video
Kristen Grauman, University of Texas

The status quo in visual recognition is to learn from batches of photos labeled by human annotators. Yet cognitive science tells us that perception develops in the context of acting and moving in the world, and without intensive supervision. In this talk, Kristen Grauman will share recent work exploring how a vision system can learn how to move and where to look. Her research considers how an embodied vision system can internalize the link between "how I move" and "what I see," explore policies for learning to look around actively, and learn to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree video.

Look, Listen, Learn: The Intersection of Vision and Sound
Antonio Torralba, MIT

Neural networks have achieved remarkable levels of performance and constitute the state of the art in recognition. In this talk, Antonio Torralba will discuss the importance of visualization in understanding how neural networks work, in particular a procedure for automatically interpreting the internal representation learned by a neural network. He will also discuss how a neural network can be trained to predict the sounds associated with a video. By studying the internal representation, he shows that the network learns to detect visual objects that produce distinctive sounds, such as cars and sea waves.

Investing in the Deep Learning Future
Matt Ocko, Data Collective Venture Capital

The rise of deep learning has become a hotbed for enterprises and startups creating new AI applications in everything from customer service bots to autonomous driving. But looking beyond what's possible today in computer vision and natural language, what comes next? In this talk, Matt Ocko will share ideas about emerging trends and hidden opportunities in deep learning that can make money while solving the world’s hardest and most urgent problems.

Inspired to get started with deep learning? Also choose from over 50 breakout sessions, hands-on workshops, labs, and deep-dive chalk talks in our Machine Learning track. View the session catalog for details.