“Deep Learning” systems, typified by deep neural networks, are increasingly taking over all AI tasks, ranging from language understanding, speech and image recognition, and machine translation to planning, and even game playing and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric skill to a mandatory prerequisite in many advanced academic settings, and a major advantage in the industrial job market.
In this course we will learn about the basics of deep neural networks, and their applications to various AI tasks. By the end of the course, it is expected that students will have significant familiarity with the subject, and be able to apply Deep Learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and extend their knowledge through further study.
The course is well rounded in terms of concepts and builds a solid understanding of the fundamentals of deep learning. It starts off gradually with MLPs and progresses to more complicated concepts such as attention and sequence-to-sequence models. You get complete hands-on experience with PyTorch, which is essential for implementing deep learning models, and you will learn the tools required for building them. The homeworks usually have two components: Autolab and Kaggle. The Kaggle component lets you explore multiple architectures and learn how to fine-tune and continuously improve models. The tasks across the homeworks were similar, and it was interesting to learn how the same task can be solved using multiple deep learning approaches. Overall, by the end of this course you will be confident enough to build and tune deep learning models.
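To give a concrete feel for the PyTorch work mentioned above, here is a minimal, illustrative sketch of the kind of MLP classifier built early in the course. The layer sizes, optimizer settings, and random stand-in data are our own assumptions for illustration, not an actual assignment:

```python
# A minimal MLP classifier in PyTorch (illustrative; sizes and data are stand-ins).
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step on random stand-in data; a real homework would loop
# over a DataLoader for many epochs.
x = torch.randn(32, 784)           # a batch of 32 flattened "images"
y = torch.randint(0, 10, (32,))    # random class labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```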
Instructor: Bhiksha Raj
Lecture: Monday and Wednesday, 9:00am-10:20am
Location: Gates-Hillman Complex GHC 4102
Recitation: Friday, 9:00am-10:20am, Location: TBD
This course is worth 12 units.
Grading will be based on weekly quizzes, homework assignments and a final project.
There will be five assignments in all. Note that assignments 4 and 5 will be released simultaneously and will be due on the same date.
| Component | Details | Contribution to grade |
| --- | --- | --- |
| Quizzes | 13 quizzes (bottom 3 quiz scores will be dropped) | 25% |
| Assignments | 5 assignments | 50% |
| Project | 1 project | 25% |
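Read literally (this is our arithmetic, not an official statement of policy), with the bottom three of the thirteen quiz scores dropped, the final grade works out to

$$\text{Grade} = 0.25\,\overline{Q}_{\text{best 10}} + 0.50\,\overline{A} + 0.25\,P$$

where \(\overline{Q}_{\text{best 10}}\) is the average of your ten best quiz scores, \(\overline{A}\) the average over the five assignments, and \(P\) the project score, all on the same scale.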
The course will not follow a specific book, but will draw from a number of sources. We list relevant books at the end of this page. We will also put up links to relevant reading material for each class. Students are expected to familiarize themselves with the material before the class. The readings will sometimes be arcane and difficult to understand; if so, do not worry, we will present simpler explanations in class.
Link to be posted
You can also find a nice catalog of models that are current in the literature here. We expect that, by the end of the course, you will be in a position to interpret, if not fully understand, many of the architectures on the wiki and in the catalog.
Kaggle is a popular data science platform where participants compete to produce the best model for learning from or analyzing a data set.
For assignments you will be submitting your evaluation results to a Kaggle leaderboard.
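As a rough sketch of that workflow (the id/label column names and the competition name below are placeholders; each assignment's writeup specifies the real ones), a submission typically amounts to writing your predictions to a CSV and uploading it:

```python
# Illustrative only: write predictions to a Kaggle-style submission CSV.
# Column names and the competition name are placeholders; use the ones
# specified in each assignment's writeup.
import csv

predictions = [(0, 3), (1, 7), (2, 1)]  # hypothetical (id, predicted_label) pairs

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "label"])
    writer.writerows(predictions)

# Then upload from the shell with the official Kaggle CLI:
#   kaggle competitions submit -c <competition-name> -f submission.csv -m "baseline"
```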
| Lecture | Start date | Topics | Lecture notes/Slides | Additional readings, if any | Quizzes/Assignments |
| --- | --- | --- | --- | --- | --- |
|  |  |  |  |  | Assignment 1, Quiz 2 |
| 5 | September 10 |  |  |  | Quiz 3 |
| 7 | September 17 | TBA |  |  | Quiz 4 |
|  |  |  |  |  | Assignment 2 due |
| 17 | October 22 | Variational Autoencoders (VAEs) |  |  |  |
|  |  |  |  |  | Assignment 3 due |
|  |  |  |  |  | Assignments 4 and 5 released |
|  |  |  |  |  | Assignments 4 and 5 due |
| Recitation | Start date | Topics | Lecture notes/Slides |
| --- | --- | --- | --- |
| 1 | August 31 | Amazon Web Services (AWS) |  |
| 2 | September 7 | Practical Deep Learning in Python |  |
| 3 | September 14 | Optimization methods |  |
| 4 | September 21 | Convolutional Networks |  |
| 5 | September 28 | Basics of Recurrent networks |  |
| 6 | October 5 | Recurrent networks 2: Loss functions, CTC |  |
| 7 | October 12 | Visualization: What does the network learn? |  |
| 9 | October 26 | Variational autoencoders |  |
| 11 | November 9 | Embeddings & HW Baselines |  |
| 12 | November 16 | Hopfield Nets, Boltzmann machines, RBMs |  |
| 13 | November 30 | Reinforcement Learning: Deep Q nets, policy gradient methods |  |