“Deep Learning” systems, typified by deep neural networks, are increasingly taking over all AI tasks, ranging from language understanding, speech and image recognition, and machine translation to planning, and even game playing and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric specialty to a mandatory prerequisite in many advanced academic settings, and a significant advantage in the industrial job market.
In this course we will learn about the basics of deep neural networks, and their applications to various AI tasks. By the end of the course, it is expected that students will have significant familiarity with the subject, and be able to apply Deep Learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and extend their knowledge through further study.
If you are only interested in the lectures, you can watch them on the YouTube channel listed below.
The course is well rounded in terms of concepts. It covers the fundamentals of Deep Learning, starting gradually with MLPs and progressing to more complicated concepts such as attention and sequence-to-sequence models. You get complete hands-on experience with PyTorch, which is essential for implementing Deep Learning models. As a student, you will learn the tools required for building Deep Learning models. The homeworks usually have two components: Autolab and Kaggle. The Kaggle components let you explore multiple architectures and learn how to fine-tune and continuously improve models. The tasks across the homeworks are similar, and it is interesting to see how the same task can be solved using multiple Deep Learning approaches. Overall, by the end of this course you will be confident in building and tuning Deep Learning models.
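To give a concrete sense of the hands-on PyTorch work, here is a minimal sketch of the kind of MLP classifier built in the early homeworks. The class name, dimensions, and data below are illustrative assumptions, not taken from the course handouts.

```python
# A minimal sketch of an MLP classifier in PyTorch.
# All names and sizes here are illustrative, not from the course handouts.
import torch
import torch.nn as nn

class SimpleMLP(nn.Module):
    def __init__(self, in_dim=40, hidden_dim=128, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = SimpleMLP()
x = torch.randn(8, 40)   # a batch of 8 made-up input vectors
logits = model(x)        # shape: (8, 10)
print(logits.shape)
```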
Lecture: Monday and Wednesday, 9:00 a.m. - 10:20 a.m. @ BH A51
Recitation: Friday, 9:00 a.m. - 10:20 a.m. @ BH A51
Office hours: TBD
Lecture: Monday and Wednesday, 3:00 p.m. - 4:20 p.m. @ CMR C421
Office hours: TBD
11-785 is a graduate course worth 12 units. 11-485 is an undergraduate course worth 9 units.
Grading will be based on weekly quizzes (24%), homeworks (51%) and a course project (25%).
There will be weekly quizzes.
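For concreteness, the grade weights above combine as a simple weighted sum. A minimal sketch, where the per-component scores are made-up placeholders:

```python
# Sketch of how the stated weights combine into an overall score.
# The component scores below are made-up placeholders.
weights = {"quizzes": 0.24, "homeworks": 0.51, "project": 0.25}
scores = {"quizzes": 90.0, "homeworks": 85.0, "project": 88.0}  # out of 100

overall = sum(weights[k] * scores[k] for k in weights)
print(f"Overall: {overall:.2f} / 100")  # 0.24*90 + 0.51*85 + 0.25*88 = 86.95
```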
|Assignments||There will be five assignments in all. Assignments will include Autolab components, where you must complete designated tasks, and a Kaggle component, where you compete with your colleagues.|
|Project||All students are required to do a course project. The project is worth 25% of your grade.|
|Final grade||The end-of-term grade is curved. Your overall grade will depend on your performance relative to your classmates.|
|Pass/Fail||Students registered for pass/fail must complete all quizzes, HWs and the project. A grade equivalent to B- is required to pass the course.|
|Auditing||Auditors are not required to complete the course project, but must complete all quizzes and homeworks. We encourage doing a course project regardless.|
Piazza is what we use for discussions. You should be automatically signed up if you're enrolled at the start of the semester. If not, please sign up. Also, please follow the Piazza Etiquette when you use Piazza.
AutoLab is what we use to test your understanding of low-level concepts, such as engineering your own libraries, implementing important algorithms, and developing optimization methods from scratch.
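As an illustration of what "from scratch" means here, below is a minimal sketch of a linear layer's forward and backward passes in plain NumPy. The function names and interfaces are assumptions for illustration, not the actual AutoLab API.

```python
# Illustrative sketch of the "from scratch" flavor of AutoLab tasks:
# a linear layer's forward and backward passes in plain NumPy.
# Names and interfaces are assumptions, not the actual AutoLab API.
import numpy as np

def linear_forward(x, W, b):
    """Affine map: y = x @ W + b, for a batch x of shape (N, in_dim)."""
    return x @ W + b

def linear_backward(x, W, grad_y):
    """Gradients of a scalar loss w.r.t. x, W, and b, given dL/dy."""
    grad_x = grad_y @ W.T
    grad_W = x.T @ grad_y
    grad_b = grad_y.sum(axis=0)
    return grad_x, grad_W, grad_b

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
b = np.zeros(2)
y = linear_forward(x, W, b)
grad_x, grad_W, grad_b = linear_backward(x, W, np.ones_like(y))
```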
Kaggle is where we test your understanding of, and ability to extend, the neural network architectures discussed in lecture. Like AutoLab, Kaggle shows scores, so don't feel intimidated -- we're here to help. We work on hot AI topics, like speech recognition, face recognition, and neural machine translation.
YouTube is where all lecture and recitation recordings will be uploaded. Links to individual lectures and recitations will also be posted below as they are uploaded. Videos marked “Old” are not current, so please check the video title before watching.
The course will not follow a specific book, but will draw from a number of sources. We list relevant books at the end of this page. We will also put up links to relevant reading material for each class. Students are expected to familiarize themselves with the material before the class. The readings will sometimes be arcane and difficult to understand; if so, do not worry, we will present simpler explanations in class.
You can also find a nice catalog of models that are current in the literature here. We expect that, by the end of the course, you will be in a position to interpret, if not fully understand, many of the architectures on the wiki and in the catalog.
|Recitation||Date||Topics||Materials||Video||Instructors|
|0 - Part A||January 5||Fundamentals of Python||Notebook (*.tar.gz)||YouTube (url)|
|0 - Part B||January 5||Fundamentals of NumPy||Notebook (*.tar.gz)||YouTube (url)||Joseph Konan|
|0 - Part C||January 5||Fundamentals of Jupyter Notebook||Notebook (*.tar.gz)||YouTube (url)||Joseph Konan|
|0 - Part D||January 5||AWS tutorial, with Google Doc polling to check student status||Doc (url)||YouTube (url)||Christopher George|
|1||January 13||Your First Deep Learning Code||Notebook (*.zip)||YouTube (url)||Bhuvan, Soumya|
|2||January 24||How to compute a derivative||Amala, Yang|
|3||January 31||Optimizing the network||Advait, Yuying|
|4||February 7||TensorBoard, t-SNE, visualizing network parameters and outputs at every layer||Soumya, Yash|
|5||February 14||CNN: Basics||Hao, Zhefan|
|6||February 21||CNN: Losses, transfer learning||Rohit, Bhuvan|
|7||February 28||RNN: Basics||Advait, Chris|
|8||March 6||CTC||Chris, Soumya|
|9||March 20||Attention||Yang, Yuying|
|10||March 27||VAEs||Yash, Hao|
|11||April 3||Listen Attend Spell||Rohit, Amala|
|12||April 10||Generative Adversarial Networks (GANs)||Hao, Yash|
|13||April 17||Reinforcement Learning||Zhefan, Bhuvan|
|14||April 24||Hopfield nets / Boltzmann machines||Rohit, Yang|
|Number||Part||Topics||Release Date||Early-submission Deadline||On-time Deadline||Links|
|HW0||—||A Python and PyTorch Primer||January 5||—||January 19||Handout (*.tar.gz)|
|HW1||P1||MLP in PyTorch||January 19|
|P1-bonus||Dropout and Adam in PyTorch||January 19|
|P2||MLP, phoneme recognition||January 19|
|HW2||P1||CNN as scanning MLP, backprop||February 9|
|P1-bonus||CNN: conv1d/pooling/forward/backward||February 9|
|P2||Face Recognition: Classification and Verification||February 9|
|HW3||P1||RNN: forward/backward/CTC beam search||March 8|
|P1-bonus||Full BPTT, Full BPTT with forward backward||March 8|
|P2||Connectionist Temporal Classification||March 8|
|HW4||P1||Word-Level Neural Language Models||April 5|
|P2||Attention Mechanisms and Memory Networks||April 5|
|Team Formation||September 23rd, 2019||Teams will be formed in groups of four students each.
*If you do not have a team after this point, you will be grouped randomly.|
|Project Proposal||October 7th, 2019||Project Description Guidelines|
|Midterm Report||Nov. 14th, 2019||A report template is provided to detail your initial experiments.|
|Poster Presentation||Dec. 5th and 9th, 2019||A final poster session for all groups, held across all three campuses.|
|Final Project Report||Dec. 7th, 2019||This should be the final document for the course project|