|Final Project Playlist||Please enjoy the final project videos from our students.||YouTube Playlist|
“Deep Learning” systems, typified by deep neural networks, are increasingly taking over AI tasks ranging from language understanding, speech and image recognition, and machine translation to planning, game playing, and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric specialty into a mandatory prerequisite in many advanced academic settings, and a large advantage in the industrial job market.
In this course we will learn about the basics of deep neural networks and their applications to various AI tasks. By the end of the course, students are expected to have significant familiarity with the subject and to be able to apply Deep Learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and to extend their knowledge through further study.
If you are only interested in the lectures, you can watch them on the YouTube channel listed below.
The course is well rounded in terms of concepts and helps us understand the fundamentals of Deep Learning. It starts off gradually with MLPs and progresses to more complicated concepts such as attention and sequence-to-sequence models. We get complete hands-on experience with PyTorch, which is very important for implementing Deep Learning models. As a student, you will learn the tools required for building Deep Learning models. The homeworks usually have two components: AutoLab and Kaggle. The Kaggle components let us explore multiple architectures and understand how to fine-tune and continuously improve models. The tasks for all the homeworks were similar, and it was interesting to learn how the same task can be solved using multiple Deep Learning approaches. Overall, by the end of this course you will be confident enough to build and tune Deep Learning models.
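As a taste of that PyTorch workflow, here is a minimal sketch of an MLP classifier and one training step; the layer sizes, data, and hyperparameters are illustrative assumptions, not the course's actual homework code.

```python
# Minimal PyTorch MLP sketch (illustrative sizes and random data,
# not the course's actual starter code).
import torch
import torch.nn as nn

class SimpleMLP(nn.Module):
    def __init__(self, in_dim=40, hidden=256, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = SimpleMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random data, just to show the loop structure.
x = torch.randn(32, 40)          # batch of 32 examples, 40 features each
y = torch.randint(0, 10, (32,))  # integer class labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```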
Lecture: Monday and Wednesday, 9:00 a.m. - 10:20 a.m.
Zoom Link: Meeting Link, Meeting ID: 403 746 7921
Recitation: Friday, 9:00 a.m. - 10:20 a.m. @ BH A51
Office hours:
Lecture: Monday and Wednesday, 3:00 p.m. - 4:20 p.m. @ CMR C421
Office hours:
11-785 and 11-685 are graduate courses worth 12 units each. 11-485 is an undergraduate course worth 9 units.
Grading will be based on weekly quizzes (24%), homeworks (51%) and a course project (25%).
There will be weekly quizzes.
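Concretely, under the weights above your overall score is a weighted sum of the three components; a minimal sketch, assuming each component score is normalized to [0, 1]:

```python
# Overall grade under the stated 24/51/25 split
# (assumes each component score is normalized to [0, 1]).
def overall_grade(quizzes, homeworks, project):
    return 0.24 * quizzes + 0.51 * homeworks + 0.25 * project

print(overall_grade(quizzes=0.90, homeworks=0.85, project=0.80))  # 0.8495
```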
|Assignments||There will be five assignments in all. Assignments will include AutoLab components, where you must complete designated tasks, and a Kaggle component, where you compete with your colleagues.|
|Project||All students are required to do a course project. The project is worth 25% of your grade.|
|Final grade||The end-of-term grade is curved. Your overall grade will depend on your performance relative to your classmates.|
|Pass/Fail||Students registered for pass/fail must complete all quizzes, HWs and the project. A grade equivalent to B- is required to pass the course.|
|Auditing||Auditors are not required to complete the course project, but must complete all quizzes and homeworks. We encourage doing a course project regardless.|
Piazza is what we use for discussions. You should be automatically signed up if you're enrolled at the start of the semester. If not, please sign up. Also, please follow the Piazza Etiquette when you use Piazza.
AutoLab is what we use to test your understanding of low-level concepts, such as engineering your own libraries, implementing important algorithms, and developing optimization methods from scratch.
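To give a flavor of what "from scratch" means here, below is a minimal NumPy sketch of a linear layer with explicit forward and backward passes; the class name and interface are hypothetical, not the actual AutoLab starter code.

```python
# Minimal "from scratch" linear layer in NumPy (hypothetical interface,
# not the actual AutoLab starter code).
import numpy as np

class Linear:
    def __init__(self, in_dim, out_dim):
        self.W = np.random.randn(in_dim, out_dim) * 0.01
        self.b = np.zeros(out_dim)

    def forward(self, x):
        self.x = x                      # cache input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Gradients w.r.t. parameters, accumulated over the batch.
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        # Gradient w.r.t. the input, passed on to the previous layer.
        return grad_out @ self.W.T

layer = Linear(4, 3)
out = layer.forward(np.random.randn(8, 4))   # batch of 8 inputs
dx = layer.backward(np.ones_like(out))       # stand-in upstream gradient
```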
Kaggle is where we test your understanding of, and your ability to extend, the neural network architectures discussed in lecture. Like AutoLab, Kaggle shows your scores, so don't feel intimidated -- we're here to help. We work on hot AI topics, like speech recognition, face recognition, and neural machine translation.
YouTube is where all lecture and recitation recordings will be uploaded. Links to individual lectures and recitations will also be posted below as they are uploaded. Videos marked “Old” are not current, so please check the video title.
The course will not follow a specific book, but will draw from a number of sources. We list relevant books at the end of this page. We will also put up links to relevant reading material for each class. Students are expected to familiarize themselves with the material before the class. The readings will sometimes be arcane and difficult to understand; if so, do not worry, we will present simpler explanations in class.
You can also find a nice catalog of models that are current in the literature here. We expect that, by the end of the course, you will be in a position to interpret, if not fully understand, many of the architectures on the wiki and in the catalog.
|Lecture||Date||Topics||Lecture Slides and Video||Additional Readings (if any)||Homework & Assignments|
Additional readings:
Hornik et al. (1989)
Widrow and Lehr (1992); Convergence of the perceptron algorithm
Werbos (1990); Rumelhart, Hinton and Williams (1986)
Backprop fails to separate where perceptrons succeed, Brady et al. (1989)
Momentum, Polyak (1964)
ADAGRAD, Duchi, Hazan and Singer (2011); Adam: A method for stochastic optimization, Kingma and Ba (2014) -- see the Adam sketch below
Fahlman and Lebiere (1990)
Bidirectional Recurrent Neural Networks, Schuster and Paliwal (1997)
How to compute a derivative (*.pdf)
Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks, Graves et al. (2006)
Tutorial on VAEs, Doersch (2016); Autoencoding Variational Bayes, Kingma and Welling (2013)
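Since several of these readings cover optimization methods, here is a minimal NumPy sketch of the Adam update from Kingma and Ba (2014); the function name and calling convention are illustrative assumptions, and the default hyperparameters are the paper's suggestions.

```python
# Adam update for a single parameter array, following Kingma and Ba (2014).
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction, t starts at 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Usage: carry m, v, and the step count t across iterations.
theta = np.zeros(5)
m, v = np.zeros_like(theta), np.zeros_like(theta)
theta, m, v = adam_step(theta, np.random.randn(5), m, v, t=1)
```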
|0 - Part A||-||Fundamentals of Python|
|0 - Part B||-||Fundamentals of NumPy|
|0 - Part C||-||Fundamentals of Jupyter Notebook|
|0 - Part D||-||Fundamentals of AWS. Includes a tutorial, with Google Doc polling to check student status|
|1||August 31||Your First Deep Learning Code|
|2||September 11||How to compute a derivative|
|3||September 18||Optimizing the network|
|4||September 25||TensorBoard, t-SNE, visualizing network parameters and outputs at every layer|
|5||October 2||CNN: Basics|
|6||October 9||CNN: Losses, transfer learning|
|7||October 30||RNN: Basics|
|10||November 20||Listen Attend Spell|
|11||December 4||Hopfield nets / Boltzmann machines|
|13||TBD||Generative Adversarial Networks (GANs)|
|Number||Part||Topics||Release Date||Early-submission Deadline||On-time Deadline||Links|
|P1-bonus||Dropout and Adam|
|P2||MLP, phoneme recognition|
|HW2||P1||CNN as scanning MLP, backprop|
|P1-bonus||Conv2D and Pooling|
|P2||Face Recognition: Classification and Verification|
|HW3||P1||RNN: forward/backward/CTC beam search|
|P1-bonus||CTC Loss and RNN BPTT|
|P2||Connectionist Temporal Classification|
|HW4||P1||Word-Level Neural Language Models|
|P2||Attention Mechanisms and Memory Networks|
|Team Formation||Teams will consist of four students each.
*If you do not have a team after this point, you will be grouped randomly.|
|Project Proposal||Project Description Guidelines|
|Midterm Report||This report template is provided for you to detail your initial experiments|
|Final Project Video||This is the final video for the course project||Video Instructions|
|Final Project Report||This should be the final document for the course project|
Grade Breakdown: 10% - Proposal; 15% - Midterm Report; 20% - Project Video; 15% - Project Video Follow-up; 40% - Paper peer review.