Assignment | Deadline | Description | Links
---|---|---|---
HW4P1 | Regular: April 28th, 11:59 PM EST | Language Modelling using LSTMs | Autolab, Writeup, Handout (*.tar)
HW4P2 | Early (Bonus): April 16th, 11:59 PM EST; Regular: April 28th, 11:59 PM EST | Attention-based Speech Recognition | Kaggle, Writeup
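For a sense of what HW4P1's language-modelling task involves, here is a minimal, hedged sketch of an LSTM language model in PyTorch. It is purely illustrative and is not the handout's starter code; the class, variable, and hyperparameter names (`LSTMLanguageModel`, `embed_dim`, `hidden_dim`, and so on) are invented for this example.

```python
# Illustrative sketch only -- not the HW4P1 handout template.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512, num_layers=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, hidden=None):
        # tokens: (batch, seq_len) integer ids
        embeddings = self.embedding(tokens)              # (batch, seq_len, embed_dim)
        outputs, hidden = self.lstm(embeddings, hidden)  # (batch, seq_len, hidden_dim)
        logits = self.classifier(outputs)                # (batch, seq_len, vocab_size)
        return logits, hidden

# Next-word prediction: shift targets by one position and use cross-entropy over the vocabulary.
model = LSTMLanguageModel()
x = torch.randint(0, 10000, (4, 20))          # fake batch of token ids
logits, _ = model(x[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000), x[:, 1:].reshape(-1))
```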
Project Gallery
Here's an example of a successful project from Fall 2020. The team developed an AI Limerick generator and compiled a book from the AI Poet's creations.
Project Report, Project Video, Book (Amazon)
This piece is performed by the Chinese Music Institute at Peking University (PKU) together with PKU's Chinese orchestra. It is an adaptation of Beethoven: Serenade in D major, Op. 25 - 1. Entrata (Allegro), for Chinese transverse flute (Dizi), clarinet and flute.
In the event that the course is moved online due to COVID-19, we will continue to deliver lectures via Zoom. In the event that an instructor is unable to deliver a lecture in person, we will broadcast that lecture over Zoom or, in extreme situations, expect you to view pre-recorded lectures from prior semesters. You will be notified through Piazza should any of these eventualities arise.
“Deep Learning” systems, typified by deep neural networks, are increasingly taking over all AI tasks, ranging from language understanding, and speech and image recognition, to machine translation, planning, and even game playing and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric desirable to a mandatory prerequisite in many advanced academic settings, and a large advantage in the industrial job market.
In this course we will learn about the basics of deep neural networks, and their applications to various AI tasks. By the end of the course, it is expected that students will have significant familiarity with the subject, and be able to apply Deep Learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and extend their knowledge through further study.
If you are only interested in the lectures, you can watch them on the YouTube channel.
The course is well rounded in terms of concepts. It helps us understand the fundamentals of Deep Learning. The course starts off gradually with MLPs and progresses into more complicated concepts such as attention and sequence-to-sequence models. We get complete hands-on experience with PyTorch, which is essential for implementing Deep Learning models. As a student, you will learn the tools required for building Deep Learning models. The homeworks usually have two components: Autolab and Kaggle. The Kaggle components allow us to explore multiple architectures and understand how to fine-tune and continuously improve models. The tasks for the homeworks were similar, and it was interesting to learn how the same task can be solved using multiple Deep Learning approaches. Overall, by the end of this course you will be confident enough to build and tune Deep Learning models.
Courses 11-785 and 11-685 are equivalent 12-unit graduate courses, and have a final project and HW 5 respectively. Course 11-485 is the undergraduate version worth 9 units, the only difference being that there is no final project or HW 5.
Instructors:
TAs:
Lecture: Tuesday and Thursday, 11:50 a.m. - 1:10 p.m.
Recitation: Friday, 11:50 a.m. - 1:10 p.m.
Office hours: We will be using OHQueue for Zoom office hours; the others will be in person. The OH schedule is given below.
Day | Time (Eastern Time) | TA | Zoom / In-Person Venue
---|---|---|---
Monday | 10:00 - 11:00 am | Soumya Empran | Zoom
Monday | 1:00 - 2:00 pm | Urvil Kenia | Wean 3110
Monday | 3:00 - 4:00 pm | Fuyu Tang | GHC 6708
Monday | 6:00 - 7:00 pm | Amelia Kuang | GHC 6708
Monday | 6:30 - 7:30 pm | Diksha Agarwal | GHC 6708
Monday | 7:00 - 9:00 pm | Rucha Khopkar | Zoom
Monday | 7:00 - 9:00 pm | Lavanya Gupta | Zoom
Tuesday | 2:00 - 3:00 pm | Manasi Purohit | Wean 3110
Tuesday | 3:00 - 4:00 pm | Ameya Mahabaleshwarkar | Zoom
Tuesday | 3:00 - 4:00 pm | Diksha Agarwal | Zoom
Wednesday | 8:00 - 9:00 am | Germann Atakpa | Zoom / Room B209 (Kigali)
Wednesday | 10:00 - 11:00 am | Soumya Empran | Zoom
Wednesday | 2:00 - 3:00 pm | Urvil Kenia | Wean 3110
Wednesday | 2:00 - 3:00 pm | Bradley Warren | Wean 3110
Wednesday | 3:00 - 4:00 pm | Fuyu Tang | Wean 3110
Wednesday | 6:00 - 8:00 pm | Zhe Chen | Wean 3110
Wednesday | 6:35 - 7:35 pm | Ruoyu Hua | Zoom / Room 208 (SV)
Thursday | 4:00 - 5:00 pm | Shreyas Piplani | Zoom
Thursday | 4:00 - 5:00 pm | Wenwen Ouyang | Zoom
Thursday | 5:00 - 7:00 pm | David Park | Zoom
Thursday | 6:30 - 8:30 pm | Chaoran Zhang | Wean 3110
Friday | 8:00 - 9:00 am | Germann Atakpa | Zoom / Room B209 (Kigali)
Friday | 1:00 - 3:00 pm | Ameya Mahabaleshwarkar | Zoom
Friday | 1:00 - 3:00 pm | Jeff Moore | Wean 3110
Friday | 2:00 - 3:00 pm | Bradley Warren | Wean 3110
Friday | 3:00 - 5:00 pm | Aparajith Srinivasan | Wean 3110
Saturday | 8:00 - 10:00 am | Iffanice Houndayi | Zoom / Room B209 (Kigali)
Saturday | 1:00 - 2:00 pm | Amelia Kuang | Zoom
Saturday | 1:00 - 2:00 pm | Ruoyu Hua | Zoom
Saturday | 2:00 - 3:00 pm | Ameya Mahabaleshwarkar | Zoom
Saturday | 2:00 - 3:00 pm | Shreyas Piplani | Zoom
Saturday | 3:00 - 4:00 pm | Rucha Khopkar | Zoom
Sunday | 9:00 - 11:00 am | Adebayo Oshingbesan | Zoom
Sunday | 9:00 - 11:00 am | John Jeong | Zoom
Sunday | 11:00 am - 12:00 pm | Iffanice Houndayi | Zoom / Room B209 (Kigali)
Sunday | 3:00 - 5:00 pm | Roshan Ram | Zoom
Sunday | 5:00 - 6:00 pm | Manasi Purohit | Zoom
Policy | Description
---|---
Score Assignment | Grading will be based on weekly quizzes (24%), homeworks (50%), and a course project (25%). Note that 1% of your grade is assigned to attendance.
Quizzes | There will be weekly quizzes.
Assignments | There will be five assignments in all. Assignments will include Autolab components, where you must complete designated tasks, and a Kaggle component, where you compete with your colleagues.
Project |
Attendance |
Final Grade | The end-of-term grade is curved. Your overall grade will depend on your performance relative to your classmates.
Pass/Fail | Students registered for pass/fail must complete all quizzes, homeworks, and, if they are in the graduate course, the project. A grade equivalent to B- is required to pass the course.
Auditing | Auditors are not required to complete the course project, but must complete all quizzes and homeworks. We encourage doing a course project regardless.
This semester we will be implementing study groups. It is highly recommended that you join a study group; see the forms on the bulletin.
Piazza is what we use for discussions. You should be automatically signed up if you're enrolled at the start of the semester. If not, please sign up here. Also, please follow the Piazza Etiquette when you use Piazza.
AutoLab is what we use to test your understanding of low-level concepts, such as engineering your own libraries, implementing important algorithms, and developing optimization methods from scratch.
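To give a flavor of what "developing optimization methods from scratch" can look like, below is a minimal, hedged NumPy sketch of SGD with momentum. It is an illustrative example only, not part of any Autolab handout, and every name in it (`sgd_momentum_step`, `velocity`, `lr`, `beta`) is invented for this illustration.

```python
# Illustrative only: a from-scratch SGD-with-momentum update in NumPy.
# None of these names come from the course handouts.
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr=0.01, beta=0.9):
    """One in-place update per parameter: v <- beta*v - lr*grad, w <- w + v."""
    for w, g, v in zip(params, grads, velocity):
        v *= beta
        v -= lr * g
        w += v
    return params, velocity

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = [np.array([3.0, -2.0])]
v = [np.zeros(2)]
for _ in range(100):
    grads = [2 * w[0]]
    w, v = sgd_momentum_step(w, grads, v)
print(w[0])  # approaches [0, 0]
```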
Kaggle is where we test your understanding and ability to extend neural network architectures discussed in lecture. Similar to how AutoLab shows scores, Kaggle also shows scores, so don't feel intimidated -- we're here to help. We work on hot AI topics, like speech recognition, face recognition, and neural machine translation.
CMU students who are not in the live lectures should watch the uploaded lectures at Media Services in order to get attendance credit. Links to individual videos will be posted as they are uploaded.
YouTube is where non-CMU folks can view all lecture and recitation recordings. Videos marked “Old” are not current, so please be aware of the video title.
The course will not follow a specific book, but will draw from a number of sources. We list relevant books at the end of this page. We will also put up links to relevant reading material for each class. Students are expected to familiarize themselves with the material before the class. The readings will sometimes be arcane and difficult to understand; if so, do not worry, we will present simpler explanations in class.
You can also find a nice catalog of models that are current in the literature here. We expect that you will be in a position to interpret, if not fully understand, many of the architectures on the wiki and the catalog by the end of the course.
Lecture | Date | Topics | Slides and Video | Additional Materials | Quiz
---|---|---|---|---|---
0 | - | | Slides (*.pdf), Video (YT) | | Quiz 0
1 | Tuesday Jan 18 | | Slides (*.pdf), Video (YT), Video (MT) | The New Connectionism (1988); On Alan Turing's Anticipation of Connectionism | Quiz 1
2 | Thursday Jan 20 | | Slides (*.pdf), Video (MT), Video (YT) | Shannon (1949); Boolean Circuits |
3 | Tuesday Jan 25 | | Slides (*.pdf), Video (YT), Video (MT) | Widrow and Lehr (1992); Adaline and Madaline; Convergence of perceptron algorithm; Threshold Logic; TC (Complexity); AC (Complexity) | Quiz 2
4 | Thursday Jan 27 | | Slides (*.pdf), Video (YT), Video (MT) | Werbos (1990); Rumelhart, Hinton and Williams (1986) |
5 | Tuesday Feb 1 | | Slides (*.pdf), Part 1 Video (YT), Part 2 Video (YT), Video (MT) | Werbos (1990); Rumelhart, Hinton and Williams (1986) | Quiz 3
6 | Thursday Feb 3 | | Slides (*.pdf), Video (YT), Video (MT) | Backprop fails to separate where perceptrons succeed, Brady et al. (1989); Why Momentum Really Works |
7 | Tuesday Feb 8 | | Slides (*.pdf), Video (YT) | Momentum, Polyak (1964); Nesterov (1983); Derivatives and Influences | Quiz 4
8 | Thursday Feb 10 | | Slides (*.pdf), Video 1 (YT), Video 2 (YT), Video (MT) | Derivatives and Influence Diagrams; ADAGRAD, Duchi, Hazan and Singer (2011); Adam: A Method for Stochastic Optimization, Kingma and Ba (2014) |
9 | Tuesday Feb 15 | | Slides (*.pdf), Video (YT), Video (MT) | | Quiz 5
10 | Thursday Feb 17 | | Slides (*.pdf), Video (YT), Video (MT) | |
11 | Tuesday Feb 22 | | Slides (*.pdf), Video (YT), Video (MT) | CNN Explainer | Quiz 6
12 | Thursday Feb 24 | | Slides (*.pdf), Video (YT), Video (MT) | |
13 | Tuesday March 1 | | Slides (*.pdf), Video (YT), Video (MT) | Fahlman and Lebiere (1990); How to compute a derivative, extra help for HW3P1 (*.pptx) | Quiz 7
14 | Thursday March 3 | | Slides (*.pdf), Video (YT), Video (MT) | Bidirectional Recurrent Neural Networks |
- | Tuesday March 8 | | | | Quiz 8
- | Thursday March 10 | | | |
15 | Tuesday March 15 | | Slides (*.pdf), Video (YT), Video (MT) | LSTM | Quiz 9
16 | Thursday March 17 | | Slides (*.pdf), Video (YT) | |
17 | Tuesday March 22 | | Slides (*.pdf), Video 1 (YT), Video 2 (YT), Video (MT) | Labelling Unsegmented Sequence Data with Recurrent Neural Networks | Quiz 10
18 | Thursday March 24 | | Slides (*.pdf), Video (YT) | |
19 | Tuesday March 29 | | Slides (*.pdf), Video (YT) | Attention Is All You Need; A Comprehensive Survey on Graph Neural Networks | Quiz 11
20 | Thursday March 31 | | Slides (*.pdf), Video (YT) | |
21 | Tuesday April 5 | | Slides (*.pdf), Video (YT) | Tutorial on VAEs (Doersch); Autoencoding Variational Bayes (Kingma) | Quiz 12
- | Thursday April 7 | | | |
22 | Tuesday April 12 | | Slides (*.pdf), Video (YT) | | Quiz 13
23 | Thursday April 14 | | Video (YT), Slides (*.pdf), Slides (*.pptx) | |
24 | Tuesday April 19 | | Video (YT) | | Quiz 14
25 | Tuesday April 19 | | Slides (*.pptx), Video (YT) | |
26 | Tuesday April 26 | | Slides (*.pdf), Video (YT) | | No Quiz
27 | Thursday April 28 | | Slides (*.pdf), Video (YT) | |
Recitation | Date | Topics | Materials | Videos | Instructor
---|---|---|---|---|---
0A | Jan 10, 2022 | Python & OOP Fundamentals | | | Chaoran, Roshan
0B | Jan 10, 2022 | Fundamentals of NumPy | Notebook (*.zip) | | Rucha, Shreyas
0C | Jan 10, 2022 | PyTorch Tensor Fundamentals | Notebook + Cheatsheet (*.zip) | | Lavanya, Aparajith
0D | Jan 10, 2022 | Dataset & DataLoaders | Notebook + Slides (*.zip) | Video (YT) | Fuyu, Soumya
0E | Jan 10, 2022 | Introduction to Google Colab | | Video (YT) | Ameya, Rucha
0F | Jan 10, 2022 | AWS Fundamentals | | Video (YT): 1, 2, 3, 4 | Roshan, Ameya
0G | Jan 10, 2022 | Debugging, Monitoring | | | Rukayat, Brad
0H | Jan 10, 2022 | Remote Notebooks | Notebook + Markdown (*.zip) | | Zhe, Manasi
0I | Jan 10, 2022 | What to do if you're struggling | Slides (*.pdf) | Video (YT) | Brad, Urvil
0J | Due: Jan. 23 | Data Preprocessing | | | Chaoran, Diksha
1 | Jan 21, 2022 | Your First MLP Code | Slides (*.zip) | | Lavanya, Roshan, Amelia
2 | Jan 28, 2022 | Optimizing the Networks, Ensembles | Notebook + Slides (*.zip) | Video (YT) | Rucha, Urvil
HW1 Bootcamp | Jan 26, 2022 | How to get started with HW1 | | Video (YT) | Bradley, Wenwen
3 | Feb 4, 2022 | Computing Derivatives & Autograd | Slides (*.pdf) | Video (YT) | Chaoran, John
4 | Feb 11, 2022 | Hyperparameter Tuning | Slides + Notebook (*.zip) | Video (YT) | Brad, Urvil, Ruoyu
5 | Feb 18, 2022 | CNN: Basics & Backprop | Slides + Notebook (*.zip) | Video (YT) | Aparajith, Amelia, Manasi
HW2 Bootcamp | Feb 24, 2022 | How to get started with HW2 | Slides (p2) (*.pdf), MobileNet code (*.py) | Video (YT) | David, Manasi, Soumya
6 | Feb 25, 2022 | CNNs: Classification & Verification | Slides (*.pdf) | Video (YT) | Manasi, Iffanice
7 | Mar 4, 2022 | Paper Writing Workshop | Slides (*.zip) | Video (YT) | Rukayat, David
8 | Mar 11, 2022 | RNN Basics (Pre-recorded) | Slides (*.pdf), Code (*.zip) | Video (YT) | Aparajith, Soumya, Shreyas, Lavanya
9 | Mar 18, 2022 | CTC, Beam Search | Slides (*.pdf) | Video (YT) | Ameya, Soumya
HW3 Bootcamp | Mar 24, 2022 | How to get started with HW3 | Slides (p1) (*.pdf), Slides (p2) (*.pdf) | Video (YT) | Aparajith, Diksha
10 | Mar 25, 2022 | Attention, MT, LAS | Slides (*.zip) | Video (YT) | Ameya, Lavanya
11 | Apr 1, 2022 | Transformers | Slides + Notebook (*.zip) | Video (YT) | Ameya, Zhe
HW4 Bootcamp | April 6, 2022 | How to get started with HW4 | Notebook (*.ipynb), Notebook (*.ipynb) | Video (YT) | Ameya, Zhe
12 | Apr 15, 2022 | Generative Adversarial Networks (GANs) + HW5 Bootcamp | Slides (*.ipynb) | Video (YT) | Zhe, Fuyu
13 | Apr 22, 2022 | Graph Neural Networks | | | John
14 | Pre-recorded | YOLO | | | Chaoran, Manasi
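Several of the Recitation 0 topics above (NumPy, PyTorch tensors, Dataset & DataLoaders) come together in a pattern like the one sketched below. This is a hedged, minimal illustration and not the recitation notebook itself; `ToyDataset` and its fields are invented for this example.

```python
# Minimal custom Dataset + DataLoader pattern, in the spirit of Recitation 0D.
# ToyDataset and its fields are invented for this illustration.
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    def __init__(self, num_samples=100, num_features=8):
        self.x = torch.randn(num_samples, num_features)          # fake features
        self.y = torch.randint(0, 2, (num_samples,))              # fake binary labels

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
for features, labels in loader:
    pass  # each batch: features (batch_size, 8), labels (batch_size,)
```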
Assignment | Release Date | Due Date | Related Materials / Links
---|---|---|---
HW0P1 | Winter Break | Jan 23rd, 2022, 11:59 PM EST | Autolab, Handout (see recitation 0s)
HW0P2 | Winter Break | Jan 23rd, 2022, 11:59 PM EST | Autolab, Handout (see recitation 0s)
HW1P1 | Jan 23rd, 2022 | Feb 17th, 2022, 11:59 PM EST | Autolab, Writeup (*.pdf), Handout (*.tar), Computing Derivatives (*.pdf)
HW1P2 | Jan 23rd, 2022 | Early Submission: Jan 31st, 2022, 11:59 PM EST; Final: Feb 17th, 2022, 11:59 PM EST | Kaggle, Writeup (*.pdf), Canvas Quiz
HW1 Bonus | Jan 31st, 2022 | Mar 17th, 2022, 11:59 PM EST | Autolab, Writeup (*.pdf), Handout (*.tar)
Project Proposal | - | TBA | Canvas Submission (TBA)
HW2P1 | Feb 17th, 2022 | Mar 17th, 2022, 11:59 PM EST | Autolab, Writeup, Handout (*.tar)
HW2P2 | Feb 17th, 2022 | Early Submission: Feb 25th, 2022, 11:59 PM EST; Final: Mar 17th, 2022, 11:59 PM EST | Face Classification: Kaggle, Face Verification: Kaggle, Writeup (*.pdf)
Project Midterm Report | - | TBA | Canvas Submission (TBA)
HW3P1 | Mar 17th, 2022 | Apr 7th, 2022, 11:59 PM EST | Autolab, Handout (*.zip), Writeup (*.pdf)
HW3P2 | Mar 17th, 2022 | Early Submission: Mar 26th, 2022, 11:59 PM EST; Final: Apr 7th, 2022, 11:59 PM EST | Kaggle, Writeup (*.pdf), Canvas Quiz
HW4P1 | Apr 4th, 2022 | Apr 28th, 2022, 11:59 PM EST | Writeup (*.pdf), Handout (*.zip)
HW4P2 | Apr 5th, 2022 | Early Submission (Bonus): Apr 16th, 2022, 11:59 PM EST; Final: Apr 28th, 2022, 11:59 PM EST | Writeup (*.pdf), Kaggle
Final Project Video Presentation & Preliminary Project Report | Apr 28th, 2022 | May 2nd, 2022, 5:59 PM EST | Preliminary Report: Canvas Submission (TBA)
Project Peer Reviews | May 3rd, 2022 | May 4th, 2022, 11:59 PM EST | -
Final Project Report Submission | - | May 6th, 2022, 11:59 PM EST | Canvas Submission (TBA)
This is a selection of optional textbooks you may find useful.