11-485/785 Introduction to Deep Learning
Fall 2018

“Deep Learning” systems, typified by deep neural networks, are increasingly taking over all AI tasks, ranging from language understanding, speech recognition, and image recognition to machine translation, planning, and even game playing and autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric specialty to a mandatory prerequisite in many advanced academic settings, and a significant advantage in the industrial job market.

In this course we will learn about the basics of deep neural networks and their applications to various AI tasks. By the end of the course, students are expected to have significant familiarity with the subject and to be able to apply Deep Learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and to extend their knowledge through further study.

Course description from a student's point of view

The course is well rounded in terms of concepts. It helps us understand the fundamentals of Deep Learning. The course starts off gradually with MLPs and progresses to more complicated concepts such as attention and sequence-to-sequence models. We get complete hands-on experience with PyTorch, which is very important for implementing Deep Learning models. As a student, you will learn the tools required for building Deep Learning models. The homeworks usually have two components, Autolab and Kaggle. The Kaggle component allows us to explore multiple architectures and understand how to fine-tune and continuously improve models. The tasks for all the homeworks were similar, and it was interesting to learn how the same task can be solved using multiple Deep Learning approaches. Overall, by the end of this course you will be confident enough to build and tune Deep Learning models.

What students say about the previous edition of the course

Instructor: Bhiksha Raj

TAs:

Lecture: Monday and Wednesday, 9:00am-10:20am

Location: Gates-Hillman Complex GHC 4102

Recitation: Friday, 9:00am-10:20am

Office hours: Schedule

Prerequisites

  1. We will be using one of several toolkits (the primary toolkit for recitations/instruction is PyTorch). These toolkits are largely programmed in Python, so you will need to be able to program in Python. Alternatively, you will be responsible for finding and learning a toolkit that requires programming in a language you are comfortable with. (A minimal code sketch appears after this list.)
  2. You will need familiarity with basic calculus (differentiation, chain rule), linear algebra, and basic probability.
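
To give a concrete sense of the programming involved, here is a minimal sketch, not course-provided code, of the kind of PyTorch model you will build: a small multi-layer perceptron with one forward and backward pass. The layer sizes, batch size, and data are arbitrary placeholders.

    # A minimal PyTorch sketch (illustrative only; sizes and data are placeholders)
    import torch
    import torch.nn as nn

    class TinyMLP(nn.Module):
        def __init__(self, in_dim=784, hidden=128, out_dim=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden),   # affine layer
                nn.ReLU(),                   # nonlinearity
                nn.Linear(hidden, out_dim),  # output scores for 10 classes
            )

        def forward(self, x):
            return self.net(x)

    model = TinyMLP()
    x = torch.randn(32, 784)               # a batch of 32 random "inputs"
    targets = torch.randint(0, 10, (32,))  # random class labels
    loss = nn.CrossEntropyLoss()(model(x), targets)
    loss.backward()                        # backpropagation fills in gradients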

Units

This course is worth 12 units.

Course Work

Grading

Grading will be based on weekly quizzes, homework assignments and a final project.

There will be five assignments in all. The Autolab and Kaggle components of each assignment will be due on the same date.

Component     Details                                Maximum contribution to grade
Quizzes       14 quizzes (bottom 2 scores dropped)   24%
Assignments   5 assignments                          41%
Project       1 final project                        35%
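
To make the weighting concrete, the sketch below, a hypothetical helper rather than official grading code, combines the components exactly as weighted above; all scores are assumed to be percentages.

    # Hypothetical illustration of the stated weights; not official grading code.
    def final_grade(quiz_scores, assignment_scores, project_score):
        kept = sorted(quiz_scores)[2:]        # drop the bottom 2 of the 14 quizzes
        quiz_avg = sum(kept) / len(kept)
        hw_avg = sum(assignment_scores) / len(assignment_scores)  # 5 assignments
        return 0.24 * quiz_avg + 0.41 * hw_avg + 0.35 * project_score

    # e.g. final_grade([80] * 14, [90] * 5, 85) -> about 85.85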

Late Policy

The late policy is as follows: for every day a submission is late, the maximum grade you are eligible to receive for it drops. For example, if a homework is due on the 22nd and you submit it on the 23rd, the best grade you can earn on that homework is a B; the ceiling continues to drop with each additional day.
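
Illustratively, the per-day ceiling could be computed as in the sketch below; the letter sequence past B (and the use of R, CMU's failing grade) is an assumption for the sketch, not a stated policy.

    # Illustrative only: assumes one letter-grade drop per day late,
    # bottoming out at R (CMU's failing grade); not a stated policy.
    def grade_ceiling(days_late):
        ceilings = ["A", "B", "C", "D", "R"]
        return ceilings[min(days_late, len(ceilings) - 1)]

    # grade_ceiling(0) -> "A"; grade_ceiling(1) -> "B" (the example above)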

Books

The course will not follow a specific book but will draw from a number of sources; we list relevant books at the end of this page. We will also put up links to relevant reading material for each class. Students are expected to familiarize themselves with the material before class. The readings will sometimes be arcane and difficult to understand; if so, do not worry: we will present simpler explanations in class.

Discussion board: Piazza

We will use Piazza for discussions. Here is the link. Please sign up.

You can also find a nice catalog of models from the current literature here. By the end of the course, we expect you to be in a position to interpret, if not fully understand, many of the architectures on the wiki and in the catalog.

Kaggle

Kaggle is a popular data science platform where participants compete to produce the best model for learning from or analyzing a data set.

For assignments you will be submitting your evaluation results to a Kaggle leaderboard.
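
Concretely, Kaggle submissions are usually CSV files; below is a minimal sketch of writing predictions in that form. The column names and ID scheme are assumptions here; the exact format is specified separately for each competition.

    # Hypothetical submission writer; check each competition's required format.
    import csv

    predictions = [(0, 3), (1, 7), (2, 1)]  # (example id, predicted label) pairs
    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "label"])    # header names vary per competition
        writer.writerows(predictions)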

Academic Integrity

You are expected to comply with the University Policy on Academic Integrity and Plagiarism.
  • You are allowed to talk with and work with other students on homework assignments.
  • You may share ideas, but not code; you must write and submit your own code.
Your course instructor reserves the right to determine an appropriate penalty based on the severity of the violation. Violations of the university policy can result in severe penalties, including failing this course and possible expulsion from Carnegie Mellon University. If you have any questions about this policy or any work you are doing in the course, please feel free to contact your instructor for help.

Some ideas for projects

Schedule of Classes

Lecture Start date Topics Lecture notes/Slides Additional readings, if any Quizzes/Assignments
1 August 29
  • Introduction to deep learning
  • Course logistics
  • History and cognitive basis of neural computation.
  • The perceptron / multi-layer perceptron
slides
video
Quiz 1
2 August 31
  • The neural net as a universal approximator
slides
HW 1 Released!
3 September 5
  • Training a neural network
  • Perceptron learning rule
  • Empirical Risk Minimization
  • Optimization by gradient descent
slides
Quiz 2
4 September 10
  • Back propagation
  • Calculus of back propagation
slides
5 September 12
  • Backprop Continued
Quiz 3
6 September 17
  • Convergence in neural networks
  • Rates of convergence
  • Loss surfaces
  • Learning rates, and optimization methods
  • RMSProp, Adagrad, Momentum
slides
7 September 19
  • Stochastic gradient descent
  • Acceleration
  • Overfitting and regularization
  • Tricks of the trade:
    • Choosing a divergence (loss) function
    • Batch normalization
    • Dropout
slides
Different perspective on batchnorm
Quiz 4
8 September 24
  • Convolutional Neural Networks (CNNs)
  • Weights as templates
  • Translation invariance
  • Training with shared parameters
  • Arriving at the convolutional model
slides
HW 1 due
HW 2 Released!
9 September 26
  • Models of vision
  • Neocognitron
  • Mathematical details of CNNs
  • AlexNet, Inception, VGG
slides
Quiz 5
10 October 1
  • Recurrent Neural Networks (RNNs)
  • Modeling series
  • Back propagation through time
  • Bidirectional RNNs
slides
11 October 3
  • Stability
  • Exploding/vanishing gradients
  • Long Short-Term Memory Units (LSTMs) and variants
  • ResNets
slides
Quiz 6
12 October 8
  • Loss functions for recurrent networks
  • Sequence prediction
slides
13 October 10
  • Sequence-to-sequence methods
  • Connectionist Temporal Classification (CTC)
slides
Quiz 7
14 October 15
  • Sequence-to-sequence models
  • Examples from speech and language
slides
HW 2 due
HW 3 Released!
15 October 17
  • Attention
slides
Quiz 8
16 October 22 Guest Lecture: Scott E. Fahlman
  • Cascade Correlation
17 October 24
  • What do networks represent
  • Autoencoders and dimensionality reduction
  • Learning representations
  • Variational Autoencoders (VAEs)
Slides VAE lecture
18 October 31 Guest Lecture: Yossi Keshet
  • Adversarial Examples
slides
HW 3 due
Quiz 9
HW 4 Released!
19 November 5
  • Generative Adversarial Networks (GANs)
slides
part 2
20 November 7
  • Hopfield Networks
  • Boltzmann Machines
slides
Quiz 10
21 November 12
  • Training Hopfield Networks
  • Stochastic Hopfield Networks
slides
22 November 14
  • Restricted Boltzmann Machines
  • Deep Boltzmann Machines
slides
Quiz 11
23 November 19
  • Reinforcement Learning 1
slides
Quiz 12
24 November 21
  • Thanksgiving Break - No Classes
25 November 26
  • Reinforcement Learning 2
HW 4 due
26 November 28 Guest Lecture: Graham Neubig
slides
Quiz 13
27 December 3
  • Reinforcement Learning 3
  • Q Learning
  • Deep Q Learning
slides
28 December 5
  • Reinforcement Learning 4
  • Review
Quiz 14

Schedule of Recitations (Note: dates may shift)

Recitation Start date Topics Lecture notes/Slides
1 August 27 Amazon Web Services (AWS) slides
video
2 September 7 Your first Deep Learning Code slides
3 September 14 Efficient Deep Learning/Optimization Methods slides
4 September 21 Debugging and Visualization slides
5 September 28 Convolutional Neural Networks slides
6 October 5 Tips for Homework 2 slides
7 October 12 Recurrent Neural Networks slides
8 October 19 Recurrent networks 2: Loss functions, CTC slides
9 October 26 Attention slides
10 November 2 Variational autoencoders slides
11 November 9 GANs
12 November 16 Hopfield Nets, Boltzmann machines, RBMs
13 November 30 Reinforcement Learning slides

Piazza TA Schedule

Monday Dhruv, Soham, Shubham, David
Tuesday Nihar, Soham, Ryan, Ipsita
Wednesday Dhruv, Nebiyou, Ahmed, Raphael
Thursday Madhura, Nebiyou, Shaden, Jiwaei, Anushree
Friday Madhura, Omar, Jiwaei, Nihar
Saturday Omar, Ipsita, Shubham, Raphael, Anushree
Sunday Ryan, David, Ahmed, Shaden

Documentation and Tools

Textbooks

Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Online book, 2017.
Neural Networks and Deep Learning, by Michael Nielsen. Online book, 2016.
Deep Learning with Python, by J. Brownlee.
Parallel Distributed Processing, by Rumelhart and McClelland. Out of print, 1986.