11-485/785 Introduction to Deep Learning
Fall 2018

“Deep Learning” systems, typified by deep neural networks, are increasingly taking over all AI tasks, ranging from language understanding, speech and image recognition, and machine translation to planning, game playing, and even autonomous driving. As a result, expertise in deep learning is fast changing from an esoteric desirable to a mandatory prerequisite in many advanced academic settings, and a large advantage in the industrial job market.

In this course we will learn about the basics of deep neural networks and their applications to various AI tasks. By the end of the course, students are expected to have significant familiarity with the subject and to be able to apply deep learning to a variety of tasks. They will also be positioned to understand much of the current literature on the topic and to extend their knowledge through further study.

Course description from a student's point of view

The course is well rounded in terms of concepts. It helps us understand the fundamentals of Deep Learning. The course starts off gradually with MLPs and progresses to more complicated concepts such as attention and sequence-to-sequence models. We get complete hands-on experience with PyTorch, which is very important for implementing Deep Learning models. As a student, you will learn the tools required for building Deep Learning models. The homeworks usually have two components: Autolab and Kaggle. The Kaggle component allows us to explore multiple architectures and understand how to fine-tune and continuously improve models. The tasks for the homeworks were similar, and it was interesting to learn how the same task can be solved using multiple Deep Learning approaches. Overall, at the end of this course you will be confident enough to build and tune Deep Learning models.

What students say about the previous edition of the course

Instructor: Bhiksha Raj

TAs:

Lecture: Monday and Wednesday, 9:00am-10:20am

Location: Gates-Hillman Complex GHC 4102

Recitation: Friday, 9:00am-10:20am, Location: TBD

Office hours:

Prerequisites

  1. We will be using one of several toolkits (the primary toolkit for recitations/instruction is PyTorch). These toolkits are largely programmed in Python, so you will need to be able to program in at least one of these languages. Alternatively, you will be responsible for finding and learning a toolkit that requires programming in a language you are comfortable with. A minimal sketch of the kind of programming involved follows this list.
  2. You will need familiarity with basic calculus (differentiation, chain rule), linear algebra, and basic probability.
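To give a concrete sense of the programming involved, here is a minimal sketch of defining a small network and taking one training step in PyTorch. The PyTorch calls used (nn.Sequential, nn.CrossEntropyLoss, torch.optim.SGD) are standard library APIs; the toy network and the random batch are purely illustrative and not part of any assignment.

    import torch
    import torch.nn as nn

    # A toy multi-layer perceptron: 784 inputs -> 64 hidden units -> 10 outputs
    model = nn.Sequential(
        nn.Linear(784, 64),
        nn.ReLU(),
        nn.Linear(64, 10),
    )

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # One gradient-descent step on a random stand-in batch
    x = torch.randn(32, 784)          # 32 input vectors
    y = torch.randint(0, 10, (32,))   # 32 class labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # autograd applies the chain rule (see prerequisite 2)
    optimizer.step()  # gradient-descent update of the weights

If you can read and modify code at this level, you are prepared for the programming side of the course.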

Units

This course is worth 12 units.

Course Work

Grading

Grading will be based on weekly quizzes, homework assignments, and a final project.

There will be five assignments in all; their release and due dates are listed in the tentative schedule below.

  • Quizzes: 13 quizzes (the bottom 3 quiz scores will be dropped); maximum contribution to grade: 24%
  • Assignments: 5 assignments; maximum contribution to grade: 41%
  • Project: 1 project; maximum contribution to grade: 35%
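For concreteness, the overall score under this weighting is a weighted sum of the three component scores. The sketch below assumes each component is graded on a 0-100 scale, which is an illustrative assumption rather than official policy.

    # Hypothetical component scores on an assumed 0-100 scale
    quiz_avg, assignment_avg, project = 85.0, 90.0, 88.0

    # Weighted sum using the contributions listed above
    final_score = 0.24 * quiz_avg + 0.41 * assignment_avg + 0.35 * project
    print(round(final_score, 1))  # 88.1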

Late Policy

For each day an assignment is late, the maximum grade you are eligible for drops. For example, if a homework is due on the 22nd and you submit it on the 23rd, the best grade you can receive for that homework is a B. The maximum grade continues to drop with each additional day.

Books

The course will not follow a specific book, but will draw from a number of sources. We list relevant books at the end of this page. We will also put up links to relevant reading material for each class. Students are expected to familiarize themselves with the material before the class. The readings will sometimes be arcane and difficult to understand; if so, do not worry, we will present simpler explanations in class.

Discussion board: Piazza

We will use Piazza for discussions. Here is the link. Please sign up.

You can also find a nice catalog of models that are current in the literature here. We expect that, by the end of the course, you will be in a position to interpret, if not fully understand, many of the architectures on the wiki and in the catalog.

Kaggle

Kaggle is a popular data science platform where visitors compete to produce the best model for learning or analyzing a data set.

For assignments you will be submitting your evaluation results to a Kaggle leaderboard.
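In practice, a leaderboard submission is usually a CSV file of per-example predictions uploaded to Kaggle. The sketch below assumes a simple id/label format; the column names and values are hypothetical, and each assignment will specify its own required format.

    import csv

    # Hypothetical predictions: (example id, predicted class label)
    predictions = [(0, 3), (1, 7), (2, 1)]

    # Write them in the assumed id/label format for the leaderboard
    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "label"])  # header row; format is assignment-specific
        writer.writerows(predictions)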

Academic Integrity

You are expected to comply with the University Policy on Academic Integrity and Plagiarism.
  • You are allowed to discuss and work with other students on homework assignments.
  • You may share ideas, but not code; you must submit your own code.
Your course instructor reserves the right to determine an appropriate penalty for any violation of academic integrity that occurs. Violations of the university policy can result in severe penalties, including failing this course and possible expulsion from Carnegie Mellon University. If you have any questions about this policy or any work you are doing in the course, please feel free to contact your instructor for help.

Tentative Schedule

Lecture Start date Topics Lecture notes/Slides Additional readings, if any Quizzes/Assignments
1 August 29
  • Introduction to deep learning
  • Course logistics
  • History and cognitive basis of neural computation.
  • The perceptron / multi-layer perceptron
Quiz 1
2 August 31
  • The neural net as a universal approximator
3 September 3
  • Training a neural network
  • Perceptron learning rule
  • Empirical Risk Minimization
  • Optimization by gradient descent
Assignment 1
Quiz 2
4 September 5
  • Backpropagation
  • Calculus of backpropagation
5 September 10 Quiz 3
6 September 12
  • Stochastic gradient descent
  • Acceleration
  • Overfitting and regularization
  • Tricks of the trade:
    • Choosing a divergence (loss) function
    • Batch normalization
    • Dropout
7 September 17 TBA Quiz 4
8 September 19
  • Optimization continued
9 September 24
  • Convolutional Neural Networks (CNNs)
  • Weights as templates
  • Translation invariance
  • Training with shared parameters
  • Arriving at the convolutional model
Quiz 5
10 September 26
  • Models of vision
  • Neocognitron
  • Mathematical details of CNNs
  • AlexNet, Inception, VGG
11 October 1
  • Recurrent Neural Networks (RNNs)
  • Modeling series
  • Backpropagation through time
  • Bidirectional RNNs
Quiz 6
12 October 3
  • Stability
  • Exploding/vanishing gradients
  • Long Short-Term Memory Units (LSTMs) and variants
  • ResNets
13 October 8
  • Loss functions for recurrent networks
  • Sequence prediction
Assignment 2 due
Assignment 3
Quiz 7
14 October 10
  • Sequence To Sequence Methods
  • Connectionist Temporal Classification (CTC)
15 October 15
  • What do networks represent
  • Autoencoders and dimensionality reduction
  • Learning representations
Quiz 8
16 October 17
  • Sequence-to-sequence models, Attention models, examples from speech and language
17 October 22
  • Variational Autoencoders (VAEs)
18 October 24
  • Generative Adversarial Networks (GANs) Part 1
Assignment 3 due
Quiz 9
Assignments 4 and 5
19 October 31
  • Generative Adversarial Networks (GANs) Part 2
20 November 5
  • TBA
Quiz 10
21 November 7
  • Hopfield Networks
  • Energy functions
22 November 12
  • Training Hopfield Networks
  • Stochastic Hopfield Networks
Quiz 11
23 November 14
  • Restricted Boltzmann Machines
  • Deep Boltzmann Machines
24 November 19
  • Reinforcement Learning 1
Quiz 12
November 21
  • Thanksgiving Break - No Classes
25 November 26
  • Reinforcement Learning 2
26 November 28
  • Reinforcement Learning 3
Assignments 4 and 5 due
27 December 3
  • Q Learning
  • Deep Q Learning
28 December 5
  • Newer models and trends
  • Review

Tentative Schedule of Recitations (Note: dates may shift)

Recitation Start date Topics Lecture notes/Slides
1 August 27 Amazon Web Services (AWS)
2 September 7 Your first Deep Learning Code
3 September 14 Efficient Deep Learning/Optimization Methods
4 September 21 Convolutional Neural Networks
5 September 28 Debugging and Visualization
6 October 5 Basics of Recurrent Neural Networks
7 October 12 Recurrent networks 2: Loss functions, CTC
8 October 19 Attention
9 October 26 Research in Deep Learning
10 November 2 Variational autoencoders
11 November 9 GANs
12 November 16 Reinforcement Learning
13 November 30 Hopfield Nets, Boltzmann machines, RBMs

Documentation and Tools

Textbooks

  • Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Online book, 2017.
  • Neural Networks and Deep Learning, by Michael Nielsen. Online book, 2016.
  • Deep Learning with Python, by J. Brownlee.
  • Parallel Distributed Processing, by Rumelhart and McClelland. Out of print, 1986.