Here you can find information and materials for the EPFL course EE-559 “Deep Learning”, taught by François Fleuret. The course is an introduction to deep-learning tools and theory, with examples and exercises in the PyTorch framework.

The written exam will take place on **Saturday June 23rd, 08h15–11h15, in rooms CE3, CE4, and PO01 Polydôme**.

For the mini-project:

- Group compositions have to be specified on the course Moodle before **Friday May 4th, 23:59.**
- Reports and implementations have to be uploaded to the course Moodle before **Friday May 18th, 23:59.**
- Oral evaluations will take place **Friday May 25th morning** and **Thursday June 7th morning**, in rooms PH H3 31 and PH H3 33 on both days.

There has been an update of the files for the practicals. If you installed the VM weeks ago, or use the one on the EPFL machines in CM1 103, please download the updated files and do not use the ones already in the VM, if any.

You can contact our fantastic team of TAs: Angelos Katharopoulos, Bugra Tekin, Cijo Jose, Jean-Baptiste Cordonnier, Pierre Baqué, Suraj Srinivas, Tao Lin, Tatjana Chavdarova, Téo Stocco, Teresa Yeo, and Timur Bagautdinov.

Thanks to Adam Paszke, Alexandre Nanchen, Xavier Glorot, Andreas Steiner, Matus Telgarsky, and Diederik Kingma, for their answers or comments.

You will find here slides, handouts with two slides per page, voice-over videos, and practicals. This is still work in progress.

A Virtual Machine and a helper Python prologue for the practical sessions are available below.

Courses #1 / #2 / #3 / #4 / #5 / #6 / #7 / #8 / #9 / #10 / #11 / #12 / #13 / #14

What is deep learning, some history, what are the current applications. torch.Tensor, linear regression.

- slides / handout / video (1h04min) (part a)
- slides / handout / video (37min) (part b)
- practical / dlc_practical_1_solution.py

Empirical risk minimization, capacity, bias-variance dilemma, polynomial regression, k-means and PCA.

Linear classifiers, perceptron, linear separability and feature extraction, Multi-Layer Perceptron, gradient descent, back-propagation.

- slides / handout / video (51min) (part a)
- slides / handout / video (41min) (part b)
- practical / dlc_practical_3_solution.py

Generalized acyclic graph networks, torch.autograd, batch processing, convolutional layers and pooling, torch.nn.Module.

- slides / handout / video (51min) (part a)
- slides / handout / video (46min) (part b)
- practical / dlc_practical_4_solution.py
- dlc_practical_4_embryo.py

Cross-entropy, L1 and L2 penalty. Vanishing gradient, weight initialization, Xavier's rule, loss monitoring. torch.autograd.Function.

Theoretical advantages of depth, rectifiers, drop-out, batch normalization, residual networks, advanced weight initialization. GPUs and torch.cuda.

Deep networks for image classification (AlexNet, VGGNet), object detection (YOLO), and semantic segmentation (FCN). Data-loaders, neuro-surgery, and fine-tuning.

Visualizing filters and activations, smoothgrad, deconvolution, guided back-propagation. Optimizing samples from scratch, adversarial examples. Dilated convolutions.

Transposed convolution layers, autoencoders, variational autoencoders, non volume-preserving networks.

GAN, Wasserstein GAN, Deep Convolutional GAN, Image-to-Image translations, model persistence.

TBD

**Guest speaker:** Soumith Chintala, Facebook

Large data-sets and models, effectively parallelizing on GPUs, pythonless deployment. torch.distributed, ONNX, exporting to Caffe2.

**Guest speaker:** Andreas Steiner, Google

ML at Google, running experiments in a distributed environment, introduction to TensorFlow.

**Guest speaker:** Andreas Steiner, Google

Transforming QuickDraw data, estimator and experiment interfaces, CNN/RNN classifiers, CloudML.

- Linear algebra (vector and Euclidean spaces),
- differential calculus (Jacobian, Hessian, chain rule),
- Python,
- basics in probabilities and statistics (discrete and continuous distributions, law of large numbers, conditional probabilities, Bayes, PCA),
- basics in optimization (notion of minima, gradient descent),
- basics in algorithmics (computational costs),
- basics in signal processing (Fourier transform, wavelets).

You may have to look at the Python 3, Jupyter, and PyTorch documentation.

The final grade will be composed of 25% for each of the two mini-projects, and 50% for the written exam during the exam session.

We have a slack workspace for the course to interact with the assistants and lecturer. Please ask the assistants how to join.

Helper Python prologue for the practical sessions: dlc_practical_prologue.py

This prologue parses the command-line arguments as follows:

    usage: dummy.py [-h] [--full] [--tiny] [--force_cpu] [--seed SEED]
                    [--cifar] [--data_dir DATA_DIR]

    DLC prologue file for practical sessions.

    optional arguments:
      -h, --help           show this help message and exit
      --full               Use the full set, can take ages (default False)
      --tiny               Use a very small set for quick checks (default False)
      --force_cpu          Keep tensors on the CPU, even if cuda is available
                           (default False)
      --seed SEED          Random seed (default 0, < 0 is no seeding)
      --cifar              Use the CIFAR data-set and not MNIST (default False)
      --data_dir DATA_DIR  Where are the PyTorch data located (default
                           $PYTORCH_DATA_DIR or './data')
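For reference, an argparse setup along the following lines reproduces this interface (a sketch based on the help text above, not necessarily the prologue's actual source):

    import argparse

    # Parser matching the usage message above; option names and defaults
    # are taken from the help text
    parser = argparse.ArgumentParser(
        description = 'DLC prologue file for practical sessions.')
    parser.add_argument('--full', action = 'store_true',
                        help = 'Use the full set, can take ages')
    parser.add_argument('--tiny', action = 'store_true',
                        help = 'Use a very small set for quick checks')
    parser.add_argument('--force_cpu', action = 'store_true',
                        help = 'Keep tensors on the CPU, even if cuda is available')
    parser.add_argument('--seed', type = int, default = 0,
                        help = 'Random seed (default 0, < 0 is no seeding)')
    parser.add_argument('--cifar', action = 'store_true',
                        help = 'Use the CIFAR data-set and not MNIST')
    parser.add_argument('--data_dir', type = str, default = None,
                        help = 'Where are the PyTorch data located')
    args = parser.parse_args()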

It sets the default tensor type to torch.cuda.FloatTensor if cuda is available (and --force_cpu is not set).
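In code, this amounts to something like the following (reusing the args parsed above; the string form of set_default_tensor_type works across PyTorch versions, including the 0.3.x installed in the VM):

    import torch

    # Create new tensors on the GPU by default when cuda is usable
    if torch.cuda.is_available() and not args.force_cpu:
        torch.set_default_tensor_type('torch.cuda.FloatTensor')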

The prologue provides the function

    load_data(cifar = None, one_hot_labels = False, normalize = False, flatten = True)

which downloads the data when required, reshapes the images to 1d vectors if flatten is True, narrows the data to a small subset of samples if --full is not selected, and moves the tensors to the GPU if cuda is available (and --force_cpu is not selected).

It returns a tuple of four tensors: train_data, train_target, test_data, and test_target.

If cifar is True, the data-set used is CIFAR10; if it is False, MNIST is used; and if it is None, the command-line argument --cifar is taken into account.
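In other words, the function argument takes precedence over the command-line flag, presumably along these lines:

    # Explicit argument wins; otherwise fall back to the --cifar flag
    use_cifar = args.cifar if cifar is None else cifar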

If one_hot_labels is True, the targets are converted to a 2d torch.Tensor with as many columns as there are classes, filled with -1 everywhere except the coefficients [n, y_n], which are equal to 1.
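This encoding can be reproduced with a few tensor operations, for instance with scatter_ (a sketch, not necessarily the prologue's implementation):

    import torch

    def to_one_hot(target, nb_classes):
        # Fill with -1, then set coefficient [n, y_n] to 1 for every sample n
        one_hot = torch.zeros(target.size(0), nb_classes).fill_(-1)
        one_hot.scatter_(1, target.view(-1, 1), 1.0)
        return one_hot

    # e.g. with 10 classes, targets [3, 1] give rows with a 1 in columns 3 and 1
    print(to_one_hot(torch.LongTensor([3, 1]), 10))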

If normalize is True, the data tensors are normalized according to the mean and variance of the training data.

If flatten is True, the data tensors are flattened into 2d tensors of dimension N × D, discarding the image structure of the samples. Otherwise they are 4d tensors of dimension N × C × H × W.
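Inside load_data, these two options roughly amount to the following tensor operations (a sketch, using the tensor names of the example below; as stated above, the statistics come from the training data only):

    # Normalize both sets with the statistics of the training data
    mu, std = train_input.mean(), train_input.std()
    train_input.sub_(mu).div_(std)
    test_input.sub_(mu).div_(std)

    # Flatten the N x C x H x W images into N x D vectors
    train_input = train_input.view(train_input.size(0), -1)
    test_input = test_input.view(test_input.size(0), -1)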

    import dlc_practical_prologue as prologue

    train_input, train_target, test_input, test_target = prologue.load_data()

    print('train_input', train_input.size(), 'train_target', train_target.size())
    print('test_input', test_input.size(), 'test_target', test_target.size())

prints

    data_dir ./data
    * Using MNIST
    ** Reduce the data-set (use --full for the full thing)
    ** Use 1000 train and 1000 test samples
    train_input torch.Size([1000, 784]) train_target torch.Size([1000])
    test_input torch.Size([1000, 784]) test_target torch.Size([1000])

A Virtual Machine (VM) is software that simulates a complete computer. The one we provide here includes a Linux operating system and all the tools needed to use PyTorch from a web browser (Firefox or Chrome).

It is already installed on the machines in room CM1 103 for the exercise sessions. You can start it with Windows Start menu → All programs → VirtualBox → VB_DEEP_LEARNING.

If you want to use your own machine, first download and install Oracle's VirtualBox, then download the virtual machine OVA package (large file, ~3.6 GB) and open it in VirtualBox with File → Import Appliance.

You should now see an entry in the list of VMs. The first time it starts, it provides a menu to choose the keyboard layout you want to use (you can force the configuration later by passing forcekbd to the kernel through GRUB).

**If the VM does not start and VirtualBox complains that VT-x is not enabled, you have to activate the virtualization capabilities of your Intel CPU in the BIOS of your computer.**

The VM automatically starts a JupyterLab on port 8888 and exports that port to the host. This means that you can access the JupyterLab with a web browser on the machine running VirtualBox at http://localhost:8888/, and use Python notebooks, view files, start terminals, and edit source files. Typing !bye in a notebook or bye in a terminal will shut down the VM.

You can run a terminal and a text editor from inside the Jupyter notebook for exercises that require more than the notebook itself. Source files can be executed by running the python command in a terminal with the source file name as argument. Both can be done from the main Jupyter window with:

- New → Text File to create the source code, or selecting the file and clicking Edit to edit an existing one.
- New → Terminal to start a shell from which you can run python.

**Files saved in the VM are erased when the VM is re-installed, which happens for each session on the EPFL machines. So you should download the files you want to keep from the Jupyter notebook to your account, and re-upload them later when you need them.**

This VM also exports an ssh port to port 2022 on the host, which allows you to log in with standard ssh clients on Linux and OSX, or with applications such as PuTTY on Windows. The default login is 'dave' with password 'dummy'; the root password is the same.
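For instance, from a terminal on the host:

    ssh -p 2022 dave@localhost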

Note that computation performance will not be as good as with PyTorch installed natively on your machine (which is possible only on Linux and OSX at the moment). In particular, the VM does not take advantage of a GPU if you have one.

**Finally, please also note that this VM is configured in a
convenient but highly non-secured manner, with easy to guess
passwords, including for the root, and network-accessible
non-protected Jupyter notebooks.**

This VM is built on a Linux Debian 9.3 “stretch”, with miniconda, PyTorch 0.3.1, TensorFlow 1.4.1, MNIST, CIFAR10, and many Python utility packages installed.

The following topics will **not** appear in the written exam:

- In 1b. “PyTorch Tensors”:
  - Tensor internals.

- In 3a. “Linear classifiers, perceptron”:
  - A bit of history, the perceptron.

- In 4a. “DAG networks, autograd, convolution layers”:
  - Autograd's graph implementation, high order derivation.

- In 5. “Losses, optimization, and initialization”:
  - Weight gain for ReLU.
  - Writing new torch.autograd.Function.

- In 6. “Going deeper”:
  - The *proof* of the bound on the distance between f and g (the inequality itself should be known).
  - The expression of the derivatives for batch normalization.
  - torch.nn.DataParallel().

- In 7. “Computer vision”:
  - The loss of YOLO.
  - Details about the various networks' architectures.

Here are the two mandatory mini-projects:

- Mini-project 1: Prediction of finger movements from EEG recordings.
- Mini-project 2: Implementing from scratch a mini deep-learning framework.

For each, you have to upload a zip file with the implementation and a 3–5 page report to the course Moodle before **Friday May 18th, 23:59.** Each will count for a quarter of the final grade.

You can work individually or form groups of up to three students, which must remain unchanged for the two projects. Evaluation will be done in a 12min oral session per group.

**Exchange of code or report snippets between groups, and
materials taken “from the web” are forbidden. Also, every
student must have a clear understanding of her/his group's entire
source code and report. This will be checked during the oral
presentation.**

The pdf files and videos on this page are licensed under the Creative Commons BY-NC-SA 4.0 International License.