Introduction to Deep Learning 4010-IDL
1. Recap of Machine Learning 501.
2. Basic nomenclature in deep learning: what a neuron is (and its similarity to a biological neuron), the architecture of a feedforward neural network, activation functions, and weights.
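A minimal sketch of this computation, assuming NumPy: an artificial neuron forms a weighted sum of its inputs plus a bias, then passes the result through a nonlinear activation (a sigmoid here); the input values and weights below are purely illustrative.

    import numpy as np

    def sigmoid(z):
        # Squashes any real number into the range (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def neuron(x, w, b):
        # Weighted sum of inputs plus bias, then a nonlinear activation.
        return sigmoid(np.dot(w, x) + b)

    # Illustrative values: a neuron with three inputs.
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.4, 0.1, -0.2])
    print(neuron(x, w, b=0.1))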
3. How a neural network computes the output for a given input in a single forward pass, and how to use this network to train a model. Learn how to calculate the loss and adjust the weights using a technique called backpropagation. Different types of activation functions are also introduced.
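As a rough preview, here is a minimal NumPy sketch of one forward pass and one backpropagation update for a single sigmoid neuron with a squared-error loss; the data, sizes, and learning rate are illustrative, not part of the course material.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)            # one illustrative input example
    y = 1.0                           # its target value
    W = rng.normal(size=(1, 3))       # weights of a single output neuron
    b = np.zeros(1)
    lr = 0.1                          # learning rate (illustrative)

    # Forward pass: prediction and squared-error loss.
    z = W @ x + b
    y_hat = 1.0 / (1.0 + np.exp(-z))  # sigmoid activation
    loss = 0.5 * (y_hat - y) ** 2

    # Backpropagation: chain rule from the loss back to W and b.
    dz = (y_hat - y) * y_hat * (1.0 - y_hat)   # dL/dz
    W -= lr * np.outer(dz, x)                  # dL/dW = dz * x
    b -= lr * dz                               # dL/db = dz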
4. Techniques to improve training speed and accuracy. Identify the pros and cons of gradient descent, stochastic gradient descent, and mini-batches. With the foundational knowledge of neural networks covered in Weeks 2 through 4, learn how to build a basic neural network using Keras* with TensorFlow* as the backend.
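A minimal sketch of such a network in Keras* with TensorFlow* as the backend; the layer sizes, learning rate, and batch size are illustrative, and x_train/y_train are placeholders for a dataset.

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Plain SGD; batch_size sets the mini-batch size
    # (batch_size=1 would make this stochastic gradient descent).
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # x_train / y_train are placeholders for your data:
    # model.fit(x_train, y_train, epochs=5, batch_size=32)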
5. How can you prevent overfitting in a neural network (regularization)? In this class, learn about penalized cost functions, dropout, and early stopping, along with momentum and adaptive optimizers such as AdaGrad and RMSProp.
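A sketch of how several of these techniques look in Keras*; the penalty strength, dropout rate, and patience value are illustrative choices, not prescribed by the course.

    import tensorflow as tf

    model = tf.keras.Sequential([
        # An L2 penalty adds a weight-decay term to the cost function.
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,),
                              kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
        # Dropout randomly zeroes half of the activations during training.
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # RMSProp is one of the optimizers covered in this lesson.
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
                  loss="sparse_categorical_crossentropy")

    # Early stopping halts training when validation loss stops improving.
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
    # model.fit(x_train, y_train, validation_split=0.1, epochs=50,
    #           callbacks=[early_stop])   # x_train/y_train are placeholders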
6. Convolutional neural networks (CNN), and how they compare to the fully connected neural networks already introduced. Learn how to build a CNN by choosing the grid (kernel) size, padding, stride, depth, and pooling.
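A minimal sketch of these choices in Keras*; the filter count (depth), kernel size, stride, padding, and pooling settings below are illustrative.

    import tensorflow as tf

    model = tf.keras.Sequential([
        # 32 filters (depth), a 3x3 grid (kernel), stride 1;
        # "same" padding keeps the spatial dimensions unchanged.
        tf.keras.layers.Conv2D(32, kernel_size=(3, 3), strides=(1, 1),
                               padding="same", activation="relu",
                               input_shape=(28, 28, 1)),
        # 2x2 max pooling halves each spatial dimension.
        tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.summary()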
7. Using the LeNet-5* topology, learn how to apply all the CNN concepts from the previous lesson to the MNIST (Modified National Institute of Standards and Technology) dataset of handwritten digits. With a trained neural network, see how the primitive features learned in the first few layers can be generalized across image classification tasks, and how transfer learning helps.
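A sketch approximating the LeNet-5* topology on MNIST in Keras* (the classic design used tanh activations and average pooling; published descriptions differ in detail, so treat this as an approximation, not the course's exact model):

    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None] / 255.0   # add a channel axis, scale to [0, 1]
    x_test = x_test[..., None] / 255.0

    # Two conv/pool stages followed by dense layers, as in LeNet-5.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(6, 5, activation="tanh", padding="same",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.AveragePooling2D(2),
        tf.keras.layers.Conv2D(16, 5, activation="tanh"),
        tf.keras.layers.AveragePooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation="tanh"),
        tf.keras.layers.Dense(84, activation="tanh"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))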
8. The deep learning literature describes many image classification topologies, such as AlexNet, VGG-16 and VGG-19, Inception, and ResNet. This week, learn how these topologies are designed and the usage scenarios for each.
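Several of these topologies ship with Keras* as pretrained models, which is a quick way to inspect their design; a minimal sketch (VGG-16 here; AlexNet is not bundled with Keras):

    import tensorflow as tf

    # Load VGG-16 with ImageNet-pretrained weights; ResNet50 and
    # InceptionV3 are available the same way under tf.keras.applications.
    model = tf.keras.applications.VGG16(weights="imagenet")
    model.summary()   # 16 weight layers, roughly 138 million parameters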
9. One practical obstacle to building image classifiers is obtaining labeled training data. Explore how to make the most of the available labeled data using data augmentation, and implement data augmentation using Keras*.
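A minimal sketch of data augmentation with Keras*' ImageDataGenerator; the transformation ranges are illustrative, and x_train/y_train stand in for a labeled image dataset.

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Generate shifted, rotated, and flipped variants of each training
    # image on the fly, so the model sees more variety than the raw set.
    datagen = ImageDataGenerator(rotation_range=15,
                                 width_shift_range=0.1,
                                 height_shift_range=0.1,
                                 horizontal_flip=True)

    # x_train / y_train are placeholders, shaped (samples, h, w, channels):
    # model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)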
10. Recurrent neural networks (RNN) and their application to natural language processing (NLP).
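A minimal sketch of an RNN for a text task in Keras*; the vocabulary size, layer widths, and the binary (e.g., sentiment) output are illustrative assumptions.

    import tensorflow as tf

    model = tf.keras.Sequential([
        # Token IDs are mapped to dense vectors by the embedding layer.
        tf.keras.layers.Embedding(input_dim=10000, output_dim=32),
        # The recurrent layer reads the sequence one step at a time.
        tf.keras.layers.SimpleRNN(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g., sentiment
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")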
11. Advanced topics in developing an RNN, and how the concept of recurrence can be used to handle variable-length sequences and word ordering. Take out your notebook and pencil and work through the math of RNNs.
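As a starting point for that pencil-and-paper work, the core recurrence of a simple RNN can be written as follows (one common notational convention, with W_{xh}, W_{hh}, and W_{hy} the input-to-hidden, hidden-to-hidden, and hidden-to-output weight matrices):

    h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)
    y_t = W_{hy} h_t + b_y

The same hidden-to-hidden weights W_{hh} are reused at every time step, which is what lets one network process sequences of any length.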
12. Long short-term memory (LSTM).
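A minimal sketch, assuming the same illustrative text setup as the RNN sketch above: in Keras* the simple recurrent layer can be swapped for an LSTM, whose gating is designed to carry information across long sequences.

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=32),
        # The LSTM's input, forget, and output gates help preserve
        # information over long ranges where a plain RNN struggles.
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")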
Course type
Mode of delivery
Course coordinators
Learning outcomes
Students become familiar with machine learning (deep learning) and its basic tools and methods. They learn the fundamentals of neural networks, the principles behind how they work, and example implementations. Students gain knowledge of how to apply machine learning to selected projects.
Assessment criteria
Credit is granted on the basis of an independently prepared report on an assigned topic and completion of the exercises (independent completion of the exercises in the online course).
Literature
Intel Academy: https://software.intel.com/en-us/ai-academy/students/kits/deep-learning-501
More information
Additional information (e.g., on the registration calendar, instructors, and the location and times of classes) may be available in USOSweb: