The course "Contemporary Methods for Machine Learning" introduces students to the methods and tools for building deep neural networks (NNs). It covers the activation functions that add nonlinearity to a network and the loss functions used to estimate its error. The course focuses on the types of machine learning model errors and on techniques for minimizing them, such as regularization and dropout, which help the model generalize and increase its accuracy on unseen test data.
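The ideas above can be illustrated with a minimal sketch in plain Python: two common activation functions and inverted dropout. The function names and the dropout rate are illustrative choices, not part of the course material.

```python
import math
import random

def relu(x):
    # ReLU adds nonlinearity by zeroing negative inputs.
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def dropout(activations, rate, training=True):
    # During training, each unit is dropped with probability `rate`
    # and the survivors are scaled up (inverted dropout), so the
    # expected activation is unchanged at inference time.
    if not training:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0
            for a in activations]

print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(round(sigmoid(0.0), 2))  # 0.5
```

Because dropout randomly silences units during training, the network cannot rely on any single unit, which is what improves generalization.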
Particular attention is paid to the concept of convolution and its practical application to image recognition through deep convolutional neural networks. Algorithms for detecting and classifying objects in a raster image are considered.
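A minimal sketch of the convolution operation, in plain Python for clarity: a kernel slides over an image and sums element-wise products at each position (strictly, cross-correlation, as implemented in deep learning frameworks). The edge-detector kernel and the toy image are illustrative.

```python
def conv2d(image, kernel):
    # "Valid" (no padding) 2D convolution over nested lists.
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

# A vertical-edge detector applied to an image with a sharp
# left/right intensity step.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
print(conv2d(image, kernel))  # [[0, -2, 0], [0, -2, 0]]
```

The strong response (-2) appears exactly at the intensity step, which is how learned convolutional filters come to act as feature detectors.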
The course illustrates the application of deep neural networks in natural language processing (NLP). The word vector concept is introduced, and its use in text classification is discussed. The recurrent LSTM cell and its variants are defined, and their application to standard NLP tasks such as predicting the next letter or word and generating text is considered. Special attention is paid to the encoder/decoder architecture and the Sequence-to-Sequence (Seq2Seq) model based on it, together with their application to another standard NLP task, machine translation.
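The gating mechanism that defines the LSTM cell can be sketched in a few lines of plain Python, here reduced to scalar inputs and states for readability; the tiny weight values are hypothetical, chosen only so the step can run.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # One step of a scalar LSTM cell. `w` holds, per gate, a weight
    # for the input x, a weight for the previous hidden state h, and
    # a bias: w = {'i': (wx, wh, b), 'f': ..., 'o': ..., 'g': ...}.
    def lin(gate):
        wx, wh, b = w[gate]
        return wx * x + wh * h_prev + b
    i = sigmoid(lin('i'))    # input gate: how much new info to write
    f = sigmoid(lin('f'))    # forget gate: how much old state to keep
    o = sigmoid(lin('o'))    # output gate: how much state to expose
    g = math.tanh(lin('g'))  # candidate cell value
    c = f * c_prev + i * g   # new cell state
    h = o * math.tanh(c)     # new hidden state
    return h, c

# Hypothetical tiny weights, just to run the cell over a short sequence.
w = {k: (0.5, 0.1, 0.0) for k in 'ifog'}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

In a Seq2Seq model, an encoder built from such cells compresses the source sentence into its final (h, c) state, which a decoder then unrolls to produce the target sentence.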
The content of the course provides students with the theoretical knowledge and practical skills to design, train, and apply deep neural networks using the Python programming language and the tools of Google's TensorFlow machine learning library.
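The workflow taught in the course can be sketched with TensorFlow's Keras API: define a model, compile it with a loss and optimizer, and train it. The layer sizes, optimizer, and synthetic data below are illustrative assumptions, not fixed by the course.

```python
import numpy as np
import tensorflow as tf

# A small dense network with a dropout layer for regularization.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Synthetic data, only so the training step can run end to end.
x = np.random.rand(128, 20).astype('float32')
y = np.random.randint(0, 2, size=(128,))
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(model.predict(x[:1], verbose=0).shape)  # (1, 2)
```

The same define/compile/fit pattern carries over to the convolutional and recurrent models covered earlier in the course.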