The aim of this course is to provide a rigorous yet accessible treatment of parallel algorithms, including theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and fundamental notions of scheduling. The focus is on algorithms for distributed-memory parallel architectures in which computing elements communicate by exchanging messages.
The course also addresses the practical aspects of obtaining high performance from parallel computers. It emphasizes the connection between the computer architecture and the algorithm (program), since matching the two is what yields high performance. Different problems and their solutions are considered, taking the various computer architectures into account; special attention is paid to distributed-memory parallel computers and the construction of parallel algorithms for them. Laboratory exercises are based on developing model programs for evaluating the speedup achieved by parallel algorithms: transmission of complex messages, matrix computations, sorting of huge data sets, etc. The MPI (MPICH2) parallel process management environment is used, and the main functions of its library are applied. Appropriate models are developed to evaluate the performance of parallel algorithms, and specific applications are implemented to solve complex practical problems.
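As a sketch of the kind of performance model the laboratory exercises build, the example below estimates the speedup of a simple distributed-memory reduction (summing n numbers across p processors) using a linear communication-cost model. All parameter values and the cost formula are illustrative assumptions for the sketch, not figures from the course materials.

```python
def parallel_time(n, p, t_flop, t_startup, t_word):
    """Estimated time to sum n numbers on p processors.

    Assumed model: local computation of n/p additions, then p-1
    one-word partial results sent to a root process, where each
    message costs t_startup + t_word * (message size in words).
    """
    compute = (n / p) * t_flop                      # each processor sums n/p values
    communicate = (p - 1) * (t_startup + t_word)    # p-1 one-word messages to the root
    return compute + communicate

def speedup(n, p, t_flop, t_startup, t_word):
    """Ratio of serial time to modeled parallel time."""
    serial = n * t_flop
    return serial / parallel_time(n, p, t_flop, t_startup, t_word)

if __name__ == "__main__":
    # Illustrative machine parameters (assumed, not measured):
    # 1 ns per addition, 1 us message startup, 10 ns per word transferred.
    n, t_flop, t_startup, t_word = 10**6, 1e-9, 1e-6, 1e-8
    for p in (1, 2, 4, 8, 16):
        print(f"p={p:2d}  speedup={speedup(n, p, t_flop, t_startup, t_word):.2f}")
```

Because communication cost grows with p while the per-processor work shrinks, the modeled speedup falls increasingly short of the ideal linear speedup as p grows — exactly the effect the exercises measure against real MPI runs.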