Module 6: Parallel and High-Performance Computing

In this lesson, we learn industry-standard approaches to parallelizing tightly coupled calculations. Such calculations arise frequently in modeling and simulation across mathematics, the physical sciences, and engineering. The MPI (Message Passing Interface) library allows a computation to scale out across many machines at once; MPI bindings exist for popular programming languages such as C, C++, Fortran, Python, and Java. OpenMP is a programming model that makes it convenient to convert a sequential program into a shared-memory parallel program, and it is available in C, C++, and Fortran. Both MPI and OpenMP are explicit parallel programming approaches, in which the programmer must define the data distribution, the work sharing, and the coordination among the workers. MPI and OpenMP are especially useful in computationally intensive simulations, where code performance and efficient interprocess communication are essential for completing the calculations in a timely manner.

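To give a rough feel for the two programming models, here are two minimal sketches in C. They are illustrations only, not part of the lesson materials; the compiler wrapper (mpicc) and launcher (mpirun) shown in the comments are common defaults, and your cluster may use different commands. The first sketch is an MPI program in which every process reports its rank (its ID within the group of cooperating processes):

```c
/* hello_mpi.c -- minimal MPI sketch (illustration only).
 * Compile:  mpicc hello_mpi.c -o hello_mpi
 * Run:      mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime         */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID (rank)      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes     */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down the MPI runtime     */
    return 0;
}
```

The second sketch uses OpenMP: a single pragma directive shares the iterations of a loop among the threads, and the reduction clause combines each thread's partial sum into the final result.

```c
/* sum_omp.c -- minimal OpenMP sketch (illustration only).
 * Compile:  gcc -fopenmp sum_omp.c -o sum_omp
 */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Loop iterations are divided among the threads in the team;
     * the reduction clause combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 0.5 * i;
    }

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

Notice the difference in where the parallelism lives: in the MPI version, separate processes (possibly on separate machines) coordinate explicitly through the MPI library, while in the OpenMP version, threads within one process share memory and the compiler generates the coordination from the directive.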
Please check out our Parallel and High-Performance Computing lesson at the following site:

https://deapsecure.gitlab.io/deapsecure-lesson06-par/

Workshop Resources (Spring 2021):