This lesson is still being designed and assembled (Pre-Alpha version)

DeapSECURE module 6: Parallel and High-Performance Computing

In this lesson, we learn industry-standard approaches to parallelizing tightly coupled calculations, which are frequently encountered in modeling and simulation in mathematics, the physical sciences, and engineering. The MPI (Message Passing Interface) library allows a computation to scale out across many machines at once; MPI bindings are available for popular programming languages such as C, C++, Fortran, Python, and Java. OpenMP is a programming model that allows one to conveniently convert a sequential program into a shared-memory parallel program, and is available in C, C++, and Fortran. Both MPI and OpenMP are explicitly parallel programming approaches, in which the programmer defines the data distribution, the work sharing, and the coordination among the workers. MPI and OpenMP are most useful in computationally intensive simulations, where code performance and efficient interprocess communication are essential for the timely completion of the calculations.
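
To give a flavor of what explicit message-passing code looks like before the hands-on episodes, here is a minimal mpi4py sketch (an illustrative example, not one of the lesson files; the script name and the choice of summing the ranks are assumptions for demonstration only). Each process reports its rank, then a collective reduction gathers the sum on rank 0:

```python
# hello_mpi.py -- minimal mpi4py sketch (illustrative only)
# Run with several processes, e.g.:  mpirun -np 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD      # default communicator spanning all launched processes
rank = comm.Get_rank()     # this process's ID, 0 .. size-1
size = comm.Get_size()     # total number of processes

print(f"Hello from rank {rank} of {size}")

# Collective communication: sum the ranks of all processes onto rank 0
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of all ranks = {total}")
```

Every process runs the same program; the rank is what distinguishes the work each one does, which is the basic pattern developed throughout this module.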

Prerequisites

  • Python programming skills
  • C/C++ programming skills (for the OpenMP section)

Schedule

Setup    Download files required for the lesson
00:00    1. Introduction to Parallel Programming
         • What is parallel programming, and why do we use it?
         • What are the different parallel programming models?
         • What are some parallel programming standards?
00:20    2. Serial and Parallel Programming
         • What are computing resources (hardware)?
         • What is serial computing and programming?
         • What is parallel computing and programming?
         • What memory schemas are available for parallel computers?
00:40    3. Introduction to MPI: Distributed Memory Programming in Python using mpi4py
         • What are the key elements of MPI?
         • What is MPI4PY?
01:00    4. Communicating Data with MPI
         • How do we perform point-to-point communications in MPI?
         • How do we perform collective communications in MPI?
02:00    5. Problem Decomposition
02:00    6. A Template for a Simple Parallel Program
02:00    7. Parallel Computation of Statistics of a Large Array
         • How do we convert a serial program to a parallel program using MPI?
03:00    8. Image Encryption for Privacy
         • How do we encrypt bitmap images using Paillier encryption?
04:00    9. Parallel Computation with Homomorphic Encryption
         • Hands-on: How do we use parallel computing to speed up HE computations?
04:00    10. Outro to Real-World Parallel Computing
         • What are the different parallelization approaches?
04:30    Finish

The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.