"

Chapter 3: Parallel and Distributed Programming

Introduction

This chapter introduces the fundamental concepts of parallel computing. In the first section, the reader will become familiar with the anatomy of a High-Performance Computing (HPC) cluster, gaining a general understanding of the various components of a cluster and how they interact with one another. These ideas will be illustrated with a workshop analogy. Before getting into parallel programming, the various types of computer memory will be discussed; understanding the characteristics of each type of memory is essential for designing and writing effective parallel code. Thereafter, the two primary parallel design patterns, data parallelism and task parallelism, will be introduced. To help the reader build an intuitive understanding of these design patterns, the workshop analogy will be revisited and extended. Once the two main paradigms of parallel programming have been introduced, the hardware bottlenecks and overheads that can degrade code performance will be discussed. Understanding these hurdles will help the reader make informed decisions about where and when to apply each design pattern. Finally, the theoretical limits on parallel performance improvements will be discussed. A series of examples and exercises at the end of the chapter will help the reader reinforce and retain what has been covered. This chapter includes the following sections:

- The Anatomy of an HPC Cluster
- Types of Computer Memory
- Parallel Design Patterns: Data Parallelism and Task Parallelism
- Hardware Bottlenecks and Overheads
- Theoretical Limits of Parallel Performance
- Examples and Exercises
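Although data parallelism and task parallelism are covered in detail later in the chapter, a minimal sketch can give a first flavour of the difference between them. The example below uses Python's standard multiprocessing module; the function names square and count_words are illustrative placeholders chosen for this sketch, not examples from the chapter itself.

```python
# Illustrative sketch: data parallelism vs. task parallelism.
from multiprocessing import Pool

def square(x):
    # One operation that will be applied to many data items.
    return x * x

def count_words(text):
    # A different, unrelated operation that can run at the same time.
    return len(text.split())

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Data parallelism: the SAME function is applied to many
        # inputs, with the inputs divided among the worker processes.
        squares = pool.map(square, range(10))

        # Task parallelism: DIFFERENT functions are submitted as
        # independent tasks that may execute concurrently.
        job_a = pool.apply_async(square, (42,))
        job_b = pool.apply_async(count_words, ("parallel programs share work",))

        print(squares)                    # results of the data-parallel map
        print(job_a.get(), job_b.get())   # results of the independent tasks
```

In this sketch, pool.map splits one workload across workers (data parallelism), while the two apply_async calls launch distinct kinds of work side by side (task parallelism); the chapter will revisit this distinction with the workshop analogy.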

License


Introduction to Advanced Research Computing using Digital Research Alliance of Canada Resources Copyright © by Jazmin Romero; Roger Selzler; Nicholi Shiell; Ryan Taylor; and Andrew Schoenrock is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.