Parallel Computing

   

Tuesday 7 March

 
09:00 - 09:55 Lecture 1

Parallel Computing

 

Marek Biskup

The objective of this lecture is to present the principles of Parallel Computing and the key underlying techniques, with particular emphasis on the relationship between hardware architecture and the corresponding software techniques.

 

The lecture targets computer scientists interested in an overview of Parallel Computing, as well as more experienced programmers.

 

No specific knowledge of parallel computing is required, but familiarity with parallel programming will help in understanding some of the techniques presented in the second part of the lecture.

 

Supercomputers
- Motivation
- Users
- Architectures
- Example: BlueGene/L
- Parallel Desktop Computers: multicore CPUs

Parallel Application Examples
- HEP Data Analysis

Designing a Parallel Application
- Methodology
- Example: Successive Overrelaxation
- Performance Metrics (see the note below)
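
As background for the Performance Metrics item above, the standard definitions are assumed here (the lecture's own notation is not given in this outline): speedup S(p) = T(1) / T(p), where T(p) is the run time on p processors, and efficiency E(p) = S(p) / p. Amdahl's law bounds the achievable speedup when a fraction f of the work is inherently serial: S(p) <= 1 / (f + (1 - f) / p); for example, f = 0.1 caps the speedup at 10 regardless of how many processors are used.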

Communication within a Parallel Application
- Shared Memory
- Message Passing
- Synchronous and Asynchronous Communication
- Transposition Table-Driven Scheduling

Sources of Errors
- Deadlock
- Race Condition (see the sketch below)
- Message Ordering
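
To make the Race Condition item above concrete, here is a minimal C sketch (assuming POSIX threads; the counter, iteration count, and function names are illustrative, not taken from the lecture). Two threads increment a shared counter without holding a lock, so concurrent read-modify-write updates can be lost; uncommenting the mutex calls removes the race.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                              /* shared state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* pthread_mutex_lock(&lock);      uncomment to remove the race */
        counter++;                         /* read-modify-write: not atomic */
        /* pthread_mutex_unlock(&lock); */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Typically prints less than 2000000 because some increments are lost. */
    printf("counter = %ld\n", counter);
    return 0;
}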

Higher Level Communication
- MPI
- RPC
- Java RMI
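
As a pointer to what the MPI item above covers, here is a minimal C sketch of blocking point-to-point message passing (assuming an MPI implementation such as Open MPI or MPICH, compiled with mpicc and started with at least two processes, e.g. mpirun -np 2): rank 0 sends one integer to rank 1.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                          /* illustrative payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}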

Parallel Application Example: 15-Puzzle Solver
- Static Job Allocation
- Dynamic Load Balancing (see the sketch below)
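
To illustrate the contrast between the two items above, here is a minimal C sketch of dynamic load balancing with a shared work queue (assuming POSIX threads; the task count, worker count, and the sleep standing in for real work are illustrative). Each worker claims the next unprocessed task as soon as it is free, so faster workers automatically take on more tasks, whereas a static allocation fixes the split in advance.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTASKS   16
#define NWORKERS 4

static int next_task = 0;                 /* index of the next unclaimed task */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for real work, e.g. searching one branch of the puzzle tree. */
static void process(int task) { usleep(1000 * (task % 5 + 1)); }

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int task = (next_task < NTASKS) ? next_task++ : -1;  /* claim a task */
        pthread_mutex_unlock(&qlock);
        if (task < 0)
            break;                        /* no tasks left: this worker stops */
        process(task);
        printf("worker %ld finished task %d\n", id, task);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}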