Communication

From Mesham

Key to parallel computing is the idea of communication. There are two general communication models, shared memory and message passing. It is important to consider both models because each exhibits different advantages and disadvantages.

Shared Memory

In the shared memory model, each process shares the same memory and therefore the same data, so communication is implicit. When programming in this model, care must be taken to avoid memory conflicts. There are a number of sub-models, such as the Parallel Random Access Machine (PRAM), whose conceptual simplicity has led to its popularity.
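
As a minimal sketch of how implicit communication looks in practice (written here in C with POSIX threads, one common shared memory API, chosen purely for illustration), two threads below update the same variable; the mutex is what avoids the memory conflict:

    #include <pthread.h>
    #include <stdio.h>

    /* Shared state: both threads read and write this variable directly,
       so communication is implicit.  The mutex prevents a memory conflict
       (a lost update) when the two increments interleave. */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);
            counter++;                    /* a data race without the lock */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 */
        return 0;
    }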

PRAM

The figure below illustrates how a PRAM would look, with each processor sharing the same memory and, by extension, the program to execute. However, a pure PRAM is impossible to build with a large number of processors due to hardware constraints, so variations on this model are required in practice.

A Parallel Random Access Machine

Incidentally, a PRAM simulator and a very simple programming language for it can be downloaded here (PRAM Simulator) and here (simple language). The simulator, written in Java, implements a parallel version of the MIPS architecture. The simple language for it (APL) is cross-compiled using GNU's cross assembler.

BSP

Bulk Synchronous Parallelism (BSP) is a parallel programming model that abstracts away from low-level program structure in favour of supersteps. A superstep consists of a set of independent local computations, followed by a global communication phase and a barrier synchronisation. One of the major advantages of BSP is that, with just four parameters, it is possible to predict the runtime cost of a parallel program, and the model is considered to offer a very convenient view of synchronisation. However, barrier synchronisation does have an associated cost: the performance of barriers on distributed-memory machines is predictable, although not good. In exchange for this performance hit, BSP removes any worry of deadlock or livelock, and therefore the need for detection tools and their additional cost. The benefit of BSP is that it imposes a clearly structured communication model on the programmer; however, extra work is required for the more complex operations, such as scattering data.
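
The superstep structure can be sketched in C with MPI (used here purely for illustration, not as part of BSP itself): each process performs its independent local computation, then takes part in a global communication phase, then waits at a barrier. In the usual BSP cost model such a superstep costs roughly w + h·g + l, where w is the longest local computation, h the largest number of words any process communicates, g the communication cost per word and l the cost of the barrier.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One BSP superstep expressed with MPI: independent local computation,
       a global communication phase, then a barrier synchronisation. */
    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Local computation phase: each process works only on its own data. */
        int local = rank * rank;

        /* Global communication phase: every process receives every result. */
        int *all = malloc(size * sizeof(int));
        MPI_Allgather(&local, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

        /* Barrier synchronisation ends the superstep; the next superstep
           can now safely use the values exchanged above. */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("process 0 sees %d values after the superstep\n", size);

        free(all);
        MPI_Finalize();
        return 0;
    }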

Logic of Global Synchrony

Another model following the shared memory approach is the Logic of Global Synchrony (LOGS). LOGS describes a program as a number of behaviours, each consisting of an initial state, a final state and a sequence of intermediate states. The intermediate global states are made explicit, although the mechanics of communication and synchronisation are abstracted away.

Advantages

  • Relatively Simple
  • Convenient

Disadvantages

  • Poor Performance
  • Not Scalable

Message Passing

Message passing is a paradigm used widely on certain classes of parallel machine, especially those with distributed memory. In this model, processors are entirely distinct from each other, connected only by the ability to pass messages between them. Unlike the shared memory model, communication in message passing is explicit. The figure below illustrates a typical message passing parallel system, with each processor equipped with its own services such as memory and I/O. Additionally, each processor has a separate copy of the program to execute, which has the advantage that the program can be tailored to specific processors for efficiency. A major benefit of this model is that processors can be added or removed on the fly, which is especially important in large, complex parallel systems.

Message Passing Communication Architecture
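
To make the explicit nature of the communication concrete, the following C sketch uses MPI (a widely used message passing library, chosen here only for illustration) to pass a single value from process 0 to process 1; it assumes the program is launched with at least two processes:

    #include <mpi.h>
    #include <stdio.h>

    /* Explicit communication: a message is passed from process 0 to
       process 1; no memory is shared between the processes.
       Run with at least two processes, e.g. mpirun -np 2 ./a.out */
    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }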

Advantages

  • Good Performance
  • Scalable

Disadvantages

  • Difficult to program and maintain