Parallel, distributed, and grid computing


Editors
  • B. Buchberger
  • M. Affenzeller
  • A. Ferscha
  • M. Haller
  • T. Jebelean
  • E.P. Klement
  • P. Paule
  • G. Pomberger
  • W. Schreiner
  • R. Stubenrauch
  • R. Wagner
  • G. Weiß
  • W. Windsteiger
Type: In Book
Publisher: Springer
Chapter: Parallel, distributed, and grid computing
Edition: 1st Edition
ISBN: 978-3-642-02126-8
Month: 6
Year: 2009
Pages: 333-378
Abstract

The core goal of parallel computing is to speed up computations by executing independent computational tasks concurrently (“in parallel”) on multiple units within a processor, on multiple processors in a computer, or on multiple networked computers, which may even be spread across large geographical distances (distributed and grid computing); it is the dominant principle behind “supercomputing” or “high-performance computing”. For several decades, the density of transistors on a computer chip has doubled every 18–24 months (“Moore’s Law”); until recently, this rate could be translated directly into a corresponding increase in a processor’s clock frequency and thus into an automatic performance gain for sequential programs. However, since a processor’s power consumption also increases with its clock frequency, this strategy of “frequency scaling” ultimately became unsustainable: since 2004, clock frequencies have remained essentially stable, and additional transistors have primarily been used to build multiple processors on a single chip (multi-core processors). Today, therefore, every kind of software (not only scientific software) must be written in a parallel style to profit from newer computer hardware.
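
The chapter itself develops this principle in depth; as a minimal sketch of the basic idea (not taken from the chapter), the following C program splits an independent computation, summing the slices of an array, across several POSIX threads. The names (`sum_slice`, `THREADS`) and the fixed thread count are illustrative assumptions, not part of the original text.

```c
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define THREADS 4   /* illustrative fixed thread count */

static double data[N];
static double partial[THREADS];

/* Each thread sums one independent slice of the array;
 * the slices share no data, so no synchronization is needed
 * beyond joining the threads at the end. */
static void *sum_slice(void *arg) {
    long t  = (long)arg;
    long lo = t * (N / THREADS);
    long hi = (t == THREADS - 1) ? N : lo + N / THREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* Fork: start the independent tasks concurrently. */
    pthread_t tid[THREADS];
    for (long t = 0; t < THREADS; t++)
        pthread_create(&tid[t], NULL, sum_slice, (void *)t);

    /* Join: wait for all tasks and combine their partial results. */
    double total = 0.0;
    for (long t = 0; t < THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}
```

Compiled with `cc -pthread`, the program distributes the loop iterations over the available cores; on a multi-core processor, the ideal speedup over the sequential loop is bounded by the number of threads.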