Parallel, distributed, and grid computing
Editors | B. Buchberger, M. Affenzeller, A. Ferscha, M. Haller, T. Jebelean, E.P. Klement, P. Paule, G. Pomberger, W. Schreiner, R. Stubenrauch, R. Wagner, G. Weiß, W. Windsteiger |
Title | Parallel, distributed, and grid computing |
Type | in book |
Publisher | Springer |
Chapter | Parallel, distributed, and grid computing |
Edition | 1st |
ISBN | 978-3-642-02126-8 |
Month | June |
Year | 2009 |
Pages | 333-378 |
Abstract | The core goal of parallel computing is to speed up computations by executing independent computational tasks concurrently (“in parallel”) on multiple units in a processor, on multiple processors in a computer, or on multiple networked computers, which may even be spread across large geographical scales (distributed and grid computing); it is the dominant principle behind “supercomputing” and “high-performance computing”. For several decades, the density of transistors on a computer chip has doubled every 18–24 months (“Moore’s Law”); until recently, this rate could be translated directly into a corresponding increase of a processor’s clock frequency and thus into an automatic performance gain for sequential programs. However, since a processor’s power consumption also increases with its clock frequency, this strategy of “frequency scaling” ultimately became unsustainable: since 2004, clock frequencies have remained essentially stable, and additional transistors have been used primarily to build multiple processors on a single chip (multi-core processors). Today, therefore, every kind of software (not only “scientific” software) must be written in a parallel style to benefit from newer computer hardware. |
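As an illustration of the principle described in the abstract (a minimal sketch, not taken from the chapter itself), the following C program splits an independent computation, summing a large array, across several POSIX threads so that a multi-core processor can execute the slices concurrently; the array size, thread count, and slicing scheme are illustrative assumptions.

/* Illustrative sketch (not from the chapter): parallel array sum with
   POSIX threads. Each thread works on a disjoint slice, so the workers
   run concurrently without synchronizing; the main thread combines the
   partial results afterwards. Compile with: gcc -pthread sum.c */
#include <pthread.h>
#include <stdio.h>

#define N_ELEMENTS 10000000   /* illustrative problem size */
#define N_THREADS  4          /* e.g. one thread per core */

static double data[N_ELEMENTS];

typedef struct {
    size_t begin, end;        /* half-open index range [begin, end) */
    double partial_sum;       /* result written by the worker thread */
} Task;

/* Worker: sum one slice of the array; slices never overlap. */
static void *sum_slice(void *arg) {
    Task *t = arg;
    double s = 0.0;
    for (size_t i = t->begin; i < t->end; i++)
        s += data[i];
    t->partial_sum = s;
    return NULL;
}

int main(void) {
    for (size_t i = 0; i < N_ELEMENTS; i++)
        data[i] = 1.0;

    pthread_t threads[N_THREADS];
    Task tasks[N_THREADS];
    size_t chunk = N_ELEMENTS / N_THREADS;

    /* Fork: start one worker per slice. */
    for (int k = 0; k < N_THREADS; k++) {
        tasks[k].begin = k * chunk;
        tasks[k].end = (k == N_THREADS - 1) ? N_ELEMENTS : (k + 1) * chunk;
        pthread_create(&threads[k], NULL, sum_slice, &tasks[k]);
    }

    /* Join: the sequential part waits for all workers and reduces. */
    double total = 0.0;
    for (int k = 0; k < N_THREADS; k++) {
        pthread_join(threads[k], NULL);
        total += tasks[k].partial_sum;
    }
    printf("sum = %f\n", total);
    return 0;
}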