How scientists will use "metacomputational" facilities to solve complex problems.
Highly Parallel Computations: Algorithms and Applications, edited by M.P. Bekakos, WIT Press, 2001, hardcover, 456 pages.
In today's scientific community, new areas of study are emerging that are highly technical and resource-intensive. These include:
-- Protein folding
-- Core-collapse supernova simulation
Solving these problems requires the "best" interconnection of processing elements, memory/storage, and computer backplanes, where the "best" architecture depends on the problem at hand. But resource constraints mandate that some of these "clustered" elements be geographically distributed, which requires complex interconnection systems.
Not all interconnections are equal. Bandwidth (the capacity of the interconnection), latency (the delay across the pipe), and jitter (the variability of that delay) all characterize the interconnection and determine the usefulness of the computational system for certain classes of problems. From a computational standpoint, these issues resolve down to processor coupling (full mesh, nearest neighbor, loose model, memory model), shared memory versus message passing (or a hybrid of the two), and the bandwidth of the interconnection, or "backplane speed."
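A rough back-of-the-envelope model illustrates how these parameters interact (this sketch and its numbers are my own illustration, not taken from the book): the time to move a message is a fixed per-message latency plus the time to push its bytes through the pipe.

```python
def transfer_time(message_bytes, latency_s, bandwidth_bytes_per_s):
    """Time to move one message across an interconnect:
    fixed per-message latency plus serialization time."""
    return latency_s + message_bytes / bandwidth_bytes_per_s

# Moving a 1 MB message over a tightly coupled backplane
# (10 microseconds latency, 1 GB/s) versus a wide-area link
# (50 ms latency, same 1 GB/s bandwidth):
tight = transfer_time(1_000_000, 10e-6, 1e9)   # ~1.01 ms
wide = transfer_time(1_000_000, 50e-3, 1e9)    # ~51 ms
```

For messages this small, the wide-area case is dominated almost entirely by latency rather than bandwidth, which is why latency-sensitive, tightly coupled algorithms fare poorly on geographically distributed resources.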
Different solution methods, and different ways of using data sets, suit different degrees of coupling between CPUs and storage resources. "Ab initio" solutions require tight coupling. Heuristic modeling has less stringent coupling requirements. "Embarrassingly parallel" applications distribute efficiently and tolerate latency across networked systems.
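To see why embarrassingly parallel applications tolerate latency, consider a minimal sketch (the prime-counting task and function names here are my own illustration, not from the book): the work splits into self-contained chunks that need no communication with one another, so the network is touched only at dispatch and at the final combine.

```python
def count_primes(lo, hi):
    """One self-contained work unit: needs no data from any other unit."""
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def split_range(limit, workers):
    """Partition [0, limit) into disjoint chunks, one per worker."""
    step = limit // workers
    return [(i * step, limit if i == workers - 1 else (i + 1) * step)
            for i in range(workers)]

# Each chunk could be shipped to a distant node; communication occurs
# only twice per chunk (dispatch and result collection), so network
# latency is amortized over the whole computation.
chunks = split_range(10_000, 4)
total = sum(count_primes(lo, hi) for lo, hi in chunks)
print(total)  # number of primes below 10,000
```

A tightly coupled "ab initio" computation, by contrast, exchanges intermediate state constantly, so every step pays the interconnect's latency.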
Several large international projects in this area, such as The Grid and the NSF-sponsored Distributed Terascale Facility, provide large-scale computational facilities and the interconnections between them. This set of resources is, to some extent, reinvigorating the scientific community's pursuit of large-scale "grand challenge" problems.
"Highly Parallel Computations: Algorithms and Applications" presents a series of self-contained chapters written by scientists in related areas and assembled in the format of journal papers. The book discusses pipelined vector computers, array processors, distributed computer systems (closely and loosely coupled), clusters of workstations, and VLSI systolic arrays, both general and special purpose. In this context, "tight" coupling implies latencies of 10 ms or less; more loosely coupled systems, with latencies in the 50-100 ms range, are useful for other problem sets. The Grid model is popular today, providing loosely and tightly coupled co-located resources along with access to shared cycles on large machines. The book surveys current research topics across a broad swath of the area.
Chapters 1 and 2 and the chapter introductions provide basic material on parallel computational methods. At the same time, the book offers fairly up-to-date coverage of recent concepts such as systolic array processing, relevant to the broader community, in a fully referenced format valuable to scientists researching the specific topics presented. Some chapters, such as Chapter 1, provide good definitions, explanations, and examples, including computer code. While the book has value as an introduction to the field, it succeeds mainly as a survey of current topics and their associated references.
The chapter on neural networks, in a departure from the previous chapters, focuses on a specific application rather than on the parallel computing techniques used to solve neural network problems.
A diligent reader can garner much from this book. Multiple readings reveal layers of information that build to an understanding of the basic concepts, even if fully grasping the algorithms would take much longer. As a survey of the area, this book is not for the faint-hearted.
One thing I would have liked to have at my fingertips was a clear taxonomy of applications and how each maps to the different architectures. As I read the book, I studied outside sources to learn as much as I could about the area, and with that added understanding the book became more meaningful. I'm sure this is because I am new to some of the specifics treated in depth by the book's papers; still, the work's audience could be widened by additional introductory material, or pointers to such material. Some of this is embedded in the text, but it takes multiple readings to gain enough grounding for the more in-depth material. A list of prerequisite reading would be very useful for someone not directly involved with the specific algorithms included.
"Highly Parallel Computations: Algorithms and Applications" benefits several classes of readers. It is most valuable for scientists and students directly involved in the specific work of the chapters (much like a discipline-specific scientific journal). It will help the diligent reader build a fairly deep understanding of concepts beyond the basics. This material will become more necessary as computers and networks merge into the "metacomputational" facilities of the future.
Ronald R. Hutchins is Associate Vice Provost for Research and Technology and Chief Technology
Officer, Office of Information Technology, at Georgia Institute of Technology. He received his
Doctor of Philosophy in Computer Science at the Georgia Institute of Technology, and Bachelor of
Science in Mathematics and Computer Science at Georgia Southern College. His current fields of
interest and development center on computer networking, but are divided into four primary facets:
production network management; educational collaboration technologies; high-speed large-scale
network design and management; and mobile and nomadic computing.