In Proc. 1985 Int'l. Conf. on Parallel Processing, IEEE, pp. 697-699.

Varying Diameter and Problem Size in Mesh-Connected Computers
Preliminary version

Russ Miller
Dept. of Computer Science, State University of New York at Buffalo

Quentin F. Stout
EECS Department, University of Michigan

Abstract: On a mesh-connected computer, moving data across the mesh is the most time-consuming operation in many algorithms. This time can be reduced by using a mesh with smaller diameter, i.e., with fewer processing elements. To accommodate inputs of the same size, this requires that the processors have more memory. For image processing and graph-theoretic algorithms, we analyze the time as a function of the mesh diameter and problem size. We show that for many problems, smaller diameters can yield faster algorithms, and that there is a choice of diameter that is simultaneously best for several of these problems. Further, for these problems and this number of processing elements (or any smaller number), the mesh is an optimal interconnection scheme.
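
To illustrate the trade-off described in the abstract, here is a minimal worked calculation for a semigroup operation (e.g., a global maximum over an n x n image), written in LaTeX. It uses the standard model of a $p \times p$ mesh, $p \le n$, holding $n^2/p^2$ data items per processing element; it is a sketch of the diameter/memory balance, not the paper's own analysis, and the exact constants and problems treated in the paper may differ.

\[
T_{\mathrm{semigroup}}(n,p) \;=\; \Theta\!\left(\frac{n^2}{p^2} + p\right),
\]
where $n^2/p^2$ accounts for the local work in each processing element and $p$ is proportional to the mesh diameter. Minimizing over the choice of mesh size gives
\[
\min_{1 \le p \le n} \Theta\!\left(\frac{n^2}{p^2} + p\right) \;=\; \Theta\!\left(n^{2/3}\right)
\quad \text{at } p = \Theta\!\left(n^{2/3}\right),
\]
i.e., a smaller-diameter mesh with more memory per processor finishes in $\Theta(n^{2/3})$ time, versus $\Theta(n)$ on a full $n \times n$ mesh with one item per processor.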

Keywords: parallel computer, mesh computer, semigroup operations, component labeling, image processing, convex hull, minimal spanning forest, communication bounds, parallel algorithm, graph theory, computer science, supercomputing, parallel computing

Full paper (Postscript)
Full paper (PDF)



