- Tutorial on "Parallel Computing 101"
- A somewhat whimsical description of parallel computing.
- Some of my current and former students

I've also organized a range of seminars, and directed numerous undergraduate and graduate research projects. These included REU (Research Experience for Undergraduates) projects, one of which was written up in the EECS departmental newsletter as an example of what undergraduates were capable of doing.

One of the more unusual graduate seminars I've taught here at Michigan was a summer project in which four MS-level graduate students took turns picking computer science topics, doing readings, and giving overview presentations, with questioning from the others (and me). Each student was required to pick a few diverse topics, so the overall effect was to learn a little bit about a wide range of computer science. I learned a lot from the students.

The course emphasizes the design and analysis of algorithms. We examine common, key problems, such as sorting, searching, finding shortest paths, determining a convex hull, or scheduling a set of tasks, and then develop efficient algorithms for them. This involves showing that the algorithms are correct, analyzing their time and space requirements, and determining lower bounds for the problems to show that no other algorithm could be significantly better. Systematic techniques for algorithm development, such as divide-and-conquer, dynamic programming, and greedy algorithms, are covered, as are a variety of analysis techniques, such as solutions of recursive equations, worst-case and expected-case analyses, amortized analyses, competitive analyses, lower bounds, and adversary arguments.
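As a small illustration of the divide-and-conquer technique mentioned above (a sketch of my own, not actual course material), merge sort splits the input in half, recursively sorts each half, and merges the results in linear time; the recurrence T(n) = 2T(n/2) + O(n) solves to O(n log n):

```python
def merge_sort(a):
    """Divide-and-conquer sort: split, recurse, then merge in linear time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```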

The final portion of the class covers NP-completeness, starting with definitions and techniques for proving that a problem is NP-complete. Then we consider various ways to approach NP-complete problems, such as approximation algorithms or fast expected time algorithms.
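As one concrete example of an approximation algorithm for an NP-complete problem (an illustrative sketch, not necessarily the one covered in class), the classic greedy algorithm for minimum vertex cover repeatedly takes both endpoints of an uncovered edge; the resulting cover is at most twice the optimal size:

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover.

    For each edge with neither endpoint yet in the cover, add both
    endpoints. The chosen edges form a matching, and any cover must
    include at least one endpoint of each, so the result is at most
    twice optimal.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

triangle = [(0, 1), (1, 2), (0, 2)]
print(vertex_cover_2approx(triangle))  # a valid cover, e.g. {0, 1}
```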

**Prerequisites**: Graduate standing in CSE, or
a good data structures course plus some experience.
For additional information, see the
**course homepage**.

**Parallel computers are easy to build; it's the software that takes work.**

This is a graduate or advanced undergraduate level course that I usually teach each Fall term. About half the class is from Computer Science and Engineering, and half is from a wide range of other areas throughout the sciences and engineering. For a CSE student, the course satisfies general 500-level requirements for the MA and PhD, or computer science electives for the BA and BS. For a student in the Scientific Computing program and various other programs, it satisfies computer science distribution requirements. For a student in the CSCS program, it is an approved course related to complex systems. Students typically range from undergraduates through postdocs, and occasionally faculty sit in on the course as well.

The course covers a wide range of aspects of parallel computing, with the emphasis on developing efficient software. Because there is not a single parallel computing model, we also have to learn some about various parallel architectures, since there is significant hardware/software interaction. We examine reasons for poor parallel performance, and various techniques for improving it.
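One standard reason for poor parallel performance is the serial fraction of a program, captured by Amdahl's law: with serial fraction s, the speedup on p processors is at most 1 / (s + (1 - s)/p). A quick sketch (illustrative, not from the course materials):

```python
def amdahl_speedup(serial_fraction, p):
    """Maximum speedup on p processors when serial_fraction of the
    work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Even with only 10% serial work, 100 processors give under 10x speedup:
print(round(amdahl_speedup(0.1, 100), 2))  # 9.17
```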

An important part of the class is the term project, which can be in almost any aspect of parallel computing. Students have done projects parallelizing large serial codes, designing new parallel algorithms, proposing new operating system features, doing performance evaluation of a parallel computer, etc. Often students can integrate the project with their other work, so, for example, it may be part of their thesis. Many other students have used it to start a new research area, and have ultimately written a thesis on the topic. Many of the projects have led to publications, and a few have resulted in awards of various types. Some of the application-oriented projects have included parallelized simulations of reactors, oil fields, protein folding, car crashes, galaxies, bone fractures, etc.; optimized designs of traffic flow, patient allocation in a clinical trial, error-correcting codes, etc.; image processing; learning algorithms; and so forth.

As for jobs, most of the students go on to use parallel computing in their future employment, with many of them getting their jobs specifically because of this expertise. These jobs tend to be very well-paying, and are in medium to large corporations, consulting firms, scientific laboratories, the computing industry, etc. I make various postings of such jobs available to the students.

For additional information, see the **course homepage**. I also have a somewhat whimsical explanation of parallel computing.

Copyright © 1995-2017 Quentin F. Stout |