For two decades, software developers enjoyed steady gains in application
performance as the number of transistors in processors doubled roughly every
two years, in accordance with Moore's Law. Much of this performance gain
came from increasing clock speeds. However, due to power and temperature
concerns, clock speeds can no longer keep rising. The industry now agrees
that the future of architecture design lies in multicores, i.e., processors
with multiple, simpler cores running at lower frequencies. As a consequence,
all computer systems today, from embedded devices to high-end servers, are
being built with multicore processors. Thus software developers can no longer
sit idly by ("the La-Z-Boy era") and wait for application performance to
improve. In fact, the performance of sequential applications is likely to
degrade on future generations of multicores with ever simpler cores.
Although researchers in industry and academia are exploring many different
multicore hardware design choices, most agree that writing software that
exploits multicore processors is the major unsolved problem. Unlike earlier
generations of hardware evolution, this shift will have a major impact on how
software is designed and developed: developers will have to learn how to
design their applications to exploit multicore parallelism.
The first part of the course will consist of discussions of current
multicore architectures (specifically, GPUs) and parallel programming
models (specifically, OpenCL, with a lecture or two on CUDA and HMPP).
The second part of the course will involve
discussing several research papers (chosen by students) pertaining to
multicores.
Each student will be asked to present one paper to the class.
The third and last part of the course will consist of student presentations
of programming projects. Students will have a variety of multicore
architectures available for their class projects, including several
systems accelerated with GPUs.