Research
I am broadly interested in computer architecture, with particular emphasis on energy/power efficiency, data center architecture, smartphone/handheld architecture, memory systems, and performance evaluation methodology.
The advent of the Client + Cloud computing paradigm, in which users continually interact with data and content hosted on systems "in the Cloud" from their ever-connected, portable client devices, has fueled a voracious appetite for greater capability in the client and meteoric growth in the data centers that provide the underlying compute infrastructure. Power looms as a first-class design constraint at both ends of this link. While battery life is an obvious concern in handheld devices, equally important are volumetric and weight constraints and a strict requirement to use only passive cooling, all of which limit the device's peak power.

Data centers consume an alarmingly high fraction of the world's energy. The total carbon footprint of the world's data centers is roughly the same as the CO2 emissions of the entire Czech Republic. In the US, annual data center energy consumption is approaching 100 billion kWh, roughly 2.5% of domestic power generation, at an estimated cost of $7.4 billion. Power also drives the capital cost of data center infrastructure; nearly half of all costs are directly proportional to the facility's peak power draw [Hamilton, 2010].

Improving the energy efficiency of Client + Cloud computing infrastructure while continuing to scale performance is a critical challenge for computer systems research. My past and ongoing research seeks to enhance the performance and reduce the energy consumption, capital costs, and carbon footprint of cloud infrastructure by improving efficiency at all scales, from client devices and server systems to facility-scale power and cooling infrastructure.