Power Efficiency: A New Application Design Goal


Until recently, IT professionals have been conditioned to regard response time or throughput as the ultimate measure of application performance. It's as though we were building automobiles and cared only about faster cars and bigger trucks. Yet, just as the automotive industry has come under increasing pressure to develop more fuel-efficient vehicles, so has the IT industry been challenged to reduce the power drain associated with today's data centers.

Data center power costs have risen as a consequence of faster, hotter CPUs, as well as the explosion of "big data" applications and massive data warehouses.  The downward pressure on IT budgets, coupled with the upward growth of data center power consumption, inevitably leads to a greater focus on power efficiency.  Furthermore, the carbon footprint of data centers is growing more significant every year, so there is arguably an environmental consideration, as well.  The environmental angle is expressed as "Green IT," but it's mainly the dollar cost of energy consumption that most directly concerns commercial companies.

The initial response from the larger players in the IT industry has been to focus on data center efficiency. Companies such as Yahoo! and Google have put enormous effort into reducing their power usage effectiveness (PUE) ratio, which compares the total power entering the facility with the power actually delivered to the computing equipment, and so captures the overhead of non-computational activities such as cooling. These companies also focus on performance per watt, which measures how much computation can be delivered for each watt of electrical power. For large IT shops, the power drawn by the data center is becoming as significant a metric as the number of machines or the raw computing power provided.
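
To make these two metrics concrete, here is a minimal sketch in Python. All the figures are hypothetical, chosen purely for illustration rather than taken from any real data center.

    # Illustrative sketch only: all figures below are hypothetical, not
    # measurements from any real data center.

    def pue(total_facility_kw, it_equipment_kw):
        """Power usage effectiveness: total facility power divided by the
        power that actually reaches the IT equipment (1.0 is the ideal)."""
        return total_facility_kw / it_equipment_kw

    def performance_per_watt(transactions_per_second, it_equipment_watts):
        """Useful work delivered for each watt of electrical power."""
        return transactions_per_second / it_equipment_watts

    # A facility drawing 1,500 kW overall to power 1,000 kW of servers has a
    # PUE of 1.5: a third of the energy goes to cooling and other overhead.
    print(pue(1500, 1000))                        # 1.5

    # 50,000 transactions per second from servers drawing 200,000 W
    print(performance_per_watt(50000, 200000))    # 0.25 tps per watt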

Companies like Google and Microsoft have gone to extraordinary lengths to reduce power consumption. Batch jobs might be shifted across their worldwide networks to data centers where it is currently night, taking advantage of lower cooling costs. Excess heat might be used to warm office buildings or even to drive steam turbines.

Network and private cloud vendors are also looking for ways to minimize power consumption automatically. The network vendors imagine an application-aware network that could route computing tasks to the hosts able to deliver the best performance at the lowest power cost. Virtualization vendors are exploring ways to consolidate virtual servers dynamically onto a smaller number of physical hosts, so that idle hosts can be powered down.
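
The consolidation idea can be illustrated with a simple first-fit packing heuristic. The sketch below is only illustrative, with hypothetical per-VM CPU demands; real virtualization platforms use far more sophisticated placement policies.

    # Illustrative first-fit consolidation sketch; not any vendor's
    # actual placement algorithm.

    def consolidate(vm_cpu_demands, host_capacity):
        """Pack VMs onto as few hosts as possible so the rest can be
        powered down. Each host is a list of the VM demands placed on it."""
        hosts = []
        for demand in sorted(vm_cpu_demands, reverse=True):
            for host in hosts:
                if sum(host) + demand <= host_capacity:
                    host.append(demand)
                    break
            else:
                hosts.append([demand])   # no existing host fits; power one on
        return hosts

    # Hypothetical demands, expressed as fractions of one host's CPU capacity
    vms = [0.5, 0.2, 0.4, 0.1, 0.3, 0.2]
    hosts = consolidate(vms, host_capacity=1.0)
    print(len(hosts), "hosts needed instead of", len(vms))   # 2 instead of 6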

These data center strategies are a start towards energy-efficient IT. I suspect that a similar change in thinking will arise within the application development community as well. The exponentially increasing power of modern CPUs has led application architects to favor approaches that conserve programmer effort at the expense of burning compute cycles. CPU cycles have traditionally been seen as free resources that might as well be consumed to achieve application performance goals.

With the emergence of dynamically provisioned CPU resources from private and public clouds, these CPU-hungry approaches are going to become more visible: platforms such as Microsoft Azure charge directly for CPU consumption, and CPU consumption translates directly into power consumption.

Application architects and developers may, therefore, have to focus more closely on reducing CPU consumption.  For instance, massively parallel approaches such as Hadoop can be attractive because they leverage large numbers of cheap servers.  However, when the power consumption profiles of these systems are examined, they may start to lose some of their economic justification.  
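
As a back-of-the-envelope illustration, the annual electricity bill for a cluster of commodity servers can be estimated from power draw, PUE, and the local energy tariff. All the numbers below are hypothetical; substitute real measurements before drawing any conclusions.

    # Back-of-the-envelope sketch with hypothetical figures.

    def annual_energy_cost(servers, watts_per_server, pue, dollars_per_kwh):
        """Yearly electricity cost for a cluster, including cooling overhead."""
        it_kw = servers * watts_per_server / 1000.0
        facility_kw = it_kw * pue            # cooling and other overhead
        hours_per_year = 24 * 365
        return facility_kw * hours_per_year * dollars_per_kwh

    # 100 commodity nodes at 300 W each, PUE of 1.6, $0.10 per kWh
    print(round(annual_energy_cost(100, 300, 1.6, 0.10)))    # about $42,048 per year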

Achieving energy-efficient IT requires more than just efficient hardware. Application architecture patterns need to evolve as well, to avoid unnecessary CPU and I/O consumption and, consequently, power consumption.
