CPUs in Flux


Two columns ago, I started a series of articles pointing out that tough times might be ahead for the DBA profession because of major disruptive changes happening in the wider IT world (see "2012 Might Really Be the End of the World as We Know It"). Last issue, I spoke about the solid state disk and how it's changing the way we deal with and troubleshoot IO performance (see "The Changing State of Hardware" in the August E-Edition of DBTA).

This time, I want to talk about computing power and multicore CPUs. Moore's Law famously states that the number of transistors in an integrated circuit will double every 18-24 months. The media has rather liberally contorted Moore's Law into, "The processing power of a CPU will double every 18-24 months." Those two statements aren't really saying the same thing, however. Moore's Law still rings true, though the pop-culture version has taken a few knocks in the last several years.

The popular CPU version of this saying was largely true until recent years, when circuit designers discovered that designing faster and faster chips meant they'd also have to design more sophisticated and power-hungry cooling systems for those chips. (Have you seen the size of the heat sinks and fans on CPUs these days? Without them, you could fry an egg on a CPU.) So, instead of trying to overcome the cooling and energy consumption challenges, engineers figured out that by keeping CPU speeds a bit slower, they could pack more CPU cores onto a single silicon die, thereby continuing to amplify the power of their CPUs.

While it's not too expensive to get hold of 4- and 8-core CPUs these days, truly massive multicore chips are just over the horizon. Intel has announced a 32-core chip line known as "Keifer," which promises 15 times the performance of the fastest Xeon CPU currently available, running a total of 128 threads concurrently. Not to be outdone, AMD and even NVidia (the graphics CPU people) have their own massively parallel CPUs in the works.

At first glance, this trend might lead you to think, "I don't really care. I'm a DBA." CPUs are usually the domain of the system admin, after all. On the other hand, you might be thinking that over-abundant CPU and IO can mask bad code for a long, long time. Could your role as the performance-tuning expert be under threat?

The answer, I think, comes in the form of another famous computing law - Amdahl's Law - which states that the speedup of a parallelized system is limited by the time needed for the nonparallelized parts of the program. For example, if we have a SQL query that requires 8 hours to process on a single core, and a particular step in the query that takes 2 hours cannot be parallelized, then no matter how many parallel processors we have, the query cannot be run in less than 2 hours.
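Amdahl's Law is usually written as speedup(n) = 1 / (s + (1 - s)/n), where s is the serial fraction of the work and n is the number of cores. A minimal Python sketch of the 8-hour query above (the 2-hour serial step makes s = 0.25):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Maximum speedup per Amdahl's Law: 1 / (s + (1 - s)/n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# The 8-hour query from the text: 2 of its 8 hours cannot be
# parallelized, so the serial fraction s = 2/8 = 0.25.
s = 2 / 8
for cores in (1, 2, 8, 1024):
    total_hours = 8 / amdahl_speedup(s, cores)
    print(f"{cores:5d} cores -> {total_hours:.2f} hours")
```

However large n grows, the denominator never drops below s, so the speedup is capped at 1/s = 4x and the query never finishes in less than 2 hours.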

And, while SQL Server can easily handle parallelizing individual queries, most application developers do not know how to write a strongly parallelized application. In fact, the IT industry is so heavily dominated by single-threaded applications, with no end in sight among the new ranks of developers, that The Economist even ran a multipage story about developers failing to take advantage of multicore CPUs. ("Parallel Bars," The Economist, June 2.)
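To illustrate the kind of restructuring most developers skip, here is a minimal Python sketch that splits an embarrassingly parallel job (counting primes) into independent chunks and farms them out to worker processes; the bounds and chunk size are arbitrary, illustrative choices:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division - CPU-bound work."""
    lo, hi = bounds
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split the range into independent chunks, one per worker process.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100,000
```

The single-threaded version is just `count_primes((0, 100_000))`; the parallel version only pays off because each chunk can proceed with no coordination - exactly the property that Amdahl's Law rewards.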

What's the Verdict?

Whereas I declared SSD to be a major game changer in the way DBAs work, I believe the impact of multicore CPUs will be a bit more subtle. The fact that many applications can't take advantage of parallel processing (although SQL Server is not in that category) means a bit of breathing room for DBAs as the performance tuning experts in the IT shop. However, multicore CPUs will definitely act as enablers for another DBA boogeyman - virtualization. Lots of CPU cores mean lots of opportunities for virtual machines.

Once parallel processing is the norm for college students studying computer science, we'll start to see a lot more applications that leverage it. But, at current rates, that's still a few years off.

Virtualization, on the other hand, is here today and here to stay. We'll talk about virtualization in the next column.