The Challenge of Massively Parallel Desktops


Moore's law, first articulated by Intel cofounder Gordon Moore in 1965, predicts that computing power will increase exponentially, doubling roughly every 18 months. The prediction has proved remarkably accurate, and we have all benefited from the rapid growth in CPU power and memory available in our desktop computers.

Moore's law will eventually run out of steam: as the density of integrated circuits grows, the speed-of-light limit will prevent further increases in processing speed. Although we are still a fair way from that limit, the economics of delivering increased CPU power are already changing significantly. Put simply, it is becoming far more economical to increase the number of CPUs, or "cores," in a computer than to increase the processing power of individual cores. Over the past few years almost all desktop systems have become "dual-core," containing two CPU cores housed in the same motherboard socket that previously supported a single CPU.

Server systems take easy advantage of multiple CPUs because they service many concurrently connected users. Desktop systems find it harder to use multiple CPUs because they serve a single human being. Almost any desktop can take advantage of two CPUs, but as dual-core gives way to quad-core, and quad-core to ever larger numbers of cores, it will become harder for desktop applications to keep all of those CPUs busy. With Intel predicting 80-core systems within the next 10 years, this poses a real challenge for desktop applications.

Most application programs consist of one or more “threads,” each of which can run on a separate CPU. For instance, in a word processor a separate thread might check spelling as you type. However, to take advantage of very large numbers of cores, an application needs to move beyond simply creating threads for specialized purposes: Every significant operation needs to be rewritten with multithreading in mind. Doing this in our existing application languages is tough. With today’s tools, writing a multithreaded application requires elite programmers and extended schedules.
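As a minimal sketch of the "specialized thread" pattern described above, the following Haskell program (illustrative only, using GHC's standard Control.Concurrent library) forks one background worker alongside the main thread; the strings and delay merely stand in for real spell-checking and editing work:

  import Control.Concurrent (forkIO, threadDelay)
  import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

  main :: IO ()
  main = do
    done <- newEmptyMVar
    -- Background thread: stands in for a spell checker running as you type.
    _ <- forkIO $ do
           putStrLn "background: checking spelling..."
           threadDelay 100000        -- simulate 0.1 s of work
           putMVar done ()
    -- Main thread: stands in for the editing loop handling keystrokes.
    putStrLn "main: handling keystrokes"
    takeMVar done                    -- wait for the worker before exiting

The pattern scales to two or perhaps a handful of cores, which is exactly its limitation: each specialized task occupies one core, and nothing here divides a single large operation across many cores.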

Functional programming paradigms and languages are designed to provide a partial solution to this dilemma, allowing functions that implement business logic to be passed as arguments to other routines that handle concurrency and iteration. This allows looping and iteration to be abstracted and permits business logic to be expressed separately from the logic that divides the work among processors. Languages designed from the ground up for functional programming include Haskell, a standards-based open source language, and Microsoft's F#, which aims to be a widely applicable functional language that interoperates with .NET.
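A short Haskell sketch of this idea follows, assuming the parallel package (compiled with ghc -threaded and run with +RTS -N); the function name and formula are hypothetical business logic, and parMap is the library routine that owns the looping and the division of work:

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- Hypothetical business rule: the name and formula are illustrative only.
  priceOrder :: Double -> Double
  priceOrder x = x * 1.2 + 5

  main :: IO ()
  main = do
    let orders = [1 .. 1000000] :: [Double]
        -- parMap decides how to spread the calls across cores;
        -- priceOrder remains an ordinary sequential function.
        priced = parMap rdeepseq priceOrder orders
    print (sum priced)

Because priceOrder carries no threading code at all, the runtime is free to schedule the million calls over however many cores are present, which is precisely the separation of concerns the functional style promises.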

Microsoft has also extended the Language Integrated Query (LINQ) facility with parallel features, creating PLINQ (a sketch of the idea appears below). These languages and extensions aim to let programmers write applications that seamlessly exploit the additional processing power offered by multicore computers. However, it takes many years for new programming paradigms and languages to become established. In the meantime, the inability of mainstream applications to exploit multicore architectures is likely to limit the usefulness of the massively multicore desktop.
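PLINQ itself is a .NET facility, so as a rough analogue of its central idea, here is a Haskell sketch (again assuming the parallel package): the query logic is written once with no parallel plumbing, and a single evaluation-strategy annotation parallelizes it, much as PLINQ parallelizes an ordinary LINQ query:

  import Control.Parallel.Strategies (withStrategy, parListChunk, rdeepseq)

  -- The "query": filter and transform, with no parallel plumbing.
  squaresOfEvens :: [Int] -> [Int]
  squaresOfEvens xs = [x * x | x <- xs, even x]

  main :: IO ()
  main = do
    let xs = [1 .. 100000] :: [Int]
        -- One added line turns the sequential query into a parallel one:
        -- evaluate the result list in 1000-element chunks across cores.
        results = withStrategy (parListChunk 1000 rdeepseq) (squaresOfEvens xs)
    print (sum results)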

