
AI—A Dangerous Tool or a Fool’s Errand?

Concern has become fear and may soon turn into panic. The worry that has permeated the annals of science fiction for a century has recently become a reality for some, owing mostly, amazingly, to the repurposing of video game computer chips.

VMware CEO Patrick Gelsinger has called AI a "30-year overnight success" story. The concerns are wide-ranging as new applications for the technology emerge. What will people do for work? Does this mean that a "universal basic income" will become a necessity? How many occupations will be displaced in a world dominated by pervasive AI?

The Fear That Machines Will Replace Us All

Throughout scientific history, a relatively constant theme has been that mathematicians deal in the abstract. In the 1600s, Sir Francis Bacon, the "father of empiricism," helped transform pure philosophy into practical technology by formalizing what became known as "the scientific method."

It was at this turning point of the Scientific Revolution that scientific thinking started producing technological innovations and products. Machines were created, and those machines could far exceed the productivity of the individual humans who operated them.

Machines have created valuable products, and, sometimes, these machines have replaced the very people who once made those products, because the machines did the work less expensively and with higher quality. In the centuries that followed, from the first Industrial Revolution through the 19th and 20th centuries, thinkers and think tanks, politicians and purveyors of science fiction, doomsayers and religious leaders have feared (or profited from the fear) that machines would quite simply "replace us all."

Now we are in the 21st century and the era of extreme high-speed computing. For example, NVIDIA chips, which are known as GPUs (graphics processing units) and were originally created for video games, can execute thousands of times more instructions than chipsets designed less than a decade ago. These chips and others can handle multidimensional datasets of nearly unbounded size with incredible speed.

The interpolations and extrapolations are performed so fast, on such vast amounts of data, that they appear to happen at "the speed of thought" or faster, and certainly more accurately. After all, the computers are not distracted, don't have ulterior motives, aren't affected by fatigue or stress, and certainly don't complain when they are tasked with, well, multitasking. So, if they are not only faster than humans but also cheaper, does this mean that they are better?

The Reality of AI

We should consider the various applications carefully before we consign the human race to a universal basic income with a lifetime sentence of banality. Some examples should be informative. The applications built on AI's foundational techniques, machine learning and deep learning, are indeed remarkable.

