Google Glass - at last!


Seriously chronic geeks like me were usually raised on a strong diet of science fiction that shaped our expectations of the future. Reading Heinlein and Asimov as a boy led me to expect flying cars and robot servants. Reading William Gibson and other “cyberpunk” authors as a young man led me to expect heads-up virtual reality glasses and neural interfaces. Flying cars and robot companions don’t seem to be coming anytime soon, but we are definitely approaching a world in which virtual (or at least augmented) reality headsets and brain control interfaces become mainstream.

Prototypes of Heads-Up Displays (HUDs) have been around for some time. These systems use helmets or glasses to display icons, text, and graphics directly over your visual field. But to date, none has been quite as compelling as Google’s “Glass” project, which shows the promise of consumer-friendly HUDs.

Google Glass leverages elements of several other advanced Human Computer Interaction (HCI) technologies. “Google Goggles”, an augmented reality application that has been available for several years, runs on a smartphone and uses the phone’s camera to recognize objects through pattern recognition or barcode scanning. Information about the object can then be displayed over the image of the object.
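
To make that recognize-then-overlay loop concrete, here is a minimal Python sketch that uses OpenCV’s built-in QR code detector as a stand-in for Google’s proprietary pattern recognition. The library calls are real OpenCV APIs, but the sketch only illustrates the general approach; it is not how Google Goggles is actually implemented.

    # Recognize-then-overlay loop: grab camera frames, detect a code,
    # and draw information about the recognized object over the image.
    # Requires: pip install opencv-python
    import cv2

    capture = cv2.VideoCapture(0)        # the device camera
    detector = cv2.QRCodeDetector()      # stand-in for pattern recognition

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        data, points, _ = detector.detectAndDecode(frame)
        if data:                         # an object (code) was recognized
            x, y = points[0][0].astype(int)  # top-left corner of the code
            # overlay the recognized information on the live image
            cv2.putText(frame, data, (int(x), int(y) - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("augmented view", frame)
        if cv2.waitKey(1) == 27:         # press Esc to quit
            break

    capture.release()
    cv2.destroyAllWindows()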

There’s also nothing particularly new about voice recognition; in fact, I’m using voice recognition to dictate this article. However, Apple’s Siri demonstrated that voice recognition could be made reliable and intelligent by combining server-based processing with collective intelligence techniques.
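
For a taste of how the server-based approach works, the sketch below uses the open-source SpeechRecognition package for Python (pip install SpeechRecognition pyaudio), which ships a binding that forwards captured audio to Google’s web speech service for transcription. It illustrates the client-captures, server-transcribes division of labor; it is not Siri’s actual pipeline.

    # Capture audio locally, then ship it to a remote service to be
    # transcribed: the same division of labor Siri popularized.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:            # needs the pyaudio package
        recognizer.adjust_for_ambient_noise(source)  # calibrate to the room
        print("Say something...")
        audio = recognizer.listen(source)

    try:
        # the heavy lifting happens on the server, not the device
        print("You said:", recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("Could not understand the audio")
    except sr.RequestError as err:
        print("Recognition service unavailable:", err)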

Google Glass uses many of the augmented reality capabilities of Google Goggles, but houses the camera and display in a pair of spectacles rather than a smartphone. It relies heavily on voice recognition to accept commands.

Google Glass displays messages and alerts directly in the user’s visual field. Voice commands allow the user to respond to them and initiate interactions. Integration of the GPS and the camera allows location-sensitive content to be generated. In a demonstration video, the user employs Google Glass to navigate a city, video chat with friends, take photographs, and view and create reminders, all without the need to interact directly with a smartphone.
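
Under the hood, that interaction model amounts to matching transcribed utterances against a table of device actions. The dispatcher below is entirely hypothetical: the command phrases and handler functions are invented for illustration and are not Glass’s actual command set or API.

    # Hypothetical voice-command dispatcher: route a transcribed
    # utterance to the first handler whose trigger phrase matches.
    def navigate_to(destination):
        print("Plotting a route to %s using GPS..." % destination)

    def take_photo(_ignored):
        print("Capturing a photo from the headset camera...")

    def create_reminder(text):
        print("Reminder saved: %s" % text)

    COMMANDS = {
        "navigate to": navigate_to,
        "take a picture": take_photo,
        "remind me to": create_reminder,
    }

    def dispatch(utterance):
        """Route a transcribed utterance to the first matching handler."""
        spoken = utterance.lower()
        for phrase, handler in COMMANDS.items():
            if spoken.startswith(phrase):
                return handler(utterance[len(phrase):].strip())
        print("Sorry, I didn't catch that.")

    dispatch("navigate to the Golden Gate Bridge")
    dispatch("take a picture")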

Google suggests that commercial glasses leveraging this technology might be available as early as 2013. Reaction has generally been enthusiastic, although some are concerned that Google will insert advertising into the augmented reality display.

For many situations, voice command technology is a good solution, but the idea of mind control is even more intriguing. Indeed, prototype neuroheadsets that measure brainwave activity, and thereby allow the user to perform simple tasks by thought control, are already available (www.emotiv.com/apps/epoc, for example).
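
At their simplest, such headsets detect changes in signal power within particular EEG frequency bands and translate threshold crossings into commands. The sketch below simulates that loop with synthetic data: the 128 Hz sample rate matches the EPOC’s published specification, but the band choice, the 20 Hz “concentration” signature, and the threshold are illustrative assumptions, not Emotiv’s actual algorithm.

    # Simulated thought control: watch for a rise in beta-band (13-30 Hz)
    # power, a rough proxy for focused concentration, and fire a command.
    import numpy as np

    SAMPLE_RATE = 128            # Hz; the Emotiv EPOC samples at 128 Hz
    WINDOW = 2 * SAMPLE_RATE     # analyze two seconds of signal at a time

    def beta_power(samples):
        """Estimate average power in the beta band via an FFT periodogram."""
        spectrum = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
        band = (freqs >= 13.0) & (freqs <= 30.0)
        return spectrum[band].mean()

    # Synthetic EEG: background noise, plus a 20 Hz burst standing in
    # for the signature of deliberate concentration.
    t = np.arange(WINDOW) / SAMPLE_RATE
    relaxed = np.random.randn(WINDOW)
    focused = relaxed + 2.0 * np.sin(2 * np.pi * 20.0 * t)

    # A real system would calibrate this per user; tripling the resting
    # baseline is purely an illustrative choice.
    threshold = 3.0 * beta_power(relaxed)

    if beta_power(focused) > threshold:
        print("Thought command detected: move the cursor")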

A more dramatic demonstration of the potential of brain control came earlier this year, when a paralyzed woman used a brainwave interface to control a robotic arm. The technology, BrainGate, used surgically implanted electrodes, though similar results are possible using only external sensors.

The potential for prosthetic limbs and other assistive technologies is breathtaking: direct brain control of artificial limbs and “companion robots” would be an incredible breakthrough for those afflicted with serious disabilities. And these technologies would eventually filter through to consumer and business applications.

The merging of heads-up displays as demonstrated by Google Glass, effective voice recognition as promised by Siri, and, eventually, even mind control paints a vision of the future that more closely resembles the science fiction of my youth: keyboards, mice, and display screens supplanted by wearable computing interfaces that respond to voice, eye movement, gestures, and thoughts. Applications that leverage these interface paradigms are going to be revolutionary.

