Hardware Advances Make 2018 the Year to Monetize Neural Nets

Neural networks and deep learning are frequently recognized as the key to futuristic innovations like facial recognition, with consumers paying a premium for the novelty of unlocking their phones with their faces and police departments partnering with AI startups to catch criminals. But the use cases for neural nets extend well beyond those flashy scenarios, and innovations in hardware are making the technology more accessible, signaling that it’s time for more companies to find the right business cases for their industries.

Neural Networks in 2018: Heading for Hardware

Neural networks are not even close to new. First proposed in 1944, neural nets were a major computer science research subject until the late 1960s, slipping out of view until a brief appearance in the 1980s, only to fall out of favor again. Fast forward to this decade, neural nets have facilitated deep learning, which in turn is the basis for phones’ speech recognition and a slew of other everyday applications.

Given how thoroughly neural nets have transformed technology in just a few years, expecting further progress this year is only realistic. As neural networks become easier to build and deploy, they will increasingly appear in dedicated hardware, which will further increase accessibility across industries.

Google TPU

Google’s Tensor Processing Units (TPUs) are now available to developers. These custom chips, designed to run machine learning workloads on TensorFlow, are Google’s alternative to the GPUs traditionally used for such workloads. A TPU processes specific machine learning workloads more quickly and efficiently, and its on-demand availability also makes neural net technology more accessible to businesses.

The accelerators, the same technology Google uses for its own state-of-the-art products, are available on demand, giving businesses access without requiring them to build a data center or design their own processes for training and prediction.

iOS Facial Recognition Chips

Apple’s iPhone X features the A11 Bionic chip with a custom-built neural engine designed to handle artificial intelligence processes, including facial recognition. The chip can perform 600 billion operations per second and is one of the first consumer devices with hardware specifically built to handle machine learning on the device itself.

IBM Power9

IBM’s most recent Power chip, the Power9, was designed specifically to improve performance on common artificial intelligence frameworks like TensorFlow. Ultimately this means more speed as users build and run models. While IBM plans to sell the chips to cloud vendors and third-party manufacturers, the company is also releasing a computer powered by the chip and marketing the chip as a service on IBM Cloud.

Why It Matters

These hardware applications ultimately signal accessibility. Products that accelerate the machine learning process and enable companies to implement neural networks without their own data centers allow more businesses to experience the benefits within their own industries, applied to their own challenges.

Yet while many business leaders recognize that these new technologies are powerful and useful, they struggle to find the right business case for applying neural networks. There are, however, business cases in every industry.

Manufacturing

While sensors and the Internet of Things have dominated the conversation around industrial technology, manufacturing companies have struggled to bridge analog information gaps in the supply chain. From animal processing to produce sorting, the look and feel of a product has historically required a discerning eye, ensuring bruised produce is discounted or discarded, or that a wayward wing doesn’t make its way into a package of drumsticks.

Deep learning changes things, training on thousands of images to “learn” what’s what. Take a Japanese farmer who created a machine to sort cucumbers. Hoping to give his mother a break from manually categorizing them, he used TensorFlow, creating and training an algorithm on images of his mother’s sorting work. This same concept can be applied across the manufacturing industry.
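The sorting idea can be sketched end to end with a toy model. The sketch below stands in for the farmer’s TensorFlow setup with a minimal NumPy softmax classifier trained on synthetic “image” feature vectors; the grade names, feature encoding, and training settings are all invented for illustration, not drawn from the actual project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "image" features: 64-dim vectors standing in for cucumber
# photos. Three illustrative grades; in practice these would be the
# sorter's own categories, labeled by example.
GRADES = ["premium", "standard", "irregular"]
centers = rng.normal(size=(3, 64))

def make_batch(n):
    labels = rng.integers(0, 3, size=n)
    feats = centers[labels] + 0.3 * rng.normal(size=(n, 64))
    return feats, labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One linear layer trained with cross-entropy: the smallest possible
# "neural net" that still learns to sort by appearance.
W = np.zeros((64, 3))
b = np.zeros(3)
for _ in range(200):
    X, y = make_batch(128)
    p = softmax(X @ W + b)
    p[np.arange(len(y)), y] -= 1.0      # gradient of cross-entropy w.r.t. logits
    W -= 0.1 * (X.T @ p) / len(y)
    b -= 0.1 * p.mean(axis=0)

X_test, y_test = make_batch(500)
acc = (softmax(X_test @ W + b).argmax(axis=1) == y_test).mean()
print(f"held-out sorting accuracy: {acc:.0%}")
```

The heavy lifting in a real deployment is not the model but the labeled examples, which is exactly what the farmer’s mother had been producing by hand for years.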

Healthcare

The pursuit of process automation isn’t always driven by cost savings or efficiency; scarcity of human expertise is another reason to consider neural networks. Google developers set out to address the fastest-growing cause of blindness: diabetic retinopathy. While it is preventable, prevention requires regular screenings and analysis of retinal images, and many countries lack enough doctors to carry out annual screenings.

By training a TensorFlow model on images that ophthalmologists had analyzed and diagnosed, developers created a program that can not only analyze retinal images and return a diagnosis, but do so with slightly better accuracy than traditional methods.
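In screening applications, “accuracy” is usually reported as sensitivity and specificity at a chosen decision threshold. A minimal sketch of that evaluation step, using invented score distributions rather than real model output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic screening scores in [0, 1], where 1 means referable disease.
# Both distributions are invented; a real evaluation uses expert-graded
# image sets, not simulated scores.
healthy = rng.beta(2, 8, size=900)    # model scores for healthy eyes
disease = rng.beta(8, 2, size=100)    # model scores for diseased eyes

scores = np.concatenate([healthy, disease])
labels = np.concatenate([np.zeros(900), np.ones(100)])

# Screening programs tune the threshold for high sensitivity, since
# missing disease is costlier than an unnecessary referral.
threshold = 0.3
flagged = scores >= threshold
sensitivity = flagged[labels == 1].mean()     # diseased eyes caught
specificity = (~flagged)[labels == 0].mean()  # healthy eyes passed
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```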

Financial Services and Insurance

Fraud detection is a natural fit for deep learning in the financial services and insurance industries. Harvard students recently applied an autoencoder to health insurance claims and were able to detect fraud with 95.7 percent accuracy. The same approach can be, and already is, used to detect fraudulent credit card transactions, banking fraud and insurance scams.
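The autoencoder approach can be illustrated with a minimal sketch: train on normal claims only, then flag claims that reconstruct poorly. This toy uses a linear NumPy autoencoder on synthetic data; the feature count, network size, and threshold are illustrative and not the students’ actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic claims: 10 numeric features per claim. Normal claims cluster
# together; fraudulent ones deviate. (The features are illustrative.)
normal = rng.normal(0.0, 1.0, size=(1000, 10))
fraud = rng.normal(3.0, 1.0, size=(50, 10))

# A linear autoencoder (10 -> 3 -> 10) trained only on normal claims.
# Fraud is flagged because the model never learned to reconstruct it.
W_enc = 0.1 * rng.normal(size=(10, 3))
W_dec = 0.1 * rng.normal(size=(3, 10))
lr = 0.05
for _ in range(300):
    H = normal @ W_enc                              # encode
    R = H @ W_dec                                   # decode
    err = R - normal
    grad_dec = H.T @ err / len(normal)
    grad_enc = normal.T @ (err @ W_dec.T) / len(normal)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

def recon_error(X):
    R = (X @ W_enc) @ W_dec
    return ((R - X) ** 2).sum(axis=1)

# Flag anything that reconstructs worse than 99% of known-good claims.
threshold = np.percentile(recon_error(normal), 99)
detected = (recon_error(fraud) > threshold).mean()
print(f"fraud detected: {detected:.0%}")
```

The appeal of this design is that it needs no labeled fraud examples at all: the model learns what “normal” looks like and everything else stands out.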

Consumer Products

The same technology that enables Amazon’s Alexa to wake—or stay asleep—at mention of its name has other applications. A startup company is modeling other sounds, beyond music and voice, such as breaking glass or a baby’s cry. The applications for real-time audio analysis are far-reaching for consumers.

The technology could be leveraged for home security: integrated with other smart home products, it could send text alerts at the sound of voices or broken glass, but only when your house “knows” you’re away from home. Alternatively, real-time audio analysis could be applied to the health and safety of loved ones, with a device able to detect changes in speech or keywords that indicate help is needed.
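The pipeline shape behind such products (frame the audio, analyze each window, flag and timestamp events) can be sketched without any trained model at all. The toy below substitutes a simple spectral heuristic for a neural net and detects a synthetic high-frequency burst standing in for breaking glass; the sample rate, band split, and threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
SR = 16_000  # sample rate (Hz)

# Synthetic 3-second "home audio": quiet ambient noise with a short
# high-frequency burst starting at t=2s, standing in for breaking glass.
t = np.arange(3 * SR) / SR
audio = 0.01 * rng.normal(size=t.size)
burst = (t > 2.0) & (t < 2.2)
audio[burst] += 0.5 * np.sin(2 * np.pi * 6000 * t[burst])

def detect_events(signal, sr, frame=1024):
    """Flag frames dominated by high-band (>4 kHz) energy."""
    hits = []
    for i in range(0, len(signal) - frame, frame):
        spectrum = np.abs(np.fft.rfft(signal[i:i + frame]))
        freqs = np.fft.rfftfreq(frame, 1 / sr)
        high = spectrum[freqs > 4000].sum()
        total = spectrum.sum() + 1e-9
        if high / total > 0.7:          # mostly high-frequency content
            hits.append(i / sr)         # event timestamp (seconds)
    return hits

events = detect_events(audio, SR)
print(f"events detected near t=2s: {events[:3]}")
```

A production system would replace the heuristic with a trained classifier per sound category, but the frame-analyze-alert loop stays the same.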

3 Steps Toward Monetizing New Neural Network Capabilities

Neural networks are enabling enterprises to bridge analog information gaps in their business processes and supply chains, converting complex information into digital insights. These processes previously required extensive manual labor and created information gaps that resist quantification. The new capabilities neural nets enable can be a powerful tool for bridging those gaps, but they require thoughtful planning to ensure the technology isn’t a solution looking for a problem.

  • Map the gaps: identify and define the “information black holes” in your enterprise. Take a holistic view of business processes with an eye toward shedding light on analog processes; these gaps typically resist forecasting and optimization. This is a cross-functional exercise that involves collaboration between front-line business SMEs and technical talent. Be ready to get your hands dirty!
  • Quantify the impact: getting the most out of investments in new technology means prioritizing projects that will provide achievable results. Build the business case by quantifying the impacts and the realistic barriers to implementation.
  • Prioritize your roadmap: sequence projects in a way that contributes to enhancing organizational capabilities while delivering value. Keep in mind that some of the highest value use cases may be more difficult to implement, so tackle projects in a way that builds on early wins to enhance team capabilities.