Enterprise IT Watch Blog

Jan 12 2017   3:02PM GMT

Neuromorphic chipsets are shifting deep learning into overdrive

Profile: Michael Tidmarsh

Tags:
Chipsets
Deep learning
IoT

Chipset image via FreeImages

By James Kobielus (@jameskobielus)

Deep learning has moved well beyond the proof of concept stage. The technology is rapidly being incorporated into diverse applications in the cloud and at the network’s edge, especially in embedded, mobile, and Internet of Things (IoT) platforms.

Deep learning is all the rage. But the pace at which the technology is being adopted depends on the extent to which it is incorporated into commodity neuromorphic chipsets. To be ready for widespread adoption, deep learning’s algorithmic smarts need to be miniaturized into low-cost, reliable, high-performance chips for robust crunching of locally acquired sensor data. Chipsets must be able to execute layered deep neural network algorithms—especially convolutional and recurrent—that detect patterns in high-dimensional data objects.
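To make that workload concrete, here is a minimal sketch (using the Keras API; the layer sizes and the 128-sample, 3-channel input window are illustrative assumptions, not a reference design) of the kind of small convolutional network such a chipset would need to execute against locally acquired sensor data:

    # Illustrative only: a small 1-D convolutional network over a window
    # of sensor readings (e.g., accelerometer samples); all sizes here
    # are assumptions for the sketch, not a chipset specification.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        # Convolutions detect local patterns in the 128-sample, 3-channel window.
        layers.Conv1D(16, kernel_size=5, activation="relu", input_shape=(128, 3)),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        # Map the detected features to a handful of task-specific classes.
        layers.Dense(4, activation="softmax"),
    ])

    # On-device inference against one freshly acquired sensor window.
    window = np.random.rand(1, 128, 3).astype("float32")
    probabilities = model.predict(window)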

Embedded deep learning apps will be as diverse as the endpoints whose automated behaviors they drive. In 2017 and beyond, a new generation of neuromorphic chipsets is emerging to address the growing demand for acceleration of artificial intelligence (AI)-powered mobile devices, IoT endpoints, and connected cars. Embedding fast deep-learning chipsets is fundamental to the promise of an IoT in which endpoints can take actions autonomously based on algorithmic sensing of patterns in locally acquired sensor data.

What deep-learning chipset architecture will become the industry’s de facto standard? It’s too early to say. Currently, most deep neural networks run on graphics processing units (GPUs), but other approaches are taking shape and are in various stages of commercialization across the industry. What emerges from this ferment will be innovative approaches that combine GPUs with central processing units (CPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) such as Google’s Tensor Processing Unit.
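The saving grace for developers is that the major frameworks abstract this choice away. As a minimal sketch (TensorFlow shown; the fallback logic is mine, and other accelerator backends plug in through similar placement mechanisms), the same matrix computation at the heart of a neural network can be pinned to whichever piece of silicon a platform exposes:

    import tensorflow as tf

    # The core workload is dense linear algebra; the open question is
    # which silicon executes it. Pin the op to a GPU if one is present,
    # otherwise fall back to the CPU.
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))

    device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
    with tf.device(device):
        c = tf.matmul(a, b)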

However, no matter what architecture they incorporate or what deep-learning apps they drive, mass-market neuromorphic chipsets will need to support the following core requirements:

  • Perform matrix manipulations at lightning speed in highly parallel architectures in order to identify complex, elusive patterns, such as objects, faces, voices, and threats;
  • Achieve 10-100x boosts in the performance, scalability, and power efficiency of deep learning hardware platforms available to the mass market;
  • Process sensor datasets that are locally acquired, low latency, specialized, and predominantly persisted in memory;
  • Accelerate specialized neural-network functions, in keeping with the task-specific nature of most deep-learning edge applications;
  • Execute a wide range of hierarchical neural-net processing patterns in a consistent fashion, in keeping with the varied requirements of image, video, audio, and other complex pattern-recognition tasks;
  • Enable flash upgrades that push revised deep neural network algorithms to edge devices over wireless connections;
  • Minimize interprocessor communication and infrastructure roundtripping, in keeping with the need for deep-learning edge devices to operate in intermittently connected, low-bandwidth, autonomous-decisioning scenarios;
  • Support over-the-air or remote distribution of machine learning models and other algorithmic artifacts, as well as security patches and updates, which will become the standard delivery approach; and
  • Provide more resource-efficient neural-network designs, model compression, and data encodings that shrink the algorithms and data deployed to deep-learning edge devices without sacrificing predictive accuracy (see the quantization sketch after this list).
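On that last point, here is a minimal sketch of one common compression technique, post-training weight quantization (pure NumPy; the per-tensor scale-and-offset scheme is an illustrative assumption, not any specific chipset’s format). Packing 32-bit float weights into 8-bit integers shrinks the artifact pushed over the air to an edge device to roughly a quarter of its size, at a small cost in precision:

    import numpy as np

    def quantize(weights):
        """Map float32 weights onto uint8 with a per-tensor scale and offset."""
        lo, hi = float(weights.min()), float(weights.max())
        scale = (hi - lo) / 255.0
        q = np.round((weights - lo) / scale).astype(np.uint8)
        return q, scale, lo

    def dequantize(q, scale, lo):
        """Recover approximate float32 weights on the edge device."""
        return q.astype(np.float32) * scale + lo

    w = np.random.randn(256, 256).astype(np.float32)
    q, scale, lo = quantize(w)
    w_hat = dequantize(q, scale, lo)
    print("payload bytes:", w.nbytes, "->", q.nbytes)    # ~4x smaller download
    print("max weight error:", np.abs(w - w_hat).max())  # small precision loss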

A positive sign for the deep-learning industry is the speed at which next-generation neuromorphic hardware platforms are taking shape. As discussed in this recent EETimes article:

  • Hardware startups and venture-capital funding are entering the deep learning field at a blistering pace.
  • Benchmarking tools for assessing and optimizing the comparative performance of deep neural nets on alternative hardware platforms are being adopted.
  • Hardware-based test and prototyping platforms are coming into deep-neural-network developers’ hands.
  • Industry projects, such as NeuRAM3, are springing up to develop new multi-core neuromorphic chip designs that address the deep-learning industry’s insatiable need for speed, scalability, miniaturization, and power efficiency.

There’s no doubt that embedded neuromorphic chips have the potential to change the world around us and even prolong our lives. Check out IBM’s recent “5 in 5” announcement for examples of medical, environmental, and other IoT apps that benefit from deep-learning algorithms in embedded and/or cloud-based platforms.
