CAMPBELL, CA — (Marketwired) — 07/21/16 — Wave Computing, a venture-backed start-up, today disclosed that it is developing a family of Deep Learning computers. These products are based on a newly developed, massively parallel dataflow processing architecture called the Wave Dataflow Processing Unit (DPU). Wave's Deep Learning Computers are designed to natively support dataflow-based Deep Learning frameworks, such as Google TensorFlow and Microsoft CNTK, while delivering an order-of-magnitude improvement in compute efficiency over existing systems such as those using Graphics Processing Units (GPUs).
The proliferation of smart devices and the acceleration of the Internet of Things (IoT) have created a host of new opportunities for capturing, managing and analyzing data in real time, from the edge to the datacenter and cloud. Harnessing this new data and gaining deep insights will be critical to the future success of businesses across numerous industry verticals, from financial services and retail to medicine and healthcare. McKinsey & Company estimates more than $4 trillion per year of positive economic impact by 2025 for companies that can analyze and leverage this new source of business intelligence to create new services on existing and Deep Learning-optimized infrastructure.
One key to harvesting insights from data is the use of new, powerful Deep Learning applications. Current approaches have relied on repurposing traditional hardware, such as cumbersome FPGAs or power-hungry GPUs. The result is hardware implementations that take too long to train newer Deep Learning models, do not fit within power budgets, or do not deliver the performance needed by the latest Deep Learning frameworks, such as TensorFlow. These barriers can only be surmounted with a new type of hardware based on customized architectures. One such example is Google's Tensor Processing Unit (TPU). Another is Wave's DPU.
TensorFlow is an open source software library for numerical computation using dataflow graphs. TensorFlow was originally developed by researchers and engineers on the Google Brain Team within Google's Machine Intelligence research organization to conduct machine learning and deep neural network research. Google has made TensorFlow open source in order to create an open standard for exchanging research ideas and incorporating machine learning into products.
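As a minimal illustration of the dataflow-graph model described above, the following sketch (assuming the TensorFlow 1.x-era Python API; the operation names and values are illustrative, not taken from Wave or Google materials) builds a small graph in which operations are nodes, tensors flow along the edges, and nothing executes until a session runs the graph:

```python
# Minimal sketch of a TensorFlow dataflow graph (TensorFlow 1.x-era API).
import tensorflow as tf

# Graph construction: nodes are operations, edges carry tensors.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
c = tf.add(a * b, b, name="c")  # c depends on a and b in the dataflow graph

# Nothing executes until the graph is run; the runtime schedules each node
# once its input tensors are available.
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))  # prints 9.0
```

It is this graph representation, rather than a fixed sequence of instructions, that dataflow hardware such as Wave's DPU is intended to map onto directly.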
CNTK (Computational Network Toolkit) is a production-quality, multi-machine software library and toolkit for deep learning from Microsoft. CNTK is a unified computational network framework that describes deep neural networks as a series of computational steps via a directed graph. CNTK was developed by scientists at Microsoft Research working on deep neural networks for speech recognition. Microsoft has made CNTK open source in order to create an open standard for exchanging research ideas and putting deep neural networks into products.
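Along the same lines, here is a minimal sketch of describing a tiny network as a directed graph of computational steps and then evaluating it (assuming the CNTK 2.x-era Python API; the layer size and input values are illustrative):

```python
# Minimal sketch of a CNTK computational network (CNTK 2.x-era Python API).
import numpy as np
import cntk as C

# Graph construction: an input node feeding a dense layer node.
x = C.input_variable(2)
z = C.layers.Dense(1, activation=C.sigmoid)(x)

# Evaluating the graph runs its computational steps in dependency order.
print(z.eval({x: np.array([[0.5, -0.5]], dtype=np.float32)}))
```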
Nicole Hemsoth, co-editor of The Next Platform, wrote in a recent article, “What is interesting about Yann LeCun's view is not surprising necessarily: current architectures are not offering enough in terms of performance to stand up to the next crop of deep learning algorithms, as they overextend current acceleration tools and other programmatic limitations.”
Wave's DPU architecture leverages HMC (Hybrid Memory Cube) Gen 2 memory technology to meet the high bandwidth demands of Deep Learning. “We applaud Wave's focus on deep learning and their decision to adopt HMC memory for this highly demanding application,” remarked Steve Pawlowski, VP of Advanced Computing Solutions for Micron. “We believe that HMC Gen 2 represents the ideal solution for machine learning, which requires high throughput, good capacity and memory scalability.”
“We are excited to establish a leadership position in the Deep Learning market with our new approach to computing. Our approach is based on providing 'the right tools for the job,' and we believe our dataflow architecture brings unmatched hardware advantages to supporting dataflow-based software frameworks such as TensorFlow and CNTK. We are looking forward to providing customers a new level of performance for both inference and training,” said Derek Meyer, CEO of Wave Computing.
Wave's DPU is built upon a revolutionary dataflow computing technology that exploits the data and model parallelism present in Deep Learning models, such as convolutional and recurrent neural networks. The Wave DPU architecture is built specifically to accelerate the new generation of dataflow-based Deep Learning frameworks, making it ideal for organizations using these frameworks to develop, test and deploy their Deep Learning models. Wave plans to disclose the technical details of the DPU architecture at the Linley Processor Conference on September 27-28.
Wave's DPU architecture and Deep Learning Computer features include:
Tens of thousands of processing nodes interconnected in regular arrays
Massive amounts of local memory and high external memory bandwidth
Real-time re-programmability for efficient support of computations specific to deep learning algorithms
Native support for dataflow-based machine learning frameworks such as TensorFlow and CNTK
High scalability across multiple Deep Learning Computers
Wave is collaborating with ecosystem partners, hardware platform developers and lead customers under the company's Early Access program beginning later in 2016. General availability of Wave's Deep Learning Computers will be in 2017.
Wave Computing was founded with the vision of delivering the world's fastest and most energy-efficient computers for the Deep Learning market. Wave is realizing this vision through the development of game-changing dataflow processing technology with unmatched compute-power efficiency. Backed by Tier 1 VCs, an IP portfolio including over 50 U.S. patents, and a track record of innovation, Wave is dedicated to accelerating the application of Deep Learning in the datacenter and beyond. Wave is based in Campbell, California.