

        Neuromorphic Computing Advances Deep-Learning Applications

        2020-11-05 09:52:26
        Engineering 2020, Issue 8

        Chris Palmer

        Senior Technology Writer

        In the quest for ever faster and more efficient computing, researchers and manufacturers are busy exploring novel processing architectures. Among these, neuromorphic computing—the emulation of brain function inside computer chips—is showing particular promise for applications involving deep learning, an increasingly common form of artificial intelligence (AI) that uses brain-inspired neural networks to uncover patterns in large datasets.

        In traditional machine learning based on conventional computer hardware, the memory and processing nodes are physically separated. In contrast, neuromorphic computer hardware mimics neurons and places both functions in the same spot. By eliminating the need to shuttle data back and forth between processing and storage sites, this architecture can substantially reduce computing time and power requirements for specific learning tasks such as pattern recognition and classification.
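The data-movement saving described above can be illustrated with a toy model. This is a conceptual sketch only, with invented numbers and no vendor API: a conventional neuron fetches each weight from separate memory on every use, while a colocated node computes right where its weight is stored.

```python
# Conceptual toy (invented numbers, no vendor API): count off-chip weight
# fetches in a von Neumann-style neuron versus a model where each node
# stores its own weight at the processing site.

class VonNeumannNeuron:
    def __init__(self, weights):
        self.dram = list(weights)     # weights live in separate memory
        self.transfers = 0            # how many times a weight crossed the bus

    def forward(self, inputs):
        total = 0.0
        for i, x in enumerate(inputs):
            w = self.dram[i]          # fetch weight from separate storage
            self.transfers += 1
            total += w * x
        return total

class ColocatedNode:
    """A 'neuromorphic' node: the weight sits where the multiply happens."""
    def __init__(self, weight):
        self.weight = weight

    def forward(self, x):
        return self.weight * x        # the weight never leaves the node

inputs = [0.5, 1.0, -0.25, 2.0]
weights = [0.1, 0.2, 0.3, 0.4]

vn = VonNeumannNeuron(weights)
y1 = vn.forward(inputs)               # 4 weight transfers for 4 inputs

nodes = [ColocatedNode(w) for w in weights]
y2 = sum(n.forward(x) for n, x in zip(nodes, inputs))  # 0 weight transfers

print(vn.transfers)                   # -> 4
print(abs(y1 - y2) < 1e-12)           # -> True: same answer, less data movement
```

The two models compute the same dot product; the difference is only where the weights live, which is the source of the time and power savings the article describes.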

        While the concept of neuromorphic computing originated in the late 1980s, its trajectory has been hampered by the slow pace of algorithm development, the need for novel materials with which to build the joint memory/processing nodes, and challenges in scaling up. Early neuromorphic neural networks had no “plasticity,” said Thomas Cleland, a professor of psychology at Cornell University in Ithaca, NY, USA; once they were set up and trained to do a particular task, that was it. To do something different, they needed to be rebuilt and retrained. That constraint was “extremely limiting,” said Cleland.

        Technical advances have now largely overcome this constraint. “One of the fundamental advances in AI over the last decade is coming up with faster and better ways to do learning,” said Gabriel Kreiman, a professor of ophthalmology and associate director of the Center for Brains, Minds and Machines at Harvard Medical School in Cambridge, MA, USA. “Implanting plasticity directly on the hardware so it can be retrained without starting from scratch can be quite transformative.”

        Two new applications of neuromorphic computing showcase the potential of this kind of design to efficiently solve a wide array of problems with great speed and minimal power expenditure: an electronic nose that can learn the scent of a chemical after just one exposure [1] and a machine-vision device with an image sensor that doubles as an artificial neural network and can process images thousands of times faster than conventional technology [2,3].

        The electronic nose is a “one-shot learning” olfaction system Cleland built with Nabil Imam, an engineer at Intel’s Neuromorphic Computing Laboratory in Santa Clara, CA, USA. The system is powered by Intel’s fifth-generation neuromorphic chip (Fig. 1 [1]), Loihi, which contains 128 core processing units, each with a built-in learning module, and more than 130 000 computational “neurons” linked to thousands of their neighbors [4].

        Cleland and Imam evaluated their system by pitting it against a traditional neural network in a smell test of ten odors wafting through a wind tunnel outfitted with 72 metal oxide gas sensors (data derived from a publicly available dataset [5]). Training for the neuromorphic system involved a single exposure to each odor, while hundreds of trials went into training the traditional AI. Every learned smell comprised only 20% to 80% of the overall tested aroma, reflecting real-world conditions in which numerous odors blend with one another. The neuromorphic AI identified the target odor 92% of the time, compared to 52% of the time for the traditional AI [1].

        “We can train our algorithm once on a clean odor, like orange or amyl acetate [a banana-like scent], and present that odor against many different backgrounds,” Cleland said. “You could test it in a bakery, a garbage dump, or a swamp, and it would be able to recognize that odor.”

        Training of standard AI, in addition to being time-consuming and power-hungry, has to start from scratch every time a new smell is added. The neuromorphic AI, on the other hand, can keep learning new scents simply by adding new “neurons” to the network. Cleland is now trying to adapt the system to work in autonomous robots. “We would like to be able to train it within seconds, and have it accurately detect odors, even if they are deeply obscured by uncontrolled contaminants,” he said. “We do not want to have to say, ‘Oh yeah, it does not work when things are acidic or when it is too humid or whatever.’”
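The two properties described above, learning from a single clean exposure and growing by adding new "neurons", can be sketched with a minimal template-matching toy. This is an illustration in plain Python, not Cleland and Imam's actual Loihi algorithm; the odor names, sensor signatures, and mixture levels below are all invented.

```python
# Toy one-shot odor classifier (illustrative only, not the Loihi algorithm):
# each learned odor becomes one "neuron" holding the sensor pattern from a
# single clean exposure; new odors are added without retraining old ones.

def normalize(v):
    s = sum(x * x for x in v) ** 0.5
    return [x / s for x in v]

class OneShotNose:
    def __init__(self):
        self.templates = {}          # odor name -> stored sensor pattern

    def learn(self, name, clean_reading):
        # One exposure: store the normalized pattern as a new "neuron".
        self.templates[name] = normalize(clean_reading)

    def classify(self, reading):
        r = normalize(reading)
        # Each "neuron" fires in proportion to its match (cosine similarity);
        # the strongest response wins.
        return max(self.templates,
                   key=lambda n: sum(a * b for a, b in zip(self.templates[n], r)))

nose = OneShotNose()
nose.learn("orange", [9.0, 1.0, 0.5, 0.2])        # invented 4-sensor signatures
nose.learn("amyl acetate", [0.5, 8.0, 1.0, 3.0])

# The target odor at only 30% strength against an unrelated background:
orange = [9.0, 1.0, 0.5, 0.2]
background = [2.0, 2.0, 2.0, 2.0]
mixture = [0.3 * t + 0.7 * b for t, b in zip(orange, background)]
print(nose.classify(mixture))        # -> orange

# Adding a third odor later does not disturb what was already learned:
nose.learn("swamp", [0.2, 0.5, 7.0, 6.0])
print(nose.classify(mixture))        # -> orange, still
```

A real chemosensory model handles drift, nonlinearity, and far more sensors, but the sketch captures why adding a class costs one new template rather than a full retraining pass.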

        Potential applications for the system include air quality monitoring, toxic waste identification, land mine detection, trace drug detection, and medical diagnoses. However, the algorithm is not limited to chemosensation, Cleland said. He and his team have used it to classify ground cover types from hyperspectral satellite images and differentiate frog calls in South American jungles [6]. “We can work with anything where we have a sufficient number of sensors,” he said. “The one caveat is the sensors need to be good enough to detect the things you want to detect.”

        Fig. 1. Cornell University and Intel researchers built an electronic nose that can learn the scent of a chemical after just one exposure on top of Loihi, Intel’s fifth-generation research chip for neuromorphic computing [1]. The chip, shown here, places memory and processing nodes within individual modules to enable super-efficient detection of odors and other patterned stimuli [4]. Credit: Tim Herman/Intel Corporation.

        While Cleland and Imam leveraged Intel’s Loihi chip, researchers at Vienna University of Technology (TU Wien) have designed their own neuromorphic chip that enables incredibly fast image processing (Fig. 2 [2,3]). Machine vision technology typically involves cameras scanning image pixels row by row, converting video frames to digital signals, then transmitting the data to off-board computers for analysis—all of which cause significant delays. The TU Wien group sought to speed up this process by developing an image sensor that itself functions as an artificial neural network capable of simultaneously acquiring and analyzing images. “Combining sensing with computing in one step really opens up a whole new direction for image interpretation,” said Lukas Mennel, a graduate student at the TU Wien Photonics Institute in Austria.

        The new sensor consists of a three-by-three array of pixels, each of which represents a neuron [2]. Each pixel in turn consists of three photodiodes, each representing a synapse. Each photodiode is made from three-atom-thick sheets of tungsten diselenide, a semiconductor with a tunable response to light. This tunability allows the photodiodes to remember and respond to light in a programmable way.
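A hedged sketch of this operating principle: if each photodiode's programmable responsivity is treated as a synaptic weight, the photocurrents from like-numbered photodiodes sum into one output per class, so classification happens during sensing itself. The 3 × 3 letter patterns and weight values below are invented for illustration; they are not the device data of Refs. [2,3].

```python
# Toy model of a sensor-as-network (invented patterns, not device data):
# photodiode j of pixel i has responsivity R[j][i]; summed photocurrents
# give one output per class, so the sensor computes the dot products itself.

# Toy 3x3 light patterns (1 = illuminated pixel), flattened row by row.
LETTERS = {
    "n": [1, 0, 1,  1, 1, 1,  1, 0, 1],
    "v": [1, 0, 1,  1, 0, 1,  0, 1, 0],
    "z": [1, 1, 1,  0, 1, 0,  1, 1, 1],
}

# Program each class's nine responsivities to match its own pattern — a
# stand-in here for weights obtained by training.
R = {name: [p / sum(pat) for p in pat] for name, pat in LETTERS.items()}

def sense_and_classify(light):
    # Currents from photodiode j of every pixel add on a shared line
    # (Kirchhoff summation), so each class's output current is a dot
    # product formed with no row-by-row frame readout step.
    currents = {name: sum(w * p for w, p in zip(weights, light))
                for name, weights in R.items()}
    return max(currents, key=currents.get)

print(sense_and_classify(LETTERS["v"]))  # -> v
```

Because the "multiply-accumulate" is just photocurrent addition, the only latency is the electronics around the array, which is the basis of the frame-rate figures quoted below.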

        To test their system, the TU Wien researchers used lasers to project the letters “n,” “v,” and “z” onto the neural network image sensor [3]. The sensor was able to correctly process the image of each letter at the equivalent of 20 million frames per second (fps). In contrast, conventional machine vision technology would be capable of processing the images at no more than about 1000 fps.

        Mennel said the sensor’s speed is limited only by the speed of the electrons in the circuits and that, theoretically, the system could operate a few orders of magnitude faster than what they have reported. In addition to the ultra-fast processing, the image sensor does not consume any electrical power when in operation. Rather, the sensed photons themselves provide the electric current needed to power the sensor.

        The TU Wien image sensor technology has a variety of high-speed applications, including fracture mechanics (determining the direction in which cracks propagate) and particle detection (identifying which of several possible particles has just passed by). While in theory the system could handle complex tasks such as guiding autonomous vehicles, it would need to be scaled up significantly, Mennel said. “So, the obvious next step is scaling up, which should be fairly easy since people are now able to build sensors with millions of pixels.”

        Based on these results, it looks like neuromorphic computing could become an important part of the digital future. “The amount of power consumed by current machine-learning approaches is enormous, often prohibitively so,” Kreiman said. “Neuromorphic computing shows potential to revolutionize the way we think about computation, in terms of enabling certain approaches that are currently not feasible, and at a fraction of the cost.”

        Fig. 2. (a) The image sensor chip developed by TU Wien researchers doubles as an artificial neural network that processes images thousands of times faster than conventional techniques [2,3]. (b) The artificial neural network auto-encodes noise-free images projected onto the sensor into a current code, which is converted into a binary activation code and finally reconstructed into an image by the decoder [2,3]. Once trained, the auto-encoder can take noisy inputs and reconstruct the projected images. Credit: TU Wien, with permission.
