
        Neuromorphic Computing Advances Deep-Learning Applications

        2020-11-05 09:52:26 Chris Palmer
        Engineering 2020, Issue 8

        Chris Palmer

        Senior Technology Writer

        In the quest for ever faster and more efficient computing, researchers and manufacturers are busy exploring novel processing architectures. Among these, neuromorphic computing—the emulation of brain function inside computer chips—is showing particular promise for applications involving deep learning, an increasingly common form of artificial intelligence (AI) that uses neural networks inspired by brains to uncover patterns in large datasets.

        In traditional machine learning based on conventional computer hardware, the memory and processing nodes are physically separated. In contrast, neuromorphic computer hardware mimics neurons and places both functions in the same spot. By eliminating the need to transfer data back and forth between processing and storage sites, this architecture can substantially reduce computing time and power requirements for certain learning tasks such as pattern recognition and classification.

        While the concept of neuromorphic computing originated in the late 1980s, its trajectory has been hampered by the slow pace of algorithm development, the need for novel materials with which to build the joint memory/processing nodes, and challenges in scaling up. Early neuromorphic neural networks had no ‘‘plasticity,” said Thomas Cleland, a professor of psychology at Cornell University in Ithaca, NY, USA; once they were set up and trained to do a particular task, that was it—to do something different they needed to be rebuilt and retrained. That constraint was ‘‘extremely limiting,” said Cleland.

        Technical advances have now largely overcome this constraint. ‘‘One of the fundamental advances in AI over the last decade is coming up with faster and better ways to do learning,” said Gabriel Kreiman, a professor of ophthalmology and associate director of the Center for Brains, Minds and Machines at Harvard Medical School in Cambridge, MA, USA. ‘‘Implanting plasticity directly on the hardware so it can be retrained without starting from scratch can be quite transformative.”

        Two new applications of neuromorphic computing showcase the potential of this kind of design to efficiently solve a wide array of problems with great speed and minimal power expenditure: an electronic nose that can learn the scent of a chemical after just one exposure [1] and a machine-vision device with an image sensor that doubles as an artificial neural network and can process images thousands of times faster than conventional technology [2,3].

        The electronic nose is a ‘‘one-shot learning” olfaction system Cleland built with Nabil Imam, an engineer at Intel’s Neuromorphic Computing Laboratory in Santa Clara, CA, USA. The system is powered by Intel’s fifth-generation neuromorphic chip (Fig. 1 [1]), Loihi, which contains 128 core processing units, each with a built-in learning module, and more than 130 000 computational ‘‘neurons” linked to thousands of their neighbors [4].

        Cleland and Imam evaluated their system by pitting it against a traditional neural network in a smell test of ten odors wafting through a wind tunnel outfitted with 72 metal oxide gas sensors (data derived from a publicly available dataset [5]). Training for the neuromorphic system involved a single exposure to each odor, while hundreds of trials went into training the traditional AI. Every learned smell comprised only 20%–80% of the overall tested aroma, reflecting real-world conditions where numerous odors often blend with one another. The neuromorphic AI identified the target odor 92% of the time, compared to 52% of the time for the traditional AI [1].

        ‘‘We can train our algorithm once on a clean odor, like orange or amyl acetate [a banana-like scent], and present that odor against many different backgrounds,” Cleland said. ‘‘You could test it in a bakery, a garbage dump, or a swamp, and it would be able to recognize that odor.”

        Training of standard AI, in addition to being time-consuming and power-hungry, has to start from scratch every time a new smell is added. The neuromorphic AI, on the other hand, can keep learning new scents simply by adding new ‘‘neurons” to the network. Cleland is now trying to adapt the system to work in autonomous robots. ‘‘We would like to be able to train it within seconds, and have it accurately detect odors, even if they are deeply obscured by uncontrolled contaminants,” he said. ‘‘We do not want to have to say, ‘Oh yeah, it does not work when things are acidic or when it is too humid or whatever.’”
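        That contrast, a network that learns each odor from one exposure and grows by adding units rather than retraining, can be illustrated with a toy nearest-prototype classifier. This is a loose analogy only, not the Loihi algorithm; the odor names, the 72-sensor vectors, and the 20%/80% blend are placeholder values echoing the article's setup.

```python
import numpy as np

# Toy analogy, not the Loihi algorithm: learning a new odor simply adds
# a "neuron" (a stored, normalized sensor vector). Nothing already
# learned is disturbed, so there is no retraining from scratch.
class OneShotNose:
    def __init__(self):
        self.prototypes = {}  # odor name -> unit-norm sensor vector

    def learn(self, name, reading):
        v = np.asarray(reading, dtype=float)
        self.prototypes[name] = v / np.linalg.norm(v)  # single exposure

    def classify(self, reading):
        v = np.asarray(reading, dtype=float)
        v = v / np.linalg.norm(v)
        # the stored odor with the highest cosine similarity wins
        return max(self.prototypes, key=lambda n: self.prototypes[n] @ v)

rng = np.random.default_rng(0)
clean = {name: rng.random(72) for name in ["orange", "amyl_acetate"]}

nose = OneShotNose()
for name, reading in clean.items():
    nose.learn(name, reading)                # one exposure per odor

# A blend: 20% target odor against 80% uncontrolled background
mixed = 0.2 * clean["orange"] + 0.8 * rng.random(72)
print(nose.classify(clean["orange"]))        # prints "orange"
print(nose.classify(mixed) in clean)         # always a known odor
```

        Adding a third odor later is a single `learn` call; the existing prototypes are untouched, which is the property Cleland describes.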

        Potential applications for the system include air quality monitoring, toxic waste identification, land mine detection, trace drug detection, and medical diagnoses. However, the algorithm is not limited to chemosensation, Cleland said. He and his team have used it to classify ground cover types from hyperspectral satellite images and differentiate frog calls in South American jungles [6]. ‘‘We can work with anything where we have a sufficient number of sensors,” he said. ‘‘The one caveat is the sensors need to be good enough to detect the things you want to detect.”

        Fig. 1. Cornell University and Intel researchers built, on top of Loihi, Intel’s fifth-generation research chip for neuromorphic computing, an electronic nose that can learn the scent of a chemical after just one exposure [1]. The chip, shown here, places memory and processing nodes within individual modules to enable super-efficient detection of odors and other patterned stimuli [4]. Credit: Tim Herman/Intel Corporation.

        While Cleland and Imam leveraged Intel’s Loihi chip, researchers at Vienna University of Technology (TU Wien) have designed their own neuromorphic chip that enables incredibly fast image processing (Fig. 2 [2,3]). Machine vision technology typically involves cameras scanning image pixels row by row, converting video frames to digital signals, then transmitting the data to off-board computers for analysis—all of which cause significant delays. The TU Wien group sought to speed up this process by developing an image sensor that itself functions as an artificial neural network capable of simultaneously acquiring and analyzing images. ‘‘Combining sensing with computing in one step really opens up a whole new direction for image interpretation,” said Lukas Mennel, a graduate student at the TU Wien Photonics Institute in Austria.

        The new sensor consists of a three-by-three array of pixels, each of which represents a neuron [2]. Each pixel in turn consists of three photodiodes, each of which represents a synapse. Each photodiode is made from three-atom-thick sheets of tungsten diselenide, a semiconductor with a tunable response to light. Such tunability allows the photodiodes to remember and respond to light in a programmable way.

        To test their system, the TU Wien researchers used lasers to project the letters ‘‘n,” ‘‘v,” and ‘‘z” onto the neural network image sensor [3]. The sensor correctly processed the image of each letter at the equivalent of 20 million frames per second (fps). In contrast, conventional machine vision technology would be capable of processing the images at no more than about 1000 fps.
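        The classification step the article describes can be sketched numerically: since photodiode j of pixel i contributes a photocurrent proportional to its programmed responsivity, summing the currents on each output line is a matrix–vector product performed by the device physics itself. The sketch below is purely illustrative, not the published TU Wien device model; the binary 3 × 3 letter patterns and the template-matching choice of responsivities are assumptions made for the demonstration.

```python
import numpy as np

# Illustrative sketch, not the TU Wien device model: the projected
# image hits 9 pixels; each pixel's 3 photodiodes route photocurrents,
# weighted by programmable responsivities R[i, j], onto 3 output
# lines. Summing per line computes image @ R in the analog domain.

# Toy 3x3 binary patterns standing in for the projected letters
letters = {
    "n": np.array([[1, 0, 1], [1, 1, 1], [1, 0, 1]], float).ravel(),
    "v": np.array([[1, 0, 1], [1, 0, 1], [0, 1, 0]], float).ravel(),
    "z": np.array([[1, 1, 1], [0, 1, 0], [1, 1, 1]], float).ravel(),
}

# Program responsivities as mean-subtracted templates, one output line
# per letter (a template-matching choice made for this sketch)
R = np.stack([p - p.mean() for p in letters.values()], axis=1)  # (9, 3)

def classify(image):
    currents = image @ R                  # 3 summed photocurrents
    return list(letters)[int(np.argmax(currents))]

print(classify(letters["v"]))  # prints "v"
```

        The point of the analogy is that no pixel readout or off-chip transfer happens between sensing and classifying; the weighted sums exist as currents the moment light arrives, which is why the reported frame rates can be so high.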

        Mennel said the sensor’s speed is limited only by the speed of the electrons in the circuits and that, theoretically, the system could operate a few orders of magnitude faster than what they have reported. In addition to the ultra-fast processing, the image sensor does not consume any electrical power when in operation. Rather, the sensed photons themselves provide the necessary electric current to power the sensor.

        The TU Wien image sensor technology has a variety of high-speed applications, including fracture mechanics—determining which direction cracks propagate from—and particle detection—figuring out which of several possible particles has just passed by. While in theory the system could handle complex tasks such as guiding autonomous vehicles, it would need to be scaled up significantly, Mennel said. ‘‘So, the obvious next step is scaling up, which should be fairly easy since people are now able to build sensors with millions of pixels.”

        Based on these results, it looks like neuromorphic computing could become an important part of the digital future. ‘‘The amount of power consumed by current machine-learning approaches is enormous, often prohibitively so,” Kreiman said. ‘‘Neuromorphic computing shows potential to revolutionize the way we think about computation, in terms of enabling certain approaches that are currently not feasible, and at a fraction of the cost.”

        Fig. 2. (a) The image sensor chip developed by TU Wien researchers doubles as an artificial neural network that processes images thousands of times faster than conventional techniques [2,3]. (b) The artificial neural network auto-encodes noise-free images projected onto the sensor into a current code, which is converted into a binary activation code and finally reconstructed into an image by the decoder [2,3]. Once trained, the auto-encoder can take noisy inputs and reconstruct the projected images. Credit: TU Wien, with permission.
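        The data flow in that caption, image to current code, current code to binary activation code, binary code to reconstructed image, can be mimicked in a few lines. This is a minimal sketch under stated assumptions, not the TU Wien training procedure: the encoder weights stand in for fixed responsivities, the ‘‘images” are random toy vectors, and only the decoder is fitted (by least squares), whereas the real device also trains the photodiode responsivities.

```python
import numpy as np

# Minimal sketch of the caption's pipeline (not the TU Wien training
# procedure): encode images into currents, threshold the currents into
# a binary activation code, then decode the code back to pixel space.
rng = np.random.default_rng(2)
n_pixels, n_code = 9, 4                          # 3x3 images, 4-bit code

W_enc = rng.normal(size=(n_pixels, n_code))      # fixed encoder weights
X = rng.uniform(0.0, 1.0, size=(100, n_pixels))  # batch of toy "images"

C = (X @ W_enc > 0).astype(float)                # binary activation code
Cb = np.hstack([C, np.ones((len(C), 1))])        # append a bias column

# Fit the decoder: least-squares solution of Cb @ W_dec ~= X
W_dec, *_ = np.linalg.lstsq(Cb, X, rcond=None)
recon = Cb @ W_dec                               # reconstructed images

mse = float(np.mean((recon - X) ** 2))
print(mse < 0.15)  # True: at least as good as predicting pixel means
```

        Even this crude fit beats a constant predictor; the published device gets much further by training the responsivities themselves, so the encoding, not just the decoding, is learned.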
