
Pushing Mathematical Limits, a Neural Network Learns Fluid Flow

2021-09-24 06:45:28
Engineering, 2021, Issue 5

        Dana Mackenzie

        Senior Technology Writer

Drop a pebble into a flowing stream of water. It may not change the pattern of flow very much. But if you drop a pebble into a different place, it may change a lot. Who can predict?

Answer: A neural network can. A group of computer scientists and mathematicians at the California Institute of Technology (Caltech) in Pasadena, CA, USA, has opened up a new arena for artificial intelligence (AI) by showing that a neural network can teach itself how to solve a broad class of fluid flow problems, much more rapidly and accurately than any previous computer program [1].

"When our group got together two years ago, we discussed which scientific domains are ripe for disruption by AI," said Animashree Anandkumar, a professor of computing and mathematical sciences and co-leader of the artificial intelligence for science (AI4Science) initiative at Caltech. "We decided that if we could find a strong framework for solving partial differential equations, we could have a wide impact." Their first target was the Navier–Stokes equations in two dimensions, which describe the motion of an infinitely thin sheet of water (Fig. 1) [1]. Their neural network, which they call a "Fourier neural operator," dramatically outperforms previous differential equation solvers on this type of problem: it is up to 400 times faster and 30% more accurate.
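To make the idea concrete, here is a minimal sketch of the spectral-convolution step at the heart of a Fourier layer: transform the field to frequency space, apply learned weights to a truncated set of low-frequency modes, and transform back. This is an illustrative reconstruction in PyTorch, not the team's code; the class name SpectralConv2d and the parameters channels and modes are assumptions made for the example.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """One Fourier layer: FFT -> learned mixing of low modes -> inverse FFT.

    Illustrative sketch of the idea behind a Fourier neural operator layer;
    names and details here are assumptions, not the authors' implementation.
    """
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency Fourier modes kept
        scale = 1.0 / (channels * channels)
        # Complex weights that mix channels at each retained frequency mode
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width), e.g., a velocity field on a grid
        x_ft = torch.fft.rfft2(x)            # to frequency space
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Apply the learned linear map only to the lowest modes;
        # higher frequencies are simply truncated.
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to physical space
```

A full Fourier neural operator stacks several such layers, interleaved with pointwise linear transforms and nonlinearities. Because the learned weights live in frequency space rather than on a fixed grid, the same trained operator can in principle be evaluated at different grid resolutions.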

Partial differential equations (PDEs) are the kind of equation that Isaac Newton's laws of motion naturally lead to. For this reason, they are fundamental to science, and any major advance in solving them would have broad ramifications. "We are having discussions with so many teams, from industry and academia and national labs," said Anandkumar. "We are already doing experiments on fluid flow in three dimensions." One good use case would be the equations for modeling nuclear fusion, Anandkumar said. Another would be materials design, she added, especially plastic and elastic materials, an area in which team member Kaushik Bhattacharya, a professor of mechanics and materials science, "has deep experience."

Computers emerged, in part, out of efforts during the Second World War to predict projectile motion using differential equations [2]. They have been used ever since to solve differential equations, with varying degrees of accuracy and success. But previous approaches, whether they involved traditional computer programming or AI, have always worked on one "instance" of an equation at a time. For example, they can figure out how one pebble dropped in one place affects the flow of water. Then they can learn how a pebble dropped in a different place changes it. But they will not generalize to understand how any pebble dropped in any place changes the flow. That is the ambitious goal behind the Caltech Fourier neural operator.

There is, of course, a good reason why this has not been done before. Neural networks excel at learning associations between what mathematicians call finite-dimensional spaces. For example, the Google AI program AlphaGo, which beat the strongest human Go player, learned a function between Go positions (which are finite, though astronomical, in number) and Go moves [3]. By contrast, the Fourier neural operator takes as input the initial velocity field of a fluid and produces as output the velocity field a certain time later. Both of these velocity fields live in infinite-dimensional spaces, which is just a mathematical way of saying that there are infinitely many ways in which you can toss a pebble into flowing water.

Fig. 1. Water flows in a thin sheet over a fountain. The Caltech AI4Science team reports that a neural network can predict the motion of such two-dimensional fluid flow much more rapidly and accurately than computer programs using standard methods to solve differential equations [1]. Their work, which has potentially broad ramifications for advancing science through improved modeling of natural phenomena such as nuclear fusion, continues with experiments on fluid flow in three dimensions. Credit: Pixabay (public domain).

The Caltech team trained the Fourier neural operator by showing it a few thousand instances of a Navier–Stokes equation solved by traditional methods [1]. The network is evaluated by a "cost function," which measures how far off its predictions are from the correct solution, and its parameters are adjusted to gradually improve those predictions. Because the network starts with a curated set of inputs and outputs, this is called "supervised learning." Google's original version of AlphaGo learned by a combination of supervised and unsupervised learning (though a later version used unsupervised learning only) [3]. Other neural network programs used in image processing typically employ supervised learning [4].
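In code, that supervised setup might look like the following sketch: the training pairs come from a conventional solver, the cost function is a mean-squared error, and gradient descent nudges the network toward the reference solutions. The names here are hypothetical, and the paper's actual loss and optimizer settings may differ.

```python
import torch

def train(model, loader, epochs: int = 10, lr: float = 1e-3):
    """Supervised training sketch for an operator-learning model.

    `loader` yields hypothetical (a, u_true) pairs: an initial field and the
    solver-computed field at a later time (a few thousand such pairs in [1]).
    `model` could be a stack of SpectralConv2d layers as sketched above.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()      # the "cost function": distance to the true solution
    for _ in range(epochs):
        for a, u_true in loader:
            u_pred = model(a)         # network's predicted field
            loss = loss_fn(u_pred, u_true)
            opt.zero_grad()
            loss.backward()           # adjust parameters to improve predictions
            opt.step()
```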

But no matter how much training data you have, you might not be able to explore more than the tiniest part of an infinite-dimensional space. You cannot try out all the places where you could put a pebble into a stream. And without some kind of prior assumptions, your network is not guaranteed to correctly predict what happens when the pebble is dropped into a new place.

For this and other reasons, "We wanted to take the relevant parts of neural networks and combine them with domain-specific understanding on the math side," said Andrew Stuart, another AI4Science team member and a professor of computing and mathematical sciences.

Specifically, Stuart knew that linear PDEs, the simplest kind of PDE, can be solved with the well-known method of Green's functions, a device used to solve difficult ordinary and partial differential equations that may be unsolvable by other methods [5]. Basically, it provides a template for an appropriate solution to the equation. This template can be approximated in a finite-dimensional space, so it reduces the problem from infinite dimensions to finite dimensions.
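For readers who want the template spelled out, here is the textbook Green's-function formula (a standard fact, not specific to [1]) for a linear problem Lu = f on a domain Ω:

```latex
% Solution of the linear problem  L u = f  via its Green's function G,
% where G satisfies  L G(x, y) = \delta(x - y)  in x for each source point y:
u(x) = \int_{\Omega} G(x, y)\, f(y)\, \mathrm{d}y
```

Sampling x and y on a grid turns the integral into a matrix–vector product, which is exactly the sense in which the template reduces an infinite-dimensional problem to a finite-dimensional one.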

The Navier–Stokes equations are nonlinear, so no such template is known for them. But if there were something similar to a Green's function for the Navier–Stokes equation, a nonlinear but still finite-dimensional template, then a neural network should be able to learn it. There was no guarantee that this would work, but Stuart called it a "well-informed gamble." Experience has shown time and time again that neural networks are extremely good at learning nonlinear maps in finite-dimensional spaces, he said.

Learning a nonlinear operator between infinite-dimensional spaces is a "holy grail" of computational science, said Daniele Venturi, an assistant professor of applied mathematics at the University of California, Santa Cruz in Santa Cruz, CA, USA. Venturi, whose research involves differential equations and infinite-dimensional function spaces, said he is not convinced that the Caltech group has gotten there yet. "It is in general impossible to learn a nonlinear map between infinite-dimensional spaces based on a finite number of input–output pairs," he said. "But it is possible to approximate it. The main question is really the computational cost of such approximation, and its accuracy and efficiency. The results they have shown are really, really impressive."

In addition to unprecedented speed and accuracy, the Caltech group's method has other remarkable properties [1]. By design, it can predict the fluid flow even in places where you have no initial data and predict the result of disturbances not seen before. The program also confirms an emergent behavior of solutions to the Navier–Stokes equations: over time, they redistribute energy from long to short wavelengths. This phenomenon, called an "energy cascade," was proposed by Andrei Kolmogorov in the 1940s as an explanation for turbulence in fluids [6].
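One way to see such a cascade in a simulation (a sketch assuming a square, periodic grid; this is not the diagnostic used in [1]) is to compute the kinetic-energy spectrum E(k) of the velocity field at successive times and watch energy move toward higher wavenumbers:

```python
import numpy as np

def energy_spectrum(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Isotropic kinetic-energy spectrum E(k) of a 2D periodic velocity field.

    Illustrative sketch: evaluating this at successive times should show
    energy shifting from low to high wavenumbers, i.e., an energy cascade.
    """
    n = u.shape[0]                                # square n-by-n grid assumed
    uh = np.fft.fft2(u) / n**2                    # normalized Fourier coefficients
    vh = np.fft.fft2(v) / n**2
    e2d = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2)  # energy per mode
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    kmag = np.hypot(k[:, None], k[None, :])
    spectrum = np.zeros(n // 2)
    for shell in range(1, n // 2):                # radial shells |k| ~ shell
        mask = (kmag >= shell - 0.5) & (kmag < shell + 0.5)
        spectrum[shell] = e2d[mask].sum()
    return spectrum
```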

The next frontier for the Fourier neural operator is three-dimensional fluid flow, where turbulence and chaos are major obstacles. Can neural networks tame chaos? "We know that chaos means we cannot precisely predict the fluid motion over long time horizons," Anandkumar said. "But we also know from theory that there are statistical invariants, such as invariant measures and stable attractors." If the neural network could learn where the attractors are, it would be possible to make better probabilistic predictions, even when precise deterministic projections are impossible. Anandkumar pointed out that the network could control a chaotic system so that it does not head toward an undesirable attracting state. "In nuclear fusion, for example, the ability to control disruptions, such as loss of stability of the plasma, becomes very important," she said.
