

Application of convolutional neural networks to large-scale naphtha pyrolysis kinetic modeling

Chinese Journal of Chemical Engineering, 2018, Issue 12

Feng Hua, Zhou Fang, Tong Qiu *

1 Department of Chemical Engineering, Tsinghua University, Beijing 100084, China

2 Beijing Key Laboratory of Industrial Big Data System and Application, Beijing 100084, China

Keywords: Convolutional neural network; Network motif; Naphtha pyrolysis; Kinetic modeling

ABSTRACT: System design and optimization problems require large-scale chemical kinetic models. Pure kinetic models of naphtha pyrolysis require solving a complete set of stiff ODEs and are therefore computationally expensive. On the other hand, artificial neural networks that completely neglect the topology of the reaction network often generalize poorly. In this paper, a framework is proposed for learning local representations from large-scale chemical reaction networks. First, the features of naphtha pyrolysis reactions are extracted by applying complex-network characterization methods. The selected features are then used as inputs to convolutional architectures. Different CNN models are established and compared to optimize the neural network structure. After the pre-training and fine-tuning steps, the final CNN model reduces the computational cost of the previous kinetic model by over 300 times and predicts the yields of the main products with an average error of less than 3%. The obtained results demonstrate the high efficiency of the proposed framework.

1. Introduction

Accurate models that simulate the naphtha pyrolysis process are important because ethylene is an essential feedstock in the olefin industry. The ethylene cracking process involves a great number of reactions that are highly coupled and often multi-route. Thus, the naphtha pyrolysis network is a typical large-scale chemical reaction network (CRN). The earlier molecular model developed by Kumar et al. [1] is the most widely used in the olefin industry owing to its fast calculation and ease of development, but its simplified kinetics also limit the characterization of heavy feedstocks. Recently, Van Goethem et al. [2] developed a model to simulate the steam cracking process. Its kinetic scheme, developed over the years by Ranzi et al. [3], is written from the standpoint of free radicals, which enables a more detailed description of the heavy feedstock components. To move beyond single particles and model the entire cracking reactor, Fang et al. [4] combined the detailed pyrolysis network with heat transfer in the furnace simulation. The results show good agreement with experimental data from industrial cracking furnaces. The flow chart of their simulation model is illustrated in Fig. 1.

Beyond naphtha pyrolysis, chemical kinetic models are commonly used to tackle CRN problems [5]. Mathematically, the behavior of a chemical network can be captured by solving high-dimensional ordinary differential equations for mass, heat and momentum transport. A representative expression of the ODEs is

$$\frac{\mathrm{d}N_m}{\mathrm{d}t} = \sum_i \nu_{im} r_i$$

where $N_m$ represents the concentration of substrate m, $\nu_{im}$ represents the coefficient of substrate m in reaction i in the stoichiometry matrix, and $r_i$ represents the rate of reaction i.

Numerical methods such as the GEAR algorithm are used to solve the ODEs [6]. But for complex models, numerical solutions are often computationally expensive and sometimes impractical, because the reaction rates are complicated functions of the concentrations. For these reasons, there is a growing trend toward focusing on the qualitative features of CRNs while disregarding the precise values of their parameters [7]. Moreover, research implies that modularity in the structure of a CRN can help in understanding its overall behavior by defining meaningful subsystems [8]. Using the modular features of CRNs to guide the modeling of chemical processes is therefore attractive, and it coincides with the working mechanism of convolutional neural networks.
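To make the computational burden concrete, the minimal sketch below integrates the mass balance above for a hypothetical three-species toy network with SciPy's BDF solver (a Gear-type implicit method); the stoichiometry and rate constants are illustrative placeholders, not the paper's kinetics.

```python
# Stiff integration of dN_m/dt = sum_i v_im * r_i for a toy A -> B -> C network.
import numpy as np
from scipy.integrate import solve_ivp

# Stoichiometric matrix v[i, m]: 2 reactions x 3 species (A -> B, B -> C).
V = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
k = np.array([1.0e3, 1.0])  # widely separated rate constants make the system stiff

def rates(N):
    # First-order rate laws r_i = k_i * [reactant]; real pyrolysis kinetics
    # are far more complex functions of concentration and temperature.
    return k * np.array([N[0], N[1]])

def dNdt(t, N):
    return rates(N) @ V  # sum over reactions: dN_m/dt = sum_i v_im * r_i

N0 = np.array([1.0, 0.0, 0.0])
sol = solve_ivp(dNdt, (0.0, 10.0), N0, method="BDF")  # implicit, stiff-stable
print(sol.y[:, -1])  # final concentrations of A, B, C
```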

The recent progress of deep learning motivates its application in the fields of process and systems engineering, such as chemical process prediction [9,10], optimization of operating conditions [11] and fault prediction modeling [12]. Besides the neural networks above, the convolutional neural network (CNN) is also one of the most effective deep learning models.

Fig. 1. Flow chart of the ethylene cracking simulation [4].

The CNN structure is inspired by the visual cortex of animals. In 1990, the structure of CNN was first introduced and improved [13,14]. Like other deep neural networks, a CNN is a multi-layer feed-forward network [15]. However, due to its special design, a CNN can learn visual patterns from raw images with little preprocessing. The basic CNN is typically composed of convolutional layers, pooling layers and fully connected layers. Among them, the convolutional layers extract local features from the input images. Each convolutional layer usually consists of several different kernels so that different feature maps can be obtained. The parameters within each kernel are shared across positions; this weight-sharing mechanism effectively reduces the model complexity and the training difficulty (Fig. 2).

Unlike images, where pixels have an explicit spatial order, the naphtha pyrolysis network, which consists of hundreds of components and thousands of reactions, is less easily described. Due to the irregularity of node connections and the difficulty of network representation, to our knowledge no work has yet been reported in the literature on the application of CNN to the modeling of large-scale naphtha pyrolysis kinetics. In this paper, a novel method is proposed to apply CNN to naphtha pyrolysis kinetic modeling. Three different CNN architectures, LeNet-5 (shallow), AlexNet (deep) and GoogleNet (deeper), are explored and compared, and their respective benefits and disadvantages are discussed.

2. Graph Representation

Fig. 2. An overview of the framework proposed in the paper.

Large-scale chemical reaction networks are often highly coupled and clustered. Some components that act as glue connecting different subsystems appear nearly everywhere in the network [16]. Among such networks, the naphtha pyrolysis network is a representative case. Based on publicly available information on the naphtha pyrolysis network [17,18], we assemble a list of 4694 reactions involving 142 chemical compounds and radicals that represents the main routes of naphtha pyrolysis. Then, a substrate graph G_S = (V_S, E_S) is constructed and visualized as a representation derived from the stoichiometric equations. Its vertex set V_S includes all the components in the network. When two substrates S1 and S2 appear in the same reaction (either as reactants or as products), they are linked by an edge, e = (S1, S2) ∈ E_S. For instance, consider two stoichiometric equations:

The corresponding substrate graph is illustrated in Fig. 3(a). For naphtha pyrolysis, the visualization of the full substrate graph is produced with the software package Gephi [19] and is shown in Fig. 3(b).
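As an illustration of the construction rule, the short sketch below builds a substrate graph with networkx from two hypothetical free-radical reactions (stand-ins for the instance equations, which are not reproduced here): every pair of species that co-occur in a reaction is joined by an edge.

```python
# Build G_S = (V_S, E_S): substrates co-occurring in a reaction share an edge.
import itertools
import networkx as nx

reactions = [
    ["C2H6", "C2H5*", "H*"],   # all species appearing in (hypothetical) reaction 1
    ["C2H5*", "C2H4", "H*"],   # all species appearing in (hypothetical) reaction 2
]

G = nx.Graph()
for species in reactions:
    # Link every pair of substrates in the same reaction,
    # whether they appear as reactants or as products.
    G.add_edges_from(itertools.combinations(species, 2))

print(G.number_of_nodes(), G.number_of_edges())
```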

The input graph used as the final receptive field is unweighted. One might argue that reaction rates could serve as edge weights. However, the network contains a large number of reactions, and some of the rate constants cannot be obtained. In addition, a reaction rate changes with temperature and thus cannot be treated as a constant.

3. Local Feature Extraction

When a CNN is applied to images, a receptive field moves over the image with a certain stride and reads features from the image pixels. Since the pixels have an explicit spatial order, the movement of the receptive field is always from top-left to bottom-right. Thus, even different pixels can occupy the same relative position in the receptive field, provided that their structural relations are identical.

To extend the application of CNN to CRN problems, we need, by analogy with images, to construct locally connected neighborhoods for certain nodes in the input CRNs [20]. Those neighborhoods are normalized and serve as the receptive fields in the CNN architecture. To do so, we need to determine a mapping from the substrate graph to a matrix of reals such that substrates with similar structural relations are positioned similarly in the matrix. For a given substrate graph of a CRN, we propose a framework consisting of the following steps:

1. Neighborhood assembly: for each feed species, a fixed-size neighborhood is assembled to represent a local feature. We address this problem with two methods: node labeling based on degree ranking, and feature mining based on motif detection.

2. Normalization: the substrates in the neighborhood are replaced by the detailed molecular composition. After normalization, the matrix of reals enters the convolutional architecture.

3. Convolutional architecture: learn topological features from the resulting patches with a CNN.

3.1. Neighborhood assembly

3.1.1. Node labeling based on degrees

For each feedstock molecule, the node labeling algorithm is called to construct its neighborhood. The basic idea is shown in Fig. 4. Given the selected node v and the size of the receptive field k, the algorithm performs a breadth-first search, ranks the nodes according to their degrees, and adds the qualified nodes to the set G(v). First, the algorithm finds the first-degree connected nodes N1(v) of the input node v. If the number of nodes in N1(v) satisfies n1 ≥ k − 1, the algorithm ranks N1(v) by degree, takes the first k − 1 nodes and adds them to G(v). Otherwise, it adds all of N1(v) to G(v) and finds the next-level connected nodes, i.e. the second-degree connected nodes N2(v). If the number of nodes in N2(v) satisfies n2 ≥ k − 1 − n1, it ranks N2(v) by degree, takes the first k − 1 − n1 nodes and adds them to G(v). Otherwise, the process continues in the same manner until the number of nodes in G(v) reaches k.

Algorithm. Node Labeling

1. input: feedstock molecule v, receptive field size k, original graph G
2. output: receptive field G(v) for v
3. N0(v) = [v]; G(v) = []; i = 0
4. while length(G(v)) + length(Ni(v)) < k do
5.   G(v) = G(v) ∪ Ni(v)
6.   Ni+1(v) = ∪_{vi ∈ Ni(v)} N1(vi) \ G(v)
7.   i = i + 1
8. sort Ni(v) in descending order of node degree
9. select the first k − length(G(v)) nodes of Ni(v) and add them to G(v)
10. return receptive field G(v)
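For concreteness, a runnable Python rendering of the algorithm is sketched below, assuming the substrate graph is held in a networkx.Graph; the helper name and its tie-breaking behavior are our own choices, not the paper's code.

```python
import networkx as nx

def node_labeling(G, v, k):
    """Assemble a receptive field G(v) of size k around feedstock molecule v."""
    field = []          # G(v)
    frontier = [v]      # N_i(v), starting from N_0(v) = [v]
    seen = {v}
    while len(field) + len(frontier) < k:
        field.extend(frontier)  # absorb the whole level N_i(v)
        # N_{i+1}(v): first-degree neighbors of the current level, minus nodes seen
        nxt = {u for w in frontier for u in G.neighbors(w)} - seen
        seen |= nxt
        frontier = sorted(nxt)
        if not frontier:
            break               # graph exhausted before reaching size k
    # Rank the final level by degree and keep only as many nodes as still fit.
    frontier.sort(key=G.degree, reverse=True)
    field.extend(frontier[: k - len(field)])
    return field

# Toy usage on a small built-in graph:
G = nx.karate_club_graph()
print(node_labeling(G, 0, 7))
```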

Fig. 3. Visualization of the substrate graph. (a) Substrate graph of the instance. (b) Substrate graph of the complete naphtha pyrolysis network.

Fig. 4. An illustration of the node labeling algorithm based on degree ranking. To construct a neighborhood of fixed size k (here k = 7) for the red node, after adding its first- and second-degree connected nodes, we need to select between nodes No. 7 and No. 8. Since the degree of No. 7 is larger than that of No. 8, node No. 7 is added to the neighborhood of the root node.

Nevertheless, large numbers of highly connected nodes are found in the naphtha pyrolysis network. Those nodes mainly represent radicals, which widely participate in chain initiation, chain growth and hydrogen abstraction reactions. As intermediates, they play an important role in coupled reactions. In the receptive field constructed by the degree-based labeling algorithm, those nodes appear repeatedly. Since a large fraction of them are radical intermediates whose concentrations are negligible or even zero, the receptive fields that enter the convolutional architecture would be full of zeros, resulting in invalid convolutional operations and a huge waste of computational resources. More explicitly, because of the ubiquity of the short-chain intermediates, when we label each node and set the size of its neighborhood to 3, it is almost certain that in the 142×3 input matrix only the 31 feedstock species will be nonzero; the rest will all be 0.

Although the degree-based node labeling algorithm might be an intuitive and convenient way to assemble neighborhoods, it is not appropriate for networks in which low-concentration radicals appear frequently. For the naphtha pyrolysis network, a second mapping method is proposed and recommended: feature mining based on motif detection.

3.1.2. Feature mining based on motif detection

The complexity of networks is essentially the complexity of the connections between nodes. Researchers have found that certain connection patterns in real networks are more frequent than in random graphs [21,22]. These subgraphs, often called graphlets or motifs, are thought to reflect certain functional properties and to represent features of the network topology.

In the naphtha pyrolysis network, motifs are enumerated and analyzed as possible features. A fast algorithm for detecting network motifs [23], implemented in the software package FANMOD [24], is used. Apart from counting the frequencies of candidate motifs in the naphtha pyrolysis network, their frequencies are also counted in 100 randomized graphs. To show that the motifs are more significant in the naphtha pyrolysis network than in the random networks, two statistics, the Z-score and the P-value, are calculated. Their values reveal the statistical significance of a motif in a particular network. The Z-score is defined as follows:

$$Z(i) = \frac{N(i) - N_r(i)}{\sigma_r(i)}$$

where N(i) is the number of occurrences of pattern i in the real network, N_r(i) is the average number of occurrences of pattern i in a sufficiently large set of random networks, and σ_r(i) is the standard deviation of those counts. A large Z-score indicates that motif i appears more frequently than in the random networks. The P-value is defined as follows:

$$P(i) = \Pr\big(N_{\mathrm{rand}}(i) \ge N(i)\big) < \alpha$$

where α stands for the significance level (usually 0.05 or 0.01). The P-value is the probability that the frequency of a certain motif in a random network is equal to or larger than its frequency in the target network. That is to say, a motif is statistically more significant if its Z-score is larger and its P-value is closer to zero [25,26].
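A minimal sketch of how these two statistics could be computed from raw motif counts is given below; the ensemble counts in the usage line are synthetic placeholders, not data from the paper.

```python
import numpy as np

def motif_significance(n_real, n_random):
    """n_real: count N(i) in the real network; n_random: counts over the ensemble."""
    n_random = np.asarray(n_random, dtype=float)
    z = (n_real - n_random.mean()) / n_random.std()  # Z(i) = (N - N_r) / sigma_r
    p = float(np.mean(n_random >= n_real))           # fraction of random networks
    return z, p                                      # with count >= the real one

# Synthetic example: 100 random-network counts around 20000 vs. a real count of 34942.
z, p = motif_significance(34942, np.random.poisson(20000, size=100))
print(z, p)  # a feature motif should satisfy Z > 5 and P < 0.05
```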

In Table 1, f(i) and f_r(i) stand for the frequency of a motif in the naphtha pyrolysis network and in the random ones. A motif is taken as a feature of the network if its Z-score is greater than 5 and its P-value is less than 0.05 (Z > 5, P < 0.05). As shown in Table 1, the size-3 motif (ID = 2) and the size-4 motif (ID = 3) are features of the naphtha pyrolysis network. The feed of naphtha pyrolysis contains 31 species. For each of these 31 nodes, all of its size-3 and size-4 motifs are enumerated. In the substrate graph constructed in Fig. 3(b), where 142 substrates are connected by 4692 reactions, 34942 size-3 motifs and 528665 size-4 motifs are found. Since the number of motifs directly determines the size of the input to the convolutional architecture, and hence the number of parameters and the computation time, smaller motifs are preferable. The 34942 size-3 motifs cover 132 of the 142 substrates in the network, so we consider them qualified and appropriate as a local representation. Therefore, we use the size-3 motifs to construct the receptive fields, and the input for each training sample is 34942×3. This step is equivalent to taking reactions as centers and extracting substrate information.
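Enumerating the size-3 motifs around a feed node amounts to listing the connected triples that contain it. The paper uses FANMOD for this; the naive sketch below is only illustrative of what is being counted.

```python
import itertools
import networkx as nx

def size3_motifs_containing(G, v):
    """All connected node triples (size-3 motifs) that contain node v."""
    motifs = set()
    nbrs = set(G.neighbors(v))
    # Case 1: v is the center of a path a - v - b (possibly closed into a triangle).
    for a, b in itertools.combinations(sorted(nbrs), 2):
        motifs.add(frozenset((v, a, b)))
    # Case 2: v is an endpoint of a path v - a - b.
    for a in nbrs:
        for b in G.neighbors(a):
            if b != v:
                motifs.add(frozenset((v, a, b)))
    return motifs

# On a 5-node path graph, node 2 belongs to three connected triples.
G = nx.path_graph(5)
print(len(size3_motifs_containing(G, 2)))  # -> 3
```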

Table 1 Naphtha pyrolysis network motif detection

For the naphtha pyrolysis problem, motif detection has significant advantages over the node labeling method. As mentioned before, only the 31 feedstock species are selected as starting nodes, which guarantees that at least one entry in every row of the input matrix is nonzero. Meanwhile, a reaction between feedstock species (such as A + B → C) is bound to enter the input matrix, while in the node labeling method such a reaction may be discarded because the degree of A or B is too low. Therefore, by applying motif detection, the proportion of zeros in the input matrix can be significantly reduced.

3.2. Data normalization

The receptive field for each feed species is constructed by a normalization process. We replace the feed species in the assembled neighborhood by their corresponding mass fractions. Data normalization transforms the input to have zero mean and unit variance.

Suppose the input is a vector x = [x1, x2, …, xk], k = 31; the transformation is done as follows:

$$\hat{x}_j = \frac{x_j - \mu_x}{\sqrt{\delta_x^2 + \varepsilon}}$$

where μ_x and δ_x² are respectively the mean and variance of the input vector and ε is a small constant. Normalization is an essential step preceding the convolutional architecture, because it reduces the dependence of the gradients on the initial values, which benefits the gradient flow through the overall network. This permits higher learning rates and avoids divergence in training. The proposed framework based on motifs is illustrated in Fig. 5.
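A direct numpy rendering of this step for one 31-component feed vector might look as follows (the ε default is an assumption):

```python
import numpy as np

def normalize(x, eps=1e-8):
    """Zero-mean, unit-variance transform; eps guards against zero variance."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / np.sqrt(x.var() + eps)

x_hat = normalize(np.random.rand(31))  # one feed vector of 31 mass fractions
```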

3.3. Network architectures

Inputs to the CNN architectures fall into two classes. Apart from the regularized data generated in the previous step, which extracts substrate information centered on reactions, operating conditions are also used as input data. The operating conditions are coil inlet temperature (CIT), coil outlet temperature (COT), coil inlet pressure (CIP), feed rate and water/oil ratio. We train the neural nets to predict the yields of 9 key products of the simulated naphtha pyrolysis in the tubular cracking reactor.

The training of a CNN is a global optimization problem. The error function is defined as the average relative deviation between the product yields calculated by the kinetic model and those predicted by our CNN model.
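One plausible formalization of this average relative deviation, assuming N samples and 9 products with $y_{nj}$ the kinetic-model yield and $\hat{y}_{nj}$ the CNN prediction, is

$$E = \frac{1}{9N}\sum_{n=1}^{N}\sum_{j=1}^{9}\left|\frac{\hat{y}_{nj} - y_{nj}}{y_{nj}}\right| \times 100\%$$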

The stochastic gradient descent algorithm [27] is used to update the weights and biases so as to minimize the error function during training. Weights are initialized using the Xavier method [28], with weights drawn from a Gaussian distribution and biases set to constants. As for the architectures, we adapt three image-based CNN models to the CRN problem.

3.3.1. LeNet-5 architecture

LeNet-5 is a classic CNN architecture. It was first developed to solve basic handwritten digit classification problems. Its detailed architecture is shown in Fig. 6. A number of later CNN architectures have been developed, but they all inherit the basic components of LeNet-5. In our design, we set 142 kernels in the first convolutional layer, representing the extraction of reaction features centered on the 142 substrates. Fig. 7 shows a schematic view of the resulting network.

Fig. 5. An illustration of the framework based on motif detection. First, the red node and the blue node are selected to assemble neighborhoods. All of their size-3 motifs are enumerated and each node is normalized. The data then enter the following convolutional layers. In the naphtha pyrolysis network, the input for each training sample is 34942 size-3 motifs.

Fig. 6. LeNet-5 network [29].

3.3.2. AlexNet architecture

After LeNet-5, many methods were created to improve the performance of CNN models. In 2012, Krizhevsky et al. [30] replaced the previous shallow CNN architecture with a deep network called AlexNet, which showed significant improvements on the ImageNet image classification task. Among its innovations, the two most important are the use of ReLU and Dropout. ReLU is an activation function [31] whose output is 0 when the input is negative and equals the input otherwise. Due to its non-saturating nonlinearity, it can largely decrease training time with gradient descent. Dropout is a regularization method that simulates the combination of many different models and can thus efficiently reduce overfitting in the fully connected layers [32]. Illustrations of ReLU and Dropout are given in Fig. 8(a) and (b).

In the AlexNet model of naphtha pyrolysis, the dropout ratio is set to 0.5. Besides, we shuffle and batch the training data with a mini-batch size of 50 for 2000 epochs to make fuller use of the training data. With the help of the above optimizations, the network is substantially deepened. Fig. 9 shows a schematic view of the resulting network.

Fig. 7. LeNet-5 architecture.

Fig. 8. Advances in AlexNet [29].

Fig. 9. AlexNet architecture.

3.3.3. GoogleNet architecture

With the success of AlexNet, several works have further deepened the CNN structure. Among them, GoogleNet is representative [33]. It increases both the width and depth of the network at modest computational cost by applying the so-called inception module, shown in Fig. 10. With the inception module, the network uses various convolutional filter sizes to capture multi-scale patterns and approximates the optimal sparse structure. Specifically, smaller kernels are placed before bigger kernels to reduce dimensions, which is the key to keeping the computational cost low [34].

In the GoogleNet model of naphtha pyrolysis, 1×1 kernels are placed before bigger kernels to reduce dimensions. This design enables us to increase both the width and depth of the network without a computational blow-up. In the successful instance, the net is 12 layers deep when counting only layers with parameters, or 16 layers deep when also counting pooling layers. The architecture details are described in Fig. 11.
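A hedged PyTorch sketch of a one-dimensional inception-style block in the spirit of Fig. 10 is given below; the branch widths and kernel sizes are hypothetical, not the paper's configuration.

```python
import torch
import torch.nn as nn

class Inception1d(nn.Module):
    def __init__(self, in_ch, out_1x1, red_3, out_3, red_5, out_5, out_pool):
        super().__init__()
        # Branch 1: plain 1x1 convolution.
        self.b1 = nn.Conv1d(in_ch, out_1x1, kernel_size=1)
        # Branch 2: 1x1 reduction before the size-3 convolution (dimension reduction
        # before the bigger kernel, the key cost-saving idea described above).
        self.b2 = nn.Sequential(
            nn.Conv1d(in_ch, red_3, kernel_size=1), nn.ReLU(),
            nn.Conv1d(red_3, out_3, kernel_size=3, padding=1))
        # Branch 3: 1x1 reduction before the size-5 convolution.
        self.b3 = nn.Sequential(
            nn.Conv1d(in_ch, red_5, kernel_size=1), nn.ReLU(),
            nn.Conv1d(red_5, out_5, kernel_size=5, padding=2))
        # Branch 4: max pooling followed by a 1x1 projection.
        self.b4 = nn.Sequential(
            nn.MaxPool1d(kernel_size=3, stride=1, padding=1),
            nn.Conv1d(in_ch, out_pool, kernel_size=1))

    def forward(self, x):
        # Concatenate the multi-scale branches along the channel axis.
        return torch.cat([b(x) for b in (self.b1, self.b2, self.b3, self.b4)], dim=1)
```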

4. Optimization and Comparison of Network Architecture

4.1. Optimization of network architecture

To minimize the training cost and avoid overfitting, the network structure must be optimized before use. Different layers and hyperparameters are tried to find the best structure. Taking AlexNet as an example, its possible candidates are shown in Table 2, where the best performing candidate is selected at each step. The parameters mainly include the number of convolution kernels in the convolutional layers and the number of hidden cells in the FC layers.

FC* indicates that a dropout ratio of 0.5 is used for that FC layer. In the following, Model 3 is explained as an example; its schematic view is given in Fig. 9. The input size of one sample matrix is 34942×3, where 34942 is the number of motifs and 3 is the motif size. We use 5 convolutional layers, 3 max pooling layers and 2 FC layers. In the first convolutional layer, the kernel size is set to 3 and the stride to 1. In the second and third convolutional layers, the kernel sizes are set to 9 and the strides to 4. In the fourth and fifth convolutional layers, the kernel sizes are set to 3 and the strides to 2 and 1, respectively. The first convolutional layer contains 142 filters, representing the 142 substrates; the rest all contain 32 filters. The three max pooling layers follow the first, second and last convolutional layers. The kernel size is 9×3 in the first pooling layer, 4×1 in the second and 3×1 in the last; the strides are 8×3, 2×1 and 2×1, respectively. The output for one sample is a three-dimensional array (35×1×32). As mentioned in Section 1, the input of the FC layers must be a one-dimensional vector, hence a flatten layer reshapes the three-dimensional arrays into one-dimensional vectors of size 1125 (35×1×32 plus 5 operating conditions). The output length of the first FC layer is set to 128 and dropout is used for this layer. The last FC layer outputs the yields of the 9 key products.
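To fix ideas, the sketch below assembles a Model-3-like network in PyTorch. It follows the text loosely: the 3 nodes of each motif are treated as input channels over a length-34942 axis rather than reproducing the exact 2-D kernel/stride layout, and the flattened size is inferred with a dummy pass instead of hard-coding the paper's 1120.

```python
import torch
import torch.nn as nn

class NaphthaCNN(nn.Module):
    def __init__(self, n_motifs=34942, n_ops=5, n_products=9):
        super().__init__()
        # Convolutional feature extractor: 5 conv layers, 3 max pooling layers,
        # 142 filters in the first layer and 32 in the rest (per the text).
        self.features = nn.Sequential(
            nn.Conv1d(3, 142, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=9, stride=8),
            nn.Conv1d(142, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.MaxPool1d(kernel_size=4, stride=2),
            nn.Conv1d(32, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=2),
        )
        # Infer the flattened size once, keeping the sketch robust to stride arithmetic.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 3, n_motifs)).numel()
        # FC head: concatenate the 5 operating conditions, dropout on the first FC layer.
        self.head = nn.Sequential(
            nn.Linear(n_flat + n_ops, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_products))

    def forward(self, motifs, ops):
        # motifs: (batch, 3, n_motifs) normalized motif matrix; ops: (batch, 5).
        z = self.features(motifs).flatten(1)
        return self.head(torch.cat([z, ops], dim=1))

model = NaphthaCNN()
yields = model(torch.randn(2, 3, 34942), torch.randn(2, 5))  # -> (2, 9)
```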

Fig. 10. An illustration of the Inception module in GoogleNet.

Fig. 11. GoogleNet architecture.

Table 2 AlexNet candidates for ethylene cracking

The training set must be sufficiently large to train a neural network. We had hoped to use industrial data [35] of naphtha pyrolysis from a petrochemical plant; however, the amount of real industrial data (10 sets) is not sufficient to train the neural nets. The training of the convolutional neural networks is therefore divided into two steps.

4.2. Pre-training stage

In the first step, an existing kinetic model [4], as mentioned in Section 1, is used to generate training and test data. Based on the published industrial data of naphtha pyrolysis in the KBR SC-1 cracking reactor, the model generates 2500 sets of input data by introducing random perturbations.

When designing the working area of the training set, the input is divided into two categories: the feed part and the operating condition part. Both are generated by adding random disturbances of 5% to several sets of benchmark data, and for the feed part only the 31 species are perturbed. The operating conditions include coil inlet temperature (CIT), coil outlet temperature (COT), coil inlet pressure (CIP), feed rate and water/oil ratio. The CNN model is established to replace the slower mechanistic model. In production practice, the feed rate, water/oil ratio and COT are changed frequently, and the actual range of variation is relatively small. In the training set, the range of the feed rate is 6.0–6.7 t·h⁻¹·tube⁻¹, the w/o ratio 0.475–0.525 and the COT 828–915 °C. We believe that such a working range can meet the actual demands of the industry.

The corresponding output data are generated through the kinetic model, resulting in 2500 input–output pairs, where the input contains 5 operating conditions and the 31 mass fractions of the feedstock species, and the output contains the yields of the 9 key products. Of the 2500 pairs, 2000 are used as the training set and the other 500 as the test set.

To prove that the proposed framework improves simulation performance, an artificial neural network (ANN) with a similar number of parameters to the above LeNet-5 model is also constructed. The architecture of the traditional ANN is relatively simple, but it completely neglects the topology of the reaction network. The entire process is a "black-box" approach, resulting in relatively low accuracy and poor generalizability. The comparison of the different neural networks is listed in Table 3.

Table 3 The comparison of different neural networks

Compared with the ANN model, the convolutional neural networks perform much better on the test set. The CNN models reduce the relative error by half, from over 10% for the ANN model to less than 5%. The optimization of the network architectures aims to use fewer parameters while being more accurate. The number of parameters of AlexNet is about 70 times smaller than that of LeNet-5, while the test error falls from 4.65% to 3.25%. The reason is that in LeNet-5 the majority of the parameters are located in dense layers such as the fully connected layers. In AlexNet, the architecture is much more sparsely connected, since the convolutional layers largely reduce the dimensions of the data before the fully connected structure. The number of GoogleNet parameters increases slightly because of the significant increase of neurons in the convolutional layers, resulting from the larger and deeper network architecture. However, compared with the ANN and LeNet-5 models, GoogleNet is still a sparse structure. The evolution of the loss functions of the different models is shown in Fig. 12.

Fig. 12. Evolution of the loss function of different models.

Of the three CNN models, LeNet-5 performs the worst. It does not fully converge even in the later period of training, where obvious fluctuations can be seen. AlexNet and GoogleNet are nearly the same and perform much better than the LeNet-5 model. Their loss functions decrease to around 10% by the 100th training epoch and keep decreasing with further epochs. The loss is basically stable below 5% after 1000 epochs. It can be supposed that for these two models the number of training epochs could be reduced while preserving the effect. Nevertheless, the performance of GoogleNet is not obviously better than that of AlexNet. It is thus clear that the depth of the AlexNet model is sufficient for our study of the naphtha pyrolysis network.

4.3. Fine-tuning stage

In the second step, we take 6 of the 10 sets of real industrial data and use them to further train the initial neural network. The resulting model is used to predict the product yields for the four sets of untrained inputs to verify its performance. In an actual chemical plant, the operating conditions that are often modified are the feed rate, the water/oil ratio and especially the COT. Table 4 shows 8 sets of operating conditions. Among them, the first four sets are trained, while the last four are not.

Table 4 Operating conditions of eight test samples

The calculation results are shown in Fig. 13. We list the product yields of H2, CH4, C2H4, C2H6, C3H6, C3H8, C4H6, NC4H8 and IC4H8. The results show the excellent predictive performance of our model. When the model encounters new operating conditions, it achieves good predictions while using a small training set. The results also show that the predictions for the samples in the training set are better than for those outside it. Generally speaking, it is proven that the proposed convolutional neural network framework can effectively extract the local features of large-scale chemical reaction networks; here we apply it to the simulation of the naphtha pyrolysis network. We calculate the average running time over 100 model calls. The traditional kinetic model takes 31.112 s, while the hybrid model takes 0.097 s. The latter is approximately 321 times faster than the kinetic model.

5. Conclusions

In this work, we propose a framework for learning local representations from large-scale chemical reaction networks. It combines three main procedures: 1) a graph theoretical analysis to discover representative modular features; 2) generation of local normalized neighborhoods for the input molecules; 3) the application of a CNN architecture.

Three different CNN models, LeNet-5, AlexNet and GoogleNet, along with a shallow ANN model, are built to predict the main product yields of the naphtha pyrolysis network. The models based on the CNN structure perform significantly better than the ANN, as the former can learn from the network topology while the latter is completely data-driven. In the pre-training stage, with 2000 training epochs, the test error of the ANN model is above 10% while the CNN models generally reach 4% and below. Among the three CNN models, the deep networks, namely AlexNet and GoogleNet, improve accuracy and converge faster than the shallow one. Compared with the LeNet-5 model, the number of parameters of AlexNet and GoogleNet decreases dramatically, by about 70 times, while the test error falls from 4.65% to 3.25% and 3.41%, respectively. The AlexNet model is then fine-tuned with industrial data. The final model shows excellent agreement with the industrial data. Meanwhile, it reduces the computational cost of the previous kinetic model by over 300 times.

However, it should be mentioned that the model only considers variations of the feed and operating conditions as inputs; changes in the furnace tube structure are not taken into account. As with most industrial software, when the tube structure or the furnace type changes, the model must be re-established: the training data must be regenerated and the CNN model retrained. Therefore, the remarkable increase in simulation speed brought by the CNN model is accompanied by a relatively high one-time training cost.

We find that the conclusions for image-based problems are also applicable to chemical reaction networks: a sparser neural network architecture can lower the number of training parameters and increase the width and depth of the network as well as improve its performance. The naphtha pyrolysis network is specifically studied in this article, but the proposed framework can be applied to the modeling of other large-scale CRN problems.

Fig. 13. Simulation error of key products.

Nevertheless, it should be noted that motif detection is restricted to small motif sizes because of the network scale and the combinatorial complexity. This limit on motif size can, to some extent, affect the extraction of features from the network. In future work we plan to further decrease the test error by adjusting the network structure and parameters, including the use of multi-scale receptive fields.
