

        Prediction of Compressive Strength of Self-Compacting Concrete Using Intelligent Computational Modeling

        2017-12-11 · Susom Dutta, A. Ramachandra Murthy, Dookie Kim and Pijush Samui
        Computers, Materials & Continua, 2017, Issue 2


        Susom Dutta1, A. Ramachandra Murthy2, Dookie Kim3 and Pijush Samui4

        In the present scenario, computational modeling has gained much importance for the prediction of the properties of concrete. This paper demonstrates how computational intelligence can be applied for the prediction of the compressive strength of Self Compacting Concrete (SCC). Three models, namely Extreme Learning Machine (ELM), Adaptive Neuro Fuzzy Inference System (ANFIS) and Multivariate Adaptive Regression Spline (MARS), have been employed in the present study for the prediction of the compressive strength of self compacting concrete. The contents of cement (c), sand (s), coarse aggregate (a), fly ash (f), water/powder (w/p) ratio and superplasticizer (sp) dosage have been taken as inputs, and the 28-day compressive strength (fck) as output, for the ELM, ANFIS and MARS models. A relatively large set of data, comprising 80 normalized records available in the literature, has been taken for the study. A comparison is made between the results obtained from all the above-mentioned models and the model which provides the best fit is established. The experimental results demonstrate that the proposed models are robust for the determination of the compressive strength of self-compacting concrete.

        Keywords: Self Compacting Concrete (SCC), Compressive Strength, Extreme Learning Machine (ELM), Adaptive Neuro Fuzzy Inference System (ANFIS), Multivariate Adaptive Regression Spline (MARS).

        1 Introduction

        Concrete is composed mainly of cement (commonly Portland cement), fine aggregate, coarse aggregate and water. Concrete is a versatile material that can easily be mixed to meet a variety of special needs and formed to virtually any shape. Concrete solidifies and hardens after mixing with water and placement due to a chemical process known as hydration. The water reacts with the cement, which bonds the other components together, eventually creating a stone-like material. The uniaxial compressive strength is considered the most crucial property in concrete mix design and quality control, and it is determined by a number of factors. Several factors affect the concrete mix design; for example, to qualify as High-Performance Concrete, a mix should possess, in addition to good strength, several other favorable properties. Its water/cement (w/c) ratio is lower than that of normal concrete, which requires special additives in the concrete, along with a superplasticizer, to obtain good workability.

        The nature of the aggregate is important for achieving high strength. The gradation of the aggregates influences the workability, as does the order in which the materials are mixed. From an engineering point of view, strength is the most important property of structural concrete. The strength of concrete is determined by the characteristics of the mortar, coarse aggregate, fine aggregate and their interfaces, and is influenced by the properties of each constituent. For example, for the same quality of mortar, diverse types of coarse aggregate with different shape, texture, mineralogy and strength may result in different concrete strengths. Tests for compressive strength are generally carried out at about 7 or 28 days from the day the concrete is cast. The 28-day strength is the standard and therefore essential, and tests at other ages can be carried out if required. If, owing to some experimental error in designing the mix, the test results fall short of the required strength, the entire process of concrete design must be repeated, which may be costly and time consuming. The same applies to all types of concrete, i.e. normal concrete, self-compacting concrete, ready mixed concrete, etc. It is well acknowledged that prediction of the compressive strength of concrete is most important in modern concrete design and in making engineering decisions.

        The defining property of self-compacting concrete (SCC) (Schutter et al., 2008) is that the fresh concrete should flow around reinforcement and consolidate within formwork under its own weight, exhibiting no defects due to segregation or bleeding. The guiding principle for this type of concrete is that the sedimentation velocity of a particle is inversely proportional to the viscosity of the floating medium in which the particle exists. The mix design principle is that the flowability and viscosity of the paste are adjusted by proportioning the cement and additives and the water-to-powder ratio, and then by adding superplasticizers and Viscosity Modifying Admixtures (VMA). This requires manipulation of several mixture variables to ensure satisfactory flowable behavior and proper mechanical properties. The absence of theoretical relationships between mixture proportioning and the measured engineering properties of SCC makes the problem more complex.

        This study adopts Extreme Learning Machine (ELM), Adaptive Neuro Fuzzy Inference System (ANFIS) and Multivariate Adaptive Regression Spline (MARS) for prediction of the 28-day compressive strength of Self Compacting Concrete (SCC). ELM, proposed by Huang et al. (2004), is an easy-to-use and effective learning algorithm for single-hidden layer feed-forward neural networks (SLFNs). Classical learning algorithms for neural networks, e.g. backpropagation, require setting several user-defined parameters and may get stuck in local minima. ELM has been used in various fields such as renewable energy (Wang et al., 2015), neurocomputing (Fu et al., 2014), mechanical engineering (Gao et al., 2013) and bioinformatics (Priya et al., 2012). ANFIS (Takagi and Sugeno, 1985) can be trained to provide input/output data mappings, from which one can obtain the relationship between model inputs and corresponding outputs. ANFIS is a kind of artificial neural network based on the Takagi–Sugeno fuzzy inference system. It enables the knowledge learnt during network training to be translated into a set of fuzzy rules that describe the model input/output relationship in a more transparent fashion. It has been employed in many fields, such as powder technology (Pourtousi et al., 2015), applied energy (Yang and Entchev, 2014), hydrology (Chang and Wang, 2013) and communications (Lee et al., 2012). MARS is a flexible, accurate and fast simulation method for both regression and classification problems (Friedman, 1991; Salford Systems, 2001). It is capable of fitting complex, nonlinear relationships between output and input variables. Some examples of its usage are biological conservation (Kandel et al., 2015), ecological modeling (Pickens and King, 2014) and transportation (Sun et al., 2013).

        The data used in these techniques is taken from Siddique et al. (2011).

        2 Dataset employed

        The data used (Table 1) in all the techniques are normalized against their maximum values (Siddique et al., 2011). In carrying out the formulation, the data have been divided into two sub-sets:

        (a) Training dataset: This is required to construct the model. In this study, 64 (80% of total data) out of the 80 values are considered as the training dataset.

        (b) Testing dataset: This is required to estimate the model performance. In this study, the remaining 16 (20% of total data) values are considered as the testing dataset.
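As a sketch of this preprocessing, the max-normalization and the 80/20 split described above can be written as follows. The random shuffle is an assumption for illustration; the paper does not state how the 64 training records were selected:

```python
import numpy as np

def normalize_and_split(X, y, train_frac=0.8, seed=0):
    """Normalize each input column and the output against their maximum
    values, then split the records into training and testing subsets."""
    Xn = X / X.max(axis=0)          # each column scaled to a maximum of 1
    yn = y / y.max()
    idx = np.random.default_rng(seed).permutation(len(Xn))
    n_train = int(round(train_frac * len(Xn)))
    tr, te = idx[:n_train], idx[n_train:]
    return Xn[tr], yn[tr], Xn[te], yn[te]
```

With 80 records and train_frac=0.8, this yields the 64/16 split used in the study.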

        Table 1: Details of the data for prediction of compressive strength of SCC

        32 220 180 0.39 0.6 916 900 43
        33 220 180 0.39 0.35 916 900 47
        34 220 180 0.39 0.1 916 900 44
        35 198 232 0.36 0.5 872 900 52
        36 220 180 0.39 0.35 916 900 45
        37 220 180 0.33 0.35 982 900 51
        38 170 200 0.43 0.5 928 900 33
        39 275 155 0.43 0.2 830 900 36
        40 247 165 0.45 0.12 845 846 34.6
        41 238 159 0.4 0.29 844 844 37.8
        42 232 155 0.35 0.38 846 847 48.3
        43 207 207 0.45 0.4 845 843 33.2
        44 200 200 0.4 0.17 842 843 34.9
        45 197 197 0.35 0.28 856 856 38.9
        46 169 254 0.45 0 853 853 30.2
        47 163 245 0.4 0.2 851 851 26.2
        48 161 241 0.35 0.3 866 864 35.8
        49 350 162 0.59 0.09 768 840 51.7
        50 349 162 0.57 0.14 779 852 59.9
        51 350 133 0.52 0.16 815 883 55.3
        52 350 111 0.51 0.15 831 900 61
        53 250 257 0.77 0.11 787 853 51.5
        54 427 115 0.45 0.12 779 844 59.4
        55 348 224 0.5 0.43 783 848 58.6
        56 350 90 0.48 0.14 852 923 46.5
        57 327 173 0.53 0.2 902 803 61.6
        58 380 145 0.48 0.1 788 854 73.5
        59 350 186 0.51 0.11 786 851 70.4
        60 380 145 0.48 0.13 988 659 65.5
        61 380 192 0.53 0.1 931 621 67.8
        62 275 250 0.67 0.09 775 840 54.5
        63 325 60 0.65 0.43 899 850 30.8
        64 325 60 0.65 0.43 899 850 32.6
        65 325 120 0.75 0.43 755 850 32.2
        66 249 60 0.68 0.43 1079 850 24
        67 325 60 0.85 0.43 722 850 13.3
        68 370 96 0.57 0.25 833 850 39.5
        69 400 60 0.63 0.43 718 850 30.4
        70 325 60 0.65 0.43 899 850 35.3
        71 370 24 0.69 0.62 770 850 18.7
        72 325 0 0.55 0.43 1042 850 41.2
        73 280 96 0.87 0.25 820 850 19.6
        74 325 60 0.65 0.75 896 850 27.7
        75 325 60 0.65 0.43 898 850 35
        76 325 60 0.65 0.12 900 850 31.4
        77 370 96 0.57 0.62 830 850 38.8
        78 325 60 0.65 0.43 898 850 34.3
        79 280 96 0.87 0.62 817 850 15.9
        80 370 24 0.69 0.25 772 850 26.4

        3 Extreme learning machine (ELM) model

        The ELM algorithm was originally proposed by Huang et al. in 2004 and it makes use of the SLFN. The main concept behind ELM lies in the random initialization of the SLFN weights and biases. Then, using Theorem 1 and under the conditions of the theorem, the input weights and biases do not need to be adjusted, and it is possible to calculate the hidden-layer output matrix implicitly and hence the output weights. The network is obtained in very few steps and at very low computational cost.

        The defects of gradient-based learning in a single hidden-layer feedforward neural network (SLFN) are avoided by using ELM, which determines the optimal output weights analytically. Let us consider two datasets, a training set and a testing set, each consisting of input-output pairs (x, y), where x is the input, y is the output and N is the number of samples.

        In this paper, the input x comprises c, s, a, f, w/p and sp, and the output y is the 28-day compressive strength fck.

        In a single hidden layer feed-forward network (SLFN), the relation between input and output is given by:

        y_j = Σ_{i=1}^{Ñ} β_i f(w_i · x_j + b_i),   j = 1, …, N,   (3)

        where w_i is the input weight vector between the i-th neuron in the hidden layer and the input layer, b_i is the input bias of the i-th neuron in the hidden layer, x_j is the j-th input data vector, f(·) is the activation function of the hidden neurons, β_i is the output weight vector between the i-th hidden neuron and the output layer, Ñ is the number of hidden nodes, N is the number of training samples and y_j is the target vector of the j-th input data. Equation (3) can be written compactly as Hβ = Y,

        where H is the hidden layer output matrix of the network (Huang and Babri, 1998; Huang,2003).

        In ELM, the values of w_i and b_i are not tuned during training. Random values are assigned to w_i and b_i according to any continuous sampling distribution (Huang et al., 2004; Huang and Siew, 2004, 2005). The value of β is then determined from the following equation.

        β = H⁺Y, where Y is the matrix of target outputs and H⁺ is the Moore-Penrose generalized inverse (Serre, 2002) of the hidden layer output matrix H.

        The ELM model has been developed using MATLAB (MathWorks Inc., R2012a).
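The training procedure above can be sketched in a few lines. This is a minimal illustration rather than the authors' MATLAB implementation, and it uses a tanh activation as a stand-in for the radial basis activation adopted later in the paper:

```python
import numpy as np

def train_elm(X, y, n_hidden=15, seed=0):
    """ELM training: random input weights and biases, then the output
    weights beta solved analytically via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # input weights w_i
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # hidden biases b_i
    H = np.tanh(X @ W + b)            # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ y      # beta = H+ y
    return W, b, beta

def predict_elm(X, W, b, beta):
    """Forward pass: y = f(X W + b) beta."""
    return np.tanh(X @ W + b) @ beta
```

Because beta is a least-squares solution, the training residual is orthogonal to the column space of H; no iterative weight tuning is required.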

        4 Adaptive neuro fuzzy inference system (ANFIS) model

        Fuzzy logic is a form of many-valued logic that deals with reasoning that is approximate rather than fixed and exact. The nature of uncertainty in a design problem is very important and should be considered. Fuzzy set theory was developed specifically to deal with uncertainties that are nonrandom in nature.

        There are several FISs that have been employed in various applications. The most commonly used include:

        · Mamdani Fuzzy Model;

        · Takagi-Sugeno-Kang fuzzy (TSK) model;

        · Tsukamoto fuzzy model;

        · Singleton fuzzy model.

        In a fuzzy system, not all sets are crisp; some are fuzzy. Fuzzy sets can be modeled in linguistic human terms such as large, small and medium (Takagi and Sugeno, 1985), which is very valuable for modeling human behavior. A fuzzy set is a set containing elements that have varying degrees of membership, and this degree of membership gives fuzzy sets flexibility in modeling (Bezdek, 1981). The membership can be of discrete or continuous type. The most commonly used membership functions are the triangular, trapezoidal, Gaussian and bell functions. ANFIS makes inferences by fuzzy logic and shapes the fuzzy membership functions using a neural network (Altrock, 1995; Brown and Harris, 1995). In the literature there are several inference techniques developed for fuzzy rule-based systems, such as Mamdani and Sugeno (Brown and Harris, 1995). In this study, Sugeno-type systems have been used. In a Sugeno system, the output of a fuzzy rule is given by a crisp function. A typical fuzzy rule in a Sugeno system has the form: if x_1 is A_1 and x_2 is A_2 … and x_N is A_N, then y = f(x), where A_1, A_2, …, A_N are fuzzy sets and y is a crisp function. In this system, the outcome of each rule is a crisp value, and a weighted average is used to combine the results of all the rules. The nonlinear mapping of a Sugeno-type system (f_FS) can be defined as follows:

        in which m is the number of rules, n is the number of data points and μ_A is the membership function of fuzzy set A. The membership functions are determined iteratively by ANFIS so as to produce correct outputs. There are different types of membership functions, such as the triangular, trapezoidal, Gaussian and bell functions. In this analysis, the Gaussian membership function has been used, which has the form:

        μ_A(x) = exp(−(x − c)² / (2σ²)),

        where c and σ are the mean and standard deviation of the data, respectively. The learning process in the ANFIS methodology is commonly performed by one of two techniques, i.e. back-propagation or hybrid learning algorithms.
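As an illustration (not the MATLAB ANFIS implementation), the Gaussian membership function and the weighted-average rule combination of a zero-order Sugeno system can be sketched as:

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership degree: exp(-(x - c)^2 / (2 * sigma^2))."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def sugeno_output(x, rules):
    """Zero-order Sugeno system: each rule is ((c, sigma), crisp_value);
    the output is the firing-strength-weighted average of the crisp values."""
    weights = [gaussian_mf(x, c, s) for (c, s), _ in rules]
    values = [v for _, v in rules]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```

A rule's firing strength peaks at its centre c, where gaussian_mf(c, c, sigma) is exactly 1; inputs equidistant from two rule centres with equal widths receive the average of the two crisp outputs.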

        The ANFIS model has been developed using MATLAB (MathWorks Inc., R2012a).

        5 Multivariate adaptive regression spline (MARS) model

        MARS is widely accepted by researchers and practitioners for the following reasons.

        · MARS is capable of modeling complex non-linear relationship among variables without strong model assumptions.

        · MARS can capture the relative importance of independent variables to the dependent variable when many potential independent variables are considered.

        · MARS does not need a long training process and hence can save a lot of model-building time, especially when the dataset is huge.

        Finally, one strong advantage of MARS over other classification techniques is that the resulting model can be easily interpreted. It not only points out which variables are important in classifying objects/observations, but also indicates that an object/observation belongs to a specific class when the built rules are satisfied. This has important managerial and interpretative implications and can help in making appropriate decisions.

        The MARS model splits the data into several splines on an equivalent-interval basis (Friedman, 1991). Within every spline, MARS splits the data further into many subgroups (Yang et al., 2003). Several knots are created by MARS; these knots can be located between different input variables or at different intervals of the same input variable, to separate the subgroups. The data of each subgroup are represented by a basis function (BF). The general form of a MARS predictor is as follows:

        f(x) = a_0 + Σ_{m=1}^{M} a_m BF_m(x),

        where a_0 is a constant, BF_m is the m-th basis function (a hinge function of the form max(0, x − t) or max(0, t − x), or a product of such functions) and a_m is its coefficient. The model is selected by minimizing a generalized cross-validation (GCV) criterion (Craven and Wahba, 1979):

        GCV = [ (1/N) Σ_{i=1}^{N} (y_i − f(x_i))² ] / [ 1 − C(B)/N ]²,

        where N is the number of data points and C(B) is a complexity penalty that increases with the number of basis functions B in the model, defined as:

        C(B) = (B + 1) + dB,

        where d is a penalty for each basis function, typically between 2 and 4 (Friedman, 1991).
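To make the basis-function form concrete, the sketch below evaluates a MARS-style model from hinge basis functions and computes the GCV criterion. The term layout and the default penalty d = 3 are illustrative assumptions, not the fitted model of this paper:

```python
def hinge(x, knot, sign=1):
    """MARS hinge basis function: max(0, sign * (x - knot))."""
    return max(0.0, sign * (x - knot))

def mars_predict(x, a0, terms):
    """Evaluate f(x) = a0 + sum_m a_m * BF_m(x), where each term is
    (coefficient, [(var_index, knot, sign), ...]) and BF_m is a product
    of hinge functions of the selected input variables."""
    y = a0
    for coef, factors in terms:
        bf = 1.0
        for var, knot, sign in factors:
            bf *= hinge(x[var], knot, sign)
        y += coef * bf
    return y

def gcv(y_true, y_pred, n_bf, d=3.0):
    """GCV = mean squared error / (1 - C(B)/N)^2, with C(B) = (B+1) + d*B."""
    n = len(y_true)
    mse = sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n
    c = (n_bf + 1) + d * n_bf
    return mse / (1.0 - c / n) ** 2
```

The product-of-hinges form is exactly the shape of the interaction basis functions listed in Table 2 (e.g. BF25 * max(0, 0.2 − f)).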

        6 Results and discussion

        Error and Correlation Calculations

        The validity of each model can be verified using the following measures:

        The mean absolute error (MAE) measures how close the predictions are to the actual values: MAE = (1/n) Σ |a_i − p_i|, where a_i and p_i are the actual and predicted values.

        The root-mean-square error (RMSE) measures the differences between the values predicted by the models and the actual values: RMSE = √[ (1/n) Σ (a_i − p_i)² ].

        The coefficient of correlation (R) has been used as the main criterion to examine the performance of the developed models. The value of R is determined from the following equation:

        R = Σ (a_i − ā)(p_i − p̄) / √[ Σ (a_i − ā)² · Σ (p_i − p̄)² ],

        where a_i and p_i are the actual and predicted values and ā and p̄ are their respective means.

        ρ, known as the performance index, is used to check the accuracy of the predicted values.
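The first three measures can be sketched directly from their definitions (the performance index ρ is omitted, since its formula is not reproduced in this section):

```python
import math

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    """Root-mean-square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def corr(y, yhat):
    """Pearson coefficient of correlation R between actual and predicted."""
    n = len(y)
    my, mp = sum(y) / n, sum(yhat) / n
    num = sum((a - my) * (b - mp) for a, b in zip(y, yhat))
    den = math.sqrt(sum((a - my) ** 2 for a in y) *
                    sum((b - mp) ** 2 for b in yhat))
    return num / den
```

A perfect model gives MAE = RMSE = 0 and R = 1, which is the behaviour reported for ANFIS below.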

        For developing the ELM, the number of hidden nodes affects the training performance. The best performance was obtained at 15 hidden nodes; therefore, the number of hidden nodes is set to 15. The training dataset consists of 64 samples and the block range is set to 25. A radial basis function has been adopted as the activation function. Graphs are plotted between the actual normalized strength and the predicted normalized strength. Figure 1 shows the performance on the training and testing datasets respectively. After compilation of the model, the following results are obtained.

        Training and testing performance are illustrated in table 4.

        As shown in figure 1, the value of R is close to one for the training as well as the testing dataset. Therefore, the developed ELM is quite capable of predicting the 28-day compressive strength of SCC.

        The value of error and correlation functions for ELM is shown in table 4.

        Figure 1: Performance of training and testing dataset (ELM)

        In the ANFIS model, the Gaussian membership function has been used in this analysis. The hypothesized initial number of membership functions for each input is 55. A suitable pattern has to be chosen for the best performance of the network. Figure 2 shows the architecture of the ANFIS model for this study.

        Figure 2: Architecture of ANFIS model

        After the training (with 50 epochs) was complete, the final configuration for the Fuzzy Inference System (FIS) is:

        Number of output membership functions: 55
        Number of fuzzy rules: 55

        Neuro-fuzzy adaptive network for Strength:

        Number of inputs: 6 (c, s, a, f, w/p and sp)
        Number of membership functions for each input: 55
        Type of membership functions for each input: Gaussian
        Type of membership function for the output: Linear
        Number of training epochs: 50

        Training and testing performance are illustrated in table 4.

        The performance on the training and testing datasets is illustrated in figure 3. It is observed from the figure that the value of R is equal to one for the training as well as the testing dataset, indicating that ANFIS is the most effective of the three models for predicting the 28-day compressive strength of SCC.

        Figure 3: Performance of training and testing dataset (ANFIS)

        The values of the error and correlation functions for ANFIS are shown in table 4.

        For the MARS model, during training the forward stepwise procedure was carried out to select 42 basis functions (BF) to build the model. This was followed by the backward stepwise procedure to remove redundant basis functions. The final model includes 36 basis functions, which are listed in Table 2 together with their corresponding equations and coefficients a_m.

        Table 2: List of basis functions which give the best performance

        BF28 = BF25 * max(0, 0.2 - f): 4.4944
        BF29 = BF25 * max(0, s - 0.689655172413793): 4.9905
        BF30 = BF25 * max(0, 0.689655172413793 - s): -2.4291
        BF31 = BF11 * max(0, f - 0.3): 4.4487
        BF32 = BF11 * max(0, 0.3 - f): -17.1001
        BF33 = BF29 * max(0, 0.2 - f): -3.0855
        BF34 = BF27 * max(0, a - 0.111111111111111): -17.1058
        BF35 = BF27 * max(0, 0.111111111111111 - a): -28.8211

        The final equation for the prediction of strength (fck) based on MARS model is given below:

        fck = a_0 + Σ_m a_m BF_m,

        where a_0 is the coefficient of the constant basis function (the constant term) and a_m is the coefficient of the m-th basis function BF_m.

        The ANOVA decomposition is given row-wise, one row for each ANOVA function, with the columns representing summary quantities for that function. The first column lists the function number. The second gives the standard deviation (STD) of the function; this gives an indication of its relative importance to the overall model and can be interpreted in a manner similar to a standardized regression coefficient in a linear model. The third column provides another indication of the importance of the corresponding ANOVA function, by listing the GCV score for a model with all the basis functions corresponding to that ANOVA function removed. This can be used to judge whether the ANOVA function makes an important contribution to the model, or whether it only slightly improves the global GCV score. The fourth column gives the number of basis functions comprising the ANOVA function, and the last column of Table 3 gives the predictor variables associated with it. Table 3 shows the ANOVA decomposition for the training dataset.

        Table 3: ANOVA decomposition for Training dataset

        Figure 4 depicts the performance on the training and testing datasets. It is observed from the figure that the value of R is close to one for the training dataset but not for the testing dataset. The developed MARS model therefore shows a comparatively weak ability to predict the 28-day compressive strength of SCC.

        The values of the error and correlation functions for MARS are shown in table 4.

        Figure 4: Performance of training and testing dataset (MARS)

        The estimates of the error and correlation functions, i.e. mean absolute error (MAE), root-mean-square error (RMSE), coefficient of correlation (R) and performance index (ρ), for all the methods employed are consolidated in table 4.

        Table 4: Estimates of error and correlation functions

        A comparative study has been carried out between the developed ELM, ANFIS and MARS models. Figures 1, 3 and 4 show the R values for the training and testing datasets for the ELM, ANFIS and MARS models respectively. It can be inferred from figure 3 that the performance of ANFIS is better than that of the ELM and MARS models. It is also clear from Table 4 that the performance of ANFIS is the best.

        The performance on the training and testing datasets is almost the same for the ELM and MARS models, while ANFIS shows the best performance among the three. The developed models do not show overtraining and therefore have good generalization capability. The datasets are normalized for developing the ELM, ANFIS and MARS models, and the developed models make no assumptions about the dataset. The developed MARS gives an explicit equation for prediction of strength. ANFIS and MARS do not use statistical parameters of the dataset for prediction. ELM makes use of single-hidden layer feed-forward neural networks (SLFNs), ANFIS uses membership functions for developing the model, and MARS adopts basis functions for the final prediction.

        7 Summary and conclusions

        This study has described the application of the ELM, ANFIS and MARS models for the prediction of the 28-day compressive strength of Self Compacting Concrete (SCC). The performance of ANFIS is better than that of the ELM and MARS models. Users can employ the developed models as a quick tool for prediction of the 28-day compressive strength of SCC. This paper shows that the developed ANFIS is a robust model for this prediction task.

        Altrock, C.V. (1995): Fuzzy logic and neurofuzzy applications explained. Prentice-Hall, New Jersey.

        Bezdek, J.C. (1981): Pattern recognition with fuzzy objective function algorithms. Plenum, New York.

        Brown, M., Harris, C. (1994): Neurofuzzy adaptive modeling and control. Prentice-Hall, New Jersey.

        Chang, F.-J., Wang, K.-W. (2013): A systematical water allocation scheme for drought mitigation control. Journal of Hydrology 507: 124-133.

        Craven, P., Wahba, G. (1979): Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross-validation. Numerische Mathematik 31: 377–403.

        De Veaux, R.D., Psichogios, D.C., Ungar, L.H. (1993): A comparison of two nonparametric estimation schemes: MARS and neural networks. Computers & Chemical Engineering 17(8): 819–837.

        Friedman, J.H. (1991): Multivariate adaptive regression splines. Annals of Statistics 19(1): 1–141.

        Fu, A.-M., Wang, X.-Z., He, Y.-L., Wang, L.-S. (2014): A study on residence error of training an extreme learning machine and its application to evolutionary algorithms. Neurocomputing 146: 75-82.

        De Schutter, G., Gibbs, J., Domone, P., Bartos, P.J.M. (2008): Self-compacting concrete. Whittles Publishing, Dunbeath, Scotland, UK.

        Gao, F., Li, H., Xu, B. (2013): Applications of extreme learning machine optimized by ICPSO in fault diagnosis. Zhongguo Jixie Gongcheng/China Mechanical Engineering 24(20): 2753-2757.

        Huang, G.B., Babri, H.A. (1998): Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions. IEEE Transactions on Neural Networks 9(1): 224–229.

        Huang, G.B., Siew, C.K. (2004): Extreme learning machine: RBF network case. Proc. Eighth Int'l Conf. on Control, Automation, Robotics and Vision (ICARCV '04), Kunming, China, 6–9 December, 2: 1029-1036.

        Huang, G.B., Siew, C.K. (2005): Extreme learning machine with randomly assigned RBF kernels. Int'l Journal of Information Technology 11(1).

        Huang, G.B., Zhu, Q.Y., Siew, C.K. (2004): Extreme learning machine: a new learning scheme of feedforward neural networks. Proc. Int'l Joint Conf. on Neural Networks, Budapest, Hungary, 25–29 July, 2: 985–990.

        Huang, G.B. (2003): Learning capability and storage capacity of two-hidden-layer feedforward networks. IEEE Transactions on Neural Networks 14(2): 274–281.

        Law, K.T., Cao, Y.L., He, G.N. (1990): An energy approach for assessing seismic liquefaction potential. Canadian Geotechnical Journal 27(3): 320-329.

        Kandel, K., Huettmann, F., Suwal, M.K., Ram Regmi, G., Nijman, V., Nekaris, K.A.I., Lama, S.T., Thapa, A., Sharma, H.P., Subedi, T.R. (2015): Rapid multi-nation distribution assessment of a charismatic conservation species using open access ensemble model GIS predictions: red panda (Ailurus fulgens) in the Hindu-Kush Himalaya region. Biological Conservation 181: 150-161.

        Lee, S.-H., Lim, J.-H., Moon, K.-I. (2012): An ANFIS model for environmental performance measurement of transportation. Communications in Computer and Information Science 352: 289-29.

        Yegian, M.K., Marciano, E.A. (1991): Earthquake-induced permanent deformations: probabilistic approach. J. Geotech. Engrg. 117(1): 35–50.

        Pickens, B.A., King, S.L. (2014): Linking multi-temporal satellite imagery to coastal wetland dynamics and bird distribution. Ecological Modelling 285: 1-12.

        Pourtousi, M., Sahu, J.N., Ganesan, P., Shamshirband, S., Redzwan, G. (2015): A combination of computational fluid dynamics (CFD) and adaptive neuro-fuzzy system (ANFIS) for prediction of the bubble column hydrodynamics. Powder Technology 274: 466-481.

        Priya, E., Srinivasan, S., Ramakrishnan, S. (2012): Classification of tuberculosis digital images using hybrid evolutionary extreme learning machines. Computational Collective Intelligence 7653: 268-277.

        Siddique, R., Aggarwal, P., Aggarwal, Y. (2011): Prediction of compressive strength of self-compacting concrete containing bottom ash using artificial neural network. Advances in Engineering Software 42(10): 780–786.

        Serre, D. (2002): Matrices: Theory and Applications. Springer-Verlag.

        Sun, L., Pan, Y., Gu, W. (2013): Data mining using regularized adaptive B-splines regression with penalization for multi-regime traffic stream models. Journal of Advanced Transportation 48(7): 876-890.

        Takagi, T., Sugeno, M. (1985): Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man and Cybernetics 15(1): 116-132.

        Wang, J., Hu, J., Ma, K., Zhang, Y. (2015): A self-adaptive hybrid approach for wind speed forecasting. Renewable Energy 78: 374–385.

        Yang, C.C., Prasher, S.O., Lacroix, R., Kim, S.H. (2003): A multivariate adaptive regression splines model for simulation of pesticide transport in soils. Biosystems Engineering 86(1): 9–15.

        Yang, L., Entchev, E. (2014): Performance prediction of a hybrid microgeneration system using adaptive neuro-fuzzy inference system (ANFIS) technique. Applied Energy 134: 197–203.

        1 Undergraduate Student, School of Mechanical & Building Sciences (SMBS), VIT University, Vellore, Tamil Nadu 632014, India. Email: susomdutta7@gmail.com

        2 Senior Scientist, Computational Structural Mechanics Group, CSIR-Structural Engineering Research Centre, Taramani, Chennai 600 113, India. Email: murthyarc@serc.res.in

        3 Professor, Department of Civil Engineering, Kunsan National University, Kunsan, Jeonbuk, South Korea. Email: kim2kie@chol.com

        4 Associate Professor, National Institute of Technology, Patna, India. Email: pijush@nitp.ac.in
