

        Neural Network Based Thermal Analysis of Ultradeep Submicron Digital Circuit Design


        Shruti Kalra

(Department of Electronics and Communication Engineering, Jaypee Institute of Information Technology, Noida 201301, Uttar Pradesh, India)

Abstract: With a reduction in transistor dimensions to the nanoscale regime of 45 nm or less, quantum mechanical effects begin to reveal themselves and have an impact on key device performance parameters. As a result, in order to develop simulation tools that can be used for the design of future nanoscale transistors, new theories and modelling methodologies must be developed that properly and efficiently capture the physics of quantum transport. In this paper, an artificial neural network (ANN) is used to examine nanoscale CMOS circuits and predict the performance parameters of CMOS-based digital inverters over a temperature range of 300 K to 400 K. The network comprises three hidden layers of 20, 10, and 8 neurons and is implemented as a function-fitting ANN trained with Bayesian regularization backpropagation. Further, HSPICE simulation using Predictive Technology Model (PTM) nominal parameters has been performed for comparison with the ANN (trained using an analytical model) results. The obtained results lie within the acceptable error range of 1%-10%. Moreover, it has also been demonstrated that the ANN simulation provides a speed improvement of around 85% over HSPICE simulation, and that it can easily be integrated into software tools for designing and simulating complex CMOS logic circuits.

        Keywords:optimization; ultradeep submicron technology; unified MOSFET model; artificial neural network

        0 Introduction

The scaling of transistor size has been the fundamental driver of the exponential advances in integrated circuit performance over the last three decades. The intrinsic advantages of MOSFET scaling include speed improvement and energy reduction. However, the benefits of MOSFET scaling declined when the technology node was scaled below 45 nm due to the emergence of quantum effects. Since low-dimensional CMOS devices are widely used in both digital and analogue circuits, it is necessary to accurately characterize quantum effects in order to achieve high performance in the design of nanoscale circuits[1-3]. Obtaining closed-form analytical models for nanodevices is challenging or almost impossible due to the complexity involved in VLSI design at ultradeep submicron technology nodes. As a result, compact models are constructed by simplifying the complete physical model. These compact models make it possible to simulate nanoscale circuits at the system level in a short amount of time; however, because of the simplifications made during model creation, their accuracy can suffer. When designing complicated systems, both the accuracy and the simplicity of models are critical considerations. A model based on artificial intelligence is therefore preferable, since it is more realistic and delivers practical solutions[4-7].

Ref.[8] presented a CMOS inverter design considering propagation delay and using an artificial bee colony optimization algorithm. Refs.[9] and [10] investigated the switching characteristics of a CMOS inverter using neural networks and particle swarm optimization techniques, demonstrating the utility of ANNs in solving complex nonlinear problems. Ref.[11] designed a three-phase photovoltaic grid-connected inverter using a radial basis function neural network, and Ref.[12] designed a cascaded H-bridge multilevel inverter using a neural network controller. Ref.[13] presented a summary of the diverse fields in which neural networks can be applied. In all of the above work, long-channel equations were used for predicting delays, which cannot be applied at lower technology nodes. Thus, in this work, an α-power law based approximate delay model is used to predict delays at ultradeep submicron technology nodes down to 22 nm.

The CMOS inverter is generally considered to be the fundamental building block of any digital circuit, and its design is crucial in the development of digital integrated circuits[13]. As we move towards the nanoscale regime, modelling device behaviour with advanced methodologies becomes of utmost importance, since the problem grows more complicated with the large number of parameters involved and cannot be captured by simple equations[14]. This paper explores the technique of modelling a CMOS inverter with an artificial neural network having several hidden layers at different technology nodes and different temperatures. I have examined how well my proposed model[15] can be extended to predict the delay of a CMOS inverter at a given technology node and operating temperature. The aspect ratio, load capacitance, and supply voltage are the three parameters taken as ANN inputs. The aim is to describe the problem statement, the input and output parameters of the ANN, its configuration, and the outcome of the various experiments performed. The following are the four important contributions:

1) Using the unified and approximate drain current approach based on the α-power law model for strong, moderate, and weak inversion regions described in my earlier work[15], the delay of the CMOS inverter is estimated. The drain current results obtained in Ref.[15] agreed with HSPICE within the acceptable error range of 0-2%. Table 1 presents the values of the model parameters taken into consideration, which were extracted using HSPICE simulation with the predictive technology model (PTM)[16].

2) The velocity saturation index α, specific current I0 and threshold voltage Vth are the temperature-dependent parameters in the proposed delay equation. The variation of these temperature-dependent parameters has been demonstrated, and the curves have been fitted to simple closed-form expressions.

3) The applicability of ANNs to the design of microelectronic circuits at ultradeep submicron technology nodes is discussed in this work. To this end, an inverter structure has been used to demonstrate the approach, taking its dynamic properties into account.

4) For the NOT gate, an ANN has been constructed that approximates the rise time in response to varying supply voltage, load capacitance, and aspect ratio. The methodology has been applied over the temperature range of 300 K-400 K to validate the models against analytical data and HSPICE simulations.

        The following is the procedure adopted:

1) First, in order to set up the input layer of the neural network, data sets were created by HSPICE simulation and mapped to some random functions and to the proposed model (refer to Table 2) in order to remove bias from the data. Training and testing were then carried out, providing encouraging and successful results at technology nodes ranging from 130 nm to 22 nm and temperatures ranging from 298.15 K to 398.15 K. Thus, the structure can be used to generate and predict additional outputs for a variety of different input values.

2) The improvement in simulation speed of the ANN over HSPICE was observed and tabulated in Table 3. It shows that the ANN model is very accurate and that its simulation is much quicker than that of HSPICE. Thus, these blocks may be used in conjunction with software tools such as PSPICE, or on their own as a simulator, for the design and modelling of CMOS logic circuits.

        1 Introduction to Neural Networks

Modelling complicated and nonlinear processes using artificial neural networks (ANNs) has become increasingly popular in recent years. A hidden-layer artificial neural network connects a problem's input and output patterns. In these networks, the connections between neurons carry weights, as shown in Fig. 1, and a learning algorithm sets these weights. The output of hidden-layer node j is:

$y_j = f\left(\sum_i w_{ij} x_i + b_j\right)$    (1)

        The network output is given as:

$o = \sum_j w_{oj} y_j + b_o$    (2)

where xi are the network inputs, f is the activation function, wij are the input weights to node j of the hidden layer, bj is the node bias, woj are the hidden-to-output weights, and bo is the output bias.

ANNs may be categorized by their learning algorithm. ANNs with fixed weights do not require learning[17-18]. In unsupervised ANNs, the weights are updated based only on the input data, so the network adapts to previously presented inputs. Most ANNs, however, are supervised: both input and output data are used during training.

The activation function determines a neuron's output based on its input activity. Threshold, piecewise linear, sigmoid, hyperbolic tangent, and Gaussian functions are commonly used[17]. The network determines its weights and biases by analysing input-output data, and back-propagation is one approach for obtaining these parameters[17-19]. The weights and biases are adjusted iteratively to reduce the network's mean square error, and this may be done incrementally or batch-wise: in incremental mode, the weights and biases are updated after each input, whereas in batch mode they are updated after the whole training set has been presented.
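As a simple illustration of Eqs. (1) and (2), the following MATLAB sketch evaluates the forward pass of a network with one hidden layer, a sigmoid activation, and a linear output node; the weights, biases, and input values are random placeholders rather than trained parameters.

f  = @(a) 1 ./ (1 + exp(-a));          % sigmoid activation function
x  = [0.5; 1.2; 0.8];                  % illustrative input vector
W  = rand(4, 3);  b  = rand(4, 1);     % input-to-hidden weights wij and biases bj
Wo = rand(1, 4);  bo = rand;           % hidden-to-output weights woj and output bias bo
y  = f(W*x + b);                       % Eq. (1): hidden-node outputs
o  = Wo*y + bo;                        % Eq. (2): network output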

        Table 1 Obtained values of α, I0 and Vth at different temperatures and technology nodes

        Table 3 Speed improvement comparison of HSPICE and ANN

        Fig. 1 ANN structure

In this paper, the ANN architecture has been designed by mapping design parameters to CMOS inverter output parameters for a given process technology. The model was then extended to examine whether it could handle the effect of rising temperature. The dataset used to train the model was obtained from the proposed analytical model and divided into two sets: (1) inverter delay at room temperature (300 K) for different load capacitances, supply voltages and aspect ratios; (2) inverter delay at an elevated temperature of 400 K for different load capacitances, supply voltages and aspect ratios.

The number of input nodes of a neural network is chosen according to the variables being mapped to the desired output. To improve the accuracy of the ANN, the number of hidden layers, the number of units in each hidden layer, the activation function and the other parameters should be chosen carefully. The neural network used in this paper has three inputs: the Wn/Wp ratio, the load capacitance and the supply voltage. The output parameter is the delay (rise time). Training was performed with 70% of the data, validation with 15%, and testing with the remaining 15%.

        2 α-power Law Based CMOS Delay Model

The speed of digital systems is determined by how digital integrated circuits switch. A digital system's transient performance requirements are typically among the most critical design specifications that the circuit designer must fulfil, so it is important to estimate and optimize the circuit's switching speed early in the design[20]. The closed-form time delay functions were derived under the assumption of pulse excitation and a lumped load capacitance. When Vgs = Vds = Vdd, the ON current of the MOSFET, ION, can be expressed as[15]:

        (3)

The CMOS inverter's time delay is determined by the time it takes to charge or discharge the load. The propagation delay, tp, of a gate can be represented as:

        (4)

Here, CL is the output load capacitance and ION is the ON drain current of the MOSFET.

Substituting the value of ION from Eq. (3) gives:

        (5)

        Here,

        (6)

The load capacitance CL comprises the parasitic capacitance of the driving stage and the intrinsic input capacitance of the fan-out gates, and is thus proportional to:

$C_L \propto C_{ox} L\,(\xi_i W_i + W_{i+1})$    (7)

where ξi is the ratio of the driver-stage parasitic capacitance to the fan-out-stage input gate capacitance, and the subscripts i and i+1 denote the driver and load stages, respectively. In Eq. (5), W becomes Wi. Substituting CL from Eq. (7) into Eq. (5),

        (8)

For N stages along the path, the path delay can be estimated as the sum of the gate delays. Thus,

        (9)

        (10)

where kpd is a technology-dependent factor, and PD(W) depends on the gate sizes of the transistors along the path. The conventional delay of N stages may be described in terms of the average single-stage propagation delay tp and the logic depth Ld, but this formulation does not allow for gate sizing:

$PD = t_p L_d$
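As a small numerical illustration of the two path-delay expressions above, the following MATLAB sketch sums hypothetical per-stage gate delays (Eq. (9)) and compares the result with the conventional logic-depth form PD = tpLd; the delay values are placeholders, not extracted data.

tGate  = [12e-12, 9e-12, 15e-12, 11e-12];   % hypothetical per-stage gate delays (s)
PDsum  = sum(tGate);                        % Eq. (9): path delay as the sum of gate delays
tpAvg  = mean(tGate);                       % average single-stage propagation delay tp
Ld     = numel(tGate);                      % logic depth (number of stages N)
PDconv = tpAvg*Ld;                          % conventional form, PD = tp*Ld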

        3 Factors Influencing Delay Under Temperature Fluctuations

I0, Vth, φt and α are the parameters in Eq. (5) that vary with temperature. The threshold voltage has one of the strongest temperature dependences, and its dependence on temperature can be expressed as[21]:

For NMOS:

$V_{th}(T) = V_{th}(T_0) - \alpha_T (T - T_0)$    (11)

For PMOS:

$V_{th}(T) = V_{th}(T_0) + \alpha_T (T - T_0)$    (12)

Here Vth(T0) is the threshold voltage at room temperature T0, and αT is the temperature coefficient of the device threshold voltage.

Fig. 2 illustrates the threshold voltage variation at 90 nm and 22 nm (see Table 1 for 130 nm to 22 nm). The figure shows that smaller technology nodes exhibit greater temperature-related variability in threshold voltage. Modern manufacturing introduces a limited number of high-energy dopant atoms into the silicon, and the placement and distribution of these dopant atoms cannot be precisely controlled in every device; since the dopant concentration affects the threshold voltage[22], this adds to the variability. The parameter αT, extracted through HSPICE simulation, quantifies the threshold voltage variation. Temperature also affects the specific current I0: in the I0 expression, μeff and φt are temperature-dependent factors. The temperature dependence of the specific current I0 at 32 nm is depicted in Fig. 3, and the plotted curve is well described by a quadratic expression:

$I_0(T) = DT^2 + ET + F$    (13)

Here, D, E and F are technology-dependent fitting parameters obtained using the Levenberg-Marquardt algorithm.
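A quadratic fit of this form can be reproduced with a standard least-squares polynomial fit, as in the MATLAB sketch below; the temperature and current samples are illustrative placeholders rather than the extracted 32 nm data, and polyfit uses ordinary least squares rather than the Levenberg-Marquardt routine used in the paper.

T   = 300:20:400;                        % temperature points (K)
I0  = [4.1 3.9 3.7 3.6 3.5 3.4]*1e-6;    % illustrative specific-current samples (A)
p   = polyfit(T, I0, 2);                 % quadratic fit, p = [D E F] of Eq. (13)
I0f = polyval(p, T);                     % evaluate the fitted curve I0(T)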

The velocity saturation index α is the next parameter that varies with temperature. The values of α at different temperatures are computed by equating the transistor ON current given in Eq. (3) with that obtained from HSPICE simulation.

Fig. 2 Percentage variation of NMOSFET threshold voltage with temperature at two technology nodes (Vdd = 1 V)

Fig. 3 Curve-fitted plot of specific current versus temperature at the 32 nm technology node (Vdd = 1 V)

The temperature variation of α is tabulated in Table 1 for technology nodes from 130 nm to 22 nm. Fig. 4 shows the variation of α with temperature at the 32 nm technology node, which can be described by the following curve-fitted expression:

$\alpha(T) = AT^{B} + C$    (14)

Here A, B and C are technology-dependent fitting parameters obtained using the Levenberg-Marquardt algorithm. The temperature-based expressions for Vth, I0 and α can be used for further study of circuit behaviour at different temperatures. From the above plots, it can be concluded that as the temperature rises, the threshold voltage decreases, α increases and I0 decreases.
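The power-law fit of Eq. (14) can be reproduced with a nonlinear least-squares routine; the MATLAB sketch below uses lsqcurvefit (Optimization Toolbox) with its Levenberg-Marquardt option, and the temperature and α samples are hypothetical placeholders rather than the extracted values of Table 1.

T     = 300:20:400;                            % temperature points (K)
alpha = [1.20 1.22 1.24 1.25 1.27 1.28];       % illustrative velocity saturation index samples
model = @(p, T) p(1).*T.^p(2) + p(3);          % alpha(T) = A*T^B + C, Eq. (14)
opts  = optimoptions('lsqcurvefit', 'Algorithm', 'levenberg-marquardt');
p0    = [1e-3, 1, 1];                          % initial guess for [A B C]
p     = lsqcurvefit(model, p0, T, alpha, [], [], opts);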

Fig. 4 Curve-fitted plot of the velocity saturation index α versus temperature at the 32 nm technology node (Vdd = 1 V)

        4 Design of CMOS Inverter using ANN

Moore's law has sustained the global semiconductor industry for four decades. However, as we enter the era of nanoscale technologies, scaling-as-usual faces substantial challenges[23]. The issues are well known: simple scaling eventually ceases to work as we approach atomistic dimensions. Although the devices are smaller, several performance characteristics deteriorate: leakage increases, gain declines, and susceptibility to inevitable small manufacturing process variations increases considerably. Many modern designs are constrained by power and energy budgets. We can no longer anticipate worst-case behaviour for these technologies based on experience with a few process corners. Nothing is deterministic any more: the majority of critical factors are statistical in nature, and many display intricate relationships and distressingly large variations. The increasing costs of producing circuits in these scaled technologies (e.g., mask prices) aggravate these predictability issues[24]. Nonetheless, we see substantial opportunity in these difficulties. The purpose of this study is to address these issues by taking advantage of ANNs, analysing how circuits can be designed to respond effectively when such problems arise. Given space limitations, the focus is restricted to the design and analysis of the CMOS inverter.

The neural network presented in this paper for designing a CMOS inverter has three hidden layers with sizes of 20, 10, and 8 (refer to Fig. 5). The MATLAB fitnet command is used to create the function-fitting neural network, and the network is trained with Bayesian regularization backpropagation[25]. Bayesian regularization minimizes a linear combination of squared errors and weights, and it also adapts this combination so that, at the end of training, the network has good generalization properties. MATLAB's trainbr training function, which is based on Levenberg-Marquardt (LM) optimization, is used to update the weight and bias values; trainbr's default parameters have been used in the present work.
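A minimal sketch of this setup in MATLAB (Deep Learning Toolbox) is given below; it assumes the input samples x (rows Wn/Wp, CL and Vdd, one column per sample) and the target delays t have already been assembled.

net = fitnet([20 10 8], 'trainbr');   % three hidden layers (20, 10, 8), Bayesian regularization
net.divideFcn = 'dividerand';         % random division of the dataset
net.divideParam.trainRatio = 0.70;    % 70% training
net.divideParam.valRatio   = 0.15;    % 15% validation
net.divideParam.testRatio  = 0.15;    % 15% testing
[net, tr] = train(net, x, t);         % train with trainbr's default parameters
y    = net(x);                        % ANN-predicted delays
mseE = perform(net, t, y);            % mean squared error of the fit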

        Fig. 5 ANN structure

A statistical model's ability to accurately predict a result is measured by the coefficient of determination (R), which ranges from 0 to 1 and may be interpreted as the proportion of variance in the dependent variable that is predicted by the model. In this paper, the accuracy of the proposed analytical model is assessed through the coefficient of determination (R) reported in the accuracy plots for the 130 nm and 22 nm technologies at two different temperatures (298.15 K and 398.15 K).
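Accuracy (regression) plots of this kind can be produced directly from the trained network's predictions, as in the brief MATLAB sketch below; t and y are assumed to be the target and predicted delays from the previous sketch.

[r, m, b] = regression(t, y);                             % correlation coefficient r, slope m, intercept b
plotregression(t, y, 'ANN-predicted vs. target delay');   % regression (accuracy) plot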

        5 Results and Discussion

To begin, it is necessary to decide on a development environment in which the neural model should be built and eventually implemented. MATLAB has been chosen as the tool for designing and developing the necessary neural network models. The neural network is constructed by first writing the appropriate program code in the MATLAB console and then supplying the appropriate algorithms and necessary specifications. The neural network model was designed for higher accuracy: it has more hidden layers than simpler models and is used to assess the quality of the predictions.

The next step is to pick the training and testing data sets from the data obtained from HSPICE simulation. By splitting the full data set with suitable index mapping, 70% of it is used for training and the rest for testing. The training data set is used to fit the neural network, and the test data set is used to evaluate how well the trained network performs. The minimum and maximum values of the training dataset are used to normalize the data for improved performance.
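A sketch of this min-max normalization in MATLAB is shown below; xTrain and xTest are assumed to hold the training and test input samples, one column per sample, and the scaling settings derived from the training data are reused for the test data.

[xTrainN, ps] = mapminmax(xTrain);        % scale training inputs to [-1, 1]
xTestN = mapminmax('apply', xTest, ps);   % apply the same scaling to the test inputs
% If the targets are scaled as well, predictions can be mapped back with mapminmax('reverse', ...).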

The least-squares support-vector machine (LSSVM), the least-squares version of the support-vector machine, is an effective technique for scaling to the larger datasets available in the literature for statistics and statistical modelling. SVMs are a set of related supervised learning methods that analyse data and recognize patterns, and they are used for classification and regression analysis. In LSSVM, the solution is obtained by solving a set of linear equations, as opposed to the convex quadratic programming (QP) problem required for standard support-vector machines (SVMs)[26-28]. Applying LSSVM to scale effectively to larger datasets is left as future work.

Next, the training process is conducted by supplying the appropriate training function and parameters for the specific problem, together with the parameters of the neural network, i.e., the algorithm, the number of neurons, the number of layers, the number of hidden layers, etc. The neural network is initialized with the specified parameters for iterative estimation and improved performance. Inspecting the training results helps reduce the network's error rate, and choosing the right training algorithm minimizes the training time. Table 2 shows the mean square error, computed as described below, and the CPU time required for different training functions:

$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$    (15)

where n is the number of samples, yi is the actual value and ŷi is the estimated value.
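A comparison of the kind summarized in Table 2 can be reproduced along the lines of the MATLAB sketch below, which trains the same architecture with several candidate training functions and records the MSE of Eq. (15) and the CPU time; the list of training functions is illustrative, and x and t are the input and target matrices from the earlier sketch.

trainFcns = {'trainbr', 'trainlm', 'trainscg'};   % illustrative list of training functions
for k = 1:numel(trainFcns)
    net = fitnet([20 10 8], trainFcns{k});
    t0  = cputime;
    net = train(net, x, t);
    cpuSec = cputime - t0;                        % CPU time spent on training
    y      = net(x);
    mseVal = mean((t - y).^2);                    % Eq. (15)
    fprintf('%s: MSE = %.3e, CPU time = %.2f s\n', trainFcns{k}, mseVal, cpuSec);
end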

Based on the results obtained from the proposed model as described in Table 2, the following methodology was adopted for further analysis:

1) The ANN model is trained for a single inverter using the delay obtained from the analytical model (Eq. (8)) at room temperature (300 K) and at a temperature of 400 K.

2) Eq. (8) is used to seed a dataset of 2000 points for the 22 nm technology node and 5000 points for the 130 nm technology node. The variation of load capacitance, supply voltage and Wn/Wp ratio was taken into consideration while building up the dataset.

3) Table 1 contains the values of α, I0 and Vth at the different temperatures and technology nodes used in the analytical model given in Eq. (8). It is important to note that when measuring delays in this paper, the input rise time has been kept negligible.

4) The mean squared error is the average of the squares of the errors, that is, the average squared difference between the estimated values and the actual values. The performance is evaluated using Eq. (15).

a) Fig. 5 presents the structure of the neural network used in this paper. All parts of the analysis described in Section 1 were run on the same ANN architecture for the different datasets, the main exception being the number of epochs.

b) Fig. 6 and Fig. 7 show the performance and the accuracy plots, respectively, for the ANN model at the 22 nm technology node (300 K).

        Fig. 6 Performance plot of analytical model at 22 nm technology node (room temperature)

c) Fig. 8 and Fig. 9 show the performance and the accuracy plots, respectively, for the ANN model at the 22 nm technology node (400 K).

d) Fig. 10 and Fig. 11 show the performance and the accuracy plots, respectively, for the ANN model at the 130 nm technology node (300 K).

e) Fig. 12 and Fig. 13 show the performance and the accuracy plots, respectively, for the ANN model at the 130 nm technology node (400 K).

        Fig. 8 Performance plot of analytical model at 22 nm technology node (398.15 K)

        Fig. 9 Accuracy plot of analytical model at 22 nm technology node (398.15 K)

Fig. 10 Performance plot of analytical model at 130 nm technology node (room temperature)

Fig. 11 Accuracy plot of analytical model at 130 nm technology node (room temperature)

Fig. 12 Performance plot of analytical model at 130 nm technology node (398.15 K)

f) The accuracy and the performance of the proposed model are presented in Figs. 14-21 for the 22 nm and 130 nm technology nodes at 300 K and 400 K, respectively.

Fig. 13 Accuracy plot of analytical model at 130 nm technology node (398.15 K)

Fig. 14 Performance plot of HSPICE simulations at 22 nm technology node (room temperature)

Fig. 15 Accuracy plot of HSPICE simulations at 22 nm technology node (room temperature)

Fig. 16 Performance plot of HSPICE simulations at 22 nm technology node (398.15 K)

Fig. 17 Accuracy plot of HSPICE simulations at 22 nm technology node (398.15 K)

Fig. 18 Performance plot of HSPICE simulations at 130 nm technology node (room temperature)

Fig. 20 Performance plot of HSPICE at 130 nm technology node (398.15 K)

Fig. 21 Accuracy plot of HSPICE at 130 nm technology node (398.15 K)

The following interpretations can be drawn from the above analysis:

1) The proposed analytical model provides a goodness of fit greater than 99% for technology nodes ranging from 22 nm to 130 nm and temperatures ranging from 298.15 K to 398.15 K.

2) Further, the MSE results show that the ANN simulation based on analytical data is in excellent agreement with the ANN simulation based on HSPICE data, with an error of less than 1%.

3) Table 3 shows the improvement in speed obtained using the proposed model for different temperatures at a supply voltage of 1 V. The obtained results show that as the number of data points increases, HSPICE simulation becomes slower (because of the number of parameters involved in the simulation).

4) Moreover, it can be seen from Table 3 that a speed improvement of up to 80% is obtained when 5000 data points are considered.

        6 Conclusions

Artificial neural networks were utilized in this paper to investigate their utility for designing circuits at ultradeep submicron technology nodes. The ANN was created to evaluate its capacity for mapping complicated functions and resolving challenges associated with the rising nonlinearity in the field of design. The neural network was constructed to map the design parameters to the delay parameters of a CMOS inverter. Datasets were produced using both equations and SPICE to determine how effectively the network responds to increasing levels of complexity. The ANN performed exceedingly well in all experiments, with a performance on test data close to 99.9%. This is a promising development in the field of circuit design, since it demonstrates that an ANN can be effectively integrated with evolutionary algorithms such as particle swarm optimization (PSO) and artificial bee colony (ABC) algorithms to tune design parameters and attain the desired performance characteristics for various types of circuits.
