Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics
Kang Rui a, Zhang Qingyuan a, Zeng Zhiguo c,1,*, Enrico Zio b,c, Li Xiaoyang a
a School of Reliability and Systems Engineering, Beihang University, Beijing 100083, China
b Energy Department, Politecnico di Milano, Milano 20133, Italy
c Chair on Systems Science and Energy Challenge, Fondation Electricité de France (EDF), Centrale Supelec, Université Paris-Saclay, Chatenay-Malabry, Paris 92290, France
In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist in the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimates of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of appropriate reliability metrics.
Reliability refers to the capacity of a component or a system to perform its required functions under stated operating conditions for a specified period of time.1 Reliability engineering has nowadays become an independent engineering discipline, which measures reliability by quantitative metrics and controls it via reliability-related engineering activities implemented in the product lifecycle, e.g., failure mode, effect and criticality analysis (FMECA),2 fault tree analysis (FTA),3 environmental stress screening (ESS),4 reliability growth testing (RGT),5 etc. Among all the reliability-related engineering activities, measuring reliability is a fundamental one.6 Measuring reliability refers to quantifying the reliability of a component or system by quantitative metrics. A key problem in measuring reliability is how to deal with the uncertainty affecting the product's reliability. Broadly speaking, uncertainty can be categorized as aleatory uncertainty, which refers to the uncertainty inherent in the physical behavior of the system,7,8 and epistemic uncertainty, which refers to the uncertainty caused by incomplete knowledge.7,9
In the early years of reliability engineering, reliability was measured by probability-based metrics, e.g., in terms of the probability that the component or system does not fail (referred to as probabilistic reliability in this paper10), and estimated by statistical methods based on failure data (e.g., see Ref. 11). However, in engineering practice, the available failure data, if there are any, are often far from sufficient for accurate statistical estimates.12 Also, the statistical methods do not explicitly model the actual process that leads to failure. Rather, the failure process is regarded as a black box and assumed to be uncertain, and is described indirectly through the observed distribution of the time to failure (TTF). From the perspective of uncertainties, the statistical methods do not separate the root causes of failures from the uncertainties and, therefore, do not distinguish between aleatory and epistemic uncertainties.
As technology evolves, modern products often have high reliability, making it even harder to collect enough failure data, which severely challenges the use of statistical methods.13 At the same time, as knowledge of the failure mechanisms accumulates, deterministic models become available to describe the failure process based on the physical knowledge of the failure mechanisms (referred to as physics-of-failure (PoF) models14). An alternative way to estimate the probabilistic reliability has, then, been developed based on the PoF models. In this paper, these methods are referred to as model-based methods. Unlike statistical methods, model-based methods treat the actual failure process as a white box: the TTFs are predicted by deterministic PoF models, while the uncertainty affecting the TTF is assumed to be caused by random variations in the model parameters (aleatory uncertainty). The probabilistic reliability is, then, estimated by propagating the aleatory uncertainty through the model analytically or numerically, e.g., by Monte Carlo simulation.15,16 Compared to statistical methods, model-based methods explicitly describe the actual failure process (by the deterministic PoF models) and separate the root cause of failures (assumed to be deterministic) from the aleatory uncertainty (the random variation of the model parameters). This separation allows the designer to implement parametric design for reliability, e.g., reliability-based design optimization (RBDO),17,18 tolerance optimization,19,20 etc., which marks a significant advancement in reliability engineering.
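To make the model-based workflow concrete, the following minimal sketch propagates aleatory parameter variation through a deterministic PoF model by Monte Carlo sampling and estimates the probabilistic reliability at a mission time. The power-law PoF model, the parameter distributions and the mission time are hypothetical illustrations, not values taken from any cited work.

```python
import numpy as np

rng = np.random.default_rng(42)

def pof_time_to_failure(stress_mpa, fatigue_coeff):
    """Hypothetical deterministic PoF model (Basquin-type power law):
    predicted time to failure in hours at a given stress level."""
    return fatigue_coeff * stress_mpa ** (-3.0)

# Aleatory uncertainty: model parameters vary randomly from unit to unit.
n_samples = 100_000
stress = rng.normal(loc=120.0, scale=10.0, size=n_samples)            # MPa
coeff = rng.lognormal(mean=np.log(5e9), sigma=0.2, size=n_samples)

ttf = pof_time_to_failure(stress, coeff)

# Probabilistic reliability at a mission time t: fraction of sampled units
# whose predicted time to failure exceeds t.
mission_time = 2000.0  # hours
reliability = np.mean(ttf > mission_time)
print(f"Estimated R({mission_time:.0f} h) = {reliability:.3f}")
```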
From the perspective of uncertainties, only aleatory uncertainty is considered in the model-based methods. In practice, however, the trustworthiness of the predicted reliability is severely influenced by epistemic uncertainty. In today's highly competitive markets, the model-based methods are used more and more frequently to measure reliability, due to the severe shortage of failure data. To better quantify reliability with the model-based methods, the effect of epistemic uncertainty should also be considered. Epistemic uncertainty relates to the completeness and accuracy of knowledge: if the failure process is poorly understood, there will be large epistemic uncertainty.21–23 For instance, the deterministic PoF model might not perfectly describe the failure process, e.g., due to an incomplete understanding of the failure causes and mechanisms.21,24 Besides, the precise values of the model parameters might not be accurately estimated due to the lack of data in the actual operational and environmental conditions. Both factors introduce epistemic uncertainty into the reliability estimation: the more severe their effect is, the less trustworthy the predicted reliability is.
In the literature, there are various approaches to measure reliability under epistemic uncertainty, e.g., probability theory (with a subjective interpretation25,26), evidence theory,27 interval analysis,28,29 fuzzy interval analysis,30 possibility theory,31,32 uncertainty theory,33 etc. In this paper, a critical review of these reliability metrics is conducted to assist the selection of appropriate metrics. Some researchers and practitioners use probability theory to describe epistemic uncertainty, taking a Bayesian interpretation of probability.25,26 In recent years, however, problems in dealing with epistemic uncertainty by probabilistic methods have been pointed out.34,35 Non-probabilistic metrics have, then, been proposed to model epistemic uncertainty. In this paper, we discuss these non-probabilistic reliability metrics.
More specifically, five reliability metrics are discussed in this paper, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). Based on the mathematical essence of the metrics, they are classified as probability-interval-based and monotone-measure-based reliability metrics. The former are expressed as an interval that contains all the possible reliabilities/failure probabilities, while the latter are defined based on a monotone measure (or fuzzy measure36). A further classification is given in Fig. 1. The probability-interval-based and monotone-measure-based reliability metrics are reviewed in Sections 2 and 3, respectively.
Probability-interval-based reliability metrics (PIB metrics) describe the effect of epistemic uncertainty by an interval of values of failure probabilities/reliabilities. The width of the interval represents the extent of the epistemic uncertainty: wide intervals represent large epistemic uncertainty. When there is no effect of epistemic uncertainty, the probability interval reduces to a single distribution function of the TTFs. We consider three of the most popular non-probabilistic methods for epistemic uncertainty representation, i.e., evidence theory, interval analysis (probability box) and fuzzy interval analysis. We review each of these three methods separately in the remainder of this section.
Fig. 1 Classification of existing non-probabilistic reliability metrics.
Evidence theory, also known as Dempster–Shafer theory or the theory of belief functions, was established by Shafer37 for representing and reasoning with uncertain, imprecise and incomplete information.38 It is a generalization of the Bayesian theory of subjective probability in the sense that it does not require probabilities for each event of interest, but bases the belief in the truth of an event on the probabilities of other propositions or events related to it.37 Evidence theory provides an alternative to the traditional manner in which probability theory is used to represent uncertainty, by means of the specification of two degrees of likelihood, belief and plausibility, for each event under consideration. The belief value of an event measures the degree of belief that the event will occur, and the plausibility value measures the extent to which the evidence does not support the negation of the event. Evidence theory is applied to describe uncertainty when the application of probability theory cannot be supported, e.g., when few data samples are available to estimate the probability accurately.37
To obtain the evidence-theory-based reliability metrics, the first step is to define the frame of discernment:

Θ = {θ1, θ2, ..., θm}    (1)

where the set Θ includes all the possible and mutually exclusive elementary propositions or hypotheses with respect to the uncertain events. Let Ai (i = 1, 2, ..., 2^m) denote the subsets of Θ. All the subsets (also called focal sets) compose the power set of Θ, which is denoted by 2^Θ. Next, a basic probability assignment (BPA) is assigned to each focal set to represent our belief in the event associated with it. The BPA is essentially a mapping function m: 2^Θ → [0, 1], which satisfies

m(∅) = 0,   Σ_{Ai ⊆ Θ} m(Ai) = 1    (2)
In practice, the values of the BPAs are assigned by experts to represent the effect of epistemic uncertainty. The focal sets and their associated BPAs comprise the evidence, based on which the belief and plausibility of an event B can be calculated:

Bel(B) = Σ_{Ai ⊆ B} m(Ai),   Pl(B) = Σ_{Ai ∩ B ≠ ∅} m(Ai)    (3)

where Ai denote the focal sets and m(Ai) their associated BPAs.
The belief in event B is quantified as the sum of the masses assigned to all sets enclosed by it; hence, it can be interpreted as a lower bound representing the amount of belief that supports the event. The plausibility of event B is, instead, the sum of the BPAs assigned to all sets whose intersection with event B is not empty; hence, it is an upper bound on the probability that the event occurs.39 Thus,

Bel(B) ≤ P(B) ≤ Pl(B)    (4)
When the event B is the failure of a component or system, Eq. (3) leads to an interval that contains all the possible failure probabilities/reliabilities, representing the effect of epistemic uncertainty on the reliability estimation: the larger the width of the interval, the greater the epistemic uncertainty, and thus the less we can trust the estimated reliability.
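As an illustration of Eqs. (1)–(4), the short sketch below computes the belief and plausibility of a failure event from expert-assigned BPAs. The frame of discernment, the focal sets and the mass values are hypothetical and chosen only to show the mechanics.

```python
# Frame of discernment for a component's time to failure (hours),
# discretized into mutually exclusive intervals (hypothetical values).
theta = frozenset({"0-1000", "1000-2000", "2000-3000"})

# Basic probability assignments elicited from experts (hypothetical);
# focal sets may be non-singletons, reflecting imprecise knowledge.
bpa = {
    frozenset({"0-1000"}): 0.1,
    frozenset({"1000-2000", "2000-3000"}): 0.6,
    frozenset({"0-1000", "1000-2000", "2000-3000"}): 0.3,  # ignorance mass
}
assert abs(sum(bpa.values()) - 1.0) < 1e-12

def belief(event, bpa):
    """Bel(B): sum of masses of focal sets entirely contained in B."""
    return sum(m for A, m in bpa.items() if A <= event)

def plausibility(event, bpa):
    """Pl(B): sum of masses of focal sets intersecting B."""
    return sum(m for A, m in bpa.items() if A & event)

# Failure event: the component fails before 2000 h.
failure = frozenset({"0-1000", "1000-2000"})
print(belief(failure, bpa), plausibility(failure, bpa))
# Bel = 0.1, Pl = 1.0 -> the failure probability lies in [0.1, 1.0]
```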
Rakowsky reviewed some early applications of evidence-theory-based reliability metrics constructed based on failure modes and effects analysis (FMEA), event tree analysis (ETA) and FTA.40 Mourelatos and Zhou used evidence theory to construct failure probability intervals and applied them in engineering design optimization.41–43 In reliability-based optimization (RBO), based on the interval of failure probability, Alyanak et al. developed a new method for projecting gradients in RBO when the available data are not sufficient.44 Yao et al. developed a sequential optimization and mixed uncertainty analysis method for RBO, where evidence theory is used to describe epistemic uncertainty.45 Similarly to the Bayesian network, the evidential network was developed to construct failure probability intervals.46 Yang et al. applied the evidential network to FTA and calculated the failure probability intervals.47 Bae et al. constructed failure probability intervals for large-scale structures based on evidence theory, by identifying the failure region and expressing it as a function of the focal sets.27,48 Considering the large computational cost, Bae et al. introduced an approximation method to calculate the failure probability intervals within the framework of evidence theory.49 Jiang et al. developed an efficient evaluation method for structural reliability with epistemic uncertainty using evidence theory, which reduces the computational cost compared with traditional methods.50 To address the problem of constructing failure intervals with dependent parameters, Jiang et al. developed a multidimensional evidence-theory model, where the dependency is addressed by an ellipsoidal model.51 Baraldi et al. studied the situation in which a number of experts provide different information about the imprecise parameters, and belief and plausibility functions are used to develop upper and lower bounds of the cumulative probability functions.52,53 Lo et al. assessed the seismic probabilistic risk of nuclear power plants and built the associated failure probability intervals based on evidence theory.54 Khalaj et al. applied evidence theory to risk-based reliability analysis.55 Yao et al. studied uncertainty quantification in multidisciplinary optimization and developed a new method to calculate the failure probability intervals based on optimization within the framework of evidence theory.56,57
Another way to construct the interval of failure probabilities is to use interval analysis (or probability boxes). Given a model y = f(x), interval analysis assumes that the input variable x is subject to epistemic uncertainty and is described by an interval (or by convex sets if the input variables are multidimensional) comprised of a lower bound xL and an upper bound xU, so that xL ≤ x ≤ xU. Then, interval mathematics or numerical optimization methods are used to derive the upper and lower bounds of the output variable y.58 When interval analysis is applied to probabilistic models, upper and lower bounds of the probability of interest can be calculated, which form a probability "box" (p-box) that contains all possible values of that probability. Since reliability is calculated by a probabilistic model, the p-box becomes a natural tool to describe the epistemic uncertainty influencing the calculated reliability.
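The following sketch illustrates the idea on a simple probabilistic model: a stress–strength limit state whose strength mean is only known to lie within an interval, giving lower and upper bounds on the failure probability. All numerical values are hypothetical, SciPy is used only for the normal distribution, and a coarse grid search stands in for the numerical optimization mentioned above.

```python
import numpy as np
from scipy.stats import norm

# Stress-strength limit state g = strength - stress, both normally distributed.
# Aleatory uncertainty: stress ~ N(100, 15), strength std = 10 (hypothetical).
# Epistemic uncertainty: the strength mean is only known to lie in an interval.
stress_mu, stress_sd = 100.0, 15.0
strength_sd = 10.0
strength_mu_interval = (125.0, 140.0)   # interval-valued parameter

def failure_probability(strength_mu):
    """P(strength < stress) for normally distributed stress and strength."""
    beta = (strength_mu - stress_mu) / np.hypot(strength_sd, stress_sd)
    return norm.cdf(-beta)

# Search over the epistemic interval (monotone here, so the extremes occur at
# the endpoints, but a grid keeps the sketch general).
grid = np.linspace(*strength_mu_interval, 201)
pf = failure_probability(grid)
print(f"Failure probability interval: [{pf.min():.4f}, {pf.max():.4f}]")
```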
Ferson et al. are among the first who applied the p-box to describing and propagating epistemic uncertainty in a reliability model, deriving intervals that contain all possible values of the failure probabilities.59,60 Karanki et al. applied the p-box to evaluate the probability of system failure under the influence of epistemic uncertainty.61 Using a similar method to describe epistemic uncertainty, Zhang et al. developed interval Monte Carlo simulation methods,62 interval importance sampling methods63 and quasi-Monte Carlo methods64 to calculate the interval of failure probabilities when the structures are implicitly modeled by a finite element model. Beer et al. developed a calculation method for failure probability intervals, which is specially designed for small sample sizes and is based on quasi-Monte Carlo simulations.65,66 Xiao et al. put forward a saddle-point-based approximation method to enhance the computational efficiency in calculating the interval of structural failure probability.67 Qiu et al. developed methods to construct the interval of failure probabilities with small sample sizes, using numerical optimization methods.68–70 Crespo et al. applied the p-box to the analysis of polynomial systems subject to parameter uncertainties.71
The fuzzy-interval-analysis-based method allows the consideration of both aleatory and epistemic uncertainty simultaneously.34 The method can be regarded as a combination of probability theory and fuzzy set theory, where the effect of aleatory uncertainty is described by probability distributions, while the effect of epistemic uncertainty is described by possibility distributions. For instance, in a model z = f(x, y), the input variable x might be subject to aleatory uncertainty and described by a probability density function fX(·), while the other variable y might be subject to epistemic uncertainty and described by a possibility distribution Πy(·) (often obtained through expert opinion elicitation).
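A deliberately simplified sketch of this hybrid propagation idea is given below, with a hypothetical margin model z = x − y: x is sampled from its probability distribution, y is represented by a triangular possibility distribution handled through alpha-cuts, and failure probability bounds are read off the cut endpoints. The full post-processing of the resulting random fuzzy sets into belief/plausibility bounds follows Baudrit et al.34 and is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical margin model z = f(x, y) = x - y:
# x (capacity) carries aleatory uncertainty  -> probability distribution
# y (demand)   carries epistemic uncertainty -> triangular possibility distribution
def model(x, y):
    return x - y

x_samples = rng.normal(loc=10.0, scale=1.0, size=5000)
y_triangular = (6.0, 7.0, 9.0)          # (left, core, right) of the fuzzy interval
alphas = np.linspace(0.0, 1.0, 11)      # alpha-cut levels

def alpha_cut(tri, alpha):
    """Alpha-cut [y_lo, y_up] of a triangular possibility distribution."""
    left, core, right = tri
    return (left + alpha * (core - left), right - alpha * (right - core))

# For each alpha-cut of y, propagate the random samples of x through the model;
# because f decreases in y, the cut endpoints map directly to the z bounds.
pf_low, pf_up = [], []
for alpha in alphas:
    y_lo, y_up = alpha_cut(y_triangular, alpha)
    z_up = model(x_samples, y_lo)        # largest margin on this cut
    z_lo = model(x_samples, y_up)        # smallest margin on this cut
    pf_low.append(np.mean(z_up < 0.0))   # lower bound of P(failure) at this alpha
    pf_up.append(np.mean(z_lo < 0.0))    # upper bound of P(failure) at this alpha

print(f"P(failure) bounds over all cuts: [{min(pf_low):.4f}, {max(pf_up):.4f}]")
```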
Kaufmann and Gupta introduced the basic idea of expressing randomness (probability) in combination with imprecision (possibility) via hybrid numbers.72 Ferson et al.73,74 extended Kaufmann's work by developing computational rules for hybrid numbers (i.e., probability distributions that are only fuzzily known), which can be applied in risk assessment. Through this computational method, random fuzzy sets can be obtained and converted into upper and lower bounds of the failure probability. Guyonnet et al. introduced a hybrid method to propagate both aleatory and epistemic uncertainties using fuzzy interval analysis.75 In this method, the possibility distribution function of the output variable z is first calculated based on Monte Carlo sampling and the possibility extension principle, and is then used to derive the upper and lower bounds of the failure probabilities based on fuzzy interval analysis.76 Baudrit et al. developed a post-processing method based on belief functions (evidence theory) to extract useful information and to construct the failure probability bounds from the results of the hybrid method,34 and they showed that the method improves on the work of Ferson et al.73,74 and Guyonnet et al.75 Baraldi and Zio summarized the hybrid method that jointly propagates probabilistic and possibilistic uncertainties, and compared it with pure probabilistic and pure fuzzy methods.77 Based on the work of Baudrit et al.,34 Li and Zio applied the fuzzy interval analysis method to assess the reliability of a distributed generation system, which is affected by serious epistemic uncertainty.30 The hybrid fuzzy interval analysis method has also been applied successfully in other areas, e.g., the reliability assessment of a flood protection dike78 and of a turbo-pump lubricating system.79 Flage et al. used a probabilistic-possibilistic computational framework to propagate uncertainties in FTA, giving rise to failure probability bounds for the top event.80 Li et al. developed a hybrid-universal-generating-function-based (HUGF) method for the fuzzy interval analysis of multi-state systems.81
Although differences exist in the way in which the interval of failure probabilities is constructed, all the three methods reviewed in Sections 2.1–2.3 use this interval as the reliability metric. The width of the interval reflects the extent of the epistemic uncertainty. One important problem in reliability theory is how to calculate the system-level reliability metrics based on the reliability metrics of the components. Since PIB metrics are intervals of probabilities, the system-level PIB metrics are calculated based on the laws of probability theory. This fact causes a common problem for the PIB metrics when applied to calculate system reliability metrics, as illustrated by the following example.
Example 1. Consider a series system composed of 30 components. Suppose that the real reliability of each component is 0.95. Since the system is subject to epistemic uncertainty, the PIB metrics are used to quantify the reliability of the components. We suppose that the reliability interval for each component is [0.9, 1]. Then, following the laws of probability theory, the system's PIB reliability metric will be [0.9^30, 1^30] = [0.04, 1]. This interval is not representative of the actual uncertainty on the system reliability and is obviously too wide to provide any valuable information in practical applications.
The reason for the unsatisfactory result in Example 1 is that the imprecision in the component reliability metrics (the width of the interval) is amplified by the product law of probability theory that calculates the probability of the intersection of events. The system-level reliability metric should be able to compensate for the conservatism in the component-level reliability metrics caused by the consideration of epistemic uncertainty. Monotone-measure-based reliability metrics are developed for this aim.
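The width amplification discussed above can be reproduced with a few lines (values from Example 1):

```python
# Width amplification of probability-interval metrics in a series system
# (reproduces Example 1: 30 components, each with reliability interval [0.9, 1]).
def series_interval(r_low, r_up, n):
    """Reliability interval of a series system of n independent components,
    each with component reliability in [r_low, r_up]."""
    return r_low ** n, r_up ** n

for n in (1, 10, 30):
    lo, up = series_interval(0.9, 1.0, n)
    print(f"n = {n:2d}: system reliability interval = [{lo:.2f}, {up:.2f}]")
# n = 30 gives [0.04, 1.00]: the component-level imprecision is amplified.
```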
A monotone measure was defined by Choquet as a generalization of classical measure theory.82 Let X be a finite universal set, and let ℱ be a non-empty family of subsets of X. Then g: ℱ → [0, ∞] is a monotone measure on (X, ℱ) if it satisfies the following requirements:

(1) g(∅) = 0 (vanishing at the empty set);

(2) for any A, B ∈ ℱ, A ⊆ B implies g(A) ≤ g(B) (monotonicity).
The probability measure is a special case of the monotone measure which is also additive. As pointed out by Klir and Smith,83 non-additive monotone measures might be able to represent broader types of uncertainty than additive probability theory. Therefore, they are applied to develop reliability metrics that model epistemic uncertainty. Typical monotone-measure-based reliability metrics include the posbist reliability, which is based on possibility theory, and the belief reliability, which is based on uncertainty theory.
The most widely applied possibility-theory-based reliability metric is the posbist reliability. The two basic assumptions of posbist reliability are:32,84
(1) Possibility assumption: the system failure behavior is fully characterized in the context of possibility measures.
(2) Binary-state assumption: the system demonstrates only two crisp states, i.e., fully functioning or fully failed. At any time, the system is in one of these two states.
In posbist reliability theory, the lifetime of a system (or a component) is a non-negative real-valued fuzzy variable, and the posbist reliability of a system (or a component) is defined as the possibility measure that the system (or the component) performs its assigned functions properly during a predefined exposure period in a given environment.84 The epistemic uncertainty is, then, described and propagated based on possibility theory.
Following the definition of posbist reliability, Cai et al. developed posbist reliability analysis methods for series, parallel, series–parallel, parallel–series and coherent systems.84,85 Huang et al. proposed detailed posbist reliability analysis methods for k-out-of-n:G systems.86 Cai et al. studied the posbist reliability behavior of cold-standby and warm-standby systems, considering both fully reliable and non-fully reliable conversion switches.87 Utkin et al. extended Cai's work to repairable systems and developed a posbist reliability analysis method based on state transition diagrams.88,89 Huang et al. introduced a posbist reliability fault tree analysis (posbist FTA) method for coherent systems to evaluate reliability and safety.90 He et al. developed calculation methods of the posbist reliability for typical systems when the components are symmetric Gaussian fuzzy variables.91 Bhattacharjee et al. investigated the posbist reliability of k-out-of-n systems and pointed out that the posbist reliability does not depend on the number of components.92
In essence, the posbist reliability is a possibility measure. In possibility theory, the possibility measure Π(·) satisfies the following three axioms:93
Axiom 1. For the empty set ∅, Π(∅) = 0.
Axiom 2. For the universal set Γ, Π(Γ) = 1.
Axiom 3. For any events Λ1 and Λ2 in the universal set Γ, Π(Λ1 ∪ Λ2) = max(Π(Λ1), Π(Λ2)).
Axiom 3 shows that the operational laws of possibility theory differ from those of probability theory. Therefore, the system reliability analysis method is also different from that based on probability theory. For instance, Cai et al. proved that the posbist reliability of a series system is the minimum among the posbist reliabilities of its components.32 This difference makes it possible for possibility theory to compensate for the conservatism caused by epistemic uncertainty in the component-level reliability estimations.
Example 2. Consider a series system composed of 300 components. An extreme case is considered where all the components are designed with sufficient margins, so that they are completely reliable and the real reliability of each component should be 1. It is easy to verify that the system's real reliability is also 1, which means that the system is highly reliable. However, since the system is subject to epistemic uncertainty, the estimates of the component-level reliabilities are likely to be conservative. We suppose, for example, that the reliability of each component is estimated to be R1 = R2 = ... = R300 = 0.99. If we use probability theory to model the reliability metric, the system reliability is

Rsys = R1 R2 ... R300 = 0.99^300 ≈ 0.049
It can be seen from this result that the conservatism in the component-level reliability estimates is amplified by the operational laws of probability theory, which contradicts our intuition, since a highly reliable system is judged to be highly unreliable.
If we use the posbist reliability, however, the system reliability is

Rsys,pos = min(R1, R2, ..., R300) = 0.99
which avoids the previous counter-intuitive result and demonstrates that possibility theory can compensate for the conservatism in the component-level reliability estimates caused by epistemic uncertainty.
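The contrast between the probabilistic product rule and the posbist minimum rule in Example 2 can be reproduced as follows (component values as in the example):

```python
# Example 2 revisited: 300-component series system, each component reliability
# conservatively estimated as 0.99 because of epistemic uncertainty.
n, r = 300, 0.99

# Probability-based metric: the product rule amplifies the conservatism.
r_prob = r ** n

# Posbist metric: the possibility-based series rule takes the minimum
# component reliability (Cai et al.), so the conservatism is not amplified.
r_posbist = min([r] * n)

print(f"Probabilistic system reliability: {r_prob:.3f}")    # ~0.049
print(f"Posbist system reliability:       {r_posbist:.2f}")  # 0.99
```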
3.1.1. Problems with posbist reliability
A major drawback of the possibility-theory-based reliability metrics is that the possibility measure does not satisfy the duality axiom, which might lead to counter-intuitive results in practical reliability-related applications.
Example 3. Let the event Λ1 = {The system is working} and Λ2 = {The system fails}. It is obvious that the universal set Γ = Λ1 ∪ Λ2. Also, the posbist reliability and the posbist unreliability are Rpos = Π(Λ1) and R̄pos = Π(Λ2), respectively. According to Axioms 2 and 3, we have

max(Rpos, R̄pos) = max(Π(Λ1), Π(Λ2)) = Π(Λ1 ∪ Λ2) = Π(Γ) = 1
Therefore, if Rpos does not equal 1, e.g., Rpos = 0.8, then R̄pos must equal 1. Vice versa, if R̄pos does not equal 1, e.g., R̄pos = 0.8, then Rpos must equal 1. This is a counter-intuitive result and easily confuses the decision maker in real applications. Hence, even though designed to consider epistemic uncertainty, a reliability metric should still satisfy the duality axiom.
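The constraint imposed by Axioms 2 and 3 can be checked numerically; the small sketch below simply enumerates which posbist unreliabilities are consistent with max(Rpos, R̄pos) = 1 for a given Rpos (the candidate grid is arbitrary):

```python
# Possibility measures are not self-dual: Pi(working or failed) = Pi(Gamma) = 1
# and Pi(A U B) = max(Pi(A), Pi(B)) together force max(R_pos, R_bar) = 1.
def feasible_unreliabilities(r_pos):
    """Posbist unreliabilities R_bar consistent with max(R_pos, R_bar) = 1."""
    candidates = [i / 10 for i in range(11)]          # 0.0, 0.1, ..., 1.0
    return [r_bar for r_bar in candidates if max(r_pos, r_bar) == 1.0]

print(feasible_unreliabilities(0.8))   # [1.0] -> the unreliability is forced to 1
print(feasible_unreliabilities(1.0))   # [0.0, 0.1, ..., 1.0] -> unconstrained
```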
As explained in Section 3.1.1, one major drawback of the possibility-theory-based reliability metrics is that possibility theory does not satisfy the duality axiom. To overcome this drawback, belief reliability has been developed based on uncertainty theory. Founded by Liu,33,94 uncertainty theory relies on the uncertain measure M to describe the belief degree of events affected by epistemic uncertainty; the uncertain measure is a monotone measure based on the following four axioms:
1) Normality axiom: M{Γ} = 1 for the universal set Γ.
2) Duality axiom: M{Λ} + M{Λ^c} = 1 for any event Λ.
3) Subadditivity axiom: M{∪i Λi} ≤ Σi M{Λi} for every countable sequence of events Λ1, Λ2, ....
4) Product axiom: M{∏k Λk} = mink Mk{Λk}, where Λk are arbitrarily chosen events from the k-th uncertainty space.
Belief reliability was defined by Zeng et al. as the uncertain measure that a system performs its specified functions within a given time under given operating conditions.95 Zeng et al. developed an evaluation method for the component belief reliability, which incorporates the influences of design margin, aleatory uncertainty and epistemic uncertainty.96 The issue of quantifying the effect of epistemic uncertainty is addressed by a method based on the performance of the engineering activities related to reducing epistemic uncertainty.97,98 The reason why uncertainty theory should be chosen as the theoretical foundation of belief reliability was explained by Zeng et al.99 by comparing it with other commonly encountered theories for dealing with epistemic uncertainty, i.e., evidence theory, possibility theory, Bayesian theory, etc. System reliability analysis methods have also been developed for coherent systems.95,99
Compared to the PIB metrics, belief reliability uses the minimum operation to calculate the belief degree of the intersection of events, and therefore can compensate for the conservatism in the component-level reliability metrics caused by the consideration of epistemic uncertainty. Compared to the possibility-theory-based reliability metrics, belief reliability satisfies the duality axiom, which avoids the paradoxical results often encountered in engineering applications of the possibility-theory-based reliability metrics. Therefore, belief reliability is a promising metric to measure the reliability affected by epistemic uncertainty. However, research on the theory of belief reliability is far from mature. In fact, as shown in the classical probability-based reliability theory, there are four major topics in the research of reliability theory:
(1) How to measure reliability (measurement).
(2) How to evaluate the reliability of a system based on the reliability of its components (analysis).
(3) How to design the system so that the desired reliability level can be fulfilled (design).
(4) How to demonstrate that the system satisfies its reliability requirements (demonstration).

Table 1 Comparison of five reliability metrics.
Among the four topics, measurement is the most fundamental one. Since belief reliability is an entirely different reliability metric from the classical probability-based reliability metrics, new analysis, design and demonstration methods are also needed for the theory of belief reliability. As reviewed before, however, current research on belief reliability concentrates only on the first two problems. The problems of design and demonstration are still relatively unexplored and deserve further investigation.
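Before summarizing, the following sketch illustrates, with hypothetical component values, the two properties of belief reliability emphasized in Section 3.2: the minimum operation for the belief degree of the series-system (intersection) event, and the duality between belief reliability and belief unreliability.

```python
# Minimal sketch of the two properties highlighted above for belief reliability:
# (1) the minimum operation for the intersection (series-system) event, and
# (2) the duality axiom of the uncertain measure.
component_belief_reliabilities = [0.99, 0.95, 0.98, 0.99]   # hypothetical values

# Series system: the belief degree that all components work is the minimum of
# the component belief reliabilities (no conservatism amplification).
r_system = min(component_belief_reliabilities)

# Duality axiom: belief reliability and belief unreliability sum to one.
r_bar_system = 1.0 - r_system

print(f"System belief reliability:   {r_system:.2f}")      # 0.95
print(f"System belief unreliability: {r_bar_system:.2f}")  # 0.05
assert abs(r_system + r_bar_system - 1.0) < 1e-12
```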
To summarize, we compare the five reviewed reliability metrics (see Table 1) in terms of their theoretical basis, the methods to obtain the metric, and the existing problems. This comparison will help practitioners choose an appropriate reliability metric according to different demands and situations.
In this paper, a systematic review has been conducted on the non-probabilistic reliability metrics that are used to describe the effect of epistemic uncertainty. Five reliability metrics have been discussed, i.e., the evidence-theory-based, interval-analysis-based, fuzzy-interval-analysis-based, possibility-theory-based (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). Among them, the former three provide, in essence, an interval that contains all the possible values of the reliabilities/failure probabilities, whereas the latter two are given by monotone measures.
An investigation of the five metrics reveals two important features that a qualified reliability metric under epistemic uncertainty should possess: (1) it should be able to compensate for the conservatism in the component-level reliability metrics caused by the consideration of epistemic uncertainty, and (2) it should satisfy the duality axiom; otherwise, it might lead to paradoxical and confusing results in engineering applications.
Finally, the five reliability metrics have been compared with respect to the above two features, as well as other important characteristics, which can be used to assist the selection of appropriate reliability metrics considering the effect of epistemic uncertainty.
This work has been performed within the initiative of the Center for Resilience and Safety of Critical Infrastructures (CRESCI, http://cresci.buaa.edu.cn). This work was supported by the National Natural Science Foundation of China (No. 61573043).
1.Ebeling CE.An introduction to reliability and maintainability engineering.Long Grove:Waveland Press;2005.p.5.
2.Stamatis DH.Failure mode and effect analysis:FMEA from theory to execution.Milwaukee:Quality Press;2003.p.21–81.
3.Zio E.An introduction to the basics of reliability and risk analysis.Singapore:World Scientific Printers;2007.p.115–35.
4. Kuo W, Chien WTK, Kim T. Reliability, yield, and stress burn-in: a unified approach for microelectronics systems manufacturing & software development. New York: Springer; 1998. p. 103–9.
5.Yang G.Life cycle reliability engineering.Hoboken:Wiley;2007.p.237–9.
6.Saleh JH,Marais K.Highlights from the early(and pre-)history of reliability engineering.Reliab Eng Syst Saf 2006;91(2):249–56.
7. Kiureghian AD, Ditlevsen O. Aleatory or epistemic? Does it matter? Struct Saf 2009;31(2):105–12.
8.Helton JC,Johnson JD,Oberkampf WL,Sallaberry CJ.Representation of analysis results involving aleatory and epistemic uncertainty.Int J Gen Syst 2010;39(6):605–46.
9.Aven T,Zio E.Some considerations on the treatment of uncertainties in risk assessment for practical decision making.Reliab Eng Syst Saf 2011;96(1):64–74.
10.Barlow RE,Proschan F.Mathematical theory of reliability.New York:Wiley;1987.p.5–45.
11.Meeker WQ,Escobar LA.Statistical methods for reliability data.New York:Wiley;1998.p.4–18.
12.Aven T.On the meaning of a black swan in a risk context.Saf Sci 2013;2013(57):44–51.
13.Cushing MJ,Mortin DE,Stadterman TJ,Malhotra A.Comparison of electronics-reliability assessment approaches.IEEE Trans Reliab 1993;42(4):542–6.
14.McPherson JW.Reliability physics and engineering:time-to-failure modeling.New York:Springer;2013.
15.Mohaghegh Z,Modarres M.A probabilistic physics-of-failure approach to common cause failures in reliability assessment of structures and components.Trans Am NuclSoc2011;2011(105):635–7.
16.Zio E.The Monte Carlo simulation method for system reliability and risk analysis.Berlin:Springer;2013.p.19–57.
17.Qiu Z,Huang R,Wang X,Qi W.Structural reliability analysis and reliability-based design optimization:recent advances.Sci Chin Phys Mech Astron 2013;56(9):1611–8.
18.Aoues Y,Chateauneuf A.Benchmark study of numerical methods for reliability-based design optimization.Struct Multidiscip Optim 2010;41(2):277–94.
19.Liu SG,Jin Q,Wang P,Xie RJ.Closed-form solutions for multiobjective tolerance optimization.Int J Adv Manuf Technol 2014;70(9–12):1859–66.
20.Zhai G,Zhou Y,Ye X.A tolerance design method for electronic circuits based on performance degradation.Qual Reliab Eng Int 2015;31(4):635–43.
21.Aven T,Baraldi P,Flage R,Zio E.Uncertainty in risk assessment:the representation and treatment of uncertainties by probabilistic and non-probabilistic methods.Noida:Wiley;2014.p.13–6.
22. Pate-Cornell E. On "Black Swans" and "Perfect Storms": risk analysis and management when statistics are not enough. Risk Anal 2012;32(11):1823–33.
23.Bjerga T,Aven T,Zio E.An illustration of the use of an approach for treating model uncertainties in risk assessment.Reliab Eng Syst Saf 2014;125:46–53.
24.Draper D.Assessment and propagation of model uncertainty.J R Stat Soc 1995;57(1):45–97.
25.Igusa T,Buonopane SG,Ellingwood BR.Bayesian analysis of uncertainty for structural engineering applications.Struct Saf 2002;24(2–4):165–86.
26.Troffaes MCM,Walter G,Kelly D.A robust Bayesian approach to modeling epistemic uncertainty in common-cause failure models.Reliab Eng Syst Saf 2014;125:13–21.
27.Bae HR,Grandhi RV,Canfield RA.Epistemic uncertainty quantification techniques including evidence theory for large-scale structures.Comput Struct 2004;82(13–14):1101–12.
28.Moore RE.Methods and applications of interval analysis.Philadelphia:Siam;1979.p.9–56.
29.Yang X,Liu Y,Zhang Y,Yue Z.Hybrid reliability analysis with both random and probability-box variables.Acta Mech 2014;226(5):1341–57.
30.Li YF,Zio E.Uncertainty analysis of the adequacy assessment model of a distributed generation system.Renewable Energy 2012;41:235–44.
31.Dubois D,Prade H.Possibility theory.New York:Plenum Press;1986.p.32.
32.Cai KY,Wen CY,Zhang ML.Fuzzy variables as a basis for a theory of fuzzy reliability in the possibility context.Fuzzy Sets Syst 1991;42(2):145–72.
33.Liu BD.Uncertainty theory.Berlin:Springer;2010.
34.Baudrit C,Dubois D,Guyonnet D.Joint propagation and exploitation of probabilistic and possibilistic information in risk assessment.IEEE Trans Fuzzy Syst 2006;14(5):593–608.
35.Baccou J,Chojnacki E,Mercat-Rornmens C,Baudrit C.Extending Monte Carlo simulations to represent and propagate uncertainties in presence of incomplete knowledge:application to the transfer of a radionuclide in the environment.J Environ Eng ASCE 2008;134(5):362–8.
36. Oberkampf WL, Helton JC, Sentz K. Mathematical representation of uncertainty. In: AIAA non-deterministic approaches forum; 2001 Apr 16–19; Seattle (WA). Reston: AIAA; 2001. Report No.: AIAA-2001-1645. p. 16–9.
37.Shafer G.A mathematical theory of evidence.Princeton:Princeton University Press;1976.
38.Smets P.Belief function.In:Smets P,Mamdani EH,Dubois D,Prade H,editors.Non-standard logics for automated systems.London:Academic Press;1988.p.253–82.
39.Dempster AP.Upper and lower probabilities induced by a multivalued mapping.Ann Math Stat 1967;38(2):325–39.
40.Rakowsky UK.Fundamentals of the evidence theory and its applications to reliability modeling.Int J Reliab Qual Saf Eng 2007;14(6):579–601.
41.Mourelatos ZP,Zhou J.Reliability estimation and design with insufficient data based on possibility theory.AIAA J 2005;43(8):1696–705.
42.Mourelatos ZP,Zhou J.A design optimization method using evidence theory.J Mech Des 2006;128(4):901–8.
43.Zhou J,Mourelatos ZP.A sequential algorithm for possibilitybased design optimization.J Mech Des 2008;130(1):011001(1–10).
44.Alyanak E,Grandhi R,Bae HR.Gradient projection for reliability-based design optimization using evidence theory.Eng Optim 2008;40(10):923–35.
45.Yao W,Chen XQ,Huang YY,Gurdal Z,van Tooren M.Sequential optimization and mixed uncertainty analysis method for reliability-based optimization.AIAA J 2013;51(9):2266–77.
46.Simon C,Weber P.Evidential networks for reliability analysis and performance evaluation of systems with imprecise knowledge.IEEE Trans Reliab 2009;58(1):69–87.
47.Yang JP,Huang HZ,Liu Y,Li YF.Evidential networks for fault tree analysis with imprecise knowledge.Int J Turbo Jet Eng 2012;29(2):111–22.
48.Bae HR,Grandhi RV,Canfield RA.Sensitivity analysis of structural response uncertainty propagation using evidence theory.Struct Multidiscip Optim 2006;31(4):270–9.
49.Bae HR,Grandhi RV,Canfield RA.An approximation approach for uncertainty quantification using evidence theory.Reliab Eng Sys Saf 2004;86(3):215–25.
50.Jiang C,Zhang Z,Han X,Liu J.A novel evidence-theory-based reliability analysis method for structures with epistemic uncertainty.Comput Struct 2013;129:1–12.
51.Jiang C,Wang B,Li ZR,Han X,Yu DJ.An evidence-theory model considering dependence among parameters and its application in structural reliability analysis.Eng Struct 2013;57:12–22.
52.Baraldi P,Compare M,Zio E.Maintenance policy performance assessment in presence of imprecision based on Dempster–Shafer theory of evidence.Inf Sci 2013;245:112–31.
53. Baraldi P, Compare M, Zio E. Dempster–Shafer theory of evidence to handle maintenance models tainted with imprecision. In: 11th international probabilistic safety assessment and management conference (PSAM11 & ESREL 2012); 2012 June 25–29; Helsinki, Finland. p. 61–70.
54.Lo CK,Pedroni N,Zio E.Treating uncertainties in a nuclear seismic probabilistic risk assessment by means of the Dempster–Shafer theory of evidence.Nucl Eng Technol 2014;46(1):11–26.
55.Khalaj M,Makui A,Tavakkoli-Moghaddam R.Risk-based reliability assessment under epistemic uncertainty.J Loss Prev Process Ind 2012;25(3):571–81.
56.Yao W,Chen XQ,Huang YY,van Tooren M.An enhanced unified uncertainty analysis approach based on first order reliability method with single-level optimization.Reliab Eng Syst Saf 2013;116:28–37.
57.Yao W,Chen XQ,Ouyang Q,van Tooren M.A reliability-based multidisciplinary design optimization procedure based on combined probability and evidence theory.Struct Multidiscip Optim 2013;48(2):339–54.
58.Alefeld G,Mayer G.Interval analysis:theory and applications.J Comput Appl Math 2000;121(1–2):421–64.
59. Ferson S, Ginzburg L. Different methods are needed to propagate ignorance and variability. Reliab Eng Syst Saf 1996;54(1–2):133–44.
60.Ferson S,Hajagos JG.Arithmetic with uncertain numbers:rigorous and(often)best possible answers.Reliab Eng Syst Saf 2004;85(1–3):135–52.
61.Karanki DR,Kushwaha HS,Verma AK,Ajit S.Uncertainty analysis based on probability bounds(P-Box)approach in probabilistic safety assessment.Risk Anal 2009;29(5):662–75.
62.Zhang H,Mullen RL,Muhanna RL.Interval Monte Carlo methods for structural reliability.Struct Saf 2010;32(3):183–90.
63.Zhang H.Interval importance sampling method for finite elementbased structural reliability assessment under parameter uncertainties.Struct Saf 2012;38:1–10.
64.Zhang H,Dai H,Beer M,Wang W.Structural reliability analysis on the basis of small samples:an interval quasi-Monte Carlo method.Mech Syst Signal Process 2013;37(1–2):137–51.
65.Beer M,Ferson S,Kreinovich V.Imprecise probabilities in engineering analyses.Mech Syst Signal Process 2013;37(1–2):4–29.
66. Beer M, Zhang Y, Quek ST, Phoon KK. Reliability analysis with scarce information: comparing alternative approaches in a geotechnical engineering context. Struct Saf 2013;41:1–10.
67.Xiao NC,Li YF,Yu L,Wang ZL,Huang HZ.Saddlepoint approximation-based reliability analysis method for structural systems with parameter uncertainties.Proc Inst Mech Eng Part O 2014;228(5):529–40.
68.Qiu ZP,Wang XJ,Chen JY.Exact bounds for the static response set of structures with uncertain-but-bounded parameters.Int J Solids Struct 2006;43(21):6574–93.
69.Qiu ZP,Yang D,Elishakoff I.Probabilistic interval reliability of structural systems.Int J Solids Struct 2008;45(10):2850–60.
70.Wang XJ,Elishakoff I,Qiu ZP,Kou CH.Non-probabilistic methods for natural frequency and buckling load of composite plate based on the experimental data.Mech Based Des Struct Mach 2011;39(1):83–99.
71.Crespo LG,Kenny SP,Giesy DP.Reliability analysis of polynomial systems subject to p-box uncertainties.Mech Syst Signal Process 2013;37(1–2):121–36.
72.Kaufmann A,Gupta MM.Introduction to fuzzy arithmetic:theory and applications.New York:Van Nostrand Reinhold;1985.p.7.
73.Cooper JA,Ferson S.Hybrid processing of stochastic and subjective uncertainty data.Risk Anal 1996;16(6):785–91.
74. Ferson S, Ginzburg L. Hybrid arithmetic. In: Third international symposium on uncertainty modeling and analysis and annual conference of the North American Fuzzy Information Processing Society; 1995 Sep 17–20; College Park (MD). Piscataway (NJ): IEEE; 1995. p. 619–23.
75. Guyonnet D, Bourgine B, Dubois D, Fargier H, Côme B, Chilès JP. Hybrid approach for addressing uncertainty in risk assessments. J Environ Eng 2003;129(1):68–78.
76.Dubois D,Kerre E,Mesiar R,Prade H.Fuzzy interval analysis.In:Fundamentals of fuzzy sets.Berlin:Springer;2000.p.483–581.
77.Baraldi P,Zio E.A combined Monte Carlo and possibilistic approach to uncertainty propagation in event tree analysis.Risk Anal 2008;28(5):1309–25.
78.Pedroni N,Zio E,Ferrario E,Pasanisi A,Couplet M.Hierarchical propagation of probabilistic and non-probabilistic uncertainty in the parameters of a risk model.Comput Struct 2013;126:199–213.
79.Baraldi P,Compare M,Zio E.Uncertainty treatment in expert information systems for maintenance policy assessment.Appl Soft Comput 2014;22:297–310.
80.Flage R,Baraldi P,Zio E,Aven T.Probability and possibility based representations of uncertainty in fault tree analysis.Risk Anal 2013;33(1):121–33.
81.Li YF,Ding Y,Zio E.Random fuzzy extension of the universal generating function approach for the reliability assessment of multi-state systems under aleatory and epistemic uncertainties.IEEE Trans Reliab 2014;63(1):13–25.
82.Choquet G.Theory of capacities.Ann Inst Fourier 1954;5:131–295.
83.Klir GJ,Smith RM.On measuring uncertainty and uncertainty based information:recent developments.Ann Math Artif Intell 2001;32(1–4):5–33.
84.Cai KY,Wen CY,Zhang ML.Posbist reliability behavior of typical systems with two types of failure.Fuzzy Sets Syst 1991;1991(43):17–32.
85.Cai KY.Introduction to fuzzy reliability.Norwell:Kluwer Academic Publishers;1996.p.193–218.
86.Huang HZ,Li YF,Liu Y.Posbist reliability theory of k-out-of-n:G system.J Mult Valued Logic Soft Comput 2010;16(1–2):45–63.
87.Cai KY,Wen CY,Zhang ML.Posbist reliability behavior of faulttolerant systems.Microelectron Reliab 1995;35(1):49–56.
88.Utkin LV.Fuzzy reliability of repairable systems in the possibility context.Microelectron Reliab 1994;34(12):1865–76.
89.Utkin LV,Gurov SV.A general formal approach for fuzzy reliability analysis in the possibility context.Fuzzy Sets Syst 1996;83(2):203–13.
90.Huang HZ,Tong X,Zuo MJ.Posbist fault tree analysis of coherent systems.Reliab Eng Syst Saf 2004;84(2):141–8.
91.He LP,Xiao J,Huang HZ,Luo ZQ.System reliability modeling and analysis in the possibility context.In:2012 International conference on quality,reliability,risk,maintenance,and safety engineering(ICQR2MSE);2012 June 15–18;Chengdu,China.Piscataway(NJ):IEEE;2012.p.361–7.
92.Bhattacharjee S,Nanda AK,Alam SS.Study on posbist systems.Int J Qual Stat Reliab 2012;2012(2012):1–7.
93.Zadeh LA.Fuzzy sets as a basis for a theory of possibility.Fuzzy Sets Syst 1978;1(1):3–28.
94.Liu B.Why is there a need for uncertainty theory?J Uncertain Syst 2012;6(1):3–10.
95.Zeng ZG,Wen ML,Kang R.Belief reliability:a new metrics for products' reliability.Fuzzy Optim Decis Making 2013;12(1):15–27.
96.Zeng ZG,Kang R,Wen ML,Chen YX.Measuring reliability during product development considering aleatory and epistemic uncertainty.In:Reliability and maintainability symposium 2015 annual;2015 Jan 26–29;Palm Harbor(FL).Piscataway(NJ):IEEE;2015.p.1–6.
97.Jiang X,Zeng Z,Kang R,Chen Y.A naive Bayes based method for evaluation of electronic product reliability simulation tests.Electron Sci Technol 2015;2(1):49–54.
98. Fan M, Zeng Z, Kang R. A novel approach to measure reliability based on belief reliability. J Syst Eng Electron 2015;37(11):2648–53.
99. Zeng Z, Kang R, Zio E, Wen ML. A new metric of reliability: belief reliability [Internet], 2015 [cited 2015 Dec]; Available from: http://dx.doi.org/10.13140/RG.2.1.2701.4003.
Kang Rui is a distinguished professor in the School of Reliability and Systems Engineering, Beihang University, Beijing, China. He is a well-known reliability expert in Chinese industry. He received his bachelor's and master's degrees in electrical engineering from Beihang University in 1987 and 1990, respectively. He has developed six courses and published eight books and more than 150 research papers. His main research interests include reliability and resilience of complex systems and the modeling of epistemic uncertainty in reliability and maintainability. He is currently serving as an associate editor of IEEE Transactions on Reliability, and is the founder of the China Prognostics and Health Management Society. He has received several awards from the Chinese government for his outstanding scientific contributions, including the Changjiang Chair Professorship awarded by the Chinese Ministry of Education.
Zhang Qingyuan is a master's student at the School of Reliability and Systems Engineering, Beihang University. He received his B.S. degree from Beihang University in 2015. His research focuses on the theory of belief reliability and uncertainty quantification.
Zeng Zhiguo received his Ph.D. degree in 2015 from the School of Reliability and Systems Engineering, Beihang University. He is now a postdoc researcher at the Chair on Systems Science and Energy Challenge, Fondation Electricité de France (EDF), CentraleSupelec, Université Paris-Saclay. His current research interests include the theory of belief reliability, uncertainty quantification and the reliability of complex systems.
Enrico Zio received Ph.D. degrees in nuclear engineering from the Politecnico di Milano, Milan, Italy, in 1995, and from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 1998. He is currently the director of the Chair on Systems Science and Energy Challenge, Fondation Electricité de France (EDF), CentraleSupelec, Université Paris-Saclay, and a full professor at the Politecnico di Milano. His current research interests include the characterization and modeling of the failure/repair/maintenance behavior of components and complex systems and their reliability, Monte Carlo simulation, uncertainty quantification and soft computing techniques.
Li Xiaoyang is an associate professor at the School of Reliability and Systems Engineering, Beihang University. She is also the executive director of the Center for Resilience and Safety of Critical Infrastructures (CRESCI). She has spent one year as a visiting professor at the NSF Industry/University Cooperative Research Center on Intelligent Maintenance Systems (IMS), University of Cincinnati. Her research interests include design of experiments, lifetime modeling and accelerated testing.
Received 4 December 2015; revised 16 February 2016; accepted 22 February 2016
Available online 9 May 2016
Belief reliability;
Epistemic uncertainty;
Evidence theory;
Interval analysis;
Possibility theory;
Probability box;
Reliability metrics;
Uncertainty theory
© 2016 Chinese Society of Aeronautics and Astronautics. Production and hosting by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
* Corresponding author. Tel.: +86 10 82338236.
E-mail addresses: kangrui@buaa.edu.cn (R. Kang), zhangqingyuan@buaa.edu.cn (Q. Zhang), zhiguo.zeng@centralesupelec.fr (Z. Zeng), enrico.zio@ecp.fr (E. Zio), leexy@buaa.edu.cn (X. Li).
1 Postdoc researcher at CentraleSupelec.
Peer review under responsibility of Editorial Committee of CJA.