Yulong HUANG, Mingming BAI, Yonggang ZHANG
College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
Abstract: This paper presents a novel multiple-outlier-robust Kalman filter (MORKF) for linear stochastic discrete-time systems. A new multiple statistical similarity measure is first proposed to evaluate the similarity between two random vectors from dimension to dimension. Then, the proposed MORKF is derived by maximizing a multiple statistical similarity measure based cost function. The MORKF guarantees the convergence of iterations under mild conditions, and the boundedness of the approximation errors is analyzed theoretically. The selection strategy for the similarity function and comparisons with existing robust methods are presented. Simulation results show the advantages of the proposed filter.
Key words: Kalman filtering; Multiple statistical similarity measure; Multiple outliers; Fixed-point iteration; State estimate
The Kalman filter (KF) has played an important role in many engineering fields such as navigation, positioning, target tracking, control, and communications (Simon, 2006). The outlier interference problem often occurs in these applications because of unreliable sensor measurements, external disturbances, and unknown modeling errors. In general, a linear state-space model for such an outlier-corrupted state estimation problem can be formulated as follows:
x_k = F_k x_{k-1} + w_k,  z_k = H_k x_k + v_k,  (1)
where k denotes the discrete time index, x_k ∈ R^n and z_k ∈ R^m denote the state and measurement vectors, respectively, F_k ∈ R^(n×n) and H_k ∈ R^(m×n) represent the state transition and measurement matrices, respectively, and w_k ∈ R^n and v_k ∈ R^m denote the outlier-contaminated state and measurement noise vectors, respectively, both of which have non-Gaussian heavy-tailed distributions. Unfortunately, for an outlier-contaminated linear system, the optimality of the classical KF is violated, and its filtering performance degrades remarkably.
To solve this problem, many efforts have been made to improve the robustness of the classical KF. By using the influence function approach in the KF, a series of M-estimators has been constructed by minimizing a well-chosen robust cost function, among which the Huber KF (HKF) serves as the best-known M-estimator (Huber, 2011). An alternative robust M-estimate method, named the maximum correntropy KF (MCKF), has also been proposed by maximizing the correntropy of the predictive error and the residual error (Chen et al., 2017). To further exploit the heavy-tailed features inherent in the outlier-contaminated noise, many robust filters have been proposed based on non-Gaussian distribution modeling (Ting et al., 2007; Huang et al., 2016, 2019c; Roth et al., 2017), in which the robust Student's t-based KF (RSTKF) (Piché et al., 2012; Huang et al., 2017, 2019a, 2019b) acts as a typical example. Recently, a novel statistical similarity measure based Kalman filtering (SSMKF) framework has been proposed (Huang et al., 2020), which maximizes a statistical similarity measure based cost function. The SSMKF provides a general solution for outlier-contaminated linear systems, and it includes the popular RSTKF as a special case when a logarithmic similarity function is selected (Huang et al., 2020).
In a state-space model, the state and measurement variables often vary from dimension to dimension. For example, in a target tracking problem, the state variables of position and velocity have different magnitudes and propagation features, and the measurement variables suffer from different external disturbances. Consequently, the outliers of different state and measurement variables should indeed be different in intensity and occurrence probability. As a result, the outliers occurring in different state and measurement dimensions may possess different statistical properties in practical applications, and are therefore named multiple outliers in this study. The newly emerging SSMKF is incapable of addressing multiple outliers because it was designed under the assumption that the outliers occurring in different state and measurement dimensions have the same statistical properties (Huang et al., 2020). To some extent, the existing M-estimator can reduce the effects of multiple outliers (Huber, 2011), but the randomness inherent in the state vector is neglected, which limits its estimation accuracy. The main aspects of the methods mentioned above and the proposed filter are summarized in Table 1.
In this paper, we present a novel multiple-outlier-robust KF (MORKF) for linear stochastic discrete-time systems. A new multiple statistical similarity measure (MSSM) is first proposed to evaluate the similarity between two random vectors from dimension to dimension. The MORKF is developed by maximizing an MSSM-based cost function. Convergence is guaranteed under mild conditions, and the rationality of the assumptions is discussed. The similarity function selections and comparisons with existing robust KFs are also presented. Simulation results illustrate that the developed MORKF has improved estimation accuracy but a heavier computational burden than the existing HKF, MCKF, and SSMKF. Table 2 presents the acronyms and nomenclature used in this paper.
Table 1 Summary of the main points of the existing methods and the proposed filter
Table 2 Acronyms and nomenclature
Different from Huang et al. (2020), in this study we focus on evaluating the similarity between two random vectors, denoted as α and β, from dimension to dimension. Hence, a novel MSSM is proposed and formulated as follows:
s(α, β) = Σ_{i=1}^{p} E[f((α_i − β_i)²)],  (2)
where α and β denote two p-dimensional random vectors, and α_i and β_i are the i-th elements of α and β, respectively. f(·) denotes the similarity function, which satisfies the following conditions: (1) f(·) is continuous and differentiable on [0, +∞); (2) ḟ(l) < 0 on [0, +∞); (3) f̈(l) ≥ 0 on [0, +∞).
It is obvious from Eq. (2) that the proposed MSSM satisfies s(α, β) = s(β, α), which means that the proposed MSSM is a symmetric measure for the two evaluated random vectors α and β. The second condition indicates that the MSSM s(α, β) increases monotonically as the difference between the two evaluated random vectors α and β decreases, and vice versa.
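As an illustration, the dimension-wise measure can be approximated by Monte Carlo sampling. The sketch below assumes a correntropy-type similarity function f(l) = exp(−l/(2σ²)) with a hypothetical kernel size σ; it checks that the estimate is symmetric in its arguments and largest when the two vectors coincide.

```python
import numpy as np

def mssm(alpha_samples, beta_samples, f):
    """Monte Carlo estimate of the multiple statistical similarity
    measure: sum over dimensions i of E[f((alpha_i - beta_i)^2)]."""
    d2 = (alpha_samples - beta_samples) ** 2   # (N, p) per-dimension squared errors
    return float(np.sum(np.mean(f(d2), axis=0)))

sigma = 2.0                                    # assumed kernel size
f_exp = lambda l: np.exp(-l / (2 * sigma**2))  # assumed correntropy-type f

rng = np.random.default_rng(0)
a = rng.normal(size=(100000, 3))
b = a + 0.1 * rng.normal(size=(100000, 3))
s_ab = mssm(a, b, f_exp)
s_ba = mssm(b, a, f_exp)   # symmetry: identical by construction
```

Consistent with Proposition 1, the estimate attains its maximum p·f(0) (here 3·f(0) = 3) only when the two evaluated vectors coincide.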
Proposition 1 The proposed MSSM achieves its unique maximum if and only if α = β.
The proof of Proposition 1 is given in Appendix A.
The proposed MSSM is a generalized form of existing similarity measures. For instance, the proposed MSSM s(α, β) becomes the negative form of the well-known mean squared error (MSE) measure when f(·) is selected as f(l) = −l. In addition, the proposed MSSM s(α, β) is identical to the existing correntropy measure when f(·) is selected as the Gaussian-kernel form f(l) = exp(−l/(2σ²)) (Chen et al., 2017). The MSSM can be diverse when various similarity functions are selected, and thus different MORKFs can be constructed by maximizing the corresponding MSSM-based cost function.
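For example, selecting f(l) = −l makes the measure the negative of the summed per-dimension MSE, which a quick numerical check confirms:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=(50000, 4))
b = rng.normal(size=(50000, 4))
err2 = (a - b) ** 2                       # per-dimension squared errors

# f(l) = -l: the MSSM reduces to the negative summed per-dimension MSE
s_mse = float(np.sum(np.mean(-err2, axis=0)))
neg_total_mse = -float(np.sum(np.mean(err2, axis=0)))
```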
Remark 1 The statistical similarity measure proposed in Huang et al. (2020) is used to evaluate the overall similarity between two random vectors, which makes it suitable for detecting outliers with the same statistical properties in different dimensions. In contrast, the MSSM proposed in this study can be employed to evaluate the separate similarity between two random vectors from dimension to dimension, and it is therefore more suitable for detecting multiple outliers than the previous statistical similarity measure in Huang et al. (2020).
The core design of the proposed MORKF is to look for an optimal posterior PDF q*(x_k) via maximizing the MSSM-based cost function as follows:
where S_{k|k-1} and S_{R_k} are the square-root matrices of the nominal predictive error covariance matrix (PECM) P_{k|k-1} and the nominal measurement noise covariance matrix (MNCM) R_k, respectively, i.e., P_{k|k-1} = S_{k|k-1} S_{k|k-1}^T and R_k = S_{R_k} S_{R_k}^T.
Because the predictive mean vector can be calculated using Eq. (5) and the measurement vector z_k can be obtained from external sensors, these two quantities are completely known in the design of the proposed MORKF. As a result, the joint PDFs in Eq. (3) are marginalized, and the MSSM-based cost function can be rewritten as follows:
where f_x(·) and f_z(·) are the similarity functions with respect to the state and measurement vectors, respectively, and T_{ki} and U_{kj} are the column vectors of the inverse square-root matrices S_{k|k-1}^{-1} and S_{R_k}^{-1}, respectively, which are formulated as follows:
Due to the non-Gaussianity of the posterior PDF, it is not feasible to solve the maximization problem (7) analytically. To overcome this difficulty and obtain an approximate solution for Eq. (7), a heuristic idea is to assume the posterior PDF to be Gaussian, namely, q(x_k) ≈ N(x_k; μ_k, Σ_k). Based on this, the original MSSM-based cost function in Eq. (7) can be solved analytically by maximizing its lower bound. Employing Jensen's inequality, the right-hand terms of Eq. (7) have the following lower bounds:
Using inequalities (10) and (11) in Eq. (7) and the Gaussian assumption for the posterior PDF, the maximization problem is further converted to
where μ*_k denotes the optimal posterior mean vector and Σ*_k denotes the optimal posterior covariance matrix. Rearranging the lower bounds of Eq. (7), the approximated cost function J(μ_k, Σ_k) is given by
and the auxiliary matrices A_k and B_k are calculated as
Before presenting the main results, to aid the development, a cluster of intermediate variables and matrices is defined as follows:
where Λ_{μk}(μ_k, Σ_k) and Λ_{Σk}(μ_k, Σ_k) denote the Jacobian matrices of the cost function J(μ_k, Σ_k) with respect to the posterior mean vector μ_k and the posterior covariance matrix Σ_k, respectively, and Π_{μk}(μ_k, Σ_k) denotes the Hessian matrix of the cost function J(μ_k, Σ_k) with respect to μ_k.
Theorem 1 By solving the maximization problem (12), the optimal posterior mean vector μ*_k can be formulated as follows:
The weighted matrices Ψ*_{xk} and Ψ*_{zk} are formulated as
The auxiliary matrices are given by
The proof of Theorem 1 is given in Appendix B.
Theorem 1 indicates that the optimal posterior mean vector μ*_k is indeed a modified Kalman filtering estimate, obtained by using the modified PECM and the modified MNCM in Eqs. (20) and (21). Also, it is observed from Theorem 1 that the optimal posterior mean vector μ*_k depends on the optimal posterior covariance matrix Σ*_k. Similar to our previous work (Huang et al., 2020), we can obtain the following two propositions:
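Since Eqs. (19)-(21) are not reproduced in this excerpt, the sketch below only illustrates the shape of such a modified Kalman update, assuming the inflated forms P̃ = S Ψx⁻¹ Sᵀ and R̃ = S_R Ψz⁻¹ S_Rᵀ implied by the text; the function and its arguments are illustrative, not the paper's exact notation.

```python
import numpy as np

def modified_kf_update(mu_pred, P_pred, z, H, R, Psi_x, Psi_z):
    """One modified Kalman update: the nominal PECM/MNCM are inflated
    through the diagonal weights Psi_x, Psi_z (assumed forms of
    Eqs. (20)-(21): P~ = S Psi_x^{-1} S^T, R~ = S_R Psi_z^{-1} S_R^T)."""
    S = np.linalg.cholesky(P_pred)                  # square root of nominal PECM
    SR = np.linalg.cholesky(R)                      # square root of nominal MNCM
    P_mod = S @ np.linalg.inv(Psi_x) @ S.T          # modified PECM
    R_mod = SR @ np.linalg.inv(Psi_z) @ SR.T        # modified MNCM
    K = P_mod @ H.T @ np.linalg.inv(H @ P_mod @ H.T + R_mod)   # gain
    mu = mu_pred + K @ (z - H @ mu_pred)
    Sigma = (np.eye(len(mu_pred)) - K @ H) @ P_mod
    return mu, Sigma

# With identity weights the update reduces to the classical KF
n, m = 4, 2
P = np.diag([2.0, 2.0, 1.0, 1.0]); R = 0.5 * np.eye(m)
H = np.hstack([np.eye(m), np.zeros((m, n - m))])
mu0 = np.zeros(n); z = np.array([1.0, -1.0])
mu, Sigma = modified_kf_update(mu0, P, z, H, R, np.eye(n), np.eye(m))
```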
Proposition 2 The result of μ*_k given in Eq. (19) is a global optimal solution if and only if the following inequalities hold:
The proof of Proposition 2 is given in Appendix C.
Proposition 3 The approximated cost function J(μ_k, Σ_k) is monotonically decreasing with respect to Σ_k.
The proof of Proposition 3 is given in Appendix D.
Proposition 2 provides a sufficient condition to guarantee that the result μ*_k in Eq. (19) is a global optimal solution. Proposition 3 implies that the optimal solution of the posterior covariance matrix Σ*_k should be selected as the lower bound of Σ_k. Therefore, to guarantee the filtering consistency, the optimal posterior covariance matrix Σ*_k is given by
Substituting Eqs. (20) and (21) into Eq. (27) and using the well-known matrix inversion lemma yields
It can be observed from Theorem 1 and Eq. (28) that the optimal posterior mean vector μ*_k and the optimal posterior covariance matrix Σ*_k are inter-coupled. Then, the fixed-point iteration method is employed to solve for μ*_k and Σ*_k approximately, from which the proposed MORKF can be implemented, as summarized in Algorithm 1, where ε denotes the iteration threshold and N_v denotes the maximum number of iterations.
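Algorithm 1 itself is not reproduced in this excerpt, so the following is a plausible sketch of the fixed-point loop under two stated assumptions: the square-root similarity function takes the form f(l) = −√(ω(ω+l)), and each diagonal weight is Ψ(i,i) = −2ḟ(l̂_i) = √(ω/(ω + l̂_i)), where l̂_i is the expected squared normalized error in dimension i. Both choices are consistent with Remark 2 (the weights tend to 1, i.e., the classical KF, as ω → +∞), but they are assumptions, not the paper's exact equations.

```python
import numpy as np

def morkf_update(mu_pred, P_pred, z, H, R, omega=5.0, n_iter=50, tol=1e-16):
    """Fixed-point MORKF-style update (a sketch, not the paper's exact
    Algorithm 1). Assumed weight rule: Psi(i,i) = sqrt(omega/(omega+l_i))."""
    n = len(mu_pred)
    S = np.linalg.cholesky(P_pred); SR = np.linalg.cholesky(R)
    Sinv, SRinv = np.linalg.inv(S), np.linalg.inv(SR)
    mu, Sigma = mu_pred.copy(), P_pred.copy()
    for _ in range(n_iter):
        mu_old = mu.copy()
        # expected per-dimension squared normalized errors
        ex = Sinv @ (mu - mu_pred)
        lx = ex**2 + np.diag(Sinv @ Sigma @ Sinv.T)
        ez = SRinv @ (z - H @ mu)
        lz = ez**2 + np.diag(SRinv @ H @ Sigma @ H.T @ SRinv.T)
        Psi_x = np.diag(np.sqrt(omega / (omega + lx)))   # state weights
        Psi_z = np.diag(np.sqrt(omega / (omega + lz)))   # measurement weights
        P_mod = S @ np.linalg.inv(Psi_x) @ S.T           # modified PECM
        R_mod = SR @ np.linalg.inv(Psi_z) @ SR.T         # modified MNCM
        K = P_mod @ H.T @ np.linalg.inv(H @ P_mod @ H.T + R_mod)
        mu = mu_pred + K @ (z - H @ mu_pred)             # modified KF estimate
        Sigma = (np.eye(n) - K @ H) @ P_mod              # Eq. (28)-style form
        if np.linalg.norm(mu - mu_old) < tol:
            break
    return mu, Sigma

# An outlier in one measurement dimension is down-weighted there only
mu_out, _ = morkf_update(np.zeros(2), np.eye(2),
                         np.array([0.1, 20.0]), np.eye(2), np.eye(2))
mu_nom, _ = morkf_update(np.zeros(2), np.eye(2),
                         np.array([0.1, 0.2]), np.eye(2), np.eye(2))
```

With a clean measurement the estimate stays close to the classical KF result, while the dimension carrying the outlier receives a much smaller gain than the classical value.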
Next, the convergence conditions of the fixed-point iteration will be provided.
Proposition 4 If the iterative initial value μ_k^(0) and the optimal solution μ*_k are close enough and the following inequalities hold for all l ≥ 0, the fixed-point iteration approach will converge locally:
where θ1 and θ2 are arbitrary positive finite real numbers.
The proof of Proposition 4 is given in Appendix E.
2.3.1 Computational complexity analysis
Next, we analyze and compare the computational complexities of the proposed MORKF and the existing SSMKF by counting the number of floating-point operations (NoFPO). Taking the square-root similarity functions as an example, the NoFPOs of the main equations are listed in Table 3.
The fixed-point iteration method is employed to implement the proposed MORKF. Assuming that the average fixed-point iteration number is N_m, the NoFPO of the proposed MORKF can be calculated according to Table 3:
Table 3 NoFPOs of some main equations
Similarly, the NoFPO of the existing SSMKF (Huang et al., 2020) can be calculated as follows:
Comparing Eqs. (30) and (31), we find that the computational complexity of the proposed MORKF is moderately greater than that of the existing SSMKF because of the different adjustment styles when facing outliers. However, note that the weighted matrices Ψ*_{xk} and Ψ*_{zk} in Eqs. (20) and (21) are diagonal matrices, so their inverses are easy to obtain. Considering the improved performance in addressing multiple outliers, the increased computational complexity of the proposed MORKF is acceptable.
2.3.2 Approximation error analysis
Three assumptions are presented to facilitate the derivation of the proposed MORKF:
Assumption 1 The optimal posterior PDF q*(x_k) is assumed to be Gaussian.
Assumption 2 The lower bound of the original MSSM-based cost function is maximized.
Assumption 3 The lower bound of the posterior covariance matrix Σ_k is assumed to be the estimation error covariance matrix of the modified Kalman filter with the modified PECM and the modified MNCM.
As for Assumption 1, in Bayesian filtering it is always difficult to analytically formulate the non-Gaussian posterior PDF caused by the state and measurement outliers (Roth et al., 2017). In this study, to overcome this difficulty, the widely accepted Gaussian assumption is employed to seek an analytical representation of the posterior PDF, thereby reaching a compromise between filtering accuracy and computational burden. Although such a Gaussian assumption may introduce approximation errors into the posterior PDF to some extent, it exhibits satisfactory filtering accuracy with a tolerable computational burden in engineering practice, as shown in the later simulation study. Therefore, the Gaussian assumption for the non-Gaussian posterior PDF is reasonable.
Next, we analyze the influence of Assumption 2. For ease of description, a cluster of variables is defined as
where i = 1, 2, ..., n, j = 1, 2, ..., m, and the overbarred quantities denote the expectations of L^i_{1k} and L^j_{2k} with respect to the optimal posterior PDF q*(x_k). The first-order Taylor series expansions are performed on f_x(l) and f_z(l) at these expectation points, respectively, which yields
where o(·) denotes the high-order terms of the Taylor series expansions. Dropping the high-order terms yields the following first-order linearization approximations:
By employing methods similar to those in Huang et al. (2020), we obtain the following propositions:
Proposition 5 Using the first-order linearization approximations (Eq. (33)) and the Gaussian assumption for the posterior PDF in Eq. (7), the original maximization problem (7) becomes the approximate maximization problem (12).
The proof of Proposition 5 is given in Appendix F.
Proposition 6 The variances of the auxiliary variables L^i_{1k} and L^j_{2k} are upper-bounded, and can be formulated as follows:
The proof of Proposition 6 is given in Appendix G.
Proposition 5 means that the approximation errors of Assumption 2 are dominated mainly by the second-order moments of L^i_{1k} and L^j_{2k}. Meanwhile, the boundedness of these second-order moments, which is given in Proposition 6, guarantees that the approximation errors of Assumption 2 are bounded. As shown in inequality (35), these two upper bounds depend critically on the posterior covariance matrix Σ_k and the state dimension. The larger the posterior covariance matrix Σ_k or the state dimension n, the larger the induced approximation errors. Because the posterior covariance matrix decreases as the filter converges, the approximation errors induced by Assumption 2 can be limited. Moreover, the exemplary similarity functions provided in this study possess much smaller high-order derivatives than first-order derivatives, which makes Assumption 2 more reasonable.
As for Assumption 3, the outlier-robust Kalman filters tend to generate a larger posterior covariance matrix Σ_k than the classical Kalman filter with the modified PECM and the modified MNCM. Such a constraint is often beneficial in guaranteeing filtering consistency and stability.
The selection strategy for the similarity functions f_x(·) and f_z(·) is discussed in this subsection to facilitate the implementation of the proposed MORKF. First, the optimality of the proposed MORKF should be guaranteed when the state and measurement noises are Gaussian-distributed. According to Huang et al. (2020), the auxiliary matrices A*_k and B*_k can be approximated as the nominal one-step PECM P_{k|k-1} and MNCM R_k, respectively, when the state and measurement noises are Gaussian-distributed, i.e.,
According to Eqs. (8), (9), and (36), we have
Exploiting Eqs. (37) and (38) yields
Substituting Eq. (39) into Eqs. (22) and (23), the diagonal weighted matrices can be rewritten as follows:
Algorithm 1 indicates that the proposed MORKF degrades into the classical KF when the weighted matrices satisfy Ψ^(i)_{xk} = I_n and Ψ^(i)_{zk} = I_m. To meet these conditions, according to Eqs. (40) and (41), the similarity functions f_x(·) and f_z(·) should satisfy
Next, we need to guarantee the robustness of the proposed MORKF. If the state and measurement noises are contaminated by outliers, the auxiliary matrices A*_k and B*_k satisfy the following inequalities (Huang et al., 2020):
Using the above inequalities, we can obtain the following theorem:
Theorem 2 For a linear system, the proposed MORKF exhibits robustness if the similarity functions f_x(·) and f_z(·) are chosen such that
and then the diagonal weighted matrices satisfy
The proof of Theorem 2 is given in Appendix H.
Employing inequality (45) in Eqs. (20) and (21) yields
Inequality (46) indicates that the modified PECM and the modified MNCM are not less than the nominal PECM and the nominal MNCM, respectively. Furthermore, according to the second and third conditions on the similarity function f(·) and Eqs. (20) and (23), severe outliers may result in small diagonal elements of Ψ*_{xk} and Ψ*_{zk}, and then significantly modified PECM and MNCM will be obtained. Consequently, the modified PECM and MNCM can be adaptively adjusted according to the intensity and occurrence probability of the outliers.
Several similarity functions are recommended and listed in Table 4, from which several exemplary MORKFs can be obtained, where σ denotes the kernel size, and ν and ω denote the degree-of-freedom (DOF) parameters. It is easy to demonstrate that all the similarity functions listed in Table 4 satisfy the conditions of Proposition 2. To meet the constraints of Propositions 2 and 4, the recommended similarity functions need to satisfy the following corollaries:
Table 4 Recommended similarity functions f(·) and their first- and second-order derivatives
Corollary 1 For the recommended similarity functions listed in Table 4, inequality (26) in Proposition 2 holds only when the following conditions are satisfied:
where L^{i*}_{1k} and L^{j*}_{2k} are as given in Eq. (32).
The proof of Corollary 1 is given in Appendix I.
Corollary 2 The positive finite real numbers θ1 and θ2 in Proposition 4 exist only when the similarity-function parameters satisfy the conditions derived in Appendix J.
The proof of Corollary 2 is given in Appendix J.
Remark 2 It is observed that all the first-order derivatives of the recommended similarity functions in Table 4 tend to −0.5 when σ, ν, and ω tend to infinity, i.e., {σ, ν, ω} → +∞. Therefore, the resultant exemplary MORKFs degrade into the classical KF when {σ, ν, ω} → +∞.
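The exact entries of Table 4 are not reproduced in this excerpt. The snippet below therefore checks assumed illustrative forms of exponential, logarithmic, and square-root similarity functions against the conditions on f(·) (ḟ < 0 and f̈ ≥ 0 on [0, +∞)), and numerically confirms the Remark-2-style limit ḟ → −0.5 as the parameter grows; the specific formulas are assumptions, not the paper's Table 4.

```python
import numpy as np

sigma, nu, omega = 2.0, 5.0, 5.0
# Assumed illustrative forms (not the paper's exact Table 4 entries):
candidates = {
    "exponential": lambda l, s=sigma: s**2 * np.exp(-l / (2 * s**2)),
    "logarithmic": lambda l, v=nu: -(v / 2) * np.log(1 + l / v),
    "square-root": lambda l, w=omega: -np.sqrt(w * (w + l)),
}

def d1(f, l, h=1e-5):
    return (f(l + h) - f(l - h)) / (2 * h)          # first derivative (central diff)

def d2(f, l, h=1e-4):
    return (f(l + h) - 2 * f(l) + f(l - h)) / h**2  # second derivative

grid = np.linspace(0.1, 30.0, 200)
conditions_hold = all(
    np.all(d1(f, grid) < 0) and np.all(d2(f, grid) >= -1e-6)
    for f in candidates.values()
)
# first derivative tends to -0.5 as the parameter grows
# (checked here for the assumed logarithmic form with very large nu)
slope_limit = d1(lambda l: -(1e8 / 2) * np.log(1 + l / 1e8), 1.0)
```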
The M-estimator is a generalized maximum likelihood estimator that provides a robust state estimate by solving the following minimization problem (Huber, 2011):
where the cost function ρ(x_k) can be formulated as
where ρ_x(·) and ρ_z(·) are the robust cost functions applied to the predictive errors and the residual errors, respectively.
It is observed from Eqs. (12), (13), (48), and (49) that the cost function of the proposed MORKF has a form similar to that of the M-estimator. However, the M-estimator optimizes only the state vector, whereas the proposed MORKF optimizes the state vector and the covariance matrix simultaneously. The M-estimator takes the stochastic state as a deterministic quantity and updates the covariance matrix independently of the optimization of the cost function ρ(x_k). The proposed MORKF exploits the randomness of the stochastic state by updating the state and the covariance matrix alternately, which benefits its performance.
Remark 3 The existing SSMKF (Huang et al., 2020) was derived under the assumption that the outliers occurring in different state and measurement dimensions have the same statistical properties, which may not be suitable in scenarios with multiple outliers. By constructing a new MSSM-based cost function that imposes separate constraints on each state and measurement dimension, the proposed MORKF produces a pair of diagonal weighted matrices Ψ*_{xk} and Ψ*_{zk} to adaptively adjust the nominal PECM and MNCM, respectively, rather than the two scalar scale factors of the SSMKF.
The performance of the proposed MORKF is validated in a two-dimensional (2D) maneuvering target tracking example, where the positions are observed in a noisy scenario, and the positions and velocities are estimated simultaneously. The state transition matrix F_k and the measurement matrix H_k are given with sampling period T = 1 s. The state and measurement noises can be formulated as w_k = [w_{1,k}, w_{2,k}, w_{3,k}, w_{4,k}]^T and v_k = [v_{1,k}, v_{2,k}]^T, respectively, whose nominal noise covariance matrices are given by Q and R = rI_2 (r denotes a scale factor), respectively.
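The entries of F_k, H_k, and Q are elided in this excerpt; the sketch below builds a standard constant-velocity model with an assumed state ordering [p_x, p_y, v_x, v_y], which matches the pairing of dimensions (1,3) and (2,4) mentioned for the second-stage noise covariances. This is an assumption for illustration, not the paper's exact matrices.

```python
import numpy as np

T = 1.0                                   # sampling period (s), as in the setup
I2, O2 = np.eye(2), np.zeros((2, 2))
# Assumed ordering [px, py, vx, vy]: positions first, then velocities
F = np.block([[I2, T * I2], [O2, I2]])    # constant-velocity state transition
H = np.block([I2, O2])                    # only positions are observed
r = 100.0
R = r * I2                                # nominal MNCM, R = r * I_2
```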
The superiority of the proposed MORKF is evaluated through comparisons with the classical KF with nominal noise covariance matrices (KFNNCM), the existing HKF (Huber, 2011), MCKF (Chen et al., 2017), and SSMKF (Huang et al., 2020). As discussed in our previous work (Huang et al., 2020), the SSMKF achieves its best estimation accuracy when the similarity function is selected as the square-root function, and the resultant SSMKF has better estimation performance than the existing RSTKF (Huang et al., 2017). To better show the advantages of the proposed method, the square-root similarity functions are used to implement the existing SSMKF and the proposed MORKF; the two resulting algorithms are abbreviated as SSMKF-sqrt and MORKF-sqrt, respectively. The parameter settings for all algorithms are listed in Table 5. The iteration threshold is set as ε = 10^-16 and the maximum number of iterations is set as N_m = 50. The simulation time is 1000 s, and 1000 Monte Carlo runs are executed. All the algorithms are coded in MATLAB and executed on a computer with an Intel Core i3-3110M CPU @ 2.40 GHz.
Table 5 Parameter settings for compared algorithms
Case 1: We consider the case where both state and measurement noises are Gaussian-mixture distributed. In particular, in the first stage, the identical outlier-contaminated state and measurement noises are produced as follows:
where “w.p.” is short for“with probability.”
In the second stage, the noise covariance matrices for [w_{1,k}, w_{3,k}]^T and [w_{2,k}, w_{4,k}]^T are defined separately, and the multiple-outlier-contaminated state and measurement noises are produced as follows:
where the coefficients are U1 = U4 = 300, U2 = 400, U3 = 500, and r = 100.
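Eqs. (50)-(51) are not reproduced here, so the mixture weights below (outlier probability 0.1 and the per-dimension inflation scales) are assumptions chosen for illustration; the sketch only shows the mechanism of drawing dimension-wise contaminated-Gaussian noise in which different dimensions carry outliers of different intensity.

```python
import numpy as np

rng = np.random.default_rng(42)

def mixture_noise(cov_nominal, scales, p_outlier, size):
    """Per-dimension contaminated-Gaussian noise: with probability
    p_outlier[i], dimension i has its variance inflated by scales[i]
    (probabilities and scales are illustrative assumptions)."""
    d = len(scales)
    std = np.sqrt(np.diag(cov_nominal))
    base = std * rng.normal(size=(size, d))
    hit = rng.random((size, d)) < np.asarray(p_outlier)   # outlier indicator
    return np.where(hit, base * np.sqrt(np.asarray(scales)), base)

Q = np.eye(4)   # placeholder nominal covariance (the paper's Q is elided)
# Only dimensions 2 and 4 carry outliers: multiple outliers of different scale
w = mixture_noise(Q, scales=[1.0, 400.0, 1.0, 300.0],
                  p_outlier=[0.0, 0.1, 0.0, 0.1], size=200000)
```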
Figs. 1 and 2 illustrate the RMSEs of position and velocity for all compared filters. We can observe that the proposed MORKF-sqrt has performance similar to that of the existing SSMKF-sqrt in the first stage but exhibits the smallest RMSEs in the second stage. In the first stage, both the proposed MORKF and the existing SSMKF outperform the HKF and MCKF because the randomness inherent in the stochastic state vector is extensively exploited by using the posterior covariance matrix during the fixed-point iteration. However, in the second stage, the SSMKF is inferior to the proposed MORKF because the SSMKF is constructed under the assumption that the outliers occurring in different dimensions possess the same statistical properties. The steady-state ARMSEs during the second stage (600-1000 s) and the runtime of a single step are summarized in Table 6. It can be seen from Table 6 that in the scenario with multiple-outlier-corrupted state and measurement noises, the proposed MORKF-sqrt has smaller steady-state ARMSEs than the existing filters in position and velocity, but a greater runtime is required. Compared with the ARMSE_pos and ARMSE_vel of the MCKF, the steady-state ARMSEs of the proposed MORKF-sqrt are reduced by 25.55% and 4.60% in position and velocity, respectively.
Fig. 1 Root mean square errors (RMSEs) of position from all filters in case 1 (References to color refer to the online version of this figure)
Fig. 2 Root mean square errors (RMSEs) of velocity from all filters in case 1 (References to color refer to the online version of this figure)
Table 6 Steady-state ARMSEs during 600-1000 s and runtime in a single step in case 1
Next, we describe why the proposed MORKF outperforms the existing SSMKF for multiple outliers. The diagonal elements of the weighted diagonal matrices Ψ_xk and Ψ_zk from the MORKF, as well as the scalar scale factors ξ_k and λ_k from the SSMKF, are collected and averaged over 1000 Monte Carlo runs, and are depicted in Figs. 3 and 4, respectively. In the first stage, the diagonal elements of the weighted matrix Ψ_xk and the scalar scale factor ξ_k are of similar magnitude, as are the matrix Ψ_zk and the scalar λ_k, which results in similar performance of the MORKF and SSMKF in the scenario with the same form of outliers. However, in the second stage, the scalar factors ξ_k and λ_k and the diagonal elements Ψ_xk(2,2), Ψ_xk(4,4), and Ψ_zk(2,2) are reduced to accommodate the suddenly increased state outliers in the second and fourth dimensions and the suddenly increased measurement outliers in the second dimension, as described in Eq. (51). This results in dimension-wise enlargements of the PECM and MNCM in the proposed MORKF, rather than the scalar enlargements of the PECM and MNCM in the SSMKF. Therefore, the proposed MORKF produces a pair of diagonal weighted matrices Ψ_xk and Ψ_zk to adaptively adjust the nominal PECM and MNCM, respectively, rather than the two scalar scale factors ξ_k and λ_k of the SSMKF, which leads to better estimation accuracy than the existing cutting-edge SSMKF in addressing multiple outliers.
Fig. 3 Comparisons of scalar-scale factor ξk from SSMKF and diagonal elements of Ψ xk from MORKF in case 1 (References to color refer to the online version of this figure)
Fig. 4 Comparisons of scalar-scale factor λk from SSMKF and diagonal elements of Ψzk from MORKF in case 1 (References to color refer to the online version of this figure)
Case 2: In this case, the identical outlier-contaminated state and measurement noises in the first stage are generated in the same way as in case 1. However, in the second stage, the state and measurement noises are produced by mixing a Gaussian distribution and a uniform distribution in some dimensions. The specific formulations of noise generation are given as follows:
where U(σ; a, b) denotes that the variable σ is randomly drawn from a uniform distribution on [a, b].
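Eq. (52) is likewise elided; the sketch below mixes a nominal Gaussian with a uniform outlier component in a single dimension, with an assumed mixing probability p = 0.1 and support [−50, 50] chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100000
p, a, b = 0.1, -50.0, 50.0        # assumed mixing probability and support
gauss = rng.normal(size=N)        # nominal Gaussian component
unif = rng.uniform(a, b, size=N)  # uniform outlier component
hit = rng.random(N) < p           # which samples are outliers
v2 = np.where(hit, unif, gauss)   # Gaussian-uniform mixture in one dimension
```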
Fig. 5 Root mean square errors (RMSEs) of position from all filters in case 2 (References to color refer to the online version of this figure)
The simulation results are given in Figs. 5 and 6, and the ARMSEs are summarized in Table 7. Similar to case 1, the proposed MORKF-sqrt has performance similar to that of the existing SSMKF-sqrt in the first stage. However, the proposed MORKF-sqrt presents the best performance in the second stage, because the multiple outliers can be addressed from dimension to dimension by optimizing the proposed MSSM-based cost function.
In this paper, we have presented a novel MORKF for linear stochastic discrete-time systems. To evaluate the similarity between two random vectors from dimension to dimension, a new MSSM was first introduced. The MORKF was derived by maximizing an MSSM-based cost function. To illustrate the effectiveness and superiority of the proposed MORKF, theoretical analysis and discussion have been provided, and the similarity function selections and comparisons with existing robust methods have also been presented. Simulation results demonstrated that the developed MORKF outperforms the existing cutting-edge robust KFs in terms of estimation accuracy for linear systems when the state and measurement noises are corrupted by multiple outliers.
Fig. 6 Root mean square errors (RMSEs) of velocity from all filters in case 2 (References to color refer to the online version of this figure)
Contributors
Yulong HUANG designed the algorithm. Yulong HUANG and Mingming BAI coded the simulation. Mingming BAI drafted the paper. Yonggang ZHANG helped organize the paper. Mingming BAI and Yonggang ZHANG revised and finalized the paper.
Compliance with ethics guidelines
Yulong HUANG, Mingming BAI, and Yonggang ZHANG declare that they have no conflict of interest.
Appendix A: Proof of Proposition 1
Using inequality (A1) in Eq. (2), we obtain
Hence, the maximum point p·f(0) is reached only when α = β.
Appendix B: Proof of Theorem 1
The Jacobian matrix of J(μ_k, Σ_k) with respect to μ_k can be formulated as follows:
Define a pair of diagonal weighted matrices:
and then the Jacobian matrix in Eq. (B1) can be rewritten as follows:
According to the maximum point criterion, exploiting Λ*_{μk}(μ*_k, Σ*_k) = 0 yields
Substituting Eqs. (20) and (21) into Eq. (B3) and using the matrix inversion lemma in Simon (2006) yields the results in Theorem 1.
Appendix C: Proof of Proposition 2
Using Eqs. (13) and (16), the Hessian matrix of J(μ_k, Σ_k) with respect to μ_k can be given by
where the quadratic term O*_{μk} is given by
and the auxiliary matrices D_{ki} and F_{kj} satisfy (Huang et al., 2020)
Exploiting Eqs. (16) and (20)-(23) in inequality (C3) yields
Substituting inequality (C4) into Eq. (C2), the Hessian matrix satisfies
The Hessian matrix is negative definite if the conditions in Proposition 2 hold.
Appendix D: Proof of Proposition 3
Using Eqs. (14) and (15) in Eq. (13) and calculating the derivative of the resultant cost function J(μ_k, Σ_k) with respect to Σ_k, the Jacobian matrix Λ_{Σk}(μ_k, Σ_k) can be formulated as follows:
According to the second condition on f(·), both Ψ_xk and Ψ_zk are positive definite. Therefore, the Jacobian matrix Λ_{Σk}(μ_k, Σ_k) in Eq. (D1) is negative definite.
Appendix E: Proof of Proposition 4
Dropping the quadratic term in Eq. (C1) and taking the Frobenius norm of the derivative of the modified Hessian matrix yield
where
According to Eqs. (14) and (15), we obtain
Using inequality (29) and Eq. (C2) in Eq. (B4) yields
Similar to our previous work (Huang et al., 2020), using inequality (C3), we can prove that the modified Hessian matrix satisfies the Lipschitz condition, from which the results in Proposition 4 hold.
Appendix F: Proof of Proposition 5
Substituting Eq. (34) into Eq. (7) yields
where c{μ_k, Σ_k} denotes a constant independent of μ_k and Σ_k.
Taking derivatives of the resultant cost function yields
where O(μ_k, Σ_k) represents the quadratic term of the Hessian matrix of the original cost function J(μ_k, Σ_k) with respect to μ_k, and is given by
Appendix G: Proof of Proposition 6
The variable in Eq. (32) can be rewritten as follows:
By introducing the posterior covariance matrix Σ_k, the formulation in Eq. (G1) can be further rewritten as
The random vector e_x follows a standard normal distribution, namely, e_x ~ N(e_x; 0, I_n).
The variance of L^i_{1k} can be formulated as
where the terms in Eq. (G3) can be given by
where γ_x = ‖e_x‖² follows a chi-square distribution with DOF parameter n, namely, γ_x ~ χ²(n). According to the properties of the chi-square distribution, the mean of the random variable γ_x is n and its variance is 2n.
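The chi-square facts used here (E[γ_x] = n, Var[γ_x] = 2n for γ_x = ‖e_x‖² with e_x standard normal) can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# gamma_x = ||e_x||^2 with e_x ~ N(0, I_n) is chi-square with n DOF:
# its mean is n and its variance is 2n
ex = rng.normal(size=(500000, n))
gamma = np.sum(ex**2, axis=1)
mean_est, var_est = gamma.mean(), gamma.var()
```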
Taking expectations on both sides of inequality (G5), we obtain
Then we have
By similar means, we can obtain the following result:
Assuming that L^i_{1k} and L^j_{2k} are independent of each other, we obtain inequality (35).
Appendix H: Proof of Theorem 2
Using Eq. (4) and inequality (43) yields
According to condition (44), we have
Substituting inequality (C5) into Eqs. (22) and (23) yields the results given in Theorem 2.
Appendix I: Proof of Corollary 1
Substituting Eq. (32) into Eq. (16), inequality (26) can be reformulated as follows:
Using the exemplary similarity functions given in Table 4 one by one in inequality (I1), we obtain the results in inequality (47).
Appendix J: Proof of Corollary 2
For the case of the exponential similarity function, constructing an auxiliary function h(l) = f̈(l²)·l and differentiating it yield
It is obvious that h(l) has a unique maximum, which is positively bounded, when
By similar means, the logarithmic and square-root similarity functions can be verified to satisfy the inequalities given in inequality (29) when
Frontiers of Information Technology & Electronic Engineering, 2022, Issue 3