Description
Product information
- Publication date: 2026-11-11
- Dimensions: 156 x 234 mm
- Format: Hardcover
- Language: English
- Number of pages: 560
- Publisher: Taylor & Francis Ltd
- ISBN: 9781041286134
More about the author
Yuriy S. Shmaliy, IEEE Life Fellow, AAIA Fellow, AIIA Fellow (founder of the UFIR approach and originator of Shmaliy's Discrete Orthogonal Polynomials), received the B.S., M.S., and Ph.D. degrees in Electrical Engineering from the Kharkiv Aviation Institute, Kharkiv, Ukraine, in 1974, 1976, and 1982, respectively, and the Dr.Sc. degree in Electrical Engineering from the USSR Government in 1992. He has been a Full Professor since 1986. From 1985 to 1999, he was with Kharkiv Military University, Kharkiv, Ukraine. In 1992, he founded the Scientific Center Sichron and served as its Director until 2002. Since 1999, he has been with the Universidad de Guanajuato, Guanajuato, Mexico, where he headed the Department of Electronics Engineering from 2012 to 2015. He has published 564 journal and conference papers and holds 81 patents. He has authored the books Continuous-Time Signals (Springer, 2006), Continuous-Time Systems (Springer, 2007), GPS-Based Optimal FIR Filtering of Clock Models (Nova Science Publ., 2009), and Optimal and Robust State Estimation: Finite Impulse Response (FIR) and Kalman Approaches (Wiley & Sons, 2022), named one of the best estimation theory books of all time by BookAuthority. He also edited the book Probability: Interpretation, Theory and Applications (Nova Science Publ., 2012). Prof. Shmaliy pioneered the theory of optimal and robust Finite Impulse Response (FIR) state estimation and coined the Unbiased FIR (UFIR) state estimator, now widely used by the filtering research community as a robust alternative to the Kalman filter for diverse estimation problems. He also discovered a new class of discrete orthogonal polynomials (DOP).
In recognition of these pioneering contributions, the DOP named after him are called "Discrete Shmaliy Moments" or "Shmaliy DOP" and have been developed into the "Discrete Shmaliy Transform." He was awarded the title of Honorary Radio Engineer of the USSR in 1991, served on the Ukrainian State Award Committee on Science and Technology in 1998-1999, and has been a member of the IEEE Fellow Committee for 2023-2026. He received the Royal Academy of Engineering Newton Research Collaboration Program Award in 2015, the IEEE Latin America Eminent Engineer Award in 2021, and several best conference paper awards. He has been invited many times to give tutorial, seminar, and plenary lectures.
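The UFIR estimator mentioned above admits a compact batch form: over a horizon of the N most recent measurements, the state at the end of the horizon is found by plain least squares, with no noise covariances required. A minimal NumPy sketch under assumed conditions (LTI model, invertible state matrix F, noise-free illustration data; `ufir_batch` is a hypothetical helper name, not taken from the book):

```python
import numpy as np

def ufir_batch(zs, F, H):
    """Batch unbiased FIR estimate of the state at the last sample of the
    horizon, by least squares over the N stacked measurements."""
    N = len(zs)
    Fi = np.linalg.inv(F)
    # Measurement z_{k-N+1+i} relates to the final state x_k via H F^{-(N-1-i)}.
    HN = np.vstack([H @ np.linalg.matrix_power(Fi, N - 1 - i) for i in range(N)])
    Z = np.asarray(zs, dtype=float).reshape(N, -1)
    xk, *_ = np.linalg.lstsq(HN, Z, rcond=None)
    return xk.ravel()

# Constant-velocity model sampled at T = 1 (illustration data).
T = 1.0
F = np.array([[1.0, T], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
true_x0 = np.array([0.0, 2.0])  # position 0, velocity 2
zs = [(np.linalg.matrix_power(F, k) @ true_x0)[0] for k in range(8)]
print(ufir_batch(zs, F, H))  # noise-free data: recovers [14., 2.]
```

Because the estimate uses no noise statistics, it is insensitive to mis-specified covariances, which is the source of the robustness noted above; the trade-off is that the horizon length N must be chosen, a topic the book treats in detail.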
Table of contents
Foreword
Preface
Acronyms
1 Introduction
  1.1 Brief pre-Kalman history
  1.2 Kalman filtering approach
    1.2.1 Recursive filtering estimate
  1.3 Dynamic process in state space
    1.3.1 What is system state?
    1.3.2 What do we need to estimate state?
    1.3.3 What model to estimate state?
    1.3.4 What is state estimation problem?
      1.3.4.1 Filtering
      1.3.4.2 Smoothing
      1.3.4.3 Prediction
    1.3.5 What types of linear state estimators?
      1.3.5.1 Unbiased estimator
      1.3.5.2 Optimal estimator
      1.3.5.3 Optimal unbiased (maximum likelihood) estimator
    1.3.6 What criteria to evaluate estimator?
  1.4 Properties of Kalman filtering
    1.4.1 Linearity
    1.4.2 Dimensionality
    1.4.3 Optimality
    1.4.4 Effectiveness
    1.4.5 General functions
    1.4.6 Stability
  1.5 Summary
  1.6 Problems
Part I Optimal and Suboptimal Estimates
2 Kalman Filter for Beginners
  2.1 Continuous-time stochastic system
    2.1.1 Representation in state space
    2.1.2 General state space model
  2.2 Discrete-time state-space model
    2.2.1 Euler's methods of system discretization
    2.2.2 LTI systems
      2.2.2.1 Forward Euler method
      2.2.2.2 Backward Euler method
    2.2.3 LTV systems
      2.2.3.1 Forward Euler method
      2.2.3.2 Backward Euler method
  2.3 Intuitive derivation of the Kalman filter
    2.3.1 Basic a posteriori Kalman filter
    2.3.2 Alternate a posteriori Kalman filter
    2.3.3 The a priori Kalman filter
    2.3.4 Stationary Kalman filter
  2.4 Algorithmic variants of the Kalman filter
    2.4.1 Information Kalman filter
      2.4.1.1 Information Kalman filtering algorithm
    2.4.2 Backward Kalman filter
      2.4.2.1 Backward Kalman filtering algorithm
    2.4.3 Forward-backward (two-filter) smoothing
  2.5 Kalman-Bucy filter
  2.6 Unbiasedness and stability of Kalman filter
    2.6.1 Unbiasedness
    2.6.2 Stability
  2.7 Summary
  2.8 Problems
3 Bayesian Approach
  3.1 Conditional probability
    3.1.1 Bayes' theorem
    3.1.2 Conditional probability density
      3.1.2.1 Two random variables
      3.1.2.2 Multiple random variables
  3.2 Bayesian estimator (filter)
    3.2.1 Linear model
      3.2.1.1 Linear model (scalar case)
      3.2.1.2 Linear model (vector case)
    3.2.2 Nonlinear model
  3.3 Bayesian filtering of Gaussian models
    3.3.1 Time update
    3.3.2 Measurement update
    3.3.3 Recursive Gaussian filter (nonlinear case)
      3.3.3.1 Non-additive noise case
    3.3.4 Recursive Gaussian filter (linear case)
  3.4 Kalman filtering
    3.4.1 Alternate Kalman filter recursions (scalar case)
    3.4.2 Alternate Kalman filter recursions (vector case)
    3.4.3 Basic Kalman filter recursions
  3.5 Kalman smoothing
    3.5.1 Kalman smoother recursions
    3.5.2 Rauch-Tung-Striebel smoother
      3.5.2.1 Kalman-RTS smoothing algorithm
  3.6 Sequential Monte Carlo methods
  3.7 Summary
  3.8 Problems
4 Convolution-based approach
  4.1 Convolution forms
    4.1.1 Problems solved with convolution
  4.2 The a posteriori optimal FIR filter
    4.2.1 Extended state-space model
    4.2.2 Batch estimate and error covariance
      4.2.2.1 Batch a posteriori optimal FIR filtering algorithm
    4.2.3 Recursive forms for OFIR filter
      4.2.3.1 Iterative a posteriori OFIR filtering algorithm
  4.3 The a posteriori optimal unbiased FIR filter
    4.3.1 Batch estimate and error covariance
      4.3.1.1 Batch a posteriori OUFIR filtering algorithm
    4.3.2 Batch maximum likelihood filter
      4.3.2.1 Batch ML filtering algorithm
    4.3.3 Recursive forms for OUFIR filter
      4.3.3.1 Special case: infinite horizon
      4.3.3.2 Special case: constant state
      4.3.3.3 Recursive ML filtering of constant state
    4.3.4 Properties of bias-constrained filters
  4.4 The a posteriori optimal IIR filter
    4.4.1 Extended state-space model
    4.4.2 Batch a posteriori optimal IIR filter
    4.4.3 Recursive forms for OIIR filter
  4.5 Kalman filter properties from convolution
  4.6 Summary
  4.7 Problems
5 General Kalman Filter
  5.1 Time-correlated noise
    5.1.1 Noise de-correlation
      5.1.1.1 GKF algorithm for de-correlated noise
    5.1.2 New Kalman gain
      5.1.2.1 GKF algorithm for time-correlated noise
  5.2 Colored measurement noise
    5.2.1 Augmented state vector
    5.2.2 Measurement differencing
      5.2.2.1 Time-correlated noise
      5.2.2.2 De-correlated noise
    5.2.3 Equivalence of GKF algorithms for CMN
  5.3 Colored process noise
    5.3.1 Augmented state vector
    5.3.2 State differencing (LTV case)
      5.3.2.1 GKF algorithm for LTV systems with CPN
    5.3.3 State differencing (LTI case)
      5.3.3.1 GKF algorithm for LTI systems with CPN
  5.4 Colored process and measurement noise
    5.4.1 Augmented state vector
    5.4.2 State and measurement differencing
      5.4.2.1 State differencing
      5.4.2.2 Measurement differencing
      5.4.2.3 GKF algorithm for LTI systems with CPN and CMN
  5.5 Summary
  5.6 Problems
6 Suboptimal Kalman filtering
  6.1 Extended Kalman filter
    6.1.1 The first-order extension
      6.1.1.1 The first-order a posteriori EKF algorithm
    6.1.2 Iterated extended Kalman filter
      6.1.2.1 The a posteriori iterated EKF algorithm
    6.1.3 The second-order extension
      6.1.3.1 The a priori state estimate
      6.1.3.2 The a priori error covariance
      6.1.3.3 The a posteriori state estimate
      6.1.3.4 The a posteriori error covariance
  6.2 Sigma-points approach
    6.2.1 Statistical linearization
    6.2.2 Gaussian quadrature and cubature
  6.3 Unscented Kalman filter
    6.3.1 The unscented transformation
    6.3.2 The unscented Kalman filter
      6.3.2.1 The UKF algorithm
  6.4 Gauss-Hermite Kalman filter
    6.4.1 Gauss-Hermite approximate integration
    6.4.2 Gauss-Hermite cubature recursions
  6.5 Cubature Kalman filter
    6.5.1 Cubature rules
      6.5.1.1 Cartesian coordinates
      6.5.1.2 Polar coordinates
      6.5.1.3 CKF algorithm
  6.6 Adaptive Kalman filtering
    6.6.1 Adaptation through covariance mismatch
      6.6.1.1 Measurement noise covariance adaptation
      6.6.1.2 System noise covariance adaptation
      6.6.1.3 Adaptive Kalman filtering algorithm
      6.6.1.4 Strong tracking Kalman filter
    6.6.2 Adaptation using fuzzy inference
      6.6.2.1 Fuzzy strong tracking Kalman filter
    6.6.3 Neural network aided adaptation
      6.6.3.1 Correcting output estimate
      6.6.3.2 Computing Kalman gain
  6.7 Fault detection and diagnosis using Kalman filtering
    6.7.1 Fault detection
    6.7.2 Fault isolation
      6.7.2.1 Measurement fault isolation
    6.7.3 Fault estimation
  6.8 Some notable applied problems
    6.8.1 Data association uncertainty
    6.8.2 Random jitter in sampling interval
      6.8.2.1 Noise covariances under timing jitter
      6.8.2.2 Jitter Kalman filtering algorithm
    6.8.3 Fast Kalman filter variants
  6.9 Summary
  6.10 Problems
7 Kalman filtering for networks
  7.1 Ensemble Kalman filtering
    7.1.1 High-dimensional state vectors
      7.1.1.1 The ensemble Kalman filtering algorithm
  7.2 Consensus distributed Kalman filtering
    7.2.1 Consensus on measurements
    7.2.2 Consensus on estimates
      7.2.2.1 Disagreement in estimates
      7.2.2.2 Consensus on a posteriori estimates
      7.2.2.3 Consensus on a priori estimates
    7.2.3 Consensus on information
      7.2.3.1 Distributed information Kalman filtering
    7.2.4 Stability of consensus filtering
  7.3 Fusion Kalman filtering
    7.3.1 Optimal fusion of estimates
      7.3.1.1 Kalman filter-based algorithm for optimal fusion of estimates
    7.3.2 Optimal data fusion
      7.3.2.1 Kalman filter-based data fusion algorithms
  7.4 Filtering with intermittent observations
    7.4.1 Intermittent Kalman filter
    7.4.2 Optimal fusion with intermittent observations
      7.4.2.1 Intermittent Kalman filter for optimal fusion
  7.5 Delayed and lost data
    7.5.1 Timestamp one-step delays and packet dropouts
      7.5.1.1 Kalman filter for timestamp data with one-step delays and packet dropouts
    7.5.2 Random one-step delays and packet dropouts
      7.5.2.1 Kalman filter for data with random binary one-step delays and packet dropouts
    7.5.3 Optimal data fusion
      7.5.3.1 Fusion Kalman filter for timestamp one-step delays and packet dropouts
  7.6 Event-triggered data
    7.6.1 Send-on-delta event-triggering
    7.6.2 Send-on-delta event-triggered Kalman filter
      7.6.2.1 Event-triggered Kalman filtering algorithm
    7.6.3 Optimal fusion of event-triggered data
      7.6.3.1 Data received from a single sensor
      7.6.3.2 Data received from multiple sensors
      7.6.3.3 Fusion of event-triggered data
      7.6.3.4 Fusion event-triggered Kalman filtering algorithm for timestamp data
  7.7 Fault tolerant Kalman filtering
    7.7.1 Fault tolerant optimal fusion of estimates
  7.8 Summary
  7.9 Problems
Part II Robust Estimates
8 Robust approaches to recursive filtering
  8.1 Robust state estimation problem
  8.2 Noticeable early solutions
  8.3 Robust performance
  8.4 Error models for robust filtering
    8.4.1 Disturbance models
      8.4.1.1 Standard error model
      8.4.1.2 Augmented error model
    8.4.2 Uncertainty models
      8.4.2.1 Forward Euler method-based error model
      8.4.2.2 Backward Euler method-based error model
    8.4.3 Disturbance and uncertainty models
  8.5 Summary
  8.6 Problems
9 Unbiased filtering
  9.1 The a posteriori UFIR filter
    9.1.1 Iterative computation using recursions
      9.1.1.1 Iterative a posteriori UFIR filtering algorithm
      9.1.1.2 UFIR filter with improved performance
    9.1.2 Recursive form for error covariance
    9.1.3 Minimizing error covariance
      9.1.3.1 Available ground truth
      9.1.3.2 Unavailable ground truth
  9.2 General UFIR filtering
    9.2.1 Gauss-Markov colored measurement noise
      9.2.1.1 General UFIR filtering algorithm for CMN
    9.2.2 Gauss-Markov colored process noise
      9.2.2.1 General UFIR filtering algorithm for LTI systems with CPN
  9.3 Unbiased smoothing and prediction
    9.3.1 Recursive error covariance
      9.3.1.1 Time-varying case
      9.3.1.2 Time-invariant case
  9.4 Extended UFIR filtering
    9.4.1 First-order extended UFIR filter
    9.4.2 Second-order extended UFIR filter
  9.5 Robustness of UFIR filtering
  9.6 Summary
  9.7 Problems
10 H2 filtering
  10.1 Transfer function approach for H2 filtering
  10.2 Recursive H2 filtering under disturbances
    10.2.1 Forward Euler method
    10.2.2 Backward Euler method
    10.2.3 Recursive H2 filtering algorithm for disturbed models
    10.2.4 Computing H2 filter gain using LMI
  10.3 Filtering of uncertain models
    10.3.1 Covariances of uncertain terms
      10.3.1.1 H2 filtering algorithm for uncertain models
  10.4 Summary
  10.5 Problems
11 H∞ filtering
  11.1 The H∞ filtering problem
  11.2 Disturbed model based on forward Euler method
    11.2.1 Bounded real lemma
    11.2.2 H∞ filter—option-(I)
      11.2.2.1 Recursive H∞ filtering algorithm
    11.2.3 H∞ filter—option-(II)
      11.2.3.1 Recursive H∞ filtering algorithm
    11.2.4 Iterative H∞ filtering algorithm
  11.3 Disturbed model based on backward Euler method
    11.3.1 Bounded real lemma
    11.3.2 H∞ filter—option-(I)
      11.3.2.1 Recursive H∞ filtering algorithm
    11.3.3 H∞ filter—option-(II)
      11.3.3.1 Recursive H∞ filtering algorithm
    11.3.4 Iterative H∞ filtering algorithm
  11.4 Filtering of uncertain models
    11.4.1 Recursive H∞ filtering algorithm for uncertain models
  11.5 Hybrid filtering structures
  11.6 Summary
  11.7 Problems
12 Generalized H2 filtering
  12.1 The energy-to-peak filtering problem
  12.2 Energy-to-peak lemma
    12.2.1 Euler's forward method-based model
    12.2.2 Euler's backward method-based model
  12.3 GH2 filter for disturbed models—option-(I)
    12.3.1 Recursive GH2 filtering algorithm
  12.4 GH2 filter for disturbed models—option-(II)
    12.4.1 Recursive GH2 filtering algorithm
  12.5 Iterative GH2 filtering algorithm
  12.6 Filtering of uncertain models
  12.7 Summary
  12.8 Problems
13 L1 filtering
  13.1 The L1 filtering problem
  13.2 Peak-to-peak lemma
    13.2.1 Euler's forward method-based model
    13.2.2 Euler's backward method-based model
  13.3 L1 filtering of disturbed models
    13.3.1 L1 filter—option-(I)
    13.3.2 L1 filter—option-(II)
      13.3.2.1 Recursive L1 filtering algorithm
    13.3.3 Iterative L1 filtering algorithm
  13.4 Filtering of uncertain models
  13.5 Summary
  13.6 Problems
14 Where recursive state estimation meets artificial intelligence
  14.1 Model-based vs. data-driven state estimation
    14.1.1 Data-driven Kalman filter
    14.1.2 Maximum likelihood estimate
  14.2 Machine learning-aided Kalman filtering
    14.2.1 Tuning parameters
      14.2.1.1 Measurement noise covariance
      14.2.1.2 Process noise covariance
      14.2.1.3 Process and measurement noise covariance
      14.2.1.4 State space model parameters
      14.2.1.5 Bias correction gain
    14.2.2 Compensation for estimation errors
  14.3 Kalman filter-aided machine learning
    14.3.1 Network training
    14.3.2 Hybrid schemes
      14.3.2.1 Improving network performance
      14.3.2.2 Long short-term memory Kalman filter
  14.4 Summary
  14.5 Problems
15 Applications
  15.1 GPS navigation of a moving vehicle
  15.2 UWB-based localization
    15.2.1 Linear robotic dog localization
    15.2.2 Nonlinear ad hoc localization
  15.3 Pedestrian visual object tracking
    15.3.1 Eight-state space model
    15.3.2 Separate coordinate filtering
  15.4 Air pollution estimation
    15.4.1 CO concentration model
  15.5 EMG amplitude estimation
    15.5.1 Typical EMG signaling
  15.6 Person activity estimation
    15.6.1 Walking activity
    15.6.2 Standing up from sitting position
  15.7 Summary
  15.8 Problems
Bibliography
Index
More from the same author
Computational Problems in Science and Engineering II
Nikos E. Mastorakis, Imre J. Rudas, Yuriy S. Shmaliy
1 823 kr
6th International Technical Conference on Advances in Computing, Control and Industrial Engineering (CCIE 2021)
Yuriy S. Shmaliy, Abdelhalim Abdelnaby Zekry
3 211 kr
8th International Conference on Computing, Control and Industrial Engineering (CCIE2024)
Yuriy S. Shmaliy
3 211 kr
7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023)
Yuriy S. Shmaliy, Anand Nayyar
5 046 kr