Statistical and Adaptive Signal Processing
Spectral Estimation, Signal Modeling, Adaptive Filtering, and Array Processing
Dimitris G. Manolakis
Massachusetts Institute of Technology
Lincoln Laboratory
Vinay K. Ingle
Northeastern University
Stephen M. Kogon
Massachusetts Institute of Technology
Lincoln Laboratory
artechhouse.com
CONTENTS

Preface xvii

1 Introduction 1
1.1 Random Signals 1
1.2 Spectral Estimation 8
1.3 Signal Modeling 11
    1.3.1 Rational or Pole-Zero Models / 1.3.2 Fractional Pole-Zero Models and Fractal Models
1.4 Adaptive Filtering 16
    1.4.1 Applications of Adaptive Filters / 1.4.2 Features of Adaptive Filters
1.5 Array Processing 25
    1.5.1 Spatial Filtering or Beamforming / 1.5.2 Adaptive Interference Mitigation in Radar Systems / 1.5.3 Adaptive Sidelobe Canceler
1.6 Organization of the Book 29

2 Fundamentals of Discrete-Time Signal Processing 33
2.1 Discrete-Time Signals 33
    2.1.1 Continuous-Time, Discrete-Time, and Digital Signals / 2.1.2 Mathematical Description of Signals / 2.1.3 Real-World Signals
2.2 Transform-Domain Representation of Deterministic Signals 37
    2.2.1 Fourier Transforms and Fourier Series / 2.2.2 Sampling of Continuous-Time Signals / 2.2.3 The Discrete Fourier Transform / 2.2.4 The z-Transform / 2.2.5 Representation of Narrowband Signals
2.3 Discrete-Time Systems 47
    2.3.1 Analysis of Linear, Time-Invariant Systems / 2.3.2 Response to Periodic Inputs / 2.3.3 Correlation Analysis and Spectral Density
2.4 Minimum-Phase and System Invertibility 54
    2.4.1 System Invertibility and Minimum-Phase Systems / 2.4.2 All-Pass Systems / 2.4.3 Minimum-Phase and All-Pass Decomposition / 2.4.4 Spectral Factorization
2.5 Lattice Filter Realizations 64
    2.5.1 All-Zero Lattice Structures / 2.5.2 All-Pole Lattice Structures
2.6 Summary 70
Problems 70

3 Random Variables, Vectors, and Sequences 75
3.1 Random Variables 75
    3.1.1 Distribution and Density Functions / 3.1.2 Statistical Averages / 3.1.3 Some Useful Random Variables
3.2 Random Vectors 83
    3.2.1 Definitions and Second-Order Moments / 3.2.2 Linear Transformations of Random Vectors / 3.2.3 Normal Random Vectors / 3.2.4 Sums of Independent Random Variables
3.3 Discrete-Time Stochastic Processes 97
    3.3.1 Description Using Probability Functions / 3.3.2 Second-Order Statistical Description / 3.3.3 Stationarity / 3.3.4 Ergodicity / 3.3.5 Random Signal Variability / 3.3.6 Frequency-Domain Description of Stationary Processes
3.4 Linear Systems with Stationary Random Inputs 115
    3.4.1 Time-Domain Analysis / 3.4.2 Frequency-Domain Analysis / 3.4.3 Random Signal Memory / 3.4.4 General Correlation Matrices / 3.4.5 Correlation Matrices from Random Processes
3.5 Whitening and Innovations Representation 125
    3.5.1 Transformations Using Eigen-decomposition / 3.5.2 Transformations Using Triangular Decomposition / 3.5.3 The Discrete Karhunen-Loève Transform
3.6 Principles of Estimation Theory 133
    3.6.1 Properties of Estimators / 3.6.2 Estimation of Mean / 3.6.3 Estimation of Variance
3.7 Summary 142
Problems 143

4 Linear Signal Models 149
4.1 Introduction 149
    4.1.1 Linear Nonparametric Signal Models / 4.1.2 Parametric Pole-Zero Signal Models / 4.1.3 Mixed Processes and the Wold Decomposition
4.2 All-Pole Models 156
    4.2.1 Model Properties / 4.2.2 All-Pole Modeling and Linear Prediction / 4.2.3 Autoregressive Models / 4.2.4 Lower-Order Models
4.3 All-Zero Models 172
    4.3.1 Model Properties / 4.3.2 Moving-Average Models / 4.3.3 Lower-Order Models
4.4 Pole-Zero Models 177
    4.4.1 Model Properties / 4.4.2 Autoregressive Moving-Average Models / 4.4.3 The First-Order Pole-Zero Model 1: PZ(1,1) / 4.4.4 Summary and Dualities
4.5 Models with Poles on the Unit Circle 182
4.6 Cepstrum of Pole-Zero Models 184
    4.6.1 Pole-Zero Models / 4.6.2 All-Pole Models / 4.6.3 All-Zero Models
4.7 Summary 189
Problems 189

5 Nonparametric Power Spectrum Estimation 195
5.1 Spectral Analysis of Deterministic Signals 196
    5.1.1 Effect of Signal Sampling / 5.1.2 Windowing, Periodic Extension, and Extrapolation / 5.1.3 Effect of Spectrum Sampling / 5.1.4 Effects of Windowing: Leakage and Loss of Resolution / 5.1.5 Summary
5.2 Estimation of the Autocorrelation of Stationary Random Signals 209
5.3 Estimation of the Power Spectrum of Stationary Random Signals 212
    5.3.1 Power Spectrum Estimation Using the Periodogram / 5.3.2 Power Spectrum Estimation by Smoothing a Single Periodogram—The Blackman-Tukey Method / 5.3.3 Power Spectrum Estimation by Averaging Multiple Periodograms—The Welch-Bartlett Method / 5.3.4 Some Practical Considerations and Examples
5.4 Joint Signal Analysis 237
    5.4.1 Estimation of Cross-Power Spectrum / 5.4.2 Estimation of Frequency Response Functions
5.5 Multitaper Power Spectrum Estimation 246
    5.5.1 Estimation of Auto Power Spectrum / 5.5.2 Estimation of Cross Power Spectrum
5.6 Summary 254
Problems 255

6 Optimum Linear Filters 261
6.1 Optimum Signal Estimation 261
6.2 Linear Mean Square Error Estimation 264
    6.2.1 Error Performance Surface / 6.2.2 Derivation of the Linear MMSE Estimator / 6.2.3 Principal-Component Analysis of the Optimum Linear Estimator / 6.2.4 Geometric Interpretations and the Principle of Orthogonality / 6.2.5 Summary and Further Properties
6.3 Solution of the Normal Equations 274
6.4 Optimum Finite Impulse Response Filters 278
    6.4.1 Design and Properties / 6.4.2 Optimum FIR Filters for Stationary Processes / 6.4.3 Frequency-Domain Interpretations
6.5 Linear Prediction 286
    6.5.1 Linear Signal Estimation / 6.5.2 Forward Linear Prediction / 6.5.3 Backward Linear Prediction / 6.5.4 Stationary Processes / 6.5.5 Properties
6.6 Optimum Infinite Impulse Response Filters 295
    6.6.1 Noncausal IIR Filters / 6.6.2 Causal IIR Filters / 6.6.3 Filtering of Additive Noise / 6.6.4 Linear Prediction Using the Infinite Past—Whitening
6.7 Inverse Filtering and Deconvolution 306
6.8 Channel Equalization in Data Transmission Systems 310
    6.8.1 Nyquist's Criterion for Zero ISI / 6.8.2 Equivalent Discrete-Time Channel Model / 6.8.3 Linear Equalizers / 6.8.4 Zero-Forcing Equalizers / 6.8.5 Minimum MSE Equalizers
6.9 Matched Filters and Eigenfilters 319
    6.9.1 Deterministic Signal in Noise / 6.9.2 Random Signal in Noise
6.10 Summary 325
Problems 325

7 Algorithms and Structures for Optimum Linear Filters 333
7.1 Fundamentals of Order-Recursive Algorithms 334
    7.1.1 Matrix Partitioning and Optimum Nesting / 7.1.2 Inversion of Partitioned Hermitian Matrices / 7.1.3 Levinson Recursion for the Optimum Estimator / 7.1.4 Order-Recursive Computation of the LDL^H Decomposition / 7.1.5 Order-Recursive Computation of the Optimum Estimate
7.2 Interpretations of Algorithmic Quantities 343
    7.2.1 Innovations and Backward Prediction / 7.2.2 Partial Correlation / 7.2.3 Order Decomposition of the Optimum Estimate / 7.2.4 Gram-Schmidt Orthogonalization
7.3 Order-Recursive Algorithms for Optimum FIR Filters 347
    7.3.1 Order-Recursive Computation of the Optimum Filter / 7.3.2 Lattice-Ladder Structure / 7.3.3 Simplifications for Stationary Stochastic Processes / 7.3.4 Algorithms Based on the UDU^H Decomposition
7.4 Algorithms of Levinson and Levinson-Durbin 355
7.5 Lattice Structures for Optimum FIR Filters and Predictors 361
    7.5.1 Lattice-Ladder Structures / 7.5.2 Some Properties and Interpretations / 7.5.3 Parameter Conversions
7.6 Algorithm of Schür 368
    7.6.1 Direct Schür Algorithm / 7.6.2 Implementation Considerations / 7.6.3 Inverse Schür Algorithm
7.7 Triangularization and Inversion of Toeplitz Matrices 374
    7.7.1 LDL^H Decomposition of Inverse of a Toeplitz Matrix / 7.7.2 LDL^H Decomposition of a Toeplitz Matrix / 7.7.3 Inversion of Real Toeplitz Matrices
7.8 Kalman Filter Algorithm 378
    7.8.1 Preliminary Development / 7.8.2 Development of Kalman Filter
7.9 Summary 387
Problems 389

8 Least-Squares Filtering and Prediction 395
8.1 The Principle of Least Squares 395
8.2 Linear Least-Squares Error Estimation 396
    8.2.1 Derivation of the Normal Equations / 8.2.2 Statistical Properties of Least-Squares Estimators
8.3 Least-Squares FIR Filters 406
8.4 Linear Least-Squares Signal Estimation 411
    8.4.1 Signal Estimation and Linear Prediction / 8.4.2 Combined Forward and Backward Linear Prediction (FBLP) / 8.4.3 Narrowband Interference Cancelation
8.5 LS Computations Using the Normal Equations 416
    8.5.1 Linear LSE Estimation / 8.5.2 LSE FIR Filtering and Prediction
8.6 LS Computations Using Orthogonalization Techniques 422
    8.6.1 Householder Reflections / 8.6.2 The Givens Rotations / 8.6.3 Gram-Schmidt Orthogonalization
8.7 LS Computations Using the Singular Value Decomposition 431
    8.7.1 Singular Value Decomposition / 8.7.2 Solution of the LS Problem / 8.7.3 Rank-Deficient LS Problems
8.8 Summary 438
Problems 439

9 Signal Modeling and Parametric Spectral Estimation 445
9.1 The Modeling Process: Theory and Practice 445
9.2 Estimation of All-Pole Models 449
    9.2.1 Direct Structures / 9.2.2 Lattice Structures / 9.2.3 Maximum Entropy Method / 9.2.4 Excitations with Line Spectra
9.3 Estimation of Pole-Zero Models 462
    9.3.1 Known Excitation / 9.3.2 Unknown Excitation / 9.3.3 Nonlinear Least-Squares Optimization
9.4 Applications 467
    9.4.1 Spectral Estimation / 9.4.2 Speech Modeling
9.5 Minimum-Variance Spectrum Estimation 471
9.6 Harmonic Models and Frequency Estimation Techniques 478
    9.6.1 Harmonic Model / 9.6.2 Pisarenko Harmonic Decomposition / 9.6.3 MUSIC Algorithm / 9.6.4 Minimum-Norm Method / 9.6.5 ESPRIT Algorithm
9.7 Summary 493
Problems 494

10 Adaptive Filters 499
10.1 Typical Applications of Adaptive Filters 500
    10.1.1 Echo Cancelation in Communications / 10.1.2 Equalization of Data Communications Channels / 10.1.3 Linear Predictive Coding / 10.1.4 Noise Cancelation
10.2 Principles of Adaptive Filters 506
    10.2.1 Features of Adaptive Filters / 10.2.2 Optimum versus Adaptive Filters / 10.2.3 Stability and Steady-State Performance of Adaptive Filters / 10.2.4 Some Practical Considerations
10.3 Method of Steepest Descent 516
10.4 Least-Mean-Square Adaptive Filters 524
    10.4.1 Derivation / 10.4.2 Adaptation in a Stationary SOE / 10.4.3 Summary and Design Guidelines / 10.4.4 Applications of the LMS Algorithm / 10.4.5 Some Practical Considerations
10.5 Recursive Least-Squares Adaptive Filters 548
    10.5.1 LS Adaptive Filters / 10.5.2 Conventional Recursive Least-Squares Algorithm / 10.5.3 Some Practical Considerations / 10.5.4 Convergence and Performance Analysis
10.6 RLS Algorithms for Array Processing 560
    10.6.1 LS Computations Using the Cholesky and QR Decompositions / 10.6.2 Two Useful Lemmas / 10.6.3 The QR-RLS Algorithm / 10.6.4 Extended QR-RLS Algorithm / 10.6.5 The Inverse QR-RLS Algorithm / 10.6.6 Implementation of QR-RLS Algorithm Using the Givens Rotations / 10.6.7 Implementation of Inverse QR-RLS Algorithm Using the Givens Rotations / 10.6.8 Classification of RLS Algorithms for Array Processing
10.7 Fast RLS Algorithms for FIR Filtering 573
    10.7.1 Fast Fixed-Order RLS FIR Filters / 10.7.2 RLS Lattice-Ladder Filters / 10.7.3 RLS Lattice-Ladder Filters Using Error Feedback Updatings / 10.7.4 Givens Rotation–Based LS Lattice-Ladder Algorithms / 10.7.5 Classification of RLS Algorithms for FIR Filtering
10.8 Tracking Performance of Adaptive Algorithms 590
    10.8.1 Approaches for Nonstationary SOE / 10.8.2 Preliminaries in Performance Analysis / 10.8.3 The LMS Algorithm / 10.8.4 The RLS Algorithm with Exponential Forgetting / 10.8.5 Comparison of Tracking Performance
10.9 Summary 607
Problems 608

11 Array Processing 621
11.1 Array Fundamentals 622
    11.1.1 Spatial Signals / 11.1.2 Modulation-Demodulation / 11.1.3 Array Signal Model / 11.1.4 The Sensor Array: Spatial Sampling
11.2 Conventional Spatial Filtering: Beamforming 631
    11.2.1 Spatial Matched Filter / 11.2.2 Tapered Beamforming
11.3 Optimum Array Processing 641
    11.3.1 Optimum Beamforming / 11.3.2 Eigenanalysis of the Optimum Beamformer / 11.3.3 Interference Cancelation Performance / 11.3.4 Tapered Optimum Beamforming / 11.3.5 The Generalized Sidelobe Canceler
11.4 Performance Considerations for Optimum Beamformers 652
    11.4.1 Effect of Signal Mismatch / 11.4.2 Effect of Bandwidth
11.5 Adaptive Beamforming 659
    11.5.1 Sample Matrix Inversion / 11.5.2 Diagonal Loading with the SMI Beamformer / 11.5.3 Implementation of the SMI Beamformer / 11.5.4 Sample-by-Sample Adaptive Methods
11.6 Other Adaptive Array Processing Methods 671
    11.6.1 Linearly Constrained Minimum-Variance Beamformers / 11.6.2 Partially Adaptive Arrays / 11.6.3 Sidelobe Cancelers
11.7 Angle Estimation 678
    11.7.1 Maximum-Likelihood Angle Estimation / 11.7.2 Cramér-Rao Lower Bound on Angle Accuracy / 11.7.3 Beamsplitting Algorithms / 11.7.4 Model-Based Methods
11.8 Space-Time Adaptive Processing 683
11.9 Summary 685
Problems 686

12 Further Topics 691
12.1 Higher-Order Statistics in Signal Processing 691
    12.1.1 Moments, Cumulants, and Polyspectra / 12.1.2 Higher-Order Moments and LTI Systems / 12.1.3 Higher-Order Moments of Linear Signal Models
12.2 Blind Deconvolution 697
12.3 Unsupervised Adaptive Filters—Blind Equalizers 702
    12.3.1 Blind Equalization / 12.3.2 Symbol Rate Blind Equalizers / 12.3.3 Constant-Modulus Algorithm
12.4 Fractionally Spaced Equalizers 709
    12.4.1 Zero-Forcing Fractionally Spaced Equalizers / 12.4.2 MMSE Fractionally Spaced Equalizers / 12.4.3 Blind Fractionally Spaced Equalizers
12.5 Fractional Pole-Zero Signal Models 716
    12.5.1 Fractional Unit-Pole Model / 12.5.2 Fractional Pole-Zero Models: FPZ(p, d, q) / 12.5.3 Symmetric α-Stable Fractional Pole-Zero Processes
12.6 Self-Similar Random Signal Models 725
    12.6.1 Self-Similar Stochastic Processes / 12.6.2 Fractional Brownian Motion / 12.6.3 Fractional Gaussian Noise / 12.6.4 Simulation of Fractional Brownian Motions and Fractional Gaussian Noises / 12.6.5 Estimation of Long Memory / 12.6.6 Fractional Lévy Stable Motion
12.7 Summary 741
Problems 742

Appendix A Matrix Inversion Lemma 745

Appendix B Gradients and Optimization in Complex Space 747
B.1 Gradient 747
B.2 Lagrange Multipliers 749

Appendix C MATLAB Functions 753

Appendix D Useful Results from Matrix Algebra 755
D.1 Complex-Valued Vector Space 755
    Some Definitions
D.2 Matrices 756
    D.2.1 Some Definitions / D.2.2 Properties of Square Matrices
D.3 Determinant of a Square Matrix 760
    D.3.1 Properties of the Determinant / D.3.2 Condition Number
D.4 Unitary Matrices 762
    D.4.1 Hermitian Forms after Unitary Transformations / D.4.2 Significant Integral of Quadratic and Hermitian Forms
D.5 Positive Definite Matrices 764

Appendix E Minimum Phase Test for Polynomials 767

Bibliography 769

Index 787