Preface |
Symbols |
Abbreviations |
Introduction / 1: |
Signals and Information / 1.1: |
Signal Processing Methods / 1.2: |
Transform-based Signal Processing / 1.2.1: |
Model-based Signal Processing / 1.2.2: |
Bayesian Signal Processing / 1.2.3: |
Neural Networks / 1.2.4: |
Applications of Digital Signal Processing / 1.3: |
Adaptive Noise Cancellation / 1.3.1: |
Adaptive Noise Reduction / 1.3.2: |
Blind Channel Equalisation / 1.3.3: |
Signal Classification and Pattern Recognition / 1.3.4: |
Linear Prediction Modelling of Speech / 1.3.5: |
Digital Coding of Audio Signals / 1.3.6: |
Detection of Signals in Noise / 1.3.7: |
Directional Reception of Waves: Beam-forming / 1.3.8: |
Dolby Noise Reduction / 1.3.9: |
Radar Signal Processing: Doppler Frequency Shift / 1.3.10: |
Sampling and Analogue-to-digital Conversion / 1.4: |
Sampling and Reconstruction of Analogue Signals / 1.4.1: |
Quantisation / 1.4.2: |
Bibliography |
Noise and Distortion / 2: |
White Noise / 2.2: |
Band-limited White Noise / 2.2.1: |
Coloured Noise / 2.3: |
Impulsive Noise / 2.4: |
Transient Noise Pulses / 2.5: |
Thermal Noise / 2.6: |
Shot Noise / 2.7: |
Electromagnetic Noise / 2.8: |
Channel Distortions / 2.9: |
Echo and Multipath Reflections / 2.10: |
Modelling Noise / 2.11: |
Additive White Gaussian Noise Model / 2.11.1: |
Hidden Markov Model for Noise / 2.11.2: |
Probability and Information Models / 3: |
Random Signals / 3.2: |
Random and Stochastic Processes / 3.2.1: |
The Space of a Random Process / 3.2.2: |
Probability Models / 3.3: |
Probability and Random Variables / 3.3.1: |
Probability Mass Function / 3.3.2: |
Probability Density Function / 3.3.3: |
Probability Density Functions of Random Processes / 3.3.4: |
Information Models / 3.4: |
Entropy / 3.4.1: |
Mutual Information / 3.4.2: |
Entropy Coding / 3.4.3: |
Stationary and Nonstationary Random Processes / 3.5: |
Strict-sense Stationary Processes / 3.5.1: |
Wide-sense Stationary Processes / 3.5.2: |
Nonstationary Processes / 3.5.3: |
Statistics (Expected Values) of a Random Process / 3.6: |
The Mean Value / 3.6.1: |
Autocorrelation / 3.6.2: |
Autocovariance / 3.6.3: |
Power Spectral Density / 3.6.4: |
Joint Statistical Averages of Two Random Processes / 3.6.5: |
Cross-correlation and Cross-covariance / 3.6.6: |
Cross-power Spectral Density and Coherence / 3.6.7: |
Ergodic Processes and Time-averaged Statistics / 3.6.8: |
Mean-ergodic Processes / 3.6.9: |
Correlation-ergodic Processes / 3.6.10: |
Some Useful Classes of Random Processes / 3.7: |
Gaussian (Normal) Process / 3.7.1: |
Multivariate Gaussian Process / 3.7.2: |
Mixture Gaussian Process / 3.7.3: |
A Binary-state Gaussian Process / 3.7.4: |
Poisson Process / 3.7.5: |
Poisson-Gaussian Model for Clutters and Impulsive Noise / 3.7.6: |
Markov Processes / 3.7.8: |
Markov Chain Processes / 3.7.9: |
Gamma Probability Distribution / 3.7.10: |
Rayleigh Probability Distribution / 3.7.11: |
Laplacian Probability Distribution / 3.7.12: |
Transformation of a Random Process / 3.8: |
Monotonic Transformation of Random Processes / 3.8.1: |
Many-to-one Mapping of Random Signals / 3.8.2: |
Summary / 3.9: |
Bayesian Inference / 4: |
Bayesian Estimation Theory: Basic Definitions / 4.1: |
Dynamic and Probability Models in Estimation / 4.1.1: |
Parameter Space and Signal Space / 4.1.2: |
Parameter Estimation and Signal Restoration / 4.1.3: |
Performance Measures and Desirable Properties of Estimators / 4.1.4: |
Prior and Posterior Spaces and Distributions / 4.1.5: |
Bayesian Estimation / 4.2: |
Maximum a Posteriori Estimation / 4.2.1: |
Maximum-likelihood Estimation / 4.2.2: |
Minimum Mean Square Error Estimation / 4.2.3: |
Minimum Mean Absolute Value of Error Estimation / 4.2.4: |
Equivalence of the MAP, ML, MMSE and MAVE for Gaussian Processes with Uniformly Distributed Parameters / 4.2.5: |
The Influence of the Prior on Estimation Bias and Variance / 4.2.6: |
The Relative Importance of the Prior and the Observation / 4.2.7: |
The Estimate-Maximise Method / 4.3: |
Convergence of the EM Algorithm / 4.3.1: |
Cramer-Rao Bound on the Minimum Estimator Variance / 4.4: |
Cramer-Rao Bound for Random Parameters / 4.4.1: |
Cramer-Rao Bound for a Vector Parameter / 4.4.2: |
Design of Gaussian Mixture Models / 4.5: |
EM Estimation of Gaussian Mixture Model / 4.5.1: |
Bayesian Classification / 4.6: |
Binary Classification / 4.6.1: |
Classification Error / 4.6.2: |
Bayesian Classification of Discrete-valued Parameters / 4.6.3: |
Maximum a Posteriori Classification / 4.6.4: |
Maximum-likelihood Classification / 4.6.5: |
Minimum Mean Square Error Classification / 4.6.6: |
Bayesian Classification of Finite State Processes / 4.6.7: |
Bayesian Estimation of the Most Likely State Sequence / 4.6.8: |
Modelling the Space of a Random Process / 4.7: |
Vector Quantisation of a Random Process / 4.7.1: |
Vector Quantisation using Gaussian Models / 4.7.2: |
Design of a Vector Quantiser: K-means Clustering / 4.7.3: |
Hidden Markov Models / 5: |
Statistical Models for Nonstationary Processes / 5.1: |
Comparison of Markov and Hidden Markov Models / 5.2.1: |
A Physical Interpretation: HMMs of Speech / 5.2.2: |
Hidden Markov Model as a Bayesian Model / 5.2.3: |
Parameters of a Hidden Markov Model / 5.2.4: |
State Observation Probability Models / 5.2.5: |
State Transition Probabilities / 5.2.6: |
State-Time Trellis Diagram / 5.2.7: |
Training Hidden Markov Models / 5.3: |
Forward-Backward Probability Computation / 5.3.1: |
Baum-Welch Model Re-estimation / 5.3.2: |
Training HMMs with Discrete Density Observation Models / 5.3.3: |
HMMs with Continuous Density Observation Models / 5.3.4: |
HMMs with Gaussian Mixture pdfs / 5.3.5: |
Decoding of Signals using Hidden Markov Models / 5.4: |
Viterbi Decoding Algorithm / 5.4.1: |
HMMs in DNA and Protein Sequence Modelling / 5.5: |
HMMs for Modelling Speech and Noise / 5.6: |
Modelling Speech with HMMs / 5.6.1: |
HMM-based Estimation of Signals in Noise / 5.6.2: |
Signal and Noise Model Combination and Decomposition / 5.6.3: |
Hidden Markov Model Combination / 5.6.4: |
Decomposition of State Sequences of Signal and Noise / 5.6.5: |
HMM-based Wiener Filters / 5.6.6: |
Modelling Noise Characteristics / 5.6.7: |
Least Square Error Filters / 6: |
Least Square Error Estimation: Wiener Filters / 6.1: |
Block-data Formulation of the Wiener Filter / 6.2: |
QR Decomposition of the Least Square Error Equation / 6.2.1: |
Interpretation of Wiener Filters as Projections in Vector Space / 6.3: |
Analysis of the Least Mean Square Error Signal / 6.4: |
Formulation of Wiener Filters in the Frequency Domain / 6.5: |
Some Applications of Wiener Filters / 6.6: |
Wiener Filters for Additive Noise Reduction / 6.6.1: |
Wiener Filters and Separability of Signal and Noise / 6.6.2: |
The Square-root Wiener Filter / 6.6.3: |
Wiener Channel Equaliser / 6.6.4: |
Time-alignment of Signals in Multichannel/Multisensor Systems / 6.6.5: |
Implementation of Wiener Filters / 6.7: |
The Choice of Wiener Filter Order / 6.7.1: |
Improvements to Wiener Filters / 6.7.2: |
Adaptive Filters / 7: |
State-space Kalman Filters / 7.2: |
Derivation of the Kalman Filter Algorithm / 7.2.1: |
Sample-adaptive Filters / 7.3: |
Recursive Least Square Adaptive Filters / 7.4: |
The Matrix Inversion Lemma / 7.4.1: |
Recursive Time-update of Filter Coefficients / 7.4.2: |
The Steepest-descent Method / 7.5: |
Convergence Rate / 7.5.1: |
Vector-valued Adaptation Step Size / 7.5.2: |
The LMS Filter / 7.6: |
Leaky LMS Algorithm / 7.6.1: |
Normalised LMS Algorithm / 7.6.2: |
Linear Prediction Models / 8: |
Linear Prediction Coding / 8.1: |
Frequency Response of LP Models / 8.1.1: |
Calculation of Predictor Coefficients / 8.1.2: |
Effect of Estimation of Correlation Function on LP Model Solution / 8.1.3: |
The Inverse Filter: Spectral Whitening / 8.1.4: |
The Prediction Error Signal / 8.1.5: |
Forward, Backward and Lattice Predictors / 8.2: |
Augmented Equations for Forward and Backward Predictors / 8.2.1: |
Levinson-Durbin Recursive Solution / 8.2.2: |
Lattice Predictors / 8.2.3: |
Alternative Formulations of Least Square Error Prediction / 8.2.4: |
Predictor Model Order Selection / 8.2.5: |
Short- and Long-term Predictors / 8.3: |
MAP Estimation of Predictor Coefficients / 8.4: |
Probability Density Function of Predictor Output / 8.4.1: |
Using the Prior pdf of the Predictor Coefficients / 8.4.2: |
Formant-tracking LP Models / 8.5: |
Sub-band Linear Prediction Model / 8.6: |
Signal Restoration using Linear Prediction Models / 8.7: |
Frequency-domain Signal Restoration using Prediction Models / 8.7.1: |
Implementation of Sub-band Linear Prediction Wiener Filters / 8.7.2: |
Power Spectrum and Correlation / 9: |
Fourier Series: Representation of Periodic Signals / 9.2: |
Fourier Transform: Representation of Aperiodic Signals / 9.3: |
Discrete Fourier Transform / 9.3.1: |
Time/Frequency Resolutions, the Uncertainty Principle / 9.3.2: |
Energy-spectral Density and Power-spectral Density / 9.3.3: |
Nonparametric Power Spectrum Estimation / 9.4: |
The Mean and Variance of Periodograms / 9.4.1: |
Averaging Periodograms (Bartlett Method) / 9.4.2: |
Welch Method: Averaging Periodograms from Overlapped and Windowed Segments / 9.4.3: |
Blackman-Tukey Method / 9.4.4: |
Power Spectrum Estimation from Autocorrelation of Overlapped Segments / 9.4.5: |
Model-based Power Spectrum Estimation / 9.5: |
Maximum-entropy Spectral Estimation / 9.5.1: |
Autoregressive Power Spectrum Estimation / 9.5.2: |
Moving-average Power Spectrum Estimation / 9.5.3: |
Autoregressive Moving-average Power Spectrum Estimation / 9.5.4: |
High-resolution Spectral Estimation Based on Subspace Eigenanalysis / 9.6: |
Pisarenko Harmonic Decomposition / 9.6.1: |
Multiple Signal Classification Spectral Estimation / 9.6.2: |
Estimation of Signal Parameters via Rotational Invariance Techniques / 9.6.3: |
Interpolation / 10: |
Interpolation of a Sampled Signal / 10.1.1: |
Digital Interpolation by a Factor of I / 10.1.2: |
Interpolation of a Sequence of Lost Samples / 10.1.3: |
The Factors that affect Interpolation Accuracy / 10.1.4: |
Polynomial Interpolation / 10.2: |
Lagrange Polynomial Interpolation / 10.2.1: |
Newton Polynomial Interpolation / 10.2.2: |
Hermite Polynomial Interpolation / 10.2.3: |
Cubic Spline Interpolation / 10.2.4: |
Model-based Interpolation / 10.3: |
Maximum a Posteriori Interpolation / 10.3.1: |
Least Square Error Autoregressive Interpolation / 10.3.2: |
Interpolation based on a Short-term Prediction Model / 10.3.3: |
Interpolation based on Long- and Short-term Correlations / 10.3.4: |
LSAR Interpolation Error / 10.3.5: |
Interpolation in Frequency-Time Domain / 10.3.6: |
Interpolation using Adaptive Codebooks / 10.3.7: |
Interpolation through Signal Substitution / 10.3.8: |
Spectral Amplitude Estimation / 11: |
Spectral Representation of Noisy Signals / 11.1.1: |
Vector Representation of the Spectrum of Noisy Signals / 11.1.2: |
Spectral Subtraction / 11.2: |
Power Spectrum Subtraction / 11.2.1: |
Magnitude Spectrum Subtraction / 11.2.2: |
Spectral Subtraction Filter: Relation to Wiener Filters / 11.2.3: |
Processing Distortions / 11.2.4: |
Effect of Spectral Subtraction on Signal Distribution / 11.2.5: |
Reducing the Noise Variance / 11.2.6: |
Filtering Out the Processing Distortions / 11.2.7: |
Nonlinear Spectral Subtraction / 11.2.8: |
Implementation of Spectral Subtraction / 11.2.9: |
Bayesian MMSE Spectral Amplitude Estimation / 11.3: |
Application to Speech Restoration and Recognition / 11.4: |
Impulsive Noise / 12: |
Autocorrelation and Power Spectrum of Impulsive Noise / 12.1: |
Statistical Models for Impulsive Noise / 12.2: |
Bernoulli-Gaussian Model of Impulsive Noise / 12.2.1: |
Poisson-Gaussian Model of Impulsive Noise / 12.2.2: |
A Binary-state Model of Impulsive Noise / 12.2.3: |
Signal-to-impulsive-noise Ratio / 12.2.4: |
Median Filters / 12.3: |
Impulsive Noise Removal using Linear Prediction Models / 12.4: |
Impulsive Noise Detection / 12.4.1: |
Analysis of Improvement in Noise Detectability / 12.4.2: |
Two-sided Predictor for Impulsive Noise Detection / 12.4.3: |
Interpolation of Discarded Samples / 12.4.4: |
Robust Parameter Estimation / 12.5: |
Restoration of Archived Gramophone Records / 12.6: |
Transient Noise Pulses / 13: |
Transient Noise Waveforms / 13.1: |
Transient Noise Pulse Models / 13.2: |
Noise Pulse Templates / 13.2.1: |
Autoregressive Model of Transient Noise Pulses / 13.2.2: |
Hidden Markov Model of a Noise Pulse Process / 13.2.3: |
Detection of Noise Pulses / 13.3: |
Matched Filter for Noise Pulse Detection / 13.3.1: |
Noise Detection based on Inverse Filtering / 13.3.2: |
Noise Detection based on HMM / 13.3.3: |
Removal of Noise Pulse Distortions / 13.4: |
Adaptive Subtraction of Noise Pulses / 13.4.1: |
AR-based Restoration of Signals Distorted by Noise Pulses / 13.4.2: |
Echo Cancellation / 14: |
Introduction: Acoustic and Hybrid Echoes / 14.1: |
Telephone Line Hybrid Echo / 14.2: |
Echo: the Sources of Delay in Telephone Networks / 14.2.1: |
Echo Return Loss / 14.2.2: |
Hybrid Echo Suppression / 14.3: |
Adaptive Echo Cancellation / 14.4: |
Echo Canceller Adaptation Methods / 14.4.1: |
Convergence of Line Echo Canceller / 14.4.2: |
Echo Cancellation for Digital Data Transmission / 14.4.3: |
Acoustic Echo / 14.5: |
Sub-band Acoustic Echo Cancellation / 14.6: |
Multiple-input Multiple-output Echo Cancellation / 14.7: |
Stereophonic Echo Cancellation Systems / 14.7.1: |
Channel Equalisation and Blind Deconvolution / 15: |
The Ideal Inverse Channel Filter / 15.1.1: |
Equalisation Error, Convolutional Noise / 15.1.2: |
Blind Equalisation / 15.1.3: |
Minimum- and Maximum-phase Channels / 15.1.4: |
Wiener Equaliser / 15.1.5: |
Blind Equalisation using the Channel Input Power Spectrum / 15.2: |
Homomorphic Equalisation / 15.2.1: |
Homomorphic Equalisation using a Bank of High-pass Filters / 15.2.2: |
Equalisation based on Linear Prediction Models / 15.3: |
Blind Equalisation through Model Factorisation / 15.3.1: |
Bayesian Blind Deconvolution and Equalisation / 15.4: |
Conditional Mean Channel Estimation / 15.4.1: |
Maximum-likelihood Channel Estimation / 15.4.2: |
Maximum a Posteriori Channel Estimation / 15.4.3: |
Channel Equalisation based on Hidden Markov Models / 15.4.4: |
MAP Channel Estimate based on HMMs / 15.4.5: |
Implementations of HMM-based Deconvolution / 15.4.6: |
Blind Equalisation for Digital Communications Channels / 15.5: |
LMS Blind Equalisation / 15.5.1: |
Equalisation of a Binary Digital Channel / 15.5.2: |
Equalisation based on Higher-order Statistics / 15.6: |
Higher-order Moments, Cumulants and Spectra / 15.6.1: |
Higher-order Spectra of Linear Time-invariant Systems / 15.6.2: |
Blind Equalisation based on Higher-order Cepstra / 15.6.3: |
Speech Enhancement in Noise / 16: |
Single-input Speech-enhancement Methods / 16.2: |
An Overview of a Speech-enhancement System / 16.2.1: |
Wiener Filter for De-noising Speech / 16.2.2: |
Spectral Subtraction of Noise / 16.2.3: |
Bayesian MMSE Speech Enhancement / 16.2.4: |
Kalman Filter / 16.2.5: |
Speech Enhancement via LP Model Reconstruction / 16.2.6: |
Multiple-input Speech-enhancement Methods / 16.3: |
Beam-forming with Microphone Arrays / 16.3.1: |
Speech Distortion Measurements / 16.4: |
Noise in Wireless Communications / 17: |
Introduction to Cellular Communications / 17.1: |
Noise, Capacity and Spectral Efficiency / 17.2: |
Communications Signal Processing in Mobile Systems / 17.3: |
Noise and Distortion in Mobile Communications Systems / 17.4: |
Multipath Propagation of Electromagnetic Signals / 17.4.1: |
Rake Receivers for Multipath Signals / 17.4.2: |
Signal Fading in Mobile Communications Systems / 17.4.3: |
Large-scale Signal Fading / 17.4.4: |
Small-scale Fast Signal Fading / 17.4.5: |
Smart Antennas / 17.5: |
Switched and Adaptive Smart Antennas / 17.5.1: |
Space-Time Signal Processing - Diversity Schemes / 17.5.2: |
Index |