1.

Book
Olivier Cappé, Eric Moulines, Tobias Rydén
Publication info: New York : Springer, c2005. xvii, 652 p. ; 25 cm
Series: Springer series in statistics
Table of contents:
Preface
Contributors
Introduction / 1:
What Is a Hidden Markov Model? / 1.1:
Beyond Hidden Markov Models / 1.2:
Examples / 1.3:
Finite Hidden Markov Models / 1.3.1:
Normal Hidden Markov Models / 1.3.2:
Gaussian Linear State-Space Models / 1.3.3:
Conditionally Gaussian Linear State-Space Models / 1.3.4:
General (Continuous) State-Space HMMs / 1.3.5:
Switching Processes with Markov Regime / 1.3.6:
Left-to-Right and Ergodic Hidden Markov Models / 1.4:
Main Definitions and Notations / 2:
Markov Chains / 2.1:
Transition Kernels / 2.1.1:
Homogeneous Markov Chains / 2.1.2:
Non-homogeneous Markov Chains / 2.1.3:
Hidden Markov Models / 2.2:
Definitions and Notations / 2.2.1:
Conditional Independence in Hidden Markov Models / 2.2.2:
Hierarchical Hidden Markov Models / 2.2.3:
State Inference / Part I:
Filtering and Smoothing Recursions / 3:
Basic Notations and Definitions / 3.1:
Likelihood / 3.1.1:
Smoothing / 3.1.2:
The Forward-Backward Decomposition / 3.1.3:
Implicit Conditioning (Please Read This Section!) / 3.1.4:
Forward-Backward / 3.2:
The Forward-Backward Recursions / 3.2.1:
Filtering and Normalized Recursion / 3.2.2:
Markovian Decompositions / 3.3:
Forward Decomposition / 3.3.1:
Backward Decomposition / 3.3.2:
Complements / 3.4:
Advanced Topics in Smoothing / 4:
Recursive Computation of Smoothed Functionals / 4.1:
Fixed Point Smoothing / 4.1.1:
Recursive Smoothers for General Functionals / 4.1.2:
Comparison with Forward-Backward Smoothing / 4.1.3:
Filtering and Smoothing in More General Models / 4.2:
Smoothing in Markov-switching Models / 4.2.1:
Smoothing in Partially Observed Markov Chains / 4.2.2:
Marginal Smoothing in Hierarchical HMMs / 4.2.3:
Forgetting of the Initial Condition / 4.3:
Total Variation / 4.3.1:
Lipschitz Contraction for Transition Kernels / 4.3.2:
The Doeblin Condition and Uniform Ergodicity / 4.3.3:
Forgetting Properties / 4.3.4:
Uniform Forgetting Under Strong Mixing Conditions / 4.3.5:
Forgetting Under Alternative Conditions / 4.3.6:
Applications of Smoothing / 5:
Models with Finite State Space / 5.1:
Maximum a Posteriori Sequence Estimation / 5.1.1:
Filtering and Backward Markovian Smoothing / 5.2:
Linear Prediction Interpretation / 5.2.2:
The Prediction and Filtering Recursions Revisited / 5.2.3:
Disturbance Smoothing / 5.2.4:
The Backward Recursion and the Two-Filter Formula / 5.2.5:
Application to Marginal Filtering and Smoothing in CGLSSMs / 5.2.6:
Monte Carlo Methods / 6:
Basic Monte Carlo Methods / 6.1:
Monte Carlo Integration / 6.1.1:
Monte Carlo Simulation for HMM State Inference / 6.1.2:
A Markov Chain Monte Carlo Primer / 6.2:
The Accept-Reject Algorithm / 6.2.1:
Markov Chain Monte Carlo / 6.2.2:
Metropolis-Hastings / 6.2.3:
Hybrid Algorithms / 6.2.4:
Gibbs Sampling / 6.2.5:
Stopping an MCMC Algorithm / 6.2.6:
Applications to Hidden Markov Models / 6.3:
Generic Sampling Strategies / 6.3.1:
Gibbs Sampling in CGLSSMs / 6.3.2:
Sequential Monte Carlo Methods / 7:
Importance Sampling and Resampling / 7.1:
Importance Sampling / 7.1.1:
Sampling Importance Resampling / 7.1.2:
Sequential Importance Sampling / 7.2:
Sequential Implementation for HMMs / 7.2.1:
Choice of the Instrumental Kernel / 7.2.2:
Sequential Importance Sampling with Resampling / 7.3:
Weight Degeneracy / 7.3.1:
Resampling / 7.3.2:
Implementation of Multinomial Resampling / 7.4:
Alternatives to Multinomial Resampling / 7.4.2:
Advanced Topics in Sequential Monte Carlo / 8:
Alternatives to SISR / 8.1:
I.I.D. Sampling / 8.1.1:
Two-Stage Sampling / 8.1.2:
Interpretation with Auxiliary Variables / 8.1.3:
Auxiliary Accept-Reject Sampling / 8.1.4:
Markov Chain Monte Carlo Auxiliary Sampling / 8.1.5:
Sequential Monte Carlo in Hierarchical HMMs / 8.2:
Sequential Importance Sampling and Global Sampling / 8.2.1:
Optimal Sampling / 8.2.2:
Application to CGLSSMs / 8.2.3:
Particle Approximation of Smoothing Functionals / 8.3:
Analysis of Sequential Monte Carlo Methods / 9:
Unnormalized Importance Sampling / 9.1:
Deviation Inequalities / 9.1.2:
Self-normalized Importance Sampling Estimator / 9.1.3:
The Algorithm / 9.2:
Weighting and Resampling / 9.2.2:
Application to the Single-Stage SIR Algorithm / 9.2.4:
Single-Step Analysis of SMC Methods / 9.3:
Mutation Step / 9.3.1:
Description of Algorithms / 9.3.2:
Analysis of the Mutation/Selection Algorithm / 9.3.3:
Analysis of the Selection/Mutation Algorithm / 9.3.4:
SISR / 9.4:
Weak Limit Theorems for Triangular Arrays / 9.4.2:
Bibliographic Notes / 9.5.2:
Parameter Inference / Part II:
Maximum Likelihood Inference, Part I: Optimization Through Exact Smoothing / 10:
Likelihood Optimization in Incomplete Data Models / 10.1:
Problem Statement and Notations / 10.1.1:
The Expectation-Maximization Algorithm / 10.1.2:
Gradient-based Methods / 10.1.3:
Pros and Cons of Gradient-based Methods / 10.1.4:
Application to HMMs / 10.2:
Hidden Markov Models as Missing Data Models / 10.2.1:
EM in HMMs / 10.2.2:
Computing Derivatives / 10.2.3:
Connection with the Sensitivity Equation Approach / 10.2.4:
The Example of Normal Hidden Markov Models / 10.3:
EM Parameter Update Formulas / 10.3.1:
Estimation of the Initial Distribution / 10.3.2:
Recursive Implementation of E-Step / 10.3.3:
Computation of the Score and Observed Information / 10.3.4:
The Example of Gaussian Linear State-Space Models / 10.4:
The Intermediate Quantity of EM / 10.4.1:
Recursive Implementation / 10.4.2:
Global Convergence of the EM Algorithm / 10.5:
Rate of Convergence of EM / 10.5.2:
Generalized EM Algorithms / 10.5.3:
Maximum Likelihood Inference, Part II: Monte Carlo Optimization / 11:
Methods and Algorithms / 11.1:
Monte Carlo EM / 11.1.1:
Simulation Schedules / 11.1.2:
Gradient-based Algorithms / 11.1.3:
Interlude: Stochastic Approximation and the Robbins-Monro Approach / 11.1.4:
Stochastic Gradient Algorithms / 11.1.5:
Stochastic Approximation EM / 11.1.6:
Stochastic EM / 11.1.7:
Analysis of the MCEM Algorithm / 11.2:
Convergence of Perturbed Dynamical Systems / 11.2.1:
Convergence of the MCEM Algorithm / 11.2.2:
Rate of Convergence of MCEM / 11.2.3:
Analysis of Stochastic Approximation Algorithms / 11.3:
Basic Results for Stochastic Approximation Algorithms / 11.3.1:
Convergence of the Stochastic Gradient Algorithm / 11.3.2:
Rate of Convergence of the Stochastic Gradient Algorithm / 11.3.3:
Convergence of the SAEM Algorithm / 11.3.4:
Statistical Properties of the Maximum Likelihood Estimator / 12:
A Primer on MLE Asymptotics / 12.1:
Stationary Approximations / 12.2:
Consistency / 12.3:
Construction of the Stationary Conditional Log-likelihood / 12.3.1:
The Contrast Function and Its Properties / 12.3.2:
Identifiability / 12.4:
Equivalence of Parameters / 12.4.1:
Identifiability of Mixture Densities / 12.4.2:
Application of Mixture Identifiability to Hidden Markov Models / 12.4.3:
Asymptotic Normality of the Score and Convergence of the Observed Information / 12.5:
The Score Function and Invoking the Fisher Identity / 12.5.1:
Construction of the Stationary Conditional Score / 12.5.2:
Weak Convergence of the Normalized Score / 12.5.3:
Convergence of the Normalized Observed Information / 12.5.4:
Asymptotics of the Maximum Likelihood Estimator / 12.5.5:
Applications to Likelihood-based Tests / 12.6:
Fully Bayesian Approaches / 13:
Parameter Estimation / 13.1:
Bayesian Inference / 13.1.1:
Prior Distributions for HMMs / 13.1.2:
Non-identifiability and Label Switching / 13.1.3:
MCMC Methods for Bayesian Inference / 13.1.4:
Reversible Jump Methods / 13.2:
Variable Dimension Models / 13.2.1:
Green's Reversible Jump Algorithm / 13.2.2:
Alternative Sampler Designs / 13.2.3:
Alternatives to Reversible Jump MCMC / 13.2.4:
Multiple Imputations Methods and Maximum a Posteriori / 13.3:
Simulated Annealing / 13.3.1:
The SAME Algorithm / 13.3.2:
Background and Complements / Part III:
Elements of Markov Chain Theory / 14:
Chains on Countable State Spaces / 14.1:
Irreducibility / 14.1.1:
Recurrence and Transience / 14.1.2:
Invariant Measures and Stationarity / 14.1.3:
Ergodicity / 14.1.4:
Chains on General State Spaces / 14.2:
Geometric Ergodicity and Foster-Lyapunov Conditions / 14.2.1:
Limit Theorems / 14.2.6:
Phi-irreducibility / 14.3:
Atoms and Small Sets / 14.3.2:
Recurrence and Positive Recurrence / 14.3.3:
An Information-Theoretic Perspective on Order Estimation / 15:
Model Order Identification: What Is It About? / 15.1:
Order Estimation in Perspective / 15.2:
Order Estimation and Composite Hypothesis Testing / 15.3:
Code-based Identification / 15.4:
Definitions / 15.4.1:
Information Divergence Rates / 15.4.2:
MDL Order Estimators in Bayesian Settings / 15.5:
Strongly Consistent Penalized Maximum Likelihood Estimators for HMM Order Estimation / 15.6:
Efficiency Issues / 15.7:
Variations on Stein's Lemma / 15.7.1:
Achieving Optimal Error Exponents / 15.7.2:
Consistency of the BIC Estimator in the Markov Order Estimation Problem / 15.8:
Some Martingale Tools / 15.8.1:
The Martingale Approach / 15.8.2:
The Union Bound Meets Martingale Inequalities / 15.8.3:
Appendices / 15.9:
Conditioning / A:
Probability and Topology Terminology and Notation / A.1:
Conditional Expectation / A.2:
Conditional Distribution / A.3:
Conditional Independence / A.4:
Linear Prediction / B:
Hilbert Spaces / B.1:
The Projection Theorem / B.2:
Notations / C:
Mathematical / C.1:
Probability / C.2:
Sequential Monte Carlo / C.3:
References
Index
2.

eBook
Denis Belomestny, Cristina Butucea, Enno Mammen, Eric Moulines, Markus Reiß, Vladimir V. Ulyanov
Publication info: SpringerLink Books - AutoHoldings, Cham : Springer International Publishing AG, 2023