Preface
Preface to the First Edition
1. Introduction
  1.1. Models
    a. Linear models (LM) and linear mixed models (LMM)
    b. Generalized models (GLMs and GLMMs)
  1.2. Factors, Levels, Cells, Effects and Data
  1.3. Fixed Effects Models
    a. Example 1: Placebo and a drug
    b. Example 2: Comprehension of humor
    c. Example 3: Four dose levels of a drug
  1.4. Random Effects Models
    Example 4: Clinics
    Notation
    Example 5: Ball bearings and calipers
  1.5. Linear Mixed Models (LMMs)
    a. Example 6: Medications and clinics
    b. Example 7: Drying methods and fabrics
    c. Example 8: Potomac River Fever
    d. Regression models
    e. Longitudinal data
    f. Example 9: Osteoarthritis Initiative
    g. Model equations
  1.6. Fixed or Random?
    Example 10: Clinical trials
    Making a decision
  1.7. Inference
    Estimation
    Testing
    Prediction
  1.8. Computer Software
  1.9. Exercises
2. One-Way Classifications
  2.1. Normality and Fixed Effects
    Model
    Estimation by ML
    Generalized likelihood ratio test
    Confidence intervals
    Hypothesis tests
  2.2. Normality, Random Effects and MLE
    Balanced data
    Unbalanced data
    Bias
    Sampling variances
  2.3. Normality, Random Effects and REML
  2.4. More on Random Effects and Normality
    Tests and confidence intervals
    Predicting random effects
  2.5. Binary Data: Fixed Effects
    a. Model equation
    b. Likelihood
    c. ML equations and their solutions
    d. Likelihood ratio test
    e. The usual chi-square test
    f. Large-sample tests and confidence intervals
    g. Exact tests and confidence intervals
    h. Example: Snake strike data
  2.6. Binary Data: Random Effects
    Beta-binomial model
    Logit-normal model
    Probit-normal model
  2.7. Computing
  2.8. Exercises
3. Single-Predictor Regression
  3.1. Normality: Simple Linear Regression
    Maximum likelihood estimators
    Distributions of MLEs
    Illustration
  3.3. Normality: A Nonlinear Model
  3.4. Transforming Versus Linking
    Transforming
    Linking
    Comparisons
  3.5. Random Intercepts: Balanced Data
    The model
    Estimating μ and β
    Estimating variances
    Tests of hypotheses using LRT
    Predicting the random intercepts
  3.6. Random Intercepts: Unbalanced Data
    Estimating μ and β when variances are known
  3.7. Bernoulli - Logistic Regression
    Logistic regression model
    ML equations
  3.8. Bernoulli - Logistic with Random Intercepts
    Conditional inference
  3.9. Exercises
4. Linear Models (LMs)
  4.1. A General Model
  4.2. A Linear Model for Fixed Effects
  4.3. MLE Under Normality
  4.4. Sufficient Statistics
  4.5. Many Apparent Estimators
    General result
    Mean and variance
    Invariance properties
    Distributions
  4.6. Estimable Functions
    Definition
    Properties
  4.7. A Numerical Example
  4.8. Estimating Residual Variance
    Distribution of estimators
  4.9. The One- and Two-Way Classifications
    The one-way classification
    The two-way classification
  4.10. Testing Linear Hypotheses
    Wald test
  4.11. t-Tests and Confidence Intervals
  4.12. Unique Estimation Using Restrictions
  4.13. Exercises
5. Generalized Linear Models (GLMs)
  5.1. Structure of the Model
    Distribution of y
    Link function
    Predictors
    Linear models
  5.3. Estimation by Maximum Likelihood
    Some useful identities
    Likelihood equations
    Large-sample variances
    Solving the ML equations
    Example: Potato flour dilutions
  5.5. Tests of Hypotheses
    Likelihood ratio tests
    Wald tests
    Illustration of tests
    Illustration of confidence intervals
  5.6. Maximum Quasi-Likelihood
  5.7. Basic properties
6. Linear Mixed Models (LMMs)
  6.2. Attributing Structure to Var(y)
    Example
    Taking covariances between factors as zero
    The traditional variance components model
    An LMM for longitudinal data
  6.3. Estimating Fixed Effects for V Known
  6.4. Estimating Fixed Effects for V Unknown
    Sampling variance
    Bias in the variance
    Approximate F-statistics
  6.5. Predicting Random Effects for V Known
  6.6. Predicting Random Effects for V Unknown
  6.7. ANOVA Estimation of Variance Components
  6.8. Maximum Likelihood (ML) Estimation
    Estimators
    Information matrix
    Asymptotic sampling variances
  6.9. Restricted Maximum Likelihood (REML)
  6.10. Notes and Extensions
    ML or REML?
    Other methods for estimating variances
  6.11. Appendix for Chapter 6
    Differentiating a log likelihood
    Differentiating a generalized inverse
    Differentiation for the variance components model
  6.12. Exercises
7. Generalized Linear Mixed Models
  7.1. Conditional distribution of y
  7.3. Consequences of Having Random Effects
    Marginal versus conditional distribution
    Mean of y
    Variances
    Covariances and correlations
  7.4. Other Methods of Estimation
    Penalized quasi-likelihood
    Conditional likelihood
    Simpler models
  7.6. Asymptotic variances
    Score tests
  7.7. Illustration: Chestnut Leaf Blight
    A random effects probit model
  7.8. Exercises
8. Models for Longitudinal Data
  8.1. A Model for Balanced Data
    Prescription
    Estimating the mean
    Estimating V₀
  8.3. A Mixed Model Approach
    Fixed and random effects
  8.4. Random Intercept and Slope Models
    Within-subject correlations
  8.5. Predicting Random Effects
    Uncorrelated subjects
    Uncorrelated between, and within, subjects
    Uncorrelated between, and autocorrelated within
    Random intercepts and slopes
  8.6. Estimating Parameters
    The general case
    Uncorrelated between, and autocorrelated within, subjects
  8.7. Unbalanced Data
    Example and model
  8.8. Models for Non-Normal Responses
    Prediction of random effects
    Binary responses, random intercepts and slopes
  8.9. A Summary of Results
  8.10. Appendix
    For Section 8.4a
    For Section 8.4b
  8.11. Exercises
9. Marginal Models
  9.1. Examples of Marginal Regression Models
  9.3. Generalized Estimating Equations
    Models with marginal and conditional interpretations
  9.4. Contrasting Marginal and Conditional Models
  9.5. Exercises
10. Multivariate Models
  10.1. Multivariate Normal Outcomes
  10.3. Non-Normally Distributed Outcomes
    A multivariate binary model
    A binary/normal example
    A Poisson/normal example
  10.4. Correlated Random Effects
  10.5. Likelihood-Based Analysis
  10.6. Example: Osteoarthritis Initiative
  10.7. Missing data
    Efficiency
  10.8. Exercises
11. Nonlinear Models
  11.1. Example: Corn Photosynthesis
  11.3. Pharmacokinetic Models
  11.4. Computations for Nonlinear Mixed Models
  11.5. Exercises
12. Departures from Assumptions
  12.1. Incorrect Model for Response
    Omitted covariates
    Misspecified link functions
    Misclassified binary outcomes
    Informative cluster sizes
  12.3. Incorrect Random Effects Distribution
    Incorrect distributional family
    Correlation of covariates and random effects
    Covariate-dependent random effects variance
  12.4. Diagnosing Misspecification
    Conditional likelihood methods
    Between/within cluster covariate decompositions
    Specification tests
    Nonparametric maximum likelihood
  12.5. Exercises
  13.2. Best Prediction (BP)
    The best predictor
    Mean and variance properties
    A correlation property
    Maximizing a mean
    Normality
  13.3. Best Linear Prediction (BLP)
    BLP(u)
    Derivation
    Ranking
  13.4. Linear Mixed Model Prediction (BLUP)
    BLUE(Xβ)
    BLUP(t'Xβ + s'u)
    Two variances
    Other derivations
  13.5. Required Assumptions
  13.6. Estimated Best Prediction
  13.7. Henderson's Mixed Model Equations
    Origin
    Solutions
    Use in ML estimation of variance components
  13.8. Verification of (13.5)
    Verification of (13.7) and (13.8)
  13.9. Exercises
  14.2. Computing ML Estimates for LMMs
    The EM algorithm
    Using E[u|y]
    Newton-Raphson method
  14.3. Computing ML Estimates for GLMMs
    Numerical quadrature
    EM algorithm
    Markov chain Monte Carlo algorithms
    Stochastic approximation algorithms
    Simulated maximum likelihood
  14.4. Penalized Quasi-Likelihood and Laplace
  14.5. Iterative Bootstrap Bias Correction
  14.6. Exercises
Appendix M: Some Matrix Results
  M.1. Vectors and Matrices of Ones
  M.2. Kronecker (or Direct) Products
  M.3. A Matrix Notation in Terms of Elements
  M.4. Generalized Inverses
    Generalized inverses of X'X
    Two results involving X(X'V⁻¹X)⁻X'V⁻¹
    Solving linear equations
    Rank results
    Vectors orthogonal to columns of X
    A theorem for K' with K'X being null
  M.5. Differential Calculus
    Scalars
    Vectors
    Inner products
    Quadratic forms
    Inverse matrices
    Determinants
Appendix S: Some Statistical Results
  S.1. Moments
    Conditional moments
    Mean of a quadratic form
    Moment generating function
  S.2. Normal Distributions
    Univariate
    Multivariate
    Quadratic forms in normal variables
  S.3. Exponential Families
  S.4. Maximum Likelihood
    The likelihood function
    Maximum likelihood estimation
    Asymptotic variance-covariance matrix
    Asymptotic distribution of MLEs
  S.5. Likelihood Ratio Tests
  S.6. MLE Under Normality
    Estimation of β
    Estimation of variance components
    Restricted maximum likelihood (REML)
References
Index