Notations
Acronyms
1 High-Dimensional Data
  1.1 Practical motivations
    1.1.1 Fields of application
    1.1.2 The goals to be reached
  1.2 Theoretical motivations
    1.2.1 How can we visualize high-dimensional spaces?
    1.2.2 Curse of dimensionality and empty space phenomenon
  1.3 Some directions to be explored
    1.3.1 Relevance of the variables
    1.3.2 Dependencies between the variables
  1.4 About topology, spaces, and manifolds
  1.5 Two benchmark manifolds
  1.6 Overview of the next chapters
2 Characteristics of an Analysis Method
  2.1 Purpose
  2.2 Expected functionalities
    2.2.1 Estimation of the number of latent variables
    2.2.2 Embedding for dimensionality reduction
    2.2.3 Embedding for latent variable separation
  2.3 Internal characteristics
    2.3.1 Underlying model
    2.3.2 Algorithm
    2.3.3 Criterion
  2.4 Example: Principal component analysis
    2.4.1 Data model of PCA
    2.4.2 Criteria leading to PCA
    2.4.3 Functionalities of PCA
    2.4.4 Algorithms
    2.4.5 Examples and limitations of PCA
  2.5 Toward a categorization of DR methods
    2.5.1 Hard vs. soft dimensionality reduction
    2.5.2 Traditional vs. generative model
    2.5.3 Linear vs. nonlinear model
    2.5.4 Continuous vs. discrete model
    2.5.5 Implicit vs. explicit mapping
    2.5.6 Integrated vs. external estimation of the dimensionality
    2.5.7 Layered vs. standalone embeddings
    2.5.8 Single vs. multiple coordinate systems
    2.5.9 Optional vs. mandatory vector quantization
    2.5.10 Batch vs. online algorithm
    2.5.11 Exact vs. approximate optimization
    2.5.12 The type of criterion to be optimized
3 Estimation of the Intrinsic Dimension
  3.1 Definition of the intrinsic dimension
  3.2 Fractal dimensions
    3.2.1 The q-dimension
    3.2.2 Capacity dimension
    3.2.3 Information dimension
    3.2.4 Correlation dimension
    3.2.5 Some inequalities
    3.2.6 Practical estimation
  3.3 Other dimension estimators
    3.3.1 Local methods
    3.3.2 Trial and error
  3.4 Comparisons
    3.4.1 Data Sets
    3.4.2 PCA estimator
    3.4.3 Local PCA estimator
  3.5 Concluding remarks
4 Distance Preservation
  4.1 State of the art
  4.2 Spatial distances
    4.2.1 Metric space, distances, norms and scalar product
    4.2.2 Multidimensional scaling
    4.2.3 Sammon's nonlinear mapping
    4.2.4 Curvilinear component analysis
  4.3 Graph distances
    4.3.1 Geodesic distance and graph distance
    4.3.2 Isomap
    4.3.3 Geodesic NLM
    4.3.4 Curvilinear distance analysis
  4.4 Other distances
    4.4.1 Kernel PCA
    4.4.2 Semidefinite embedding
5 Topology Preservation
  5.1 State of the art
  5.2 Predefined lattice
    5.2.1 Self-Organizing Maps
    5.2.2 Generative Topographic Mapping
  5.3 Data-driven lattice
    5.3.1 Locally linear embedding
    5.3.2 Laplacian eigenmaps
    5.3.3 Isotop
6 Method Comparisons
  6.1 Toy examples
    6.1.1 The Swiss roll
    6.1.2 Manifolds having essential loops or spheres
  6.2 Cortex unfolding
  6.3 Image processing
    6.3.1 Artificial faces
    6.3.2 Real faces
7 Conclusions
  7.1 Summary of the book
    7.1.1 The problem
    7.1.2 A basic solution
    7.1.3 Dimensionality reduction
    7.1.4 Latent variable separation
    7.1.5 Intrinsic dimensionality estimation
  7.2 Data flow
    7.2.1 Variable selection
    7.2.2 Calibration
    7.2.3 Linear dimensionality reduction
    7.2.4 Nonlinear dimensionality reduction
    7.2.5 Further processing
  7.3 Model complexity
  7.4 Taxonomy
    7.4.1 Distance preservation
    7.4.2 Topology preservation
  7.5 Spectral methods
  7.6 Nonspectral methods
  7.7 Tentative methodology
  7.8 Perspectives
A Matrix Calculus
  A.1 Singular value decomposition
  A.2 Eigenvalue decomposition
  A.3 Square root of a square matrix
B Gaussian Variables
  B.1 One-dimensional Gaussian distribution
  B.2 Multidimensional Gaussian distribution
    B.2.1 Uncorrelated Gaussian variables
    B.2.2 Isotropic multivariate Gaussian distribution
    B.2.3 Linearly mixed Gaussian variables
C Optimization
  C.1 Newton's method
    C.1.1 Finding extrema
    C.1.2 Multivariate version
  C.2 Gradient ascent/descent
    C.2.1 Stochastic gradient descent
D Vector Quantization
  D.1 Classical techniques
  D.2 Competitive learning
  D.3 Initialization and "dead units"
E Graph Building
  E.1 Without vector quantization
    E.1.1 K-rule
    E.1.2 ε-rule
    E.1.3 τ-rule
  E.2 With vector quantization
    E.2.1 Data rule
    E.2.2 Histogram rule
F Implementation Issues
  F.1 Dimension estimation
    F.1.1 Computation of the closest point(s)
References
Index