Series Foreword |
Preface |
Introduction to Semi-Supervised Learning / 1: |
Supervised, Unsupervised, and Semi-Supervised Learning / 1.1: |
When Can Semi-Supervised Learning Work? / 1.2: |
Classes of Algorithms and Organization of This Book / 1.3: |
Generative Models / I: |
A Taxonomy for Semi-Supervised Learning Methods / Matthias W. Seeger / 2: |
The Semi-Supervised Learning Problem / 2.1: |
Paradigms for Semi-Supervised Learning / 2.2: |
Examples / 2.3: |
Conclusions / 2.4: |
Semi-Supervised Text Classification Using EM / Kamal Nigam ; Andrew McCallum ; Tom Mitchell / 3: |
Introduction / 3.1: |
A Generative Model for Text / 3.2: |
Experimental Results with Basic EM / 3.3: |
Using a More Expressive Generative Model / 3.4: |
Overcoming the Challenges of Local Maxima / 3.5: |
Conclusions and Summary / 3.6: |
Risks of Semi-Supervised Learning / Fabio Cozman ; Ira Cohen / 4: |
Do Unlabeled Data Improve or Degrade Classification Performance? / 4.1: |
Understanding Unlabeled Data: Asymptotic Bias / 4.2: |
The Asymptotic Analysis of Generative Semi-Supervised Learning / 4.3: |
The Value of Labeled and Unlabeled Data / 4.4: |
Finite Sample Effects / 4.5: |
Model Search and Robustness / 4.6: |
Conclusion / 4.7: |
Probabilistic Semi-Supervised Clustering with Constraints / Sugato Basu ; Mikhail Bilenko ; Arindam Banerjee ; Raymond J. Mooney / 5: |
HMRF Model for Semi-Supervised Clustering / 5.1: |
HMRF-KMeans Algorithm / 5.3: |
Active Learning for Constraint Acquisition / 5.4: |
Experimental Results / 5.5: |
Related Work / 5.6: |
Low-Density Separation / II: |
Transductive Support Vector Machines / Thorsten Joachims / 6: |
Why Use Margin on the Test Set? / 6.1: |
Experiments and Applications of TSVMs / 6.4: |
Solving the TSVM Optimization Problem / 6.5: |
Connection to Related Approaches / 6.6: |
Summary and Conclusions / 6.7: |
Semi-Supervised Learning Using Semi-Definite Programming / Tijl De Bie ; Nello Cristianini / 7: |
Relaxing SVM Transduction / 7.1: |
An Approximation for Speedup / 7.2: |
General Semi-Supervised Learning Settings / 7.3: |
Empirical Results / 7.4: |
Summary and Outlook / 7.5: |
Appendix |
The Extended Schur Complement Lemma |
Gaussian Processes and the Null-Category Noise Model / Neil D. Lawrence ; Michael I. Jordan / 8: |
The Noise Model / 8.1: |
Process Model and the Effect of the Null-Category / 8.3: |
Posterior Inference and Prediction / 8.4: |
Results / 8.5: |
Discussion / 8.6: |
Entropy Regularization / Yves Grandvalet ; Yoshua Bengio / 9: |
Derivation of the Criterion / 9.1: |
Optimization Algorithms / 9.3: |
Related Methods / 9.4: |
Experiments / 9.5: |
Proof of Theorem 9.1 / 9.6: |
Data-Dependent Regularization / Adrian Corduneanu ; Tommi S. Jaakkola / 10: |
Information Regularization on Metric Spaces / 10.1: |
Information Regularization and Relational Data / 10.3: |
Graph-Based Methods / III: |
Label Propagation and Quadratic Criterion / Olivier Delalleau ; Nicolas Le Roux / 11: |
Label Propagation on a Similarity Graph / 11.1: |
Quadratic Cost Criterion / 11.3: |
From Transduction to Induction / 11.4: |
Incorporating Class Prior Knowledge / 11.5: |
Curse of Dimensionality for Semi-Supervised Learning / 11.6: |
The Geometric Basis of Semi-Supervised Learning / Vikas Sindhwani ; Misha Belkin ; Partha Niyogi / 12: |
Incorporating Geometry in Regularization / 12.1: |
Algorithms / 12.3: |
Data-Dependent Kernels for Semi-Supervised Learning / 12.4: |
Linear Methods for Large-Scale Semi-Supervised Learning / 12.5: |
Connections to Other Algorithms and Related Work / 12.6: |
Future Directions / 12.7: |
Discrete Regularization / Dengyong Zhou ; Bernhard Schölkopf / 13: |
Discrete Analysis / 13.1: |
Semi-Supervised Learning with Conditional Harmonic Mixing / Christopher J. C. Burges ; John C. Platt / 14: |
Conditional Harmonic Mixing / 14.1: |
Learning in CHM Models / 14.3: |
Incorporating Prior Knowledge / 14.4: |
Learning the Conditionals / 14.5: |
Model Averaging / 14.6: |
Change of Representation / IV: |
Graph Kernels by Spectral Transforms / Xiaojin Zhu ; Jaz Kandola ; John Lafferty ; Zoubin Ghahramani / 15: |
The Graph Laplacian / 15.1: |
Kernels by Spectral Transforms / 15.2: |
Kernel Alignment / 15.3: |
Optimizing Alignment Using QCQP for Semi-Supervised Learning / 15.4: |
Semi-Supervised Kernels with Order Constraints / 15.5: |
Spectral Methods for Dimensionality Reduction / Lawrence K. Saul ; Kilian Weinberger ; Fei Sha ; Jihun Ham / 16: |
Linear Methods / 16.1: |
Graph-Based Methods / 16.3: |
Kernel Methods / 16.4: |
Modifying Distances / Alon Orlitsky ; Sajama / 17: |
Estimating DBD Metrics / 17.1: |
Computing DBD Metrics / 17.3: |
Semi-Supervised Learning Using Density-Based Metrics / 17.4: |
Conclusions and Future Work / 17.5: |
Semi-Supervised Learning in Practice / V: |
Large-Scale Algorithms / 18: |
Cost Approximations / 18.1: |
Subset Selection / 18.3: |
Semi-Supervised Protein Classification Using Cluster Kernels / Jason Weston ; Christina Leslie ; Eugene Ie ; William Stafford Noble / 19: |
Representation and Kernels for Protein Sequences / 19.1: |
Semi-Supervised Kernels for Protein Sequences / 19.3: |
Prediction of Protein Function from Networks / Hyunjung Shin ; Koji Tsuda / 20: |
Graph-Based Semi-Supervised Learning / 20.1: |
Combining Multiple Graphs / 20.3: |
Experiments on Function Prediction of Proteins / 20.4: |
Conclusion and Outlook / 20.5: |
Analysis of Benchmarks / 21: |
The Benchmark / 21.1: |
Application of SSL Methods / 21.2: |
Results and Discussion / 21.3: |
Perspectives / VI: |
An Augmented PAC Model for Semi-Supervised Learning / Maria-Florina Balcan ; Avrim Blum / 22: |
A Formal Framework / 22.1: |
Sample Complexity Results / 22.3: |
Algorithmic Results / 22.4: |
Related Models and Discussion / 22.5: |
Metric-Based Approaches for Semi-Supervised Regression and Classification / Dale Schuurmans ; Finnegan Southey ; Dana Wilkinson ; Yuhong Guo / 23: |
Metric Structure of Supervised Learning / 23.1: |
Model Selection / 23.3: |
Regularization / 23.4: |
Classification / 23.5: |
Transductive Inference and Semi-Supervised Learning / Vladimir Vapnik / 24: |
Problem Settings / 24.1: |
Problem of Generalization in Inductive and Transductive Inference / 24.2: |
Structure of the VC Bounds and Transductive Inference / 24.3: |
The Symmetrization Lemma and Transductive Inference / 24.4: |
Bounds for Transductive Inference / 24.5: |
The Structural Risk Minimization Principle for Induction and Transduction / 24.6: |
Combinatorics in Transductive Inference / 24.7: |
Measures of Size of Equivalence Classes / 24.8: |
Algorithms for Inductive and Transductive SVMs / 24.9: |
Semi-Supervised Learning / 24.10: |
Conclusion: Transductive Inference and the New Problems of Inference / 24.11: |
Beyond Transduction: Selective Inference / 24.12: |
A Discussion of Semi-Supervised Learning and Transduction / 25: |
References |
Notation and Symbols |
Contributors |
Index |
Online Index |