Introduction / 1: |
What Is Artificial Intelligence? / 1.1: |
Brain Science and Problem Solving / 1.1.1: |
The Turing Test and Chatterbots / 1.1.2: |
The History of AI / 1.2: |
The First Beginnings / 1.2.1: |
Logic Solves (Almost) All Problems / 1.2.2: |
The New Connectionism / 1.2.3: |
Reasoning Under Uncertainty / 1.2.4: |
Distributed, Autonomous and Learning Agents / 1.2.5: |
AI Grows up / 1.2.6: |
Agents / 1.3: |
Knowledge-Based Systems / 1.4: |
Exercises / 1.5: |
Propositional Logic / 2: |
Syntax / 2.1: |
Semantics / 2.2: |
Proof Systems / 2.3: |
Resolution / 2.4: |
Horn Clauses / 2.5: |
Computability and Complexity / 2.6: |
Applications and Limitations / 2.7: |
First-order Predicate Logic / 3: |
Equality / 3.1: |
Quantifiers and Normal Forms / 3.3: |
Proof Calculi / 3.4: |
Resolution Strategies / 3.5: |
Automated Theorem Provers / 3.6: |
Mathematical Examples / 3.7: |
Applications / 3.8: |
Summary / 3.9: |
Limitations of Logic / 4: |
The Search Space Problem / 4.1: |
Decidability and Incompleteness / 4.2: |
The Flying Penguin / 4.3: |
Modeling Uncertainty / 4.4: |
Logic Programming with Prolog / 5: |
Prolog Systems and Implementations / 5.1: |
Simple Examples / 5.2: |
Execution Control and Procedural Elements / 5.3: |
Lists / 5.4: |
Self-modifying Programs / 5.5: |
A Planning Example / 5.6: |
Constraint Logic Programming / 5.7: |
Search, Games and Problem Solving / 6: |
Uninformed Search / 6.1: |
Breadth-First Search / 6.2.1: |
Depth-First Search / 6.2.2: |
Iterative Deepening / 6.2.3: |
Comparison / 6.2.4: |
Heuristic Search / 6.3: |
Greedy Search / 6.3.1: |
A*-Search / 6.3.2: |
IDA*-Search / 6.3.3: |
Empirical Comparison of the Search Algorithms / 6.3.4: |
Games with Opponents / 6.4: |
Minimax Search / 6.4.1: |
Alpha-Beta-Pruning / 6.4.2: |
Non-deterministic Games / 6.4.3: |
Heuristic Evaluation Functions / 6.5: |
Learning of Heuristics / 6.5.1: |
State of the Art / 6.6: |
Reasoning with Uncertainty / 7: |
Computing with Probabilities / 7.1: |
Conditional Probability / 7.1.1: |
The Principle of Maximum Entropy / 7.2: |
An Inference Rule for Probabilities / 7.2.1: |
Maximum Entropy Without Explicit Constraints / 7.2.2: |
Conditional Probability Versus Material Implication / 7.2.3: |
MaxEnt-Systems / 7.2.4: |
The Tweety Example / 7.2.5: |
Lexmed, an Expert System for Diagnosing Appendicitis / 7.3: |
Appendicitis Diagnosis with Formal Methods / 7.3.1: |
Hybrid Probabilistic Knowledge Base / 7.3.2: |
Application of Lexmed / 7.3.3: |
Function of Lexmed / 7.3.4: |
Risk Management Using the Cost Matrix / 7.3.5: |
Performance / 7.3.6: |
Application Areas and Experiences / 7.3.7: |
Reasoning with Bayesian Networks / 7.4: |
Independent Variables / 7.4.1: |
Graphical Representation of Knowledge as a Bayesian Network / 7.4.2: |
Conditional Independence / 7.4.3: |
Practical Application / 7.4.4: |
Software for Bayesian Networks / 7.4.5: |
Development of Bayesian Networks / 7.4.6: |
Semantics of Bayesian Networks / 7.4.7: |
Machine Learning and Data Mining / 8: |
Data Analysis / 8.1: |
The Perceptron, a Linear Classifier / 8.2: |
The Learning Rule / 8.2.1: |
Optimization and Outlook / 8.2.2: |
The Nearest Neighbor Method / 8.3: |
Two Classes, Many Classes, Approximation / 8.3.1: |
Distance Is Relevant / 8.3.2: |
Computation Times / 8.3.3: |
Summary and Outlook / 8.3.4: |
Case-Based Reasoning / 8.3.5: |
Decision Tree Learning / 8.4: |
A Simple Example / 8.4.1: |
Entropy as a Metric for Information Content / 8.4.2: |
Information Gain / 8.4.3: |
Application of C4.5 / 8.4.4: |
Learning of Appendicitis Diagnosis / 8.4.5: |
Continuous Attributes / 8.4.6: |
Pruning – Cutting the Tree / 8.4.7: |
Missing Values / 8.4.8: |
Learning of Bayesian Networks / 8.5: |
Learning the Network Structure / 8.5.1: |
The Naive Bayes Classifier / 8.6: |
Text Classification with Naive Bayes / 8.6.1: |
Clustering / 8.7: |
Distance Metrics / 8.7.1: |
k-Means and the EM Algorithm / 8.7.2: |
Hierarchical Clustering / 8.7.3: |
Data Mining in Practice / 8.8: |
The Data Mining Tool KNIME / 8.8.1: |
The Perceptron / 8.9: |
Nearest Neighbor Method / 8.10.3: |
Decision Trees / 8.10.4: |
Data Mining / 8.10.5: |
Neural Networks / 9: |
From Biology to Simulation / 9.1: |
The Mathematical Model / 9.1.1: |
Hopfield Networks / 9.2: |
Application to a Pattern Recognition Example / 9.2.1: |
Analysis / 9.2.2: |
Neural Associative Memory / 9.3: |
Correlation Matrix Memory / 9.3.1: |
The Pseudoinverse / 9.3.2: |
The Binary Hebb Rule / 9.3.3: |
A Spelling Correction Program / 9.3.4: |
Linear Networks with Minimal Errors / 9.4: |
Least Squares Method / 9.4.1: |
Application to the Appendicitis Data / 9.4.2: |
The Delta Rule / 9.4.3: |
Comparison to the Perceptron / 9.4.4: |
The Backpropagation Algorithm / 9.5: |
NETtalk: A Network Learns to Speak / 9.5.1: |
Learning of Heuristics for Theorem Provers / 9.5.2: |
Problems and Improvements / 9.5.3: |
Support Vector Machines / 9.6: |
Backpropagation / 9.7: |
Reinforcement Learning / 10: |
The Task / 10.1: |
Uninformed Combinatorial Search / 10.3: |
Value Iteration and Dynamic Programming / 10.4: |
A Learning Walking Robot and Its Simulation / 10.5: |
Q-Learning / 10.6: |
Q-Learning in a Nondeterministic Environment / 10.6.1: |
Exploration and Exploitation / 10.7: |
Approximation, Generalization and Convergence / 10.8: |
Curse of Dimensionality / 10.9: |
Solutions for the Exercises / 11: |
First-Order Predicate Logic / 11.1: |
Prolog / 11.4: |
References / 11.6: |
Index |