1.

eBook
Pierre M. Nugues, A. Bundy, Jörg Siekmann
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2006
Table of contents:
An Overview of Language Processing / 1:
Linguistics and Language Processing / 1.1:
Applications of Language Processing / 1.2:
The Different Domains of Language Processing / 1.3:
Phonetics / 1.4:
Lexicon and Morphology / 1.5:
Syntax / 1.6:
Syntax as Defined by Noam Chomsky / 1.6.1:
Syntax as Relations and Dependencies / 1.6.2:
Semantics / 1.7:
Discourse and Dialogue / 1.8:
Why Speech and Language Processing Are Difficult / 1.9:
Ambiguity / 1.9.1:
Models and Their Implementation / 1.9.2:
An Example of Language Technology in Action: the Persona Project / 1.10:
Overview of Persona / 1.10.1:
The Persona's Modules / 1.10.2:
Further Reading / 1.11:
Corpus Processing Tools / 2:
Corpora / 2.1:
Types of Corpora / 2.1.1:
Corpora and Lexicon Building / 2.1.2:
Corpora as Knowledge Sources for the Linguist / 2.1.3:
Finite-State Automata / 2.2:
A Description / 2.2.1:
Mathematical Definition of Finite-State Automata / 2.2.2:
Finite-State Automata in Prolog / 2.2.3:
Deterministic and Nondeterministic Automata / 2.2.4:
Building a Deterministic Automaton from a Nondeterministic One / 2.2.5:
Searching a String with a Finite-State Automaton / 2.2.6:
Operations on Finite-State Automata / 2.2.7:
Regular Expressions / 2.3:
Repetition Metacharacters / 2.3.1:
The Longest Match / 2.3.2:
Character Classes / 2.3.3:
Nonprintable Symbols or Positions / 2.3.4:
Union and Boolean Operators / 2.3.5:
Operator Combination and Precedence / 2.3.6:
Programming with Regular Expressions / 2.4:
Perl / 2.4.1:
Matching / 2.4.2:
Substitutions / 2.4.3:
Translating Characters / 2.4.4:
String Operators / 2.4.5:
Back References / 2.4.6:
Finding Concordances / 2.5:
Concordances in Prolog / 2.5.1:
Concordances in Perl / 2.5.2:
Approximate String Matching / 2.6:
Edit Operations / 2.6.1:
Minimum Edit Distance / 2.6.2:
Searching Edits in Prolog / 2.6.3:
Further Reading / 2.7:
Encoding, Entropy, and Annotation Schemes / 3:
Encoding Texts / 3.1:
Character Sets / 3.2:
Representing Characters / 3.2.1:
Unicode / 3.2.2:
The Unicode Encoding Schemes / 3.2.3:
Locales and Word Order / 3.3:
Presenting Time, Numerical Information, and Ordered Words / 3.3.1:
The Unicode Collation Algorithm / 3.3.2:
Markup Languages / 3.4:
A Brief Background / 3.4.1:
An Outline of XML / 3.4.2:
Writing a DTD / 3.4.3:
Writing an XML Document / 3.4.4:
Namespaces / 3.4.5:
Codes and Information Theory / 3.5:
Entropy / 3.5.1:
Huffman Encoding / 3.5.2:
Cross Entropy / 3.5.3:
Perplexity and Cross Perplexity / 3.5.4:
Entropy and Decision Trees / 3.6:
Decision Trees / 3.6.1:
Inducing Decision Trees Automatically / 3.6.2:
Further Reading / 3.7:
Counting Words / 4:
Counting Words and Word Sequences / 4.1:
Words and Tokens / 4.2:
What Is a Word? / 4.2.1:
Breaking a Text into Words: Tokenization / 4.2.2:
Tokenizing Texts / 4.3:
Tokenizing Texts in Prolog / 4.3.1:
Tokenizing Texts in Perl / 4.3.2:
N-grams / 4.4:
Some Definitions / 4.4.1:
Counting Unigrams in Prolog / 4.4.2:
Counting Unigrams with Perl / 4.4.3:
Counting Bigrams with Perl / 4.4.4:
Probabilistic Models of a Word Sequence / 4.5:
The Maximum Likelihood Estimation / 4.5.1:
Using ML Estimates with Nineteen Eighty-Four / 4.5.2:
Smoothing N-gram Probabilities / 4.6:
Sparse Data / 4.6.1:
Laplace's Rule / 4.6.2:
Good-Turing Estimation / 4.6.3:
Using N-grams of Variable Length / 4.7:
Linear Interpolation / 4.7.1:
Back-off / 4.7.2:
Quality of a Language Model / 4.8:
Intuitive Presentation / 4.8.1:
Entropy Rate / 4.8.2:
Perplexity / 4.8.3:
Collocations / 4.9:
Word Preference Measurements / 4.9.1:
Extracting Collocations with Perl / 4.9.2:
Application: Retrieval and Ranking of Documents on the Web / 4.10:
Further Reading / 4.11:
Words, Parts of Speech, and Morphology / 5:
Words / 5.1:
Parts of Speech / 5.1.1:
Features / 5.1.2:
Two Significant Parts of Speech: The Noun and the Verb / 5.1.3:
Lexicons / 5.2:
Encoding a Dictionary / 5.2.1:
Building a Trie in Prolog / 5.2.2:
Finding a Word in a Trie / 5.2.3:
Morphology / 5.3:
Morphemes / 5.3.1:
Morphs / 5.3.2:
Inflection and Derivation / 5.3.3:
Language Differences / 5.3.4:
Morphological Parsing / 5.4:
Two-Level Model of Morphology / 5.4.1:
Interpreting the Morphs / 5.4.2:
Finite-State Transducers / 5.4.3:
Conjugating a French Verb / 5.4.4:
Prolog Implementation / 5.4.5:
Operations on Finite-State Transducers / 5.4.6:
Morphological Rules / 5.5:
Two-Level Rules / 5.5.1:
Rules and Finite-State Transducers / 5.5.2:
Rule Composition: An Example with French Irregular Verbs / 5.5.3:
Application Examples / 5.6:
Further Reading / 5.7:
Part-of-Speech Tagging Using Rules / 6:
Resolving Part-of-Speech Ambiguity / 6.1:
A Manual Method / 6.1.1:
Which Method to Use to Automatically Assign Parts of Speech / 6.1.2:
Tagging with Rules / 6.2:
Brill's Tagger / 6.2.1:
Implementation in Prolog / 6.2.2:
Deriving Rules Automatically / 6.2.3:
Confusion Matrices / 6.2.4:
Unknown Words / 6.3:
Standardized Part-of-Speech Tagsets / 6.4:
Multilingual Part-of-Speech Tags / 6.4.1:
Parts of Speech for English / 6.4.2:
An Annotation Scheme for Swedish / 6.4.3:
Further Reading / 6.5:
Part-of-Speech Tagging Using Stochastic Techniques / 7:
The Noisy Channel Model / 7.1:
Presentation / 7.1.1:
The N-gram Approximation / 7.1.2:
Tagging a Sentence / 7.1.3:
The Viterbi Algorithm: An Intuitive Presentation / 7.1.4:
Markov Models / 7.2:
Markov Chains / 7.2.1:
Hidden Markov Models / 7.2.2:
Three Fundamental Algorithms to Solve Problems with HMMs / 7.2.3:
The Forward Procedure / 7.2.4:
Viterbi Algorithm / 7.2.5:
The Backward Procedure / 7.2.6:
The Forward-Backward Algorithm / 7.2.7:
Tagging with Decision Trees / 7.3:
An Application of the Noisy Channel Model: Spell Checking / 7.4:
A Second Application: Language Models for Machine Translation / 7.6:
Parallel Corpora / 7.6.1:
Alignment / 7.6.2:
Translation / 7.6.3:
Further Reading / 7.7:
Phrase-Structure Grammars in Prolog / 8:
Using Prolog to Write Phrase-Structure Grammars / 8.1:
Representing Chomsky's Syntactic Formalism in Prolog / 8.2:
Constituents / 8.2.1:
Tree Structures / 8.2.2:
Phrase-Structure Rules / 8.2.3:
The Definite Clause Grammar (DCG) Notation / 8.2.4:
Parsing with DCGs / 8.3:
Translating DCGs into Prolog Clauses / 8.3.1:
Parsing and Generation / 8.3.2:
Left-Recursive Rules / 8.3.3:
Parsing Ambiguity / 8.4:
Using Variables / 8.5:
Gender and Number Agreement / 8.5.1:
Obtaining the Syntactic Structure / 8.5.2:
Application: Tokenizing Texts Using DCG Rules / 8.6:
Word Breaking / 8.6.1:
Recognition of Sentence Boundaries / 8.6.2:
Semantic Representation / 8.7:
λ-Calculus / 8.7.1:
Embedding λ-Expressions into DCG Rules / 8.7.2:
Semantic Composition of Verbs / 8.7.3:
An Application of Phrase-Structure Grammars and a Worked Example / 8.8:
Further Reading / 8.9:
Partial Parsing / 9:
Is Syntax Necessary? / 9.1:
Word Spotting and Template Matching / 9.2:
ELIZA / 9.2.1:
Word Spotting in Prolog / 9.2.2:
Multiword Detection / 9.3:
Multiwords / 9.3.1:
A Standard Multiword Annotation / 9.3.2:
Detecting Multiwords with Rules / 9.3.3:
Running the Program / 9.3.4:
Noun Groups and Verb Groups / 9.4:
Groups Versus Recursive Phrases / 9.4.1:
DCG Rules to Detect Noun Groups / 9.4.2:
DCG Rules to Detect Verb Groups / 9.4.3:
Running the Rules / 9.4.4:
Group Detection as a Tagging Problem / 9.5:
Tagging Gaps / 9.5.1:
Tagging Words / 9.5.2:
Using Symbolic Rules / 9.5.3:
Using Statistical Tagging / 9.5.4:
Cascading Partial Parsers / 9.6:
Elementary Analysis of Grammatical Functions / 9.7:
Main Functions / 9.7.1:
Extracting Other Groups / 9.7.2:
An Annotation Scheme for Groups in French / 9.8:
Application: The FASTUS System / 9.9:
The Message Understanding Conferences / 9.9.1:
The Syntactic Layers of the FASTUS System / 9.9.2:
Evaluation of Information Extraction Systems / 9.9.3:
Further Reading / 9.10:
Syntactic Formalisms / 10:
Introduction / 10.1:
Chomsky's Grammar in Syntactic Structures / 10.2:
Constituency: A Formal Definition / 10.2.1:
Transformations / 10.2.2:
Transformations and Movements / 10.2.3:
Gap Threading / 10.2.4:
Gap Threading to Parse Relative Clauses / 10.2.5:
Standardized Phrase Categories for English / 10.3:
Unification-Based Grammars / 10.4:
Representing Features in Prolog / 10.4.1:
A Formalism for Features and Rules / 10.4.3:
Features Organization / 10.4.4:
Features and Unification / 10.4.5:
A Unification Algorithm for Feature Structures / 10.4.6:
Dependency Grammars / 10.5:
Properties of a Dependency Graph / 10.5.1:
Valence / 10.5.3:
Dependencies and Functions / 10.5.4:
Further Reading / 10.6:
Parsing Techniques / 11:
Bottom-up Parsing / 11.2:
The Shift-Reduce Algorithm / 11.2.1:
Implementing Shift-Reduce Parsing in Prolog / 11.2.2:
Differences Between Bottom-up and Top-down Parsing / 11.2.3:
Chart Parsing / 11.3:
Backtracking and Efficiency / 11.3.1:
Structure of a Chart / 11.3.2:
The Active Chart / 11.3.3:
Modules of an Earley Parser / 11.3.4:
The Earley Algorithm in Prolog / 11.3.5:
The Earley Parser to Handle Left-Recursive Rules and Empty Symbols / 11.3.6:
Probabilistic Parsing of Context-Free Grammars / 11.4:
A Description of PCFGs / 11.5:
The Bottom-up Chart / 11.5.1:
The Cocke-Younger-Kasami Algorithm in Prolog / 11.5.2:
Adding Probabilities to the CYK Parser / 11.5.3:
Parser Evaluation / 11.6:
Constituency-Based Evaluation / 11.6.1:
Dependency-Based Evaluation / 11.6.2:
Performance of PCFG Parsing / 11.6.3:
Parsing Dependencies / 11.7:
Dependency Rules / 11.7.1:
Extending the Shift-Reduce Algorithm to Parse Dependencies / 11.7.2:
Nivre's Parser in Prolog / 11.7.3:
Finding Dependencies Using Constraints / 11.7.4:
Parsing Dependencies Using Statistical Techniques / 11.7.5:
Further Reading / 11.8:
Semantics and Predicate Logic / 12:
Language Meaning and Logic: An Illustrative Example / 12.1:
Formal Semantics / 12.3:
First-Order Predicate Calculus to Represent the State of Affairs / 12.4:
Variables and Constants / 12.4.1:
Predicates / 12.4.2:
Querying the Universe of Discourse / 12.5:
Mapping Phrases onto Logical Formulas / 12.6:
Representing Nouns and Adjectives / 12.6.1:
Representing Noun Groups / 12.6.2:
Representing Verbs and Prepositions / 12.6.3:
The Case of Determiners / 12.7:
Determiners and Logic Quantifiers / 12.7.1:
Translating Sentences Using Quantifiers / 12.7.2:
A General Representation of Sentences / 12.7.3:
Compositionality to Translate Phrases to Logical Forms / 12.8:
Translating the Noun Phrase / 12.8.1:
Translating the Verb Phrase / 12.8.2:
Augmenting the Database and Answering Questions / 12.9:
Declarations / 12.9.1:
Questions with Existential and Universal Quantifiers / 12.9.2:
Prolog and Unknown Predicates / 12.9.3:
Other Determiners and Questions / 12.9.4:
Application: The Spoken Language Translator / 12.10:
Translating Spoken Sentences / 12.10.1:
Compositional Semantics / 12.10.2:
Semantic Representation Transfer / 12.10.3:
Further Reading / 12.11:
Lexical Semantics / 13:
Beyond Formal Semantics / 13.1:
La langue et la parole / 13.1.1:
Language and the Structure of the World / 13.1.2:
Lexical Structures / 13.2:
Some Basic Terms and Concepts / 13.2.1:
Ontological Organization / 13.2.2:
Lexical Classes and Relations / 13.2.3:
Semantic Networks / 13.2.4:
Building a Lexicon / 13.3:
The Lexicon and Word Senses / 13.3.1:
Verb Models / 13.3.2:
Definitions / 13.3.3:
An Example of Exhaustive Lexical Organization: WordNet / 13.4:
Nouns / 13.4.1:
Adjectives / 13.4.2:
Verbs / 13.4.3:
Automatic Word Sense Disambiguation / 13.5:
Senses as Tags / 13.5.1:
Associating a Word with a Context / 13.5.2:
Guessing the Topic / 13.5.3:
Naive Bayes / 13.5.4:
Using Constraints on Verbs / 13.5.5:
Using Dictionary Definitions / 13.5.6:
An Unsupervised Algorithm to Tag Senses / 13.5.7:
Senses and Languages / 13.5.8:
Case Grammars / 13.6:
Cases in Latin / 13.6.1:
Cases and Thematic Roles / 13.6.2:
Parsing with Cases / 13.6.3:
Semantic Grammars / 13.6.4:
Extending Case Grammars / 13.7:
FrameNet / 13.7.1:
A Statistical Method to Identify Semantic Roles / 13.7.2:
An Example of Case Grammar Application: EVAR / 13.8:
EVAR's Ontology and Syntactic Classes / 13.8.1:
Cases in EVAR / 13.8.2:
Further Reading / 13.9:
Discourse / 14:
Discourse: A Minimalist Definition / 14.1:
A Description of Discourse / 14.2.1:
Discourse Entities / 14.2.2:
References: An Application-Oriented View / 14.3:
References and Noun Phrases / 14.3.1:
Finding Names - Proper Nouns / 14.3.2:
Coreference / 14.4:
Anaphora / 14.4.1:
Solving Coreferences in an Example / 14.4.2:
A Standard Coreference Annotation / 14.4.3:
References: A More Formal View / 14.5:
Generating Discourse Entities: The Existential Quantifier / 14.5.1:
Retrieving Discourse Entities: Definite Descriptions / 14.5.2:
Generating Discourse Entities: The Universal Quantifier / 14.5.3:
Centering: A Theory on Discourse Structure / 14.6:
Solving Coreferences / 14.7:
A Simplistic Method: Using Syntactic and Semantic Compatibility / 14.7.1:
Solving Coreferences with Shallow Grammatical Information / 14.7.2:
Salience in a Multimodal Context / 14.7.3:
Using a Machine-Learning Technique to Resolve Coreferences / 14.7.4:
More Complex Phenomena: Ellipses / 14.7.5:
Discourse and Rhetoric / 14.8:
Ancient Rhetoric: An Outline / 14.8.1:
Rhetorical Structure Theory / 14.8.2:
Types of Relations / 14.8.3:
Implementing Rhetorical Structure Theory / 14.8.4:
Events and Time / 14.9:
Events / 14.9.1:
Event Types / 14.9.2:
Temporal Representation of Events / 14.9.3:
Events and Tenses / 14.9.4:
TimeML, an Annotation Scheme for Time and Events / 14.10:
Further Reading / 14.11:
Dialogue / 15:
Why a Dialogue? / 15.1:
Simple Dialogue Systems / 15.3:
Dialogue Systems Based on Automata / 15.3.1:
Dialogue Modeling / 15.3.2:
Speech Acts: A Theory of Language Interaction / 15.4:
Speech Acts and Human-Machine Dialogue / 15.5:
Speech Acts as a Tagging Model / 15.5.1:
Speech Acts Tags Used in the SUNDIAL Project / 15.5.2:
Dialogue Parsing / 15.5.3:
Interpreting Speech Acts / 15.5.4:
EVAR: A Dialogue Application Using Speech Acts / 15.5.5:
Taking Beliefs and Intentions into Account / 15.6:
Representing Mental States / 15.6.1:
The STRIPS Planning Algorithm / 15.6.2:
Causality / 15.6.3:
Further Reading / 15.7:
An Introduction to Prolog / A:
A Short Background / A.1:
Basic Features of Prolog / A.2:
Facts / A.2.1:
Terms / A.2.2:
Queries / A.2.3:
Logical Variables / A.2.4:
Shared Variables / A.2.5:
Data Types in Prolog / A.2.6:
Rules / A.2.7:
Running a Program / A.3:
Unification / A.4:
Substitution and Instances / A.4.1:
Terms and Unification / A.4.2:
The Herbrand Unification Algorithm / A.4.3:
Example / A.4.4:
The Occurs-Check / A.4.5:
Resolution / A.5:
Modus Ponens / A.5.1:
A Resolution Algorithm / A.5.2:
Derivation Trees and Backtracking / A.5.3:
Tracing and Debugging / A.6:
Cuts, Negation, and Related Predicates / A.7:
Cuts / A.7.1:
Negation / A.7.2:
The once/1 Predicate / A.7.3:
Lists / A.8:
Some List-Handling Predicates / A.9:
The member/2 Predicate / A.9.1:
The append/3 Predicate / A.9.2:
The delete/3 Predicate / A.9.3:
The intersection/3 Predicate / A.9.4:
The reverse/2 Predicate / A.9.5:
The Mode of an Argument / A.9.6:
Operators and Arithmetic / A.10:
Operators / A.10.1:
Arithmetic Operations / A.10.2:
Comparison Operators / A.10.3:
Lists and Arithmetic: The length/2 Predicate / A.10.4:
Lists and Comparison: The quicksort/2 Predicate / A.10.5:
Some Other Built-in Predicates / A.11:
Type Predicates / A.11.1:
Term Manipulation Predicates / A.11.2:
Handling Run-Time Errors and Exceptions / A.12:
Dynamically Accessing and Updating the Database / A.13:
Accessing a Clause: The clause/2 Predicate / A.13.1:
Dynamic and Static Predicates / A.13.2:
Adding a Clause: The asserta/1 and assertz/1 Predicates / A.13.3:
Removing Clauses: The retract/1 and abolish/2 Predicates / A.13.4:
Handling Unknown Predicates / A.13.5:
All-Solutions Predicates / A.14:
Fundamental Search Algorithms / A.15:
Representing the Graph / A.15.1:
Depth-First Search / A.15.2:
Breadth-First Search / A.15.3:
A* Search / A.15.4:
Input/Output / A.16:
Reading and Writing Characters with Edinburgh Prolog / A.16.1:
Reading and Writing Terms with Edinburgh Prolog / A.16.2:
Opening and Closing Files with Edinburgh Prolog / A.16.3:
Reading and Writing Characters with Standard Prolog / A.16.4:
Reading and Writing Terms with Standard Prolog / A.16.5:
Opening and Closing Files with Standard Prolog / A.16.6:
Writing Loops / A.16.7:
Developing Prolog Programs / A.17:
Presentation Style / A.17.1:
Improving Programs / A.17.2:
Index
References
2.

eBook
Luc De Raedt, Jörg Siekmann, A. Bundy
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
Structure Activity Relationship Prediction / 1.2.1:
A Web Mining Example / 1.2.2:
A Language Learning Example / 1.2.3:
How Does Relational and Logical Learning Work? / 1.3:
A Brief History / 1.4:
An Introduction to Logic / 2:
A Relational Database Example / 2.1:
The Syntax of Clausal Logic / 2.2:
The Semantics of Clausal Logic - Model Theory / 2.3:
Inference with Clausal Logic - Proof Theory / 2.4:
Prolog and SLD-resolution / 2.5:
Historical and Bibliographic Remarks / 2.6:
An Introduction to Learning and Search / 3:
Representing Hypotheses and Instances / 3.1:
Boolean Data / 3.2:
Machine Learning / 3.3:
Data Mining / 3.4:
A Generate-and-Test Algorithm / 3.5:
Structuring the Search Space / 3.6:
Monotonicity / 3.7:
Borders / 3.8:
Refinement Operators / 3.9:
A Generic Algorithm for Mining and Learning / 3.10:
A Complete General-to-Specific Algorithm / 3.11:
A Heuristic General-to-Specific Algorithm / 3.12:
A Branch-and-Bound Algorithm / 3.13:
A Specific-to-General Algorithm / 3.14:
Working with Borders* / 3.15:
Computing a Single Border / 3.15.1:
Computing Two Borders / 3.15.2:
Computing Two Borders Incrementally / 3.15.3:
Operations on Borders / 3.15.4:
Conclusions / 3.16:
Bibliographical Notes / 3.17:
Representations for Mining and Learning / 4:
Representing Data and Hypotheses / 4.1:
Attribute-Value Learning / 4.2:
Multiple-Instance Learning: Dealing With Sets / 4.3:
Relational Learning / 4.4:
Logic Programs / 4.5:
Sequences, Lists, and Grammars / 4.6:
Trees and Terms / 4.7:
Graphs / 4.8:
Background Knowledge / 4.9:
Designing It Yourself / 4.10:
A Hierarchy of Representations* / 4.11:
From AV to BL / 4.11.1:
From MI to AV / 4.11.2:
From RL to MI / 4.11.3:
From LP to RL / 4.11.4:
Propositionalization / 4.12:
A Table-Based Approach / 4.12.1:
A Query-Based Approach / 4.12.2:
Aggregation / 4.13:
Historical and Bibliographical Remarks / 4.14:
Generality and Logical Entailment / 5:
Generality and Logical Entailment Coincide / 5.1:
Propositional Subsumption / 5.2:
Subsumption in Logical Atoms / 5.3:
Specialization Operators / 5.3.1:
Generalization Operators* / 5.3.2:
Computing the lgg and the glb / 5.3.3:
θ-Subsumption / 5.4:
Soundness and Completeness / 5.4.1:
Deciding θ-Subsumption / 5.4.2:
Equivalence Classes / 5.4.3:
Variants of θ-Subsumption* / 5.5:
Object Identity* / 5.5.1:
Inverse Implication* / 5.5.2:
Using Background Knowledge / 5.6:
Saturation and Bottom Clauses / 5.6.1:
Relative Least General Generalization* / 5.6.2:
Semantic Refinement* / 5.6.3:
Aggregation* / 5.7:
Inverse Resolution / 5.8:
A Note on Graphs, Trees, and Sequences / 5.9:
Bibliographic Notes / 5.10:
The Upgrading Story / 6:
Motivation for a Methodology / 6.1:
Methodological Issues / 6.2:
Representing the Examples / 6.2.1:
Representing the Hypotheses / 6.2.2:
Adapting the Algorithm / 6.2.3:
Adding Features / 6.2.4:
Case Study 1: Rule Learning and Foil / 6.3:
Foil's Problem Setting / 6.3.1:
Foil's Algorithm / 6.3.2:
Case Study 2: Decision Tree Learning and Tilde / 6.4:
The Problem Setting / 6.4.1:
Inducing Logical Decision Trees / 6.4.2:
Case Study 3: Frequent Item-Set Mining and Warmr / 6.5:
Relational Association Rules and Local Patterns / 6.5.1:
Computing Frequent Queries / 6.5.2:
Language Bias / 6.6:
Syntactic Bias / 6.6.1:
Semantic Bias / 6.6.2:
Inducing Theories / 7:
Introduction to Theory Revision / 7.1:
Theories and Model Inference / 7.1.1:
Theory Revision / 7.1.2:
Overview of the Rest of This Chapter / 7.1.3:
Towards Abductive Logic Programming / 7.2:
Abduction / 7.2.1:
Integrity Constraints / 7.2.2:
Abductive Logic Programming / 7.2.3:
Shapiro's Theory Revision System / 7.3:
Interaction / 7.3.1:
The Model Inference System / 7.3.2:
Two Propositional Theory Revision Systems* / 7.4:
Learning a Propositional Horn Theory Efficiently / 7.4.1:
Heuristic Search in Theory Revision / 7.4.2:
Inducing Constraints / 7.5:
Problem Specification / 7.5.1:
An Algorithm for Inducing Integrity Constraints / 7.5.2:
Probabilistic Logic Learning / 8:
Probability Theory Review / 8.1:
Probabilistic Logics / 8.2:
Probabilities on Interpretations / 8.2.1:
Probabilities on Proofs / 8.2.2:
Probabilistic Learning / 8.3:
Parameter Estimation / 8.3.1:
Structure Learning / 8.3.2:
First-Order Probabilistic Logics / 8.4:
Probabilistic Interpretations / 8.4.1:
Probabilistic Proofs / 8.4.2:
Learning from Interpretations / 8.5.1:
Learning from Entailment / 8.5.2:
Learning from Proof Trees and Traces / 8.5.3:
Relational Reinforcement Learning* / 8.6:
Markov Decision Processes / 8.6.1:
Solving Markov Decision Processes / 8.6.2:
Relational Markov Decision Processes / 8.6.3:
Solving Relational Markov Decision Processes / 8.6.4:
Kernels and Distances for Structured Data / 9:
A Simple Kernel and Distance / 9.1:
Kernel Methods / 9.2:
The Max Margin Approach / 9.2.1:
Support Vector Machines / 9.2.2:
The Kernel Trick / 9.2.3:
Distance-Based Learning / 9.3:
Distance Functions / 9.3.1:
The k-Nearest Neighbor Algorithm / 9.3.2:
The k-Means Algorithm / 9.3.3:
Kernels for Structured Data / 9.4:
Convolution and Decomposition / 9.4.1:
Vectors and Tuples / 9.4.2:
Sets and Multi-sets / 9.4.3:
Strings / 9.4.4:
Trees and Atoms / 9.4.5:
Graph Kernels* / 9.4.6:
Distances and Metrics / 9.5:
Generalization and Metrics / 9.5.1:
Sets / 9.5.2:
Atoms and Trees / 9.5.4:
Relational Kernels and Distances / 9.6:
Bibliographical and Historical Notes / 9.7:
Computational Aspects of Logical and Relational Learning / 10:
Efficiency of Relational Learning / 10.1:
Coverage as θ-Subsumption / 10.1.1:
θ-Subsumption Empirically / 10.1.2:
Optimizing the Learner for θ-Subsumption / 10.1.3:
Computational Learning Theory* / 10.2:
Notions of Learnability / 10.2.1:
Positive Results / 10.2.2:
Negative Results / 10.2.3:
Historical and Bibliographic Notes / 10.3:
Lessons Learned / 11:
A Hierarchy of Representations / 11.1:
From Upgrading to Downgrading / 11.2:
Propositionalization and Aggregation / 11.3:
Learning Tasks / 11.4:
Operators and Generality / 11.5:
Unification and Variables / 11.6:
Three Learning Settings / 11.7:
Knowledge and Background Knowledge / 11.8:
Applications / 11.9:
References
Author Index
Index
3.

eBook
Vicenç Torra, Dov M. Gabbay, Yasuo Narukawa, Jörg Siekmann, A. Bundy
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2007
4.

eBook
Poramate Manoonpong, Dov M. Gabbay, Jörg Siekmann, A. Bundy
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2007
Table of contents:
Introduction
Biologically Inspired Perception-Action Systems
Neural Concepts and Modelling
Physical Sensors and Walking Machine Platforms
Artificial Perception-Action Systems
Performance of Artificial Perception-Action Systems
Conclusions
Description of the Reactive Walking Machines
Symbols and Acronyms
References
Index