1.

eBook
Gian Piero Zarri
Publication: Springer London, 2009 (Springer eBooks Computer Science)
Table of contents:
Basic Principles / 1:
Narrative Information in an NKRL Context / 1.1:
Narratology and NKRL / 1.1.1:
The Notion of "Event" in an NKRL Context / 1.1.2:
Knowledge Representation and NKRL / 1.2:
"Standard" Ontologies and the "n-ary" Problem / 1.2.1:
A Plain "n-ary" Solution and Some Related Problems / 1.2.2:
In the Guise of Winding Up / 1.3:
The Knowledge Representation Strategy / 2:
Architecture of NKRL: the Four "Components" / 2.1:
The Data Structures of the Four Components / 2.2:
Definitional/Enumerative Data Structures / 2.2.1:
Descriptive/Factual Data Structures / 2.2.2:
Second-order Structures / 2.3:
The Completive Construction / 2.3.1:
Binding Occurrences / 2.3.2:
The Semantic and Ontological Contents / 2.4:
The Organization of the HClass Hierarchy / 3.1:
General Notions about Ontologies / 3.1.1:
HClass Architecture / 3.1.2:
The Organization of the HTemp Hierarchy / 3.2:
Recent Examples of "Structured" Ontological Systems / 3.2.1:
Main Features of Some Specific HTemp Structures / 3.2.2:
The Query and Inference Procedures / 3.3:
"Search Patterns" and Low-level Inferences / 4.1:
The Algorithmic Structure of Fum / 4.1.1:
Temporal Information and Indexing / 4.1.2:
High-level Inference Procedures / 4.2:
General Remarks about Some Reasoning Paradigms / 4.2.1:
Hypothesis Rules / 4.2.2:
Transformation Rules / 4.2.3:
Integrating the Two Main Inferencing Modes of NKRL / 4.2.4:
Inference Rules and Internet Filtering / 4.2.5:
Conclusion / 4.3:
Technological Enhancements / 5.1:
Theoretical Enhancements / 5.2:
Appendix A
Appendix B
References
Index
2.

eBook
Gian Piero Zarri
Publication: Springer London, 2009 (SpringerLink Books - AutoHoldings)
Table of contents: identical to no. 1 above.
3.

eBook
Kathryn E. Merrick, Mary Lou Maher
Publication: Springer Berlin Heidelberg, 2009 (Springer eBooks Computer Science)
Table of contents:
Non-Player Characters and Reinforcement Learning / Part I:
Non-Player Characters in Multiuser Games / 1:
Types of Multiuser Games / 1.1:
Massively Multiplayer Online Role-Playing Games / 1.1.1:
Multiuser Simulation Games / 1.1.2:
Open-Ended Virtual Worlds / 1.1.3:
Character Roles in Multiuser Games / 1.2:
Existing Artificial Intelligence Techniques for Non-Player Characters in Multiuser Games / 1.3:
Reflexive Agents / 1.3.1:
Learning Agents / 1.3.2:
Evolutionary Agents / 1.3.3:
Smart Terrain / 1.3.4:
Summary / 1.4:
References / 1.5:
Motivation in Natural and Artificial Agents / 2:
Defining Motivation / 2.1:
Biological Theories of Motivation / 2.2:
Drive Theory / 2.2.1:
Motivational State Theory / 2.2.2:
Arousal / 2.2.3:
Cognitive Theories of Motivation / 2.3:
Curiosity / 2.3.1:
Operant Theory / 2.3.2:
Incentive / 2.3.3:
Achievement Motivation / 2.3.4:
Attribution Theory / 2.3.5:
Intrinsic Motivation / 2.3.6:
Social Theories of Motivation / 2.4:
Conformity / 2.4.1:
Cultural Effect / 2.4.2:
Evolution / 2.4.3:
Combined Motivation Theories / 2.5:
Maslow's Hierarchy of Needs / 2.5.1:
Existence Relatedness Growth Theory / 2.5.2:
Towards Motivated Reinforcement Learning / 2.6:
Defining Reinforcement Learning / 3.1:
Dynamic Programming / 3.1.1:
Monte Carlo Methods / 3.1.2:
Temporal Difference Learning / 3.1.3:
Reinforcement Learning in Complex Environments / 3.2:
Partially Observable Environments / 3.2.1:
Function Approximation / 3.2.2:
Hierarchical Reinforcement Learning / 3.2.3:
Motivated Reinforcement Learning / 3.3:
Using a Motivation Signal in Addition to a Reward Signal / 3.3.1:
Using a Motivation Signal Instead of a Reward Signal / 3.3.2:
Comparing the Behaviour of Learning Agents / 3.4:
Player Satisfaction / 4.1:
Psychological Flow / 4.1.1:
Structural Flow / 4.1.2:
Formalising Non-Player Character Behaviour / 4.2:
Models of Optimality for Reinforcement Learning / 4.2.1:
Characteristics of Motivated Reinforcement Learning / 4.2.2:
Comparing Motivated Reinforcement Learning Agents / 4.3:
Statistical Model for Identifying Learned Tasks / 4.3.1:
Behavioural Variety / 4.3.2:
Behavioural Complexity / 4.3.3:
Developing Curious Characters Using Motivated Reinforcement Learning / 4.4:
Curiosity, Motivation and Attention Focus / 5:
Agents in Complex, Dynamic Environments / 5.1:
States / 5.1.1:
Actions / 5.1.2:
Reward and Motivation / 5.1.3:
Motivation and Attention Focus / 5.2:
Observations / 5.2.1:
Events / 5.2.2:
Tasks and Task Selection / 5.2.3:
Experience-Based Reward as Cognitive Motivation / 5.2.4:
Arbitration Functions / 5.2.5:
A General Experience-Based Motivation Function / 5.2.6:
Curiosity as Motivation for Support Characters / 5.3:
Curiosity as Interesting Events / 5.3.1:
Curiosity as Interesting and Competence / 5.3.2:
Motivated Reinforcement Learning Agents / 5.4:
A General Motivated Reinforcement Learning Model / 6.1:
Algorithms for Motivated Reinforcement Learning / 6.2:
Motivated Flat Reinforcement Learning / 6.2.1:
Motivated Multioption Reinforcement Learning / 6.2.2:
Motivated Hierarchical Reinforcement Learning / 6.2.3:
Curious Characters in Games / 6.3:
Curious Characters for Multiuser Games / 7:
Motivated Reinforcement Learning for Support Characters in Massively Multiplayer Online Role-Playing Games / 7.1:
Character Behaviour in Small-Scale, Isolated Game Locations / 7.2:
Case Studies of Individual Characters / 7.2.1:
General Trends in Character Behaviour / 7.2.2:
Curious Characters for Games in Complex, Dynamic Environments / 7.3:
Designing Characters That Can Multitask / 8.1:
Designing Characters for Complex Tasks / 8.1.1:
Games That Change While Characters Are Learning / 8.2.1:
Curious Characters for Games in Second Life / 8.3.1:
Motivated Reinforcement Learning in Open-Ended Simulation Games / 9.1:
Game Design / 9.1.1:
Character Design / 9.1.2:
Evaluating Character Behaviour in Response to Game Play Sequences / 9.2:
Discussion / 9.2.1:
Future / 9.3:
Towards the Future / 10:
Using Motivated Reinforcement Learning in Non-Player Characters / 10.1:
Other Gaming Applications for Motivated Reinforcement Learning / 10.2:
Dynamic Difficulty Adjustment / 10.2.1:
Procedural Content Generation / 10.2.2:
Beyond Curiosity / 10.3:
Biological Models of Motivation / 10.3.1:
Cognitive Models of Motivation / 10.3.2:
Social Models of Motivation / 10.3.3:
Combined Models of Motivation / 10.3.4:
New Models of Motivated Learning / 10.4:
Motivated Supervised Learning / 10.4.1:
Motivated Unsupervised Learning / 10.4.2:
Evaluating the Behaviour of Motivated Learning Agents / 10.5:
Concluding Remarks / 10.6:
Index / 10.7:
4.

eBook
Kathryn E. Merrick, Mary Lou Maher
Publication: Springer Berlin Heidelberg, 2009 (SpringerLink Books - AutoHoldings)
Table of contents: identical to no. 3 above.
5.

eBook
Gabriele Puppis, Takeo Kanade
Publication: Springer Berlin Heidelberg, 2010 (Springer eBooks Computer Science)
Table of contents:
Introduction / 1:
Word Automata and Time Granularities / 2:
Background Knowledge / 2.1:
Words and Languages / 2.1.1:
Periodicity of Words / 2.1.2:
Word Automata / 2.1.3:
Time Granularities / 2.1.4:
The String-Based and Automaton-Based Approaches / 2.2:
The Granspec Formalism / 2.2.1:
From Granspecs to Single-String Automata / 2.2.2:
Counters and Multiple Transitions / 2.2.3:
The Logical Counterpart of RCSSA / 2.2.4:
Compact and Tractable Representations / 2.3:
Nested Repetitions of Words / 2.3.1:
Algorithms on NCSSA / 2.3.2:
Optimizing Representations / 2.3.3:
Reasoning on Sets of Granularities / 2.4:
Languages of Ultimately Periodic Words / 2.4.1:
Ultimately Periodic Automata / 2.4.2:
Algorithms on UPA / 2.4.3:
Applications to Time Granularity / 2.4.4:
Discussion / 2.5:
Tree Automata and Logics / 3:
Graphs and Trees / 3.1:
Tree Automata / 3.1.2:
Monadic Second-Order Logic / 3.1.3:
The Model Checking Problem / 3.1.4:
The Contraction Method for Tree Automata / 3.2:
Features and Types / 3.2.1:
Types and the Acceptance Problem / 3.2.2:
From Trees to Their Retractions / 3.2.3:
An Example / 3.2.4:
Tree Transformations / 3.3:
Tree Recolorings / 3.3.1:
Tree Substitutions / 3.3.2:
Tree Transducers / 3.3.3:
Inverse Substitutions / 3.3.4:
A Summary / 3.3.5:
The Class of Reducible Trees / 3.4:
Compositional Properties of Types / 3.4.1:
Closure Properties / 3.4.2:
Effectiveness of the Contraction Method / 3.5:
Reducible Trees and the Caucal Hierarchy / 3.5.1:
Two-Way Alternating Tree Automata / 3.5.2:
Morphic Trees / 3.5.3:
Layered Temporal Structures / 3.5.4:
Summary / 3.6:
Technical Proofs / A:
Proofs of Theorem 5 and Theorem 6 / A.1:
Proof of Theorem 8 / A.2:
Proof of Proposition 34 / A.3:
References
Notation
Index
6.

eBook
Gabriele Puppis, Takeo Kanade
Publication: Springer Berlin Heidelberg, 2010 (SpringerLink Books - AutoHoldings)
Table of contents: identical to no. 5 above.
7.

eBook
Milan Studený, Michael Jordan, Frank P. Kelly, Jon Kleinberg, Bernhard Schölkopf, Ian Witten
Publication: Springer London, 2005 (Springer eBooks Computer Science)
Table of contents:
Introduction / 1:
Motivational thoughts / 1.1:
Goals of the monograph / 1.2:
Structure of the book / 1.3:
Basic Concepts / 2:
Conditional independence / 2.1:
Semi-graphoid properties / 2.2:
Formal independence models / 2.2.1:
Semi-graphoids / 2.2.2:
Elementary independence statements / 2.2.3:
Problem of axiomatic characterization / 2.2.4:
Classes of probability measures / 2.3:
Marginally continuous measures / 2.3.1:
Factorizable measures / 2.3.2:
Multiinformation and conditional product / 2.3.3:
Properties of multiinformation function / 2.3.4:
Positive measures / 2.3.5:
Gaussian measures / 2.3.6:
Basic construction / 2.3.7:
Imsets / 2.4:
Graphical Methods / 3:
Undirected graphs / 3.1:
Acyclic directed graphs / 3.2:
Classic chain graphs / 3.3:
Within classic graphical models / 3.4:
Decomposable models / 3.4.1:
Recursive causal graphs / 3.4.2:
Lattice conditional independence models / 3.4.3:
Bubble graphs / 3.4.4:
Advanced graphical models / 3.5:
General directed graphs / 3.5.1:
Reciprocal graphs / 3.5.2:
Joint-response chain graphs / 3.5.3:
Covariance graphs / 3.5.4:
Alternative chain graphs / 3.5.5:
Annotated graphs / 3.5.6:
Hidden variables / 3.5.7:
Ancestral graphs / 3.5.8:
MC graphs / 3.5.9:
Incompleteness of graphical approaches / 3.6:
Structural Imsets: Fundamentals / 4:
Basic class of distributions / 4.1:
Discrete measures / 4.1.1:
Regular Gaussian measures / 4.1.2:
Conditional Gaussian measures / 4.1.3:
Classes of structural imsets / 4.2:
Elementary imsets / 4.2.1:
Semi-elementary and combinatorial imsets / 4.2.2:
Structural imsets / 4.2.3:
Product formula induced by a structural imset / 4.3:
Examples of reference systems of measures / 4.3.1:
Topological assumptions / 4.3.2:
Markov condition / 4.4:
Semi-graphoid induced by a structural imset / 4.4.1:
Markovian measures / 4.4.2:
Equivalence result / 4.5:
Description of Probabilistic Models / 5:
Supermodular set functions / 5.1:
Semi-graphoid produced by a supermodular function / 5.1.1:
Quantitative equivalence of supermodular functions / 5.1.2:
Skeletal supermodular functions / 5.2:
Skeleton / 5.2.1:
Significance of skeletal imsets / 5.2.2:
Description of models by structural imsets / 5.3:
Galois connection / 5.4:
Formal concept analysis / 5.4.1:
Lattice of structural models / 5.4.2:
Equivalence and Implication / 6:
Two concepts of equivalence / 6.1:
Independence and Markov equivalence / 6.1.1:
Independence implication / 6.2:
Direct characterization of independence implication / 6.2.1:
Skeletal characterization of independence implication / 6.2.2:
Testing independence implication / 6.3:
Testing structural imsets / 6.3.1:
Grade / 6.3.2:
Invariants of independence equivalence / 6.4:
Adaptation to a distribution framework / 6.5:
The Problem of Representative Choice / 7:
Baricentral imsets / 7.1:
Standard imsets / 7.2:
Translation of DAG models / 7.2.1:
Translation of decomposable models / 7.2.2:
Imsets of the smallest degree / 7.3:
Decomposition implication / 7.3.1:
Minimal generators / 7.3.2:
Span / 7.4:
Determining and unimarginal classes / 7.4.1:
Imsets with the least lower class / 7.4.2:
Exclusivity of standard imsets / 7.4.3:
Dual description / 7.5:
Coportraits / 7.5.1:
Dual baricentral imsets and global view / 7.5.2:
Learning / 8:
Two approaches to learning / 8.1:
Quality criteria / 8.2:
Criteria for learning DAG models / 8.2.1:
Score equivalent criteria / 8.2.2:
Decomposable criteria / 8.2.3:
Regular criteria / 8.2.4:
Inclusion neighborhood / 8.3:
Standard imsets and learning / 8.4:
Inclusion neighborhood characterization / 8.4.1:
Regular criteria and standard imsets / 8.4.2:
Open Problems / 9:
Theoretical problems / 9.1:
Miscellaneous topics / 9.1.1:
Classification of skeletal imsets / 9.1.2:
Operations with structural models / 9.2:
Reductive operations / 9.2.1:
Expansive operations / 9.2.2:
Cumulative operations / 9.2.3:
Decomposition of structural models / 9.2.4:
Implementation tasks / 9.3:
Interpretation and learning tasks / 9.4:
Meaningful description of structural models / 9.4.1:
Tasks concerning distribution frameworks / 9.4.2:
Learning tasks / 9.4.3:
Appendix / A:
Classes of sets / A.1:
Posets and lattices / A.2:
Graphs / A.3:
Topological concepts / A.4:
Finite-dimensional subspaces and convex cones / A.5:
Linear subspaces / A.5.1:
Convex sets and cones / A.5.2:
Measure-theoretical concepts / A.6:
Measure and integral / A.6.1:
Basic measure-theoretical results / A.6.2:
Information-theoretical concepts / A.6.3:
Conditional probability / A.6.4:
Conditional independence in terms of σ-algebras / A.7:
Concepts from multivariate analysis / A.8:
Matrices / A.8.1:
Statistical characteristics of probability measures / A.8.2:
Multivariate Gaussian distributions / A.8.3:
Elementary statistical concepts / A.9:
Empirical concepts / A.9.1:
Statistical conception / A.9.2:
Likelihood function / A.9.3:
Testing statistical hypotheses / A.9.4:
Distribution framework / A.9.5:
List of Notation
List of Lemmas, Propositions, etc.
References
Index
8.

eBook
Milan Studený, Michael Jordan, Frank P. Kelly, Jon Kleinberg, Bernhard Schölkopf, Ian Witten
Publication: Springer London, 2005 (SpringerLink Books - AutoHoldings)
Table of contents: identical to no. 7 above.
9.

eBook
Daniel S. Yeung, Ian Cloete, Wing W. Y. Ng, Daming Shi
Publication: Springer Berlin Heidelberg, 2010 (Springer eBooks Computer Science)
Table of contents:
Introduction to Neural Networks / 1:
Properties of Neural Networks / 1.1:
Neural Network Learning / 1.2:
Supervised Learning / 1.2.1:
Unsupervised Learning / 1.2.2:
Perceptron / 1.3:
Adaline and Least Mean Square Algorithm / 1.4:
Multilayer Perceptron and Backpropagation Algorithm / 1.5:
Output Layer Learning / 1.5.1:
Hidden Layer Learning / 1.5.2:
Radial Basis Function Networks / 1.6:
Support Vector Machines / 1.7:
Principles of Sensitivity Analysis / 2:
Perturbations in Neural Networks / 2.1:
Neural Network Sensitivity Analysis / 2.2:
Fundamental Methods of Sensitivity Analysis / 2.3:
Geometrical Approach / 2.3.1:
Statistical Approach / 2.3.2:
Summary / 2.4:
Hyper-Rectangle Model / 3:
Hyper-Rectangle Model for Input Space of MLP / 3.1:
Sensitivity Measure of MLP / 3.2:
Discussion / 3.3:
Sensitivity Analysis with Parameterized Activation Function / 4:
Parameterized Antisymmetric Squashing Function / 4.1:
Sensitivity Measure / 4.2:
Localized Generalization Error Model / 4.3:
Introduction / 5.1:
The Localized Generalization Error Model / 5.2:
The Q-Neighborhood and Q-Union / 5.2.1:
The Localized Generalization Error Bound / 5.2.2:
Stochastic Sensitivity Measure for RBFNN / 5.2.3:
Characteristics of the Error Bound / 5.2.4:
Comparing Two Classifiers Using the Error Bound / 5.2.5:
Architecture Selection Using the Error Bound / 5.3:
Critical Vector Learning for RBF Networks / 5.3.1:
Related Work / 6.1:
Construction of RBF Networks with Sensitivity Analysis / 6.2:
RBF Classifiers' Sensitivity to the Kernel Function Centers / 6.2.1:
Orthogonal Least Square Transform / 6.2.2:
Critical Vector Selection / 6.2.3:
Sensitivity Analysis of Prior Knowledge / 6.3:
KBANNs / 7.1:
Inductive Bias / 7.2:
Sensitivity Analysis and Measures / 7.3:
Output-Pattern Sensitivity / 7.3.1:
Output-Weight Sensitivity / 7.3.2:
Output-H Sensitivity / 7.3.3:
Euclidean Distance / 7.3.4:
Promoter Recognition / 7.4:
Data and Initial Domain Theory / 7.4.1:
Experimental Methodology / 7.4.2:
Discussion and Conclusion / 7.5:
Applications / 8:
Input Dimension Reduction / 8.1:
Sensitivity Matrix / 8.1.1:
Criteria for Pruning Inputs / 8.1.2:
Network Optimization / 8.2:
Selective Learning / 8.3:
Hardware Robustness / 8.4:
Measure of Nonlinearity / 8.5:
Parameter Tuning for Neocognitron / 8.6:
Receptive Field / 8.6.1:
Selectivity / 8.6.2:
Sensitivity Analysis of the Neocognitron / 8.6.3:
Bibliography
10.

eBook
Daniel S. Yeung, Ian Cloete, Wing W. Y. Ng, Daming Shi
Publication: Springer Berlin Heidelberg, 2010 (SpringerLink Books - AutoHoldings)
目次情報: 続きを見る
Introduction to Neural Networks / 1:
Properties of Neural Networks / 1.1:
Neural Network Learning / 1.2:
Supervised Learning / 1.2.1:
Unsupervised Learning / 1.2.2:
Perceptron / 1.3:
Adaline and Least Mean Square Algorithm / 1.4:
Multilayer Perceptron and Backpropagation Algorithm / 1.5:
Output Layer Learning / 1.5.1:
Hidden Layer Learning / 1.5.2:
Radial Basis Function Networks / 1.6:
Support Vector Machines / 1.7:
Principles of Sensitivity Analysis / 2:
Perturbations in Neural Networks / 2.1:
Neural Network Sensitivity Analysis / 2.2:
Fundamental Methods of Sensitivity Analysis / 2.3:
Geometrical Approach / 2.3.1:
Statistical Approach / 2.3.2:
Summary / 2.4:
Hyper-Rectangle Model / 3:
Hyper-Rectangle Model for Input Space of MLP / 3.1:
Sensitivity Measure of MLP / 3.2:
Discussion / 3.3:
Sensitivity Analysis with Parameterized Activation Function / 4:
Parameterized Antisymmetric Squashing Function / 4.1:
Sensitivity Measure / 4.2:
Localized Generalization Error Model / 4.3:
Introduction / 5.1:
The Localized Generalization Error Model / 5.2:
The Q-Neighborhood and Q-Union / 5.2.1:
The Localized Generalization Error Bound / 5.2.2:
Stochastic Sensitivity Measure for RBFNN / 5.2.3:
Characteristics of the Error Bound / 5.2.4:
Comparing Two Classifiers Using the Error Bound / 5.2.5:
Architecture Selection Using the Error Bound / 5.3:
Critical Vector Learning for RBF Networks / 5.3.1:
Related Work / 6.1:
Construction of RBF Networks with Sensitivity Analysis / 6.2:
RBF Classifiers' Sensitivity to the Kernel Function Centers / 6.2.1:
Orthogonal Least Square Transform / 6.2.2:
Critical Vector Selection / 6.2.3:
Sensitivity Analysis of Prior Knowledge / 6.3:
KBANNs / 7.1:
Inductive Bias / 7.2:
Sensitivity Analysis and Measures / 7.3:
Output-Pattern Sensitivity / 7.3.1:
Output-Weight Sensitivity / 7.3.2:
Output-H Sensitivity / 7.3.3:
Euclidean Distance / 7.3.4:
Promoter Recognition / 7.4:
Data and Initial Domain Theory / 7.4.1:
Experimental Methodology / 7.4.2:
Discussion and Conclusion / 7.5:
Applications / 8:
Input Dimension Reduction / 8.1:
Sensitivity Matrix / 8.1.1:
Criteria for Pruning Inputs / 8.1.2:
Network Optimization / 8.2:
Selective Learning / 8.3:
Hardware Robustness / 8.4:
Measure of Nonlinearity / 8.5:
Parameter Tuning for Neocognitron / 8.6:
Receptive Field / 8.6.1:
Selectivity / 8.6.2:
Sensitivity Analysis of the Neocognitron / 8.6.3:
Bibliography
11.

eBook

EB
Carsten Ullrich, Jörg Siekmann
Publication: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Preliminaries / Part I:
Introduction / 1:
Motivation / 1.1:
Contributions / 1.2:
Service-Oriented Course Generation / 1.2.1:
Modeling of Pedagogical Knowledge / 1.2.2:
Adaptivity in Generated Courses / 1.2.3:
Evaluation / 1.2.4:
Overview / 1.3:
Relevant Technologies / 2:
Basic Terminology / 2.1:
Semantic Web Technologies / 2.2:
Extensible Markup Language / 2.2.1:
Resource Description Framework / 2.2.2:
OWL Web Ontology Language / 2.2.3:
E-learning Standards / 2.3:
Learning Object Metadata / 2.3.1:
IMS Content Packaging / 2.3.2:
IMS Simple Sequencing / 2.3.3:
IMS Learning Design / 2.3.4:
Mathematics in the Web / 2.4:
OMDoc (Open Mathematical Documents) / 2.4.1:
The Learning Environment ActiveMath / 2.4.2:
Course Generation / 2.5:
Hierarchical Task Network Planning / 2.6:
Introduction to AI-Planning / 2.6.1:
Introduction to Hierarchical Task Network Planning / 2.6.2:
SHOP2 and JSHOP2 / 2.6.3:
JSHOP2 Formalism / 2.6.4:
Descriptive and Prescriptive Learning Theories / 3:
Behaviorism / 3.1:
Cognitivism / 3.2:
Constructivism / 3.3:
Instructional Design / 3.4:
Competency-Based Learning / 3.5:
Mathematical Competencies / 3.5.1:
Competency Levels / 3.5.2:
PAIGOS / Part II:
General Principles / 4:
An Ontology of Instructional Objects / 4.1:
Description of the Ontology / 4.1.1:
Why an Ontology? / 4.1.3:
Applications of the Ontology / 4.1.4:
A Mediator for Accessing Learning Object Repositories / 4.2:
Related Work / 4.2.1:
Overview of the Mediator Architecture / 4.2.2:
Querying the Mediator / 4.2.3:
Ontology Mapping and Query Rewriting / 4.2.4:
Repository Interface and Caching / 4.2.5:
Limitations of the Mediator as an Educational Service / 4.2.6:
Pedagogical Tasks, Methods and Strategies / 4.3:
Representing Course Generation Knowledge in an HTN Planner / 4.4:
Mapping Pedagogical Tasks onto HTN Tasks / 4.4.1:
Course Generation Planning Problems / 4.4.3:
Critical and Optional Tasks / 4.4.4:
Basic General Purpose Axioms and Operators / 4.5:
Testing for Equality / 4.5.1:
List Manipulation / 4.5.2:
Binding a Variable to All Terms of a Term List / 4.5.3:
Manipulating the World State / 4.5.4:
Basic Operators and Methods of the Course Generation Domain / 4.6:
Inserting References to Educational Resources / 4.6.1:
Starting and Ending Sections / 4.6.2:
Inserting References to Learning-Support Services / 4.6.3:
An Operator for Dynamic Text Generation / 4.6.4:
Dynamic Subtask Expansion / 4.6.5:
Accessing Information about Educational Resources / 4.6.6:
Axioms for Accessing the Learner Model / 4.6.7:
Processing Resources Depending on Learner Characteristics / 4.6.8:
Initializing and Manipulating Information about the Learning Goal / 4.6.9:
Converting a Plan into a Course / 4.7:
Generating Structure and Adaptivity: Dynamic Tasks / 4.8:
Generation of Narrative Bridges and Structure / 4.9:
Empirical Findings / 4.9.1:
Operator and Methods for Text Generation / 4.9.2:
Symbolic Representations of Dynamic Text Items / 4.9.3:
Generation of Structure Information / 4.9.4:
Summary / 4.10:
Course Generation in Practice: Formalized Scenarios / 5:
Moderate Constructivist Competency-Based Scenarios / 5.1:
Course Generation and Constructivism - a Contradiction? / 5.1.1:
Selecting Exercises / 5.1.2:
Selecting Examples / 5.1.3:
Scenario "Discover" / 5.1.4:
Scenario "Rehearse" / 5.1.5:
Scenario "Connect" / 5.1.6:
Scenario "Train Intensively" / 5.1.7:
Scenario "Train Competencies" / 5.1.8:
Scenario "Exam Simulation" / 5.1.9:
Course Generation Based on Instructional Design Principles / 5.2:
Merrill's "First Principles of Instruction" / 5.2.1:
Scenario "Guided Tour" / 5.2.2:
Implementation and Integration / 6:
Implementation / 6.1:
Integration of PAIGOS in ActiveMath / 6.2:
Course Generation in ActiveMath / 6.2.1:
Dynamically Generated Elements in a Table of Contents / 6.2.2:
Usage of Learning-Support Services in ActiveMath / 6.2.3:
Template-Based Generation of Narrative Bridges / 6.2.4:
PAIGOS as a Service in ActiveMath / 6.2.5:
Course Generation as a Web-Service / 6.3:
Interfaces / 6.3.1:
Technical Evaluations and Use Cases / 7:
Evaluation of the Ontology / 7.1.1:
Mediator Use Cases and Evaluations / 7.1.2:
Course Generation Use Cases and Evaluations / 7.1.3:
Performance of PAIGOS / 7.1.4:
Discussion / 7.1.5:
Formative and Summative Evaluation / 7.2:
Formative Evaluations / 7.2.1:
Summative Evaluation / 7.2.2:
Conclusions / 7.2.3:
Early Work / 8:
Generic Tutoring Environment / 8.2:
Dynamic Courseware Generator / 8.3:
ACE/WINDS / 8.4:
Former Course Generator of ActiveMath / 8.5:
APeLS/iClass / 8.6:
SeLeNe / 8.7:
Statistical Methods for Course Generation / 8.8:
Approaches Using Hierarchical Task Network Planning / 8.9:
Ontologies for Instructional Design / 8.10:
Future Work and Acknowledgments / 9:
Future Work / 9.1:
Complete List of User Comments
References
Index
12.

eBook

EB
Carsten Ullrich, Jörg Siekmann
Publication: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Preliminaries / Part I:
Introduction / 1:
Motivation / 1.1:
Contributions / 1.2:
Service-Oriented Course Generation / 1.2.1:
Modeling of Pedagogical Knowledge / 1.2.2:
Adaptivity in Generated Courses / 1.2.3:
Evaluation / 1.2.4:
Overview / 1.3:
Relevant Technologies / 2:
Basic Terminology / 2.1:
Semantic Web Technologies / 2.2:
Extensible Markup Language / 2.2.1:
Resource Description Framework / 2.2.2:
OWL Web Ontology Language / 2.2.3:
E-learning Standards / 2.3:
Learning Object Metadata / 2.3.1:
IMS Content Packaging / 2.3.2:
IMS Simple Sequencing / 2.3.3:
IMS Learning Design / 2.3.4:
Mathematics in the Web / 2.4:
OMDoc (Open Mathematical Documents) / 2.4.1:
The Learning Environment ActiveMath / 2.4.2:
Course Generation / 2.5:
Hierarchical Task Network Planning / 2.6:
Introduction to AI-Planning / 2.6.1:
Introduction to Hierarchical Task Network Planning / 2.6.2:
SHOP2 and JSHOP2 / 2.6.3:
JSHOP2 Formalism / 2.6.4:
Descriptive and Prescriptive Learning Theories / 3:
Behaviorism / 3.1:
Cognitivism / 3.2:
Constructivism / 3.3:
Instructional Design / 3.4:
Competency-Based Learning / 3.5:
Mathematical Competencies / 3.5.1:
Competency Levels / 3.5.2:
PAIGOS / Part II:
General Principles / 4:
An Ontology of Instructional Objects / 4.1:
Description of the Ontology / 4.1.1:
Why an Ontology? / 4.1.3:
Applications of the Ontology / 4.1.4:
A Mediator for Accessing Learning Object Repositories / 4.2:
Related Work / 4.2.1:
Overview of the Mediator Architecture / 4.2.2:
Querying the Mediator / 4.2.3:
Ontology Mapping and Query Rewriting / 4.2.4:
Repository Interface and Caching / 4.2.5:
Limitations of the Mediator as an Educational Service / 4.2.6:
Pedagogical Tasks, Methods and Strategies / 4.3:
Representing Course Generation Knowledge in an HTN Planner / 4.4:
Mapping Pedagogical Tasks onto HTN Tasks / 4.4.1:
Course Generation Planning Problems / 4.4.3:
Critical and Optional Tasks / 4.4.4:
Basic General Purpose Axioms and Operators / 4.5:
Testing for Equality / 4.5.1:
List Manipulation / 4.5.2:
Binding a Variable to All Terms of a Term List / 4.5.3:
Manipulating the World State / 4.5.4:
Basic Operators and Methods of the Course Generation Domain / 4.6:
Inserting References to Educational Resources / 4.6.1:
Starting and Ending Sections / 4.6.2:
Inserting References to Learning-Support Services / 4.6.3:
An Operator for Dynamic Text Generation / 4.6.4:
Dynamic Subtask Expansion / 4.6.5:
Accessing Information about Educational Resources / 4.6.6:
Axioms for Accessing the Learner Model / 4.6.7:
Processing Resources Depending on Learner Characteristics / 4.6.8:
Initializing and Manipulating Information about the Learning Goal / 4.6.9:
Converting a Plan into a Course / 4.7:
Generating Structure and Adaptivity: Dynamic Tasks / 4.8:
Generation of Narrative Bridges and Structure / 4.9:
Empirical Findings / 4.9.1:
Operator and Methods for Text Generation / 4.9.2:
Symbolic Representations of Dynamic Text Items / 4.9.3:
Generation of Structure Information / 4.9.4:
Summary / 4.10:
Course Generation in Practice: Formalized Scenarios / 5:
Moderate Constructivist Competency-Based Scenarios / 5.1:
Course Generation and Constructivism - a Contradiction? / 5.1.1:
Selecting Exercises / 5.1.2:
Selecting Examples / 5.1.3:
Scenario "Discover" / 5.1.4:
Scenario "Rehearse" / 5.1.5:
Scenario "Connect" / 5.1.6:
Scenario "Train Intensively" / 5.1.7:
Scenario "Train Competencies" / 5.1.8:
Scenario "Exam Simulation" / 5.1.9:
Course Generation Based on Instructional Design Principles / 5.2:
Merrill's "First Principles of Instruction" / 5.2.1:
Scenario "Guided Tour" / 5.2.2:
Implementation and Integration / 6:
Implementation / 6.1:
Integration of PAIGOS in ActiveMath / 6.2:
Course Generation in ActiveMath / 6.2.1:
Dynamically Generated Elements in a Table of Contents / 6.2.2:
Usage of Learning-Support Services in ActiveMath / 6.2.3:
Template-Based Generation of Narrative Bridges / 6.2.4:
PAIGOS as a Service in ActiveMath / 6.2.5:
Course Generation as a Web-Service / 6.3:
Interfaces / 6.3.1:
Technical Evaluations and Use Cases / 7:
Evaluation of the Ontology / 7.1.1:
Mediator Use Cases and Evaluations / 7.1.2:
Course Generation Use Cases and Evaluations / 7.1.3:
Performance of PAIGOS / 7.1.4:
Discussion / 7.1.5:
Formative and Summative Evaluation / 7.2:
Formative Evaluations / 7.2.1:
Summative Evaluation / 7.2.2:
Conclusions / 7.2.3:
Early Work / 8:
Generic Tutoring Environment / 8.2:
Dynamic Courseware Generator / 8.3:
ACE/WINDS / 8.4:
Former Course Generator of ActiveMath / 8.5:
APeLS/iClass / 8.6:
SeLeNe / 8.7:
Statistical Methods for Course Generation / 8.8:
Approaches Using Hierarchical Task Network Planning / 8.9:
Ontologies for Instructional Design / 8.10:
Future Work and Acknowledgments / 9:
Future Work / 9.1:
Complete List of User Comments
References
Index
13.

eBook

EB
Danny Weyns
Publication: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction / 1:
Software Architecture and Middleware / 1.1:
Software Architecture / 1.1.1:
Middleware / 1.1.2:
Agent-Oriented Methodologies / 1.2:
Case Study / 1.3:
Overview of the Book / 1.4:
Overview of Architecture-Based Design of Multi-Agent Systems / 2:
General Overview of the Approach / 2.1:
Architectural Design in the Development Life Cycle / 2.1.1:
Steps of Architecture-Based Design of Multi-Agent Systems / 2.1.2:
Functional and Quality Attribute Requirements / 2.2:
Architectural Design / 2.3:
Architectural Patterns / 2.3.1:
ADD Process / 2.3.2:
Middleware Support for Multi-Agent Systems / 2.4:
Documenting Software Architecture / 2.5:
Architectural Views / 2.5.1:
Architectural Description Languages / 2.5.2:
Evaluating Software Architecture / 2.6:
From Software Architecture to Downstream Design and Implementation / 2.7:
Summary / 2.8:
Capturing Expertise in Multi-Agent System Engineering with Architectural Patterns / 3:
Situated Multi-Agent Systems / 3.1:
Single-Agent Systems / 3.1.1:
Multi-Agent Systems / 3.1.2:
Target Domain of the Pattern Language for Situated Multi-Agent Systems / 3.2:
Overview of the Pattern Language / 3.3:
Pattern Template / 3.4:
Virtual Environment / 3.5:
Primary Presentation / 3.5.1:
Architectural Elements / 3.5.2:
Interface Descriptions / 3.5.3:
Design Rationale / 3.5.4:
Situated Agent / 3.6:
Selective Perception / 3.6.1:
Roles and Situated Commitments / 3.7.1:
Free-Flow Trees Extended with Roles and Situated Commitments / 3.8.1:
Protocol-Based Communication / 3.9:
Architectural Design of Multi-Agent Systems / 3.9.1:
Designing and Documenting Multi-Agent System Architectures / 4.1:
Designing and Documenting Architecture in the Development Life Cycle / 4.1.1:
Inputs and Outputs of ADD / 4.1.2:
Overview of the ADD Activities / 4.1.3:
The Domain of Automated Transportation Systems / 4.2:
Business Case / 4.2.2:
System Requirements / 4.2.3:
General Overview of the Design / 4.3:
Challenges at the Outset / 4.3.1:
The System and Its Environment / 4.3.2:
Design Process / 4.3.3:
High-Level Design / 4.3.4:
Architecture Documentation / 4.4:
Introduction to the Architecture Documentation / 4.4.1:
Deployment View / 4.4.2:
Module Uses View / 4.4.3:
Collaborating Components View / 4.4.4:
Middleware for Distributed Multi-Agent Systems / 4.5:
Middleware Support for Distributed, Decentralized Coordination / 5.1:
Middleware in Distributed Software Systems / 5.1.1:
Middleware in Multi-Agent Systems / 5.1.2:
Scope of the Middleware and Requirements / 5.2:
Objectplaces / 5.2.2:
Views / 5.2.3:
Coordination Roles / 5.2.4:
Middleware Architecture / 5.3:
High-Level Module Decomposition / 5.3.1:
Group Formation / 5.3.2:
View Management / 5.3.3:
Role Activation / 5.3.4:
Collision Avoidance in the AGV Transportation System / 5.4:
Collision Avoidance / 5.4.1:
Collision Avoidance Protocol / 5.4.2:
Software Architecture: Communicating Processes for Collision Avoidance / 5.4.3:
Task Assignment / 5.5:
Schedule-Based Task Assignment / 6.1:
FiTA: Field-Based Task Assignment / 6.2:
Coordination Fields / 6.2.1:
Adaptive Task Assignment / 6.2.2:
Dealing With Local Minima / 6.2.3:
DynCNET Protocol / 6.3:
Monitoring the Area of Interest / 6.3.1:
Convergence / 6.3.3:
Synchronization Issues / 6.3.4:
Evaluation / 6.4:
Test Setting / 6.4.1:
Test Results / 6.4.2:
Tradeoff Analysis / 6.4.3:
Evaluation of Multi-Agent System Architectures / 6.5:
Evaluating Multi-Agent System Architectures with ATAM / 7.1:
Architecture Evaluation in the Development Life Cycle / 7.1.1:
Objectives of a Multi-Agent System Architecture Evaluation / 7.1.2:
Overview of the ATAM Activities / 7.1.3:
AGV Transportation System for a Tea Processing Warehouse / 7.2:
Evaluation Process / 7.2.2:
Quality Attribute Workshop / 7.2.3:
Analysis of Architectural Approaches / 7.2.4:
Reflection on ATAM for Evaluating a Multi-Agent System Architecture / 7.3:
ATAM Follow-Up and Demonstrator / 7.4:
Related Approaches / 7.5:
Architectural Approaches and Multi-Agent Systems / 8.1:
Architectural Styles / 8.1.1:
Reference Models and Architectures for Multi-Agent Systems / 8.1.2:
Middleware for Mobile Systems / 8.2:
Work Related to Views / 8.2.1:
Work Related to Coordination Roles / 8.2.2:
Scheduling and Routing of AGV Transportation Systems / 8.3:
AI and Robotics Approaches / 8.3.1:
Multi-Agent System Approaches / 8.3.2:
Conclusions / 9:
Reflection on Architecture-Based Design of Multi-Agent Systems / 9.1:
It Works! / 9.1.1:
Reflection on the Project with Egemin / 9.1.2:
Lessons Learned and Challenges / 9.2:
Dealing with Quality Attributes / 9.2.1:
Designing a Multi-Agent System Architecture / 9.2.2:
Integrating a Multi-Agent System with Its Software Environment / 9.2.3:
Impact of Adopting a Multi-Agent System / 9.2.4:
π-ADL Specification of the Architectural Patterns / A:
Language Constructs / A.1:
Virtual Environment Pattern / A.2:
Situated Agent Pattern / A.3:
Synchronization in the DynCNET Protocol / B:
Synchronization of Abort and Bound Messages / B.1:
Synchronization of Scope Dynamics / B.2:
Overview / C:
Invariant / C.2:
Maintaining the Invariant / C.3:
Glossary
References
Index
14.

eBook

EB
Danny Weyns
Publication: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction / 1:
Software Architecture and Middleware / 1.1:
Software Architecture / 1.1.1:
Middleware / 1.1.2:
Agent-Oriented Methodologies / 1.2:
Case Study / 1.3:
Overview of the Book / 1.4:
Overview of Architecture-Based Design of Multi-Agent Systems / 2:
General Overview of the Approach / 2.1:
Architectural Design in the Development Life Cycle / 2.1.1:
Steps of Architecture-Based Design of Multi-Agent Systems / 2.1.2:
Functional and Quality Attribute Requirements / 2.2:
Architectural Design / 2.3:
Architectural Patterns / 2.3.1:
ADD Process / 2.3.2:
Middleware Support for Multi-Agent Systems / 2.4:
Documenting Software Architecture / 2.5:
Architectural Views / 2.5.1:
Architectural Description Languages / 2.5.2:
Evaluating Software Architecture / 2.6:
From Software Architecture to Downstream Design and Implementation / 2.7:
Summary / 2.8:
Capturing Expertise in Multi-Agent System Engineering with Architectural Patterns / 3:
Situated Multi-Agent Systems / 3.1:
Single-Agent Systems / 3.1.1:
Multi-Agent Systems / 3.1.2:
Target Domain of the Pattern Language for Situated Multi-Agent Systems / 3.2:
Overview of the Pattern Language / 3.3:
Pattern Template / 3.4:
Virtual Environment / 3.5:
Primary Presentation / 3.5.1:
Architectural Elements / 3.5.2:
Interface Descriptions / 3.5.3:
Design Rationale / 3.5.4:
Situated Agent / 3.6:
Selective Perception / 3.6.1:
Roles and Situated Commitments / 3.7.1:
Free-Flow Trees Extended with Roles and Situated Commitments / 3.8.1:
Protocol-Based Communication / 3.9:
Architectural Design of Multi-Agent Systems / 3.9.1:
Designing and Documenting Multi-Agent System Architectures / 4.1:
Designing and Documenting Architecture in the Development Life Cycle / 4.1.1:
Inputs and Outputs of ADD / 4.1.2:
Overview of the ADD Activities / 4.1.3:
The Domain of Automated Transportation Systems / 4.2:
Business Case / 4.2.2:
System Requirements / 4.2.3:
General Overview of the Design / 4.3:
Challenges at the Outset / 4.3.1:
The System and Its Environment / 4.3.2:
Design Process / 4.3.3:
High-Level Design / 4.3.4:
Architecture Documentation / 4.4:
Introduction to the Architecture Documentation / 4.4.1:
Deployment View / 4.4.2:
Module Uses View / 4.4.3:
Collaborating Components View / 4.4.4:
Middleware for Distributed Multi-Agent Systems / 4.5:
Middleware Support for Distributed, Decentralized Coordination / 5.1:
Middleware in Distributed Software Systems / 5.1.1:
Middleware in Multi-Agent Systems / 5.1.2:
Scope of the Middleware and Requirements / 5.2:
Objectplaces / 5.2.2:
Views / 5.2.3:
Coordination Roles / 5.2.4:
Middleware Architecture / 5.3:
High-Level Module Decomposition / 5.3.1:
Group Formation / 5.3.2:
View Management / 5.3.3:
Role Activation / 5.3.4:
Collision Avoidance in the AGV Transportation System / 5.4:
Collision Avoidance / 5.4.1:
Collision Avoidance Protocol / 5.4.2:
Software Architecture: Communicating Processes for Collision Avoidance / 5.4.3:
Task Assignment / 5.5:
Schedule-Based Task Assignment / 6.1:
FiTA: Field-Based Task Assignment / 6.2:
Coordination Fields / 6.2.1:
Adaptive Task Assignment / 6.2.2:
Dealing With Local Minima / 6.2.3:
DynCNET Protocol / 6.3:
Monitoring the Area of Interest / 6.3.1:
Convergence / 6.3.3:
Synchronization Issues / 6.3.4:
Evaluation / 6.4:
Test Setting / 6.4.1:
Test Results / 6.4.2:
Tradeoff Analysis / 6.4.3:
Evaluation of Multi-Agent System Architectures / 6.5:
Evaluating Multi-Agent System Architectures with ATAM / 7.1:
Architecture Evaluation in the Development Life Cycle / 7.1.1:
Objectives of a Multi-Agent System Architecture Evaluation / 7.1.2:
Overview of the ATAM Activities / 7.1.3:
AGV Transportation System for a Tea Processing Warehouse / 7.2:
Evaluation Process / 7.2.2:
Quality Attribute Workshop / 7.2.3:
Analysis of Architectural Approaches / 7.2.4:
Reflection on ATAM for Evaluating a Multi-Agent System Architecture / 7.3:
ATAM Follow-Up and Demonstrator / 7.4:
Related Approaches / 7.5:
Architectural Approaches and Multi-Agent Systems / 8.1:
Architectural Styles / 8.1.1:
Reference Models and Architectures for Multi-Agent Systems / 8.1.2:
Middleware for Mobile Systems / 8.2:
Work Related to Views / 8.2.1:
Work Related to Coordination Roles / 8.2.2:
Scheduling and Routing of AGV Transportation Systems / 8.3:
AI and Robotics Approaches / 8.3.1:
Multi-Agent System Approaches / 8.3.2:
Conclusions / 9:
Reflection on Architecture-Based Design of Multi-Agent Systems / 9.1:
It Works! / 9.1.1:
Reflection on the Project with Egemin / 9.1.2:
Lessons Learned and Challenges / 9.2:
Dealing with Quality Attributes / 9.2.1:
Designing a Multi-Agent System Architecture / 9.2.2:
Integrating a Multi-Agent System with Its Software Environment / 9.2.3:
Impact of Adopting a Multi-Agent System / 9.2.4:
π-ADL Specification of the Architectural Patterns / A:
Language Constructs / A.1:
Virtual Environment Pattern / A.2:
Situated Agent Pattern / A.3:
Synchronization in the DynCNET Protocol / B:
Synchronization of Abort and Bound Messages / B.1:
Synchronization of Scope Dynamics / B.2:
Overview / C:
Invariant / C.2:
Maintaining the Invariant / C.3:
Glossary
References
Index
15.

eBook

EB
D. M. Gabbay, Matthieu Cord, J. Siekmann
Publication: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Introduction to Learning Principles for Multimedia Data / Part I:
Introduction to Bayesian Methods and Decision Theory / Simon P. Wilson ; Rozenn Dahyot ; Padraig Cunningham / 1:
Introduction / 1.1:
Uncertainty and Probability / 1.2:
Quantifying Uncertainty / 1.2.1:
The Laws of Probability / 1.2.2:
Interpreting Probability / 1.2.3:
The Partition Law and Bayes' Law / 1.2.4:
Probability Models, Parameters and Likelihoods / 1.3:
Bayesian Statistical Learning / 1.4:
Implementing Bayesian Statistical Learning Methods / 1.5:
Direct Simulation Methods / 1.5.1:
Markov Chain Monte Carlo / 1.5.2:
Monte Carlo Integration / 1.5.3:
Optimization Methods / 1.5.4:
Decision Theory / 1.6:
Utility and Choosing the Optimal Decision / 1.6.1:
Where Is the Utility? / 1.6.2:
Naive Bayes / 1.7:
Further Reading / 1.8:
References
Supervised Learning / Matthieu Cord ; Sarah Jane Delany / 2:
Introduction to Statistical Learning / 2.1:
Risk Minimization / 2.2.1:
Empirical Risk Minimization / 2.2.2:
Risk Bounds / 2.2.3:
Support Vector Machines and Kernels / 2.3:
Linear Classification: SVM Principle / 2.3.1:
Soft Margin / 2.3.2:
Kernel-Based Classification / 2.3.3:
Nearest Neighbour Classification / 2.4:
Similarity and Distance Metrics / 2.4.1:
Other Distance Metrics for Multimedia Data / 2.4.2:
Computational Complexity / 2.4.3:
Instance Selection and Noise Reduction / 2.4.4:
k-NN: Advantages and Disadvantages / 2.4.5:
Ensemble Techniques / 2.5:
Bias-Variance Analysis of Error / 2.5.1:
Bagging / 2.5.3:
Random Forests / 2.5.4:
Boosting / 2.5.5:
Summary / 2.6:
Unsupervised Learning and Clustering / Derek Greene ; Pádraig Cunningham ; Rudolf Mayer / 3:
Basic Clustering Techniques / 3.1:
k-Means Clustering / 3.2.1:
Fuzzy Clustering / 3.2.2:
Hierarchical Clustering / 3.2.3:
Modern Clustering Techniques / 3.3:
Kernel Clustering / 3.3.1:
Spectral Clustering / 3.3.2:
Self-organizing Maps / 3.4:
SOM Architecture / 3.4.1:
SOM Algorithm / 3.4.2:
Self-organizing Map and Clustering / 3.4.3:
Variations of the Self-organizing Map / 3.4.4:
Cluster Validation / 3.5:
Internal Validation / 3.5.1:
External Validation / 3.5.2:
Stability-Based Techniques / 3.5.3:
Dimension Reduction / 3.6:
Feature Transformation / 4.1:
Principal Component Analysis / 4.2.1:
Linear Discriminant Analysis / 4.2.2:
Feature Selection / 4.3:
Feature Selection in Supervised Learning / 4.3.1:
Unsupervised Feature Selection / 4.3.2:
Conclusions / 4.4:
Multimedia Applications / Part II:
Online Content-Based Image Retrieval Using Active Learning / Philippe-Henri Gosselin / 5:
Database Representation: Features and Similarity / 5.1:
Visual Features / 5.2.1:
Signature Based on Visual Pattern Dictionary / 5.2.2:
Similarity / 5.2.3:
Kernel Framework / 5.2.4:
Experiments / 5.2.5:
Classification Framework for Image Collection / 5.3:
Classification Methods for CBIR / 5.3.1:
Query Updating Scheme / 5.3.2:
Active Learning for CBIR / 5.3.3:
Notations for Selective Sampling Optimization / 5.4.1:
Active Learning Methods / 5.4.2:
Further Insights on Active Learning for CBIR / 5.5:
Active Boundary Correction / 5.5.1:
MAP vs Classification Error / 5.5.2:
Batch Selection / 5.5.3:
CBIR Interface: Result Display and Interaction / 5.5.4:
Conservative Learning for Object Detectors / Peter M. Roth ; Horst Bischof / 6:
Online Conservative Learning / 6.1:
Motion Detection / 6.2.1:
Reconstructive Model / 6.2.2:
Online AdaBoost for Feature Selection / 6.2.3:
Conservative Update Rules / 6.2.4:
Experimental Results / 6.3:
Description of Experiments / 6.3.1:
CoffeeCam / 6.3.2:
Switch to Caviar / 6.3.3:
Further Detection Results / 6.3.4:
Summary and Conclusions / 6.4:
Machine Learning Techniques for Face Analysis / Roberto Valenti ; Nicu Sebe ; Theo Gevers ; Ira Cohen / 7:
Background / 7.1:
Face Detection / 7.2.1:
Facial Feature Detection / 7.2.2:
Emotion Recognition Research / 7.2.3:
Learning Classifiers for Human-Computer Interaction / 7.3:
Model Is Correct / 7.3.1:
Model Is Incorrect / 7.3.2:
Discussion / 7.3.3:
Learning the Structure of Bayesian Network Classifiers / 7.4:
Bayesian Networks / 7.4.1:
Switching Between Simple Models / 7.4.2:
Beyond Simple Models / 7.4.3:
Classification-Driven Stochastic Structure Search / 7.4.4:
Should Unlabeled Be Weighed Differently? / 7.4.5:
Active Learning / 7.4.6:
Face Detection Experiments / 7.4.7:
Facial Expression Recognition Experiments / 7.5.2:
Mental Search in Image Databases: Implicit Versus Explicit Content Query / Julien Fauqueur ; Nozha Boujemaa / 7.6:
"Mental Image Search" Versus Other Search Paradigms / 8.1:
Implicit Content Query: Mental Image Search Using Bayesian Inference / 8.3:
Bayesian Inference for CBIR / 8.3.1:
Mental Image Category Search / 8.3.2:
Evaluation / 8.3.3:
Remarks / 8.3.4:
Explicit Content Query: Mental Image Search by Visual Composition Formulation / 8.4:
System Summary / 8.4.1:
Visual Thesaurus Construction / 8.4.2:
Symbolic Indexing, Boolean Search and Range Query Mechanism / 8.4.3:
Results / 8.4.4:
Combining Textual and Visual Information for Semantic Labeling of Images and Videos / Pinar Duygulu ; Muhammet Baştan ; Derya Ozkan / 8.4.5:
Semantic Labeling of Images / 9.1:
Translation Approach / 9.3:
Learning Correspondences Between Words and Regions / 9.3.1:
Linking Visual Elements to Words in News Videos / 9.3.2:
Translation Approach to Solve Video Association Problem / 9.3.3:
Experiments on News Videos Data Set / 9.3.4:
Naming Faces in News / 9.4:
Integrating Names and Faces / 9.4.1:
Finding Similarity of Faces / 9.4.2:
Finding the Densest Component in the Similarity Graph / 9.4.3:
Conclusions and Discussion / 9.4.4:
Machine Learning for Semi-structured Multimedia Documents: Application to Pornographic Filtering and Thematic Categorization / Ludovic Denoyer ; Patrick Gallinari / 10:
Previous Work / 10.1:
Structured Document Classification / 10.2.1:
Multimedia Documents / 10.2.2:
Multimedia Generative Model / 10.3:
Classification of Documents / 10.3.1:
Generative Model / 10.3.2:
Description / 10.3.3:
Learning the Meta Model / 10.4:
Maximization of L_structure / 10.4.1:
Maximization of L_content / 10.4.2:
Local Generative Models for Text and Image / 10.5:
Modelling a Piece of Text with Naive Bayes / 10.5.1:
Image Model / 10.5.2:
Models and Evaluation / 10.6:
Corpora / 10.6.2:
Results over the Pornographic Corpus / 10.6.3:
Results over the Wikipedia Multimedia Categorization Corpus / 10.6.4:
Conclusion / 10.7:
Classification and Clustering of Music for Novel Music Access Applications / Thomas Lidy ; Andreas Rauber / 11:
Feature Extraction from Audio / 11.1:
Low-Level Audio Features / 11.2.1:
MPEG-7 Audio Descriptors / 11.2.2:
MFCCs / 11.2.3:
MARSYAS Features / 11.2.4:
Rhythm Patterns / 11.2.5:
Statistical Spectrum Descriptors / 11.2.6:
Rhythm Histograms / 11.2.7:
Automatic Classifications of Music into Genres / 11.3:
Evaluation Through Music Classification / 11.3.1:
Benchmark Data Sets for Music Classification / 11.3.2:
Creating and Visualizing Music Maps Based on Self-organizing Maps / 11.4:
Class Visualization / 11.4.1:
Hit Histograms / 11.4.2:
U-Matrix / 11.4.3:
P-Matrix / 11.4.4:
U*-matrix / 11.4.5:
Gradient Fields / 11.4.6:
Component Planes / 11.4.7:
Smoothed Data Histograms / 11.4.8:
PlaySOM - Interaction with Music Maps / 11.5:
Interface / 11.5.1:
Interaction / 11.5.2:
Playlist Creation / 11.5.3:
PocketSOMPlayer - Music Retrieval on Mobile Devices / 11.6:
Playing Scenarios / 11.6.1:
Index / 11.6.3:
16.

eBook

EB
D. M. Gabbay, Matthieu Cord, J. Siekmann, Pádraig Cunningham
Publication: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Introduction to Learning Principles for Multimedia Data / Part I:
Introduction to Bayesian Methods and Decision Theory / Simon P. Wilson ; Rozenn Dahyot ; Padraig Cunningham / 1:
Introduction / 1.1:
Uncertainty and Probability / 1.2:
Quantifying Uncertainty / 1.2.1:
The Laws of Probability / 1.2.2:
Interpreting Probability / 1.2.3:
The Partition Law and Bayes' Law / 1.2.4:
Probability Models, Parameters and Likelihoods / 1.3:
Bayesian Statistical Learning / 1.4:
Implementing Bayesian Statistical Learning Methods / 1.5:
Direct Simulation Methods / 1.5.1:
Markov Chain Monte Carlo / 1.5.2:
Monte Carlo Integration / 1.5.3:
Optimization Methods / 1.5.4:
Decision Theory / 1.6:
Utility and Choosing the Optimal Decision / 1.6.1:
Where Is the Utility? / 1.6.2:
Naive Bayes / 1.7:
Further Reading / 1.8:
References
Supervised Learning / Matthieu Cord ; Sarah Jane Delany / 2:
Introduction to Statistical Learning / 2.1:
Risk Minimization / 2.2.1:
Empirical Risk Minimization / 2.2.2:
Risk Bounds / 2.2.3:
Support Vector Machines and Kernels / 2.3:
Linear Classification: SVM Principle / 2.3.1:
Soft Margin / 2.3.2:
Kernel-Based Classification / 2.3.3:
Nearest Neighbour Classification / 2.4:
Similarity and Distance Metrics / 2.4.1:
Other Distance Metrics for Multimedia Data / 2.4.2:
Computational Complexity / 2.4.3:
Instance Selection and Noise Reduction / 2.4.4:
k-NN: Advantages and Disadvantages / 2.4.5:
Ensemble Techniques / 2.5:
Bias-Variance Analysis of Error / 2.5.1:
Bagging / 2.5.3:
Random Forests / 2.5.4:
Boosting / 2.5.5:
Summary / 2.6:
Unsupervised Learning and Clustering / Derek Greene ; Pádraig Cunningham ; Rudolf Mayer / 3:
Basic Clustering Techniques / 3.1:
k-Means Clustering / 3.2.1:
Fuzzy Clustering / 3.2.2:
Hierarchical Clustering / 3.2.3:
Modern Clustering Techniques / 3.3:
Kernel Clustering / 3.3.1:
Spectral Clustering / 3.3.2:
Self-organizing Maps / 3.4:
SOM Architecture / 3.4.1:
SOM Algorithm / 3.4.2:
Self-organizing Map and Clustering / 3.4.3:
Variations of the Self-organizing Map / 3.4.4:
Cluster Validation / 3.5:
Internal Validation / 3.5.1:
External Validation / 3.5.2:
Stability-Based Techniques / 3.5.3:
Dimension Reduction / 3.6:
Feature Transformation / 4.1:
Principal Component Analysis / 4.2.1:
Linear Discriminant Analysis / 4.2.2:
Feature Selection / 4.3:
Feature Selection in Supervised Learning / 4.3.1:
Unsupervised Feature Selection / 4.3.2:
Conclusions / 4.4:
Multimedia Applications / Part II:
Online Content-Based Image Retrieval Using Active Learning / Philippe-Henri Gosselin5:
Database Representation: Features and Similarity / 5.1:
Visual Features / 5.2.1:
Signature Based on Visual Pattern Dictionary / 5.2.2:
Similarity / 5.2.3:
Kernel Framework / 5.2.4:
Experiments / 5.2.5:
Classification Framework for Image Collection / 5.3:
Classification Methods for CBIR / 5.3.1:
Query Updating Scheme / 5.3.2:
Active Learning for CBIR / 5.3.3:
Notations for Selective Sampling Optimization / 5.4.1:
Active Learning Methods / 5.4.2:
Further Insights on Active Learning for CBIR / 5.5:
Active Boundary Correction / 5.5.1:
MAP vs Classification Error / 5.5.2:
Batch Selection / 5.5.3:
CBIR Interface: Result Display and Interaction / 5.5.4:
Conservative Learning for Object Detectors / Peter M. Roth ; Horst Bischof6:
Online Conservative Learning / 6.1:
Motion Detection / 6.2.1:
Reconstructive Model / 6.2.2:
Online AdaBoost for Feature Selection / 6.2.3:
Conservative Update Rules / 6.2.4:
Experimental Results / 6.3:
Description of Experiments / 6.3.1:
CoffeeCam / 6.3.2:
Switch to Caviar / 6.3.3:
Further Detection Results / 6.3.4:
Summary and Conclusions / 6.4:
Machine Learning Techniques for Face Analysis / Roberto Valenti ; Nicu Sebe ; Theo Gevers ; Ira Cohen7:
Background / 7.1:
Face Detection / 7.2.1:
Facial Feature Detection / 7.2.2:
Emotion Recognition Research / 7.2.3:
Learning Classifiers for Human-Computer Interaction / 7.3:
Model Is Correct / 7.3.1:
Model Is Incorrect / 7.3.2:
Discussion / 7.3.3:
Learning the Structure of Bayesian Network Classifiers / 7.4:
Bayesian Networks / 7.4.1:
Switching Between Simple Models / 7.4.2:
Beyond Simple Models / 7.4.3:
Classification-Driven Stochastic Structure Search / 7.4.4:
Should Unlabeled Be Weighed Differently? / 7.4.5:
Active Learning / 7.4.6:
Face Detection Experiments / 7.4.7:
Facial Expression Recognition Experiments / 7.5.2:
Mental Search in Image Databases: Implicit Versus Explicit Content Query / Julien Fauqueur ; Nozha Boujemaa7.6:
"Mental Image Search" Versus Other Search Paradigms / 8.1:
Implicit Content Query: Mental Image Search Using Bayesian Inference / 8.3:
Bayesian Inference for CBIR / 8.3.1:
Mental Image Category Search / 8.3.2:
Evaluation / 8.3.3:
Remarks / 8.3.4:
Explicit Content Query: Mental Image Search by Visual Composition Formulation / 8.4:
System Summary / 8.4.1:
Visual Thesaurus Construction / 8.4.2:
Symbolic Indexing, Boolean Search and Range Query Mechanism / 8.4.3:
Results / 8.4.4:
Combining Textual and Visual Information for Semantic Labeling of Images and Videos / Pinar Duygulu ; Muhammet Baştan ; Derya Ozkan8.4.5:
Semantic Labeling of Images / 9.1:
Translation Approach / 9.3:
Learning Correspondences Between Words and Regions / 9.3.1:
Linking Visual Elements to Words in News Videos / 9.3.2:
Translation Approach to Solve Video Association Problem / 9.3.3:
Experiments on News Videos Data Set / 9.3.4:
Naming Faces in News / 9.4:
Integrating Names and Faces / 9.4.1:
Finding Similarity of Faces / 9.4.2:
Finding the Densest Component in the Similarity Graph / 9.4.3:
Conclusions and Discussion / 9.4.4:
Machine Learning for Semi-structured Multimedia Documents: Application to Pornographic Filtering and Thematic Categorization / Ludovic Denoyer ; Patrick Gallinari10:
Previous Work / 10.1:
Structured Document Classification / 10.2.1:
Multimedia Documents / 10.2.2:
Multimedia Generative Model / 10.3:
Classification of Documents / 10.3.1:
Generative Model / 10.3.2:
Description / 10.3.3:
Learning the Meta Model / 10.4:
Maximization of Lstructure / 10.4.1:
Maximization of Lcontent / 10.4.2:
Local Generative Models for Text and Image / 10.5:
Modelling a Piece of Text with Naive Bayes / 10.5.1:
Image Model / 10.5.2:
Models and Evaluation / 10.6:
Corpora / 10.6.2:
Results over the Pornographic Corpus / 10.6.3:
Results over the Wikipedia Multimedia Categorization Corpus / 10.6.4:
Conclusion / 10.7:
Classification and Clustering of Music for Novel Music Access Applications / Thomas Lidy ; Andreas Rauber11:
Feature Extraction from Audio / 11.1:
Low-Level Audio Features / 11.2.1:
MPEG-7 Audio Descriptors / 11.2.2:
MFCCs / 11.2.3:
MARSYAS Features / 11.2.4:
Rhythm Patterns / 11.2.5:
Statistical Spectrum Descriptors / 11.2.6:
Rhythm Histograms / 11.2.7:
Automatic Classifications of Music into Genres / 11.3:
Evaluation Through Music Classification / 11.3.1:
Benchmark Data Sets for Music Classification / 11.3.2:
Creating and Visualizing Music Maps Based on Self-organizing Maps / 11.4:
Class Visualization / 11.4.1:
Hit Histograms / 11.4.2:
U-Matrix / 11.4.3:
P-Matrix / 11.4.4:
U*-matrix / 11.4.5:
Gradient Fields / 11.4.6:
Component Planes / 11.4.7:
Smoothed Data Histograms / 11.4.8:
PlaySOM - Interaction with Music Maps / 11.5:
Interface / 11.5.1:
Interaction / 11.5.2:
Playlist Creation / 11.5.3:
PocketSOMPlayer - Music Retrieval on Mobile Devices / 11.6:
Playing Scenarios / 11.6.1:
Index / 11.6.3:
17.

eBook

EB
Dov M. Gabbay, Karl Schlechta
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction and Motivation / 1:
Programme / 1.1:
Short Overview of the Different Logics / 1.2:
Nonmonotonic Logics / 1.2.1:
Theory Revision / 1.2.2:
Theory Update / 1.2.3:
Deontic Logic / 1.2.4:
Counterfactual Conditionals / 1.2.5:
Modal Logic / 1.2.6:
Intuitionistic Logic / 1.2.7:
Inheritance Systems / 1.2.8:
A Summarizing Table for the Semantics / 1.2.9:
A Discussion of Concepts / 1.3:
Basic Semantic Entities, Truth Values, and Operators / 1.3.1:
Algebraic and Structural Semantics / 1.3.2:
Restricted Operators and Relations / 1.3.3:
Copies in Preferential Models / 1.3.4:
Further Remarks on Universality of Representation Proofs / 1.3.5:
$$$ in the Object Language? / 1.3.6:
Various Considerations on Abstract Semantics / 1.3.7:
A Comparison with Reiter Defaults / 1.3.8:
IBRS / 1.4:
Definition and Comments / 1.4.1:
The Power of IBRS / 1.4.2:
Abstract Semantics for IBRS and Its Engineering Realization / 1.4.3:
Basic Definitions and Results / 2:
Algebraic Definitions / 2.1:
Basic Logical Definitions / 2.2:
Basic Definitions and Results for Nonmonotonic Logics / 2.3:
Abstract Semantics by Size / 3:
The First-Order Setting / 3.1:
General Size Semantics / 3.2:
Introduction / 3.2.1:
Main Table / 3.2.2:
Coherent Systems / 3.2.3:
Size and Principal Filter Logic / 3.2.4:
Preferential Structures - Part I / 4:
Remarks on Nonmonotonic Logics and Preferential Semantics / 4.1:
Basic Definitions / 4.1.2:
Preferential Structures Without Domain Conditions / 4.2:
General Discussion / 4.2.1:
Detailed Discussion / 4.2.2:
Preferential Structures - Part II / 5:
Simplifications by Domain Conditions, Logical Properties / 5.1:
Smooth Structures / 5.1.1:
Ranked Structures / 5.1.3:
The Logical Properties with Definability Preservation / 5.1.4:
A-Ranked Structures / 5.2:
Representation Results for A-Ranked Structures / 5.2.1:
Two-Sequent Calculi / 5.3:
Plausibility Logic / 5.3.1:
A Comment on the Work by Arieli and Avron / 5.3.3:
Blurred Observation - Absence of Definability Preservation / 5.4:
General and Smooth Structures Without Definability Preservation / 5.4.1:
The Limit Variant / 5.4.3:
The Algebraic Limit / 5.5.1:
The Logical Limit / 5.5.3:
Higher Preferential Structures / 6:
The General Case / 6.1:
Discussion of the Totally Smooth Case / 6.3:
The Essentially Smooth Case / 6.4:
Translation to Logic / 6.5:
Deontic Logic and Hierarchical Conditionals / 7:
Semantics of Deontic Logic / 7.1:
Introductory Remarks / 7.1.1:
Philosophical Discussion of Obligations / 7.1.2:
Examination of the Various Cases / 7.1.4:
What Is An Obligation? / 7.1.5:
Conclusion / 7.1.6:
A Comment on Work by Åqvist / 7.2:
There Are (At Least) Two Solutions / 7.2.1:
Outline / 7.2.3:
Gm $$$ A Implies G $$$ A (Outline) / 7.2.4:
Hierarchical Conditionals / 7.3:
Formal Modelling and Summary of Results / 7.3.1:
Overview / 7.3.3:
Connections with Other Concepts / 7.3.4:
Formal Results and Representation for Hierarchical Conditionals / 7.3.5:
Theory Update and Theory Revision / 8:
Update / 8.1:
Hidden Dimensions / 8.1.1:
Introduction to Theory Revision / 8.2:
Booth Revision / 8.2.2:
Revision and Independence / 8.2.3:
Preferential Modelling of Defaults / 8.2.4:
Remarks on Independence / 8.2.5:
An Analysis of Defeasible Inheritance Systems / 9:
Terminology / 9.1:
Inheritance and Reactive Diagrams / 9.1.2:
Conceptual Analysis / 9.1.3:
Introduction to Nonmonotonic Inheritance / 9.2:
Basic Discussion / 9.2.1:
Directly Sceptical Split Validity Upward Chaining Off-Path Inheritance / 9.2.2:
Review of Other Approaches and Problems / 9.2.3:
Defeasible Inheritance and Reactive Diagrams / 9.3:
Summary of Our Algorithm / 9.3.1:
Compilation and Memorization / 9.3.2:
Executing the Algorithm / 9.3.4:
Signposts / 9.3.5:
Beyond Inheritance / 9.3.6:
Interpretations / 9.4:
Informal Comparison of Inheritance with the Systems P and R / 9.4.1:
Inheritance as Information Transfer / 9.4.3:
Inheritance as Reasoning with Prototypes / 9.4.4:
Detailed Translation of Inheritance to Modified Systems of Small Sets / 9.5:
Normality / 9.5.1:
Small Sets / 9.5.2:
Bibliography
Index
18.

eBook

EB
Dov M. Gabbay, Karl Schlechta
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction and Motivation / 1:
Programme / 1.1:
Short Overview of the Different Logics / 1.2:
Nonmonotonic Logics / 1.2.1:
Theory Revision / 1.2.2:
Theory Update / 1.2.3:
Deontic Logic / 1.2.4:
Counterfactual Conditionals / 1.2.5:
Modal Logic / 1.2.6:
Intuitionistic Logic / 1.2.7:
Inheritance Systems / 1.2.8:
A Summarizing Table for the Semantics / 1.2.9:
A Discussion of Concepts / 1.3:
Basic Semantic Entities, Truth Values, and Operators / 1.3.1:
Algebraic and Structural Semantics / 1.3.2:
Restricted Operators and Relations / 1.3.3:
Copies in Preferential Models / 1.3.4:
Further Remarks on Universality of Representation Proofs / 1.3.5:
$$$ in the Object Language? / 1.3.6:
Various Considerations on Abstract Semantics / 1.3.7:
A Comparison with Reiter Defaults / 1.3.8:
IBRS / 1.4:
Definition and Comments / 1.4.1:
The Power of IBRS / 1.4.2:
Abstract Semantics for IBRS and Its Engineering Realization / 1.4.3:
Basic Definitions and Results / 2:
Algebraic Definitions / 2.1:
Basic Logical Definitions / 2.2:
Basic Definitions and Results for Nonmonotonic Logics / 2.3:
Abstract Semantics by Size / 3:
The First-Order Setting / 3.1:
General Size Semantics / 3.2:
Introduction / 3.2.1:
Main Table / 3.2.2:
Coherent Systems / 3.2.3:
Size and Principal Filter Logic / 3.2.4:
Preferential Structures - Part I / 4:
Remarks on Nonmonotonic Logics and Preferential Semantics / 4.1:
Basic Definitions / 4.1.2:
Preferential Structures Without Domain Conditions / 4.2:
General Discussion / 4.2.1:
Detailed Discussion / 4.2.2:
Preferential Structures - Part II / 5:
Simplifications by Domain Conditions, Logical Properties / 5.1:
Smooth Structures / 5.1.1:
Ranked Structures / 5.1.3:
The Logical Properties with Definability Preservation / 5.1.4:
A-Ranked Structures / 5.2:
Representation Results for A-Ranked Structures / 5.2.1:
Two-Sequent Calculi / 5.3:
Plausibility Logic / 5.3.1:
A Comment on the Work by Arieli and Avron / 5.3.3:
Blurred Observation - Absence of Definability Preservation / 5.4:
General and Smooth Structures Without Definability Preservation / 5.4.1:
The Limit Variant / 5.4.3:
The Algebraic Limit / 5.5.1:
The Logical Limit / 5.5.3:
Higher Preferential Structures / 6:
The General Case / 6.1:
Discussion of the Totally Smooth Case / 6.3:
The Essentially Smooth Case / 6.4:
Translation to Logic / 6.5:
Deontic Logic and Hierarchical Conditionals / 7:
Semantics of Deontic Logic / 7.1:
Introductory Remarks / 7.1.1:
Philosophical Discussion of Obligations / 7.1.2:
Examination of the Various Cases / 7.1.4:
What Is An Obligation? / 7.1.5:
Conclusion / 7.1.6:
A Comment on Work by Åqvist / 7.2:
There Are (At Least) Two Solutions / 7.2.1:
Outline / 7.2.3:
Gm $$$ A Implies G $$$ A (Outline) / 7.2.4:
Hierarchical Conditionals / 7.3:
Formal Modelling and Summary of Results / 7.3.1:
Overview / 7.3.3:
Connections with Other Concepts / 7.3.4:
Formal Results and Representation for Hierarchical Conditionals / 7.3.5:
Theory Update and Theory Revision / 8:
Update / 8.1:
Hidden Dimensions / 8.1.1:
Introduction to Theory Revision / 8.2:
Booth Revision / 8.2.2:
Revision and Independence / 8.2.3:
Preferential Modelling of Defaults / 8.2.4:
Remarks on Independence / 8.2.5:
An Analysis of Defeasible Inheritance Systems / 9:
Terminology / 9.1:
Inheritance and Reactive Diagrams / 9.1.2:
Conceptual Analysis / 9.1.3:
Introduction to Nonmonotonic Inheritance / 9.2:
Basic Discussion / 9.2.1:
Directly Sceptical Split Validity Upward Chaining Off-Path Inheritance / 9.2.2:
Review of Other Approaches and Problems / 9.2.3:
Defeasible Inheritance and Reactive Diagrams / 9.3:
Summary of Our Algorithm / 9.3.1:
Compilation and Memorization / 9.3.2:
Executing the Algorithm / 9.3.4:
Signposts / 9.3.5:
Beyond Inheritance / 9.3.6:
Interpretations / 9.4:
Informal Comparison of Inheritance with the Systems P and R / 9.4.1:
Inheritance as Information Transfer / 9.4.3:
Inheritance as Reasoning with Prototypes / 9.4.4:
Detailed Translation of Inheritance to Modified Systems of Small Sets / 9.5:
Normality / 9.5.1:
Small Sets / 9.5.2:
Bibliography
Index
19.

eBook

EB
Marco Kuhlmann, Takeo Kanade
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction / 1:
Motivation / 1.1:
Dependency Structures / 1.1.1:
Generative Capacity and Non-projectivity / 1.1.2:
Lexicalized Grammars Induce Dependency Trees / 1.2:
Overview of the Book / 1.3:
Dependency Languages / 1.3.1:
Contributions / 1.3.3:
Preliminaries / 2:
Projective Dependency Structures / 3:
Projectivity / 3.1:
Projectivity in the Sense of Harper and Hays / 3.1.1:
Projectivity in the Sense of Lecerf and Ihm / 3.1.2:
Projectivity in the Sense of Fitialov / 3.1.3:
Related Work / 3.1.4:
Algebraic Framework / 3.2:
Tree Traversal Strategies / 3.2.1:
Traversal of Treelet-Ordered Trees / 3.2.2:
Order Annotations / 3.2.3:
Dependency Algebras / 3.2.4:
Algorithmic Problems / 3.3:
Encoding and Decoding / 3.3.1:
Testing whether a Dependency Structure Is Projective / 3.3.2:
Empirical Evaluation / 3.3.3:
The Projectivity Hypothesis / 3.4.1:
Experimental Setup / 3.4.2:
Results and Discussion / 3.4.3:
Dependency Structures of Bounded Degree / 3.4.4:
The Block-Degree Measure / 4.1:
Blocks and Block-Degree / 4.1.1:
A Hierarchy of Non-projective Dependency Structures / 4.1.2:
Traversal of Block-Ordered Trees / 4.1.3:
Segmented Dependency Structures / 4.2.2:
Dependency Structure Algebras / 4.2.3:
Encoding / 4.3:
Computing the Block-Degree of a Dependency Structure / 4.3.2:
Dependency Structures without Crossings / 4.4:
Weakly Non-projective Dependency Structures / 5.1:
Definition of Weak Non-projectivity / 5.1.1:
Relation to the Block-Degree Measure / 5.1.2:
Algebraic Opaqueness / 5.1.3:
Well-Nested Dependency Structures / 5.1.4:
Definition of Well-Nestedness / 5.2.1:
Non-crossing Partitions / 5.2.2:
Algebraic Characterization / 5.2.3:
Testing whether a Dependency Structure Is Well-Nested / 5.2.4:
Structures and Grammars / 5.2.5:
Context-Free Grammars / 6.1:
Definition / 6.1.1:
String Semantics / 6.1.2:
Linearization Semantics / 6.1.3:
Dependency Semantics / 6.1.4:
Linear Context-Free Rewriting Systems / 6.2:
Non-essential Concatenation Functions / 6.2.1:
Coupled Context-Free Grammars / 6.2.4:
Tree Adjoining Grammar / 6.3.1:
Regular Dependency Languages / 6.4.1:
Regular Sets of Dependency Structures / 7.1:
Algebraic Recognizability / 7.1.1:
Elementary Properties / 7.1.2:
Regular Term Grammars / 7.1.3:
Regular Dependency Grammars / 7.1.4:
Dependency Languages and Lexicalized Grammars / 7.1.5:
Pumping Lemmata / 7.2:
The Pumping Lemma for Regular Term Languages / 7.2.1:
Ogden's Lemma for Regular Term Languages / 7.2.2:
Constant Growth / 7.3:
Constant Growth and Semilinearity / 7.3.1:
Regular Term Languages are Semilinear / 7.3.2:
Generative Capacity and Parsing Complexity / 7.3.3:
Projection of String Languages / 8.1:
Labelled Dependency Structures / 8.1.1:
String-Generating Regular Dependency Grammars / 8.1.2:
String-Generative Capacity / 8.1.3:
String Languages and Structural Properties / 8.2:
Masked Strings / 8.2.1:
Enforcing a Given Block-Degree / 8.2.2:
Enforcing Ill-Nestedness / 8.2.3:
Hierarchies of String Languages / 8.2.4:
Parsing Complexity / 8.2.5:
Membership Problems / 8.3.1:
The Standard Membership Problem / 8.3.2:
The Uniform Membership Problem / 8.3.3:
Recognition of Well-Nested Languages / 8.3.4:
Conclusion / 8.3.5:
Main Contributions / 9.1:
Future Directions / 9.2:
Development of the Formalism / 9.2.1:
Linguistic Relevance / 9.2.2:
Applications to Parsing / 9.2.3:
An Algebraic Perspective on Grammar Formalisms / 9.2.4:
References
Index
20.

eBook

EB
Clemens van Dinther, Stefan Brantschen, Clemens van Dinther, Marius Walliser
Publication info: Springer eBooks Computer Science, Birkhäuser Basel, 2007
Table of contents:
List of Figures
List of Tables
Motivation and Fundamentals / I:
Introduction / 1:
Problem Description and Research Questions / 1.1:
Organization of the Book / 1.2:
Economic Foundations / 2:
Electronic Markets and Strategic Bidding / 2.1:
Historical Background of Markets / 2.1.1:
Markets as an Economic System / 2.1.2:
Market Institution Types and Auctions / 2.1.3:
Bidding in Auctions under Uncertainty / 2.1.4:
Market Engineering / 2.2:
Structured Design Approach / 2.2.1:
Generic Design Approaches / 2.2.2:
Computer Aided Market Engineering / 2.3:
Conceptual Platform Design / 2.3.1:
The meet2trade Software Suite / 2.3.2:
Summary / 2.4:
Agent-based Computational Economics / 3:
Intelligent Software Agents / 3.1:
Characteristics of Software Agents / 3.1.1:
Agent Learning / 3.1.2:
Agent Architecture / 3.1.3:
Multi Agent Systems / 3.2:
Communication and Interaction in Multi-Agent Systems / 3.2.1:
Coordination in Multi Agent Systems / 3.2.2:
Building Multi Agent-based Simulation Models / 3.3:
Characteristics of Simulations / 3.3.1:
Developing and Applying Simulations / 3.3.2:
Agent-based Simulation Approaches and Tools / 3.4:
Methodological Approaches of MABS in Economics / 4:
Pure Agent-based Simulation: The Bottom-up Approach / 4.1:
Monte Carlo Simulation / 4.2:
Evolutionary Approach / 4.3:
Reinforcement Learning / 4.4:
The Learning Model / 4.4.1:
Markov Games / 4.4.2:
Agent-based Simulation Software / 4.5:
Design Objectives for Agent-based Simulation Software / 5.1:
Functional and Technical Requirements / 5.1.1:
Agent-based Simulation Software - An Overview / 5.1.2:
Requirements Analysis of Agent Platforms / 5.1.3:
The Java Agent Development Framework / 5.2:
Agent Platform Architecture / 5.2.1:
Agent Model / 5.2.2:
Development Tools / 5.2.3:
The Agent-based Market Simulation Environment / 5.3:
Architecture / 5.3.1:
Simulation Control Agent (SCA) / 5.3.2:
Simulation Agents and Behaviours / 5.3.3:
Examples for Simulations on AMASE / 5.3.4:
Examination of Bidding under Uncertainty / 5.4:
Simulation Design / 6:
The Simulation Model / 6.1:
Bidding Process and Action Space / 6.1.1:
Agents and Environment / 6.1.2:
The Reinforcement Learning Mechanism / 6.1.3:
Simulation Approach / 6.2:
Initial Parameter Values of the Simulation / 6.2.1:
Simulation Settings / 6.2.2:
Theoretical Benchmark / 6.2.3:
Assessment of the Simulation Results / 7:
Institutional Rules: Sealed Bid versus Ascending Second Price Auction / 7.1:
Two Agent Environment / 7.1.1:
Five Agent Environment / 7.1.2:
Impact of Information Acquisition Cost / 7.2:
Concluding Discussion and Future Research / 7.2.1:
Conclusion / 8:
Summary of the Main Contributions / 8.1:
Limitations of the Approach / 8.2:
Outlook / 8.3:
Appendices and Bibliography
Appendix: Mathematical Proofs / A:
Expected Social Welfare / A.1:
Auctioneer's Expected Revenue / A.2:
Bidders' Expected Payoff / A.3:
Uncertain Bidders' Expected Payoff / A.4:
Density Function for the Estimated Valuations / A.5:
Appendix: Simulation Data and Figures / B:
Data / B.1:
Institutional Rules: Two Bidder / B.1.1:
Institutional Rules: Five Bidder / B.1.2:
Information Acquisition Costs: Two Bidder / B.1.3:
Information Acquisition Cost: Five Bidder / B.1.4:
Figures / B.2:
Bibliography
Abbreviations
21.

eBook

EB
Marco Kuhlmann, Takeo Kanade, Josef Kittler
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction / 1:
Motivation / 1.1:
Dependency Structures / 1.1.1:
Generative Capacity and Non-projectivity / 1.1.2:
Lexicalized Grammars Induce Dependency Trees / 1.2:
Overview of the Book / 1.3:
Dependency Languages / 1.3.1:
Contributions / 1.3.3:
Preliminaries / 2:
Projective Dependency Structures / 3:
Projectivity / 3.1:
Projectivity in the Sense of Harper and Hays / 3.1.1:
Projectivity in the Sense of Lecerf and Ihm / 3.1.2:
Projectivity in the Sense of Fitialov / 3.1.3:
Related Work / 3.1.4:
Algebraic Framework / 3.2:
Tree Traversal Strategies / 3.2.1:
Traversal of Treelet-Ordered Trees / 3.2.2:
Order Annotations / 3.2.3:
Dependency Algebras / 3.2.4:
Algorithmic Problems / 3.3:
Encoding and Decoding / 3.3.1:
Testing whether a Dependency Structure Is Projective / 3.3.2:
Empirical Evaluation / 3.3.3:
The Projectivity Hypothesis / 3.4.1:
Experimental Setup / 3.4.2:
Results and Discussion / 3.4.3:
Dependency Structures of Bounded Degree / 3.4.4:
The Block-Degree Measure / 4.1:
Blocks and Block-Degree / 4.1.1:
A Hierarchy of Non-projective Dependency Structures / 4.1.2:
Traversal of Block-Ordered Trees / 4.1.3:
Segmented Dependency Structures / 4.2.2:
Dependency Structure Algebras / 4.2.3:
Encoding / 4.3:
Computing the Block-Degree of a Dependency Structure / 4.3.2:
Dependency Structures without Crossings / 4.4:
Weakly Non-projective Dependency Structures / 5.1:
Definition of Weak Non-projectivity / 5.1.1:
Relation to the Block-Degree Measure / 5.1.2:
Algebraic Opaqueness / 5.1.3:
Well-Nested Dependency Structures / 5.1.4:
Definition of Well-Nestedness / 5.2.1:
Non-crossing Partitions / 5.2.2:
Algebraic Characterization / 5.2.3:
Testing whether a Dependency Structure Is Well-Nested / 5.2.4:
Structures and Grammars / 5.2.5:
Context-Free Grammars / 6.1:
Definition / 6.1.1:
String Semantics / 6.1.2:
Linearization Semantics / 6.1.3:
Dependency Semantics / 6.1.4:
Linear Context-Free Rewriting Systems / 6.2:
Non-essential Concatenation Functions / 6.2.1:
Coupled Context-Free Grammars / 6.2.4:
Tree Adjoining Grammar / 6.3.1:
Regular Dependency Languages / 6.4.1:
Regular Sets of Dependency Structures / 7.1:
Algebraic Recognizability / 7.1.1:
Elementary Properties / 7.1.2:
Regular Term Grammars / 7.1.3:
Regular Dependency Grammars / 7.1.4:
Dependency Languages and Lexicalized Grammars / 7.1.5:
Pumping Lemmata / 7.2:
The Pumping Lemma for Regular Term Languages / 7.2.1:
Ogden's Lemma for Regular Term Languages / 7.2.2:
Constant Growth / 7.3:
Constant Growth and Semilinearity / 7.3.1:
Regular Term Languages are Semilinear / 7.3.2:
Generative Capacity and Parsing Complexity / 7.3.3:
Projection of String Languages / 8.1:
Labelled Dependency Structures / 8.1.1:
String-Generating Regular Dependency Grammars / 8.1.2:
String-Generative Capacity / 8.1.3:
String Languages and Structural Properties / 8.2:
Masked Strings / 8.2.1:
Enforcing a Given Block-Degree / 8.2.2:
Enforcing Ill-Nestedness / 8.2.3:
Hierarchies of String Languages / 8.2.4:
Parsing Complexity / 8.2.5:
Membership Problems / 8.3.1:
The Standard Membership Problem / 8.3.2:
The Uniform Membership Problem / 8.3.3:
Recognition of Well-Nested Languages / 8.3.4:
Conclusion / 8.3.5:
Main Contributions / 9.1:
Future Directions / 9.2:
Development of the Formalism / 9.2.1:
Linguistic Relevance / 9.2.2:
Applications to Parsing / 9.2.3:
An Algebraic Perspective on Grammar Formalisms / 9.2.4:
References
Index
22.

eBook

EB
Clemens van Dinther, Stefan Brantschen, Clemens van Dinther, Marius Walliser, Monique Calisti
Publication info: SpringerLink Books - AutoHoldings, Birkhäuser Basel, 2007
Table of contents:
List of Figures
List of Tables
Motivation and Fundamentals / I:
Introduction / 1:
Problem Description and Research Questions / 1.1:
Organization of the Book / 1.2:
Economic Foundations / 2:
Electronic Markets and Strategic Bidding / 2.1:
Historical Background of Markets / 2.1.1:
Markets as an Economic System / 2.1.2:
Market Institution Types and Auctions / 2.1.3:
Bidding in Auctions under Uncertainty / 2.1.4:
Market Engineering / 2.2:
Structured Design Approach / 2.2.1:
Generic Design Approaches / 2.2.2:
Computer Aided Market Engineering / 2.3:
Conceptual Platform Design / 2.3.1:
The meet2trade Software Suite / 2.3.2:
Summary / 2.4:
Agent-based Computational Economics / 3:
Intelligent Software Agents / 3.1:
Characteristics of Software Agents / 3.1.1:
Agent Learning / 3.1.2:
Agent Architecture / 3.1.3:
Multi Agent Systems / 3.2:
Communication and Interaction in Multi-Agent Systems / 3.2.1:
Coordination in Multi Agent Systems / 3.2.2:
Building Multi Agent-based Simulation Models / 3.3:
Characteristics of Simulations / 3.3.1:
Developing and Applying Simulations / 3.3.2:
Agent-based Simulation Approaches and Tools / 3.4:
Methodological Approaches of MABS in Economics / 4:
Pure Agent-based Simulation: The Bottom-up Approach / 4.1:
Monte Carlo Simulation / 4.2:
Evolutionary Approach / 4.3:
Reinforcement Learning / 4.4:
The Learning Model / 4.4.1:
Markov Games / 4.4.2:
Agent-based Simulation Software / 4.5:
Design Objectives for Agent-based Simulation Software / 5.1:
Functional and Technical Requirements / 5.1.1:
Agent-based Simulation Software - An Overview / 5.1.2:
Requirements Analysis of Agent Platforms / 5.1.3:
The Java Agent Development Framework / 5.2:
Agent Platform Architecture / 5.2.1:
Agent Model / 5.2.2:
Development Tools / 5.2.3:
The Agent-based Market Simulation Environment / 5.3:
Architecture / 5.3.1:
Simulation Control Agent (SCA) / 5.3.2:
Simulation Agents and Behaviours / 5.3.3:
Examples for Simulations on AMASE / 5.3.4:
Examination of Bidding under Uncertainty / 5.4:
Simulation Design / 6:
The Simulation Model / 6.1:
Bidding Process and Action Space / 6.1.1:
Agents and Environment / 6.1.2:
The Reinforcement Learning Mechanism / 6.1.3:
Simulation Approach / 6.2:
Initial Parameter Values of the Simulation / 6.2.1:
Simulation Settings / 6.2.2:
Theoretical Benchmark / 6.2.3:
Assessment of the Simulation Results / 7:
Institutional Rules: Sealed Bid versus Ascending Second Price Auction / 7.1:
Two Agent Environment / 7.1.1:
Five Agent Environment / 7.1.2:
Impact of Information Acquisition Cost / 7.2:
Concluding Discussion and Future Research / 7.2.1:
Conclusion / 8:
Summary of the Main Contributions / 8.1:
Limitations of the Approach / 8.2:
Outlook / 8.3:
Appendices and Bibliography
Appendix: Mathematical Proofs / A:
Expected Social Welfare / A.1:
Auctioneer's Expected Revenue / A.2:
Bidders' Expected Payoff / A.3:
Uncertain Bidders' Expected Payoff / A.4:
Density Function for the Estimated Valuations / A.5:
Appendix: Simulation Data and Figures / B:
Data / B.1:
Institutional Rules: Two Bidder / B.1.1:
Institutional Rules: Five Bidder / B.1.2:
Information Acquisition Costs: Two Bidder / B.1.3:
Information Acquisition Cost: Five Bidder / B.1.4:
Figures / B.2:
Bibliography
Abbreviations
23.

eBook

EB
Pierre M. Nugues
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2006
Table of contents:
An Overview of Language Processing / 1:
Linguistics and Language Processing / 1.1:
Applications of Language Processing / 1.2:
The Different Domains of Language Processing / 1.3:
Phonetics / 1.4:
Lexicon and Morphology / 1.5:
Syntax / 1.6:
Syntax as Defined by Noam Chomsky / 1.6.1:
Syntax as Relations and Dependencies / 1.6.2:
Semantics / 1.7:
Discourse and Dialogue / 1.8:
Why Speech and Language Processing Are Difficult / 1.9:
Ambiguity / 1.9.1:
Models and Their Implementation / 1.9.2:
An Example of Language Technology in Action: the Persona Project / 1.10:
Overview of Persona / 1.10.1:
The Persona's Modules / 1.10.2:
Further Reading / 1.11:
Corpus Processing Tools / 2:
Corpora / 2.1:
Types of Corpora / 2.1.1:
Corpora and Lexicon Building / 2.1.2:
Corpora as Knowledge Sources for the Linguist / 2.1.3:
Finite-State Automata / 2.2:
A Description / 2.2.1:
Mathematical Definition of Finite-State Automata / 2.2.2:
Finite-State Automata in Prolog / 2.2.3:
Deterministic and Nondeterministic Automata / 2.2.4:
Building a Deterministic Automaton from a Nondeterministic One / 2.2.5:
Searching a String with a Finite-State Automaton / 2.2.6:
Operations on Finite-State Automata / 2.2.7:
Regular Expressions / 2.3:
Repetition Metacharacters / 2.3.1:
The Longest Match / 2.3.2:
Character Classes / 2.3.3:
Nonprintable Symbols or Positions / 2.3.4:
Union and Boolean Operators / 2.3.5:
Operator Combination and Precedence / 2.3.6:
Programming with Regular Expressions / 2.4:
Perl / 2.4.1:
Matching / 2.4.2:
Substitutions / 2.4.3:
Translating Characters / 2.4.4:
String Operators / 2.4.5:
Back References / 2.4.6:
Finding Concordances / 2.5:
Concordances in Prolog / 2.5.1:
Concordances in Perl / 2.5.2:
Approximate String Matching / 2.6:
Edit Operations / 2.6.1:
Minimum Edit Distance / 2.6.2:
Searching Edits in Prolog / 2.6.3:
Encoding, Entropy, and Annotation Schemes / 2.7:
Encoding Texts / 3.1:
Character Sets / 3.2:
Representing Characters / 3.2.1:
Unicode / 3.2.2:
The Unicode Encoding Schemes / 3.2.3:
Locales and Word Order / 3.3:
Presenting Time, Numerical Information, and Ordered Words / 3.3.1:
The Unicode Collation Algorithm / 3.3.2:
Markup Languages / 3.4:
A Brief Background / 3.4.1:
An Outline of XML / 3.4.2:
Writing a DTD / 3.4.3:
Writing an XML Document / 3.4.4:
Namespaces / 3.4.5:
Codes and Information Theory / 3.5:
Entropy / 3.5.1:
Huffman Encoding / 3.5.2:
Cross Entropy / 3.5.3:
Perplexity and Cross Perplexity / 3.5.4:
Entropy and Decision Trees / 3.6:
Decision Trees / 3.6.1:
Inducing Decision Trees Automatically / 3.6.2:
Counting Words / 3.7:
Counting Words and Word Sequences / 4.1:
Words and Tokens / 4.2:
What Is a Word? / 4.2.1:
Breaking a Text into Words: Tokenization / 4.2.2:
Tokenizing Texts / 4.3:
Tokenizing Texts in Prolog / 4.3.1:
Tokenizing Texts in Perl / 4.3.2:
N-grams / 4.4:
Some Definitions / 4.4.1:
Counting Unigrams in Prolog / 4.4.2:
Counting Unigrams with Perl / 4.4.3:
Counting Bigrams with Perl / 4.4.4:
Probabilistic Models of a Word Sequence / 4.5:
The Maximum Likelihood Estimation / 4.5.1:
Using ML Estimates with Nineteen Eighty-Four / 4.5.2:
Smoothing N-gram Probabilities / 4.6:
Sparse Data / 4.6.1:
Laplace's Rule / 4.6.2:
Good-Turing Estimation / 4.6.3:
Using N-grams of Variable Length / 4.7:
Linear Interpolation / 4.7.1:
Back-off / 4.7.2:
Quality of a Language Model / 4.8:
Intuitive Presentation / 4.8.1:
Entropy Rate / 4.8.2:
Perplexity / 4.8.3:
Collocations / 4.9:
Word Preference Measurements / 4.9.1:
Extracting Collocations with Perl / 4.9.2:
Application: Retrieval and Ranking of Documents on the Web / 4.10:
Words, Parts of Speech, and Morphology / 4.11:
Words / 5.1:
Parts of Speech / 5.1.1:
Features / 5.1.2:
Two Significant Parts of Speech: The Noun and the Verb / 5.1.3:
Lexicons / 5.2:
Encoding a Dictionary / 5.2.1:
Building a Trie in Prolog / 5.2.2:
Finding a Word in a Trie / 5.2.3:
Morphology / 5.3:
Morphemes / 5.3.1:
Morphs / 5.3.2:
Inflection and Derivation / 5.3.3:
Language Differences / 5.3.4:
Morphological Parsing / 5.4:
Two-Level Model of Morphology / 5.4.1:
Interpreting the Morphs / 5.4.2:
Finite-State Transducers / 5.4.3:
Conjugating a French Verb / 5.4.4:
Prolog Implementation / 5.4.5:
Operations on Finite-State Transducers / 5.4.6:
Morphological Rules / 5.5:
Two-Level Rules / 5.5.1:
Rules and Finite-State Transducers / 5.5.2:
Rule Composition: An Example with French Irregular Verbs / 5.5.3:
Application Examples / 5.6:
Part-of-Speech Tagging Using Rules / 5.7:
Resolving Part-of-Speech Ambiguity / 6.1:
A Manual Method / 6.1.1:
Which Method to Use to Automatically Assign Parts of Speech / 6.1.2:
Tagging with Rules / 6.2:
Brill's Tagger / 6.2.1:
Implementation in Prolog / 6.2.2:
Deriving Rules Automatically / 6.2.3:
Confusion Matrices / 6.2.4:
Unknown Words / 6.3:
Standardized Part-of-Speech Tagsets / 6.4:
Multilingual Part-of-Speech Tags / 6.4.1:
Parts of Speech for English / 6.4.2:
An Annotation Scheme for Swedish / 6.4.3:
Part-of-Speech Tagging Using Stochastic Techniques / 6.5:
The Noisy Channel Model / 7.1:
Presentation / 7.1.1:
The N-gram Approximation / 7.1.2:
Tagging a Sentence / 7.1.3:
The Viterbi Algorithm: An Intuitive Presentation / 7.1.4:
Markov Models / 7.2:
Markov Chains / 7.2.1:
Hidden Markov Models / 7.2.2:
Three Fundamental Algorithms to Solve Problems with HMMs / 7.2.3:
The Forward Procedure / 7.2.4:
Viterbi Algorithm / 7.2.5:
The Backward Procedure / 7.2.6:
The Forward-Backward Algorithm / 7.2.7:
Tagging with Decision Trees / 7.3:
An Application of the Noisy Channel Model: Spell Checking / 7.4:
A Second Application: Language Models for Machine Translation / 7.6:
Parallel Corpora / 7.6.1:
Alignment / 7.6.2:
Translation / 7.6.3:
Phrase-Structure Grammars in Prolog / 7.7:
Using Prolog to Write Phrase-Structure Grammars / 8.1:
Representing Chomsky's Syntactic Formalism in Prolog / 8.2:
Constituents / 8.2.1:
Tree Structures / 8.2.2:
Phrase-Structure Rules / 8.2.3:
The Definite Clause Grammar (DCG) Notation / 8.2.4:
Parsing with DCGs / 8.3:
Translating DCGs into Prolog Clauses / 8.3.1:
Parsing and Generation / 8.3.2:
Left-Recursive Rules / 8.3.3:
Parsing Ambiguity / 8.4:
Using Variables / 8.5:
Gender and Number Agreement / 8.5.1:
Obtaining the Syntactic Structure / 8.5.2:
Application: Tokenizing Texts Using DCG Rules / 8.6:
Word Breaking / 8.6.1:
Recognition of Sentence Boundaries / 8.6.2:
Semantic Representation / 8.7:
λ-Calculus / 8.7.1:
Embedding λ-Expressions into DCG Rules / 8.7.2:
Semantic Composition of Verbs / 8.7.3:
An Application of Phrase-Structure Grammars and a Worked Example / 8.8:
Partial Parsing / 8.9:
Is Syntax Necessary? / 9.1:
Word Spotting and Template Matching / 9.2:
ELIZA / 9.2.1:
Word Spotting in Prolog / 9.2.2:
Multiword Detection / 9.3:
Multiwords / 9.3.1:
A Standard Multiword Annotation / 9.3.2:
Detecting Multiwords with Rules / 9.3.3:
Running the Program / 9.3.4:
Noun Groups and Verb Groups / 9.4:
Groups Versus Recursive Phrases / 9.4.1:
DCG Rules to Detect Noun Groups / 9.4.2:
DCG Rules to Detect Verb Groups / 9.4.3:
Running the Rules / 9.4.4:
Group Detection as a Tagging Problem / 9.5:
Tagging Gaps / 9.5.1:
Tagging Words / 9.5.2:
Using Symbolic Rules / 9.5.3:
Using Statistical Tagging / 9.5.4:
Cascading Partial Parsers / 9.6:
Elementary Analysis of Grammatical Functions / 9.7:
Main Functions / 9.7.1:
Extracting Other Groups / 9.7.2:
An Annotation Scheme for Groups in French / 9.8:
Application: The FASTUS System / 9.9:
The Message Understanding Conferences / 9.9.1:
The Syntactic Layers of the FASTUS System / 9.9.2:
Evaluation of Information Extraction Systems / 9.9.3:
Syntactic Formalisms / 9.10:
Introduction / 10.1:
Chomsky's Grammar in Syntactic Structures / 10.2:
Constituency: A Formal Definition / 10.2.1:
Transformations / 10.2.2:
Transformations and Movements / 10.2.3:
Gap Threading / 10.2.4:
Gap Threading to Parse Relative Clauses / 10.2.5:
Standardized Phrase Categories for English / 10.3:
Unification-Based Grammars / 10.4:
Representing Features in Prolog / 10.4.1:
A Formalism for Features and Rules / 10.4.3:
Features Organization / 10.4.4:
Features and Unification / 10.4.5:
A Unification Algorithm for Feature Structures / 10.4.6:
Dependency Grammars / 10.5:
Properties of a Dependency Graph / 10.5.1:
Valence / 10.5.3:
Dependencies and Functions / 10.5.4:
Parsing Techniques / 10.6:
Bottom-up Parsing / 11.1:
The Shift-Reduce Algorithm / 11.2.1:
Implementing Shift-Reduce Parsing in Prolog / 11.2.2:
Differences Between Bottom-up and Top-down Parsing / 11.2.3:
Chart Parsing / 11.3:
Backtracking and Efficiency / 11.3.1:
Structure of a Chart / 11.3.2:
The Active Chart / 11.3.3:
Modules of an Earley Parser / 11.3.4:
The Earley Algorithm in Prolog / 11.3.5:
The Earley Parser to Handle Left-Recursive Rules and Empty Symbols / 11.3.6:
Probabilistic Parsing of Context-Free Grammars / 11.4:
A Description of PCFGs / 11.5:
The Bottom-up Chart / 11.5.1:
The Cocke-Younger-Kasami Algorithm in Prolog / 11.5.2:
Adding Probabilities to the CYK Parser / 11.5.3:
Parser Evaluation / 11.6:
Constituency-Based Evaluation / 11.6.1:
Dependency-Based Evaluation / 11.6.2:
Performance of PCFG Parsing / 11.6.3:
Parsing Dependencies / 11.7:
Dependency Rules / 11.7.1:
Extending the Shift-Reduce Algorithm to Parse Dependencies / 11.7.2:
Nivre's Parser in Prolog / 11.7.3:
Finding Dependencies Using Constraints / 11.7.4:
Parsing Dependencies Using Statistical Techniques / 11.7.5:
Semantics and Predicate Logic / 11.8:
Language Meaning and Logic: An Illustrative Example / 12.1:
Formal Semantics / 12.3:
First-Order Predicate Calculus to Represent the State of Affairs / 12.4:
Variables and Constants / 12.4.1:
Predicates / 12.4.2:
Querying the Universe of Discourse / 12.5:
Mapping Phrases onto Logical Formulas / 12.6:
Representing Nouns and Adjectives / 12.6.1:
Representing Noun Groups / 12.6.2:
Representing Verbs and Prepositions / 12.6.3:
The Case of Determiners / 12.7:
Determiners and Logic Quantifiers / 12.7.1:
Translating Sentences Using Quantifiers / 12.7.2:
A General Representation of Sentences / 12.7.3:
Compositionality to Translate Phrases to Logical Forms / 12.8:
Translating the Noun Phrase / 12.8.1:
Translating the Verb Phrase / 12.8.2:
Augmenting the Database and Answering Questions / 12.9:
Declarations / 12.9.1:
Questions with Existential and Universal Quantifiers / 12.9.2:
Prolog and Unknown Predicates / 12.9.3:
Other Determiners and Questions / 12.9.4:
Application: The Spoken Language Translator / 12.10:
Translating Spoken Sentences / 12.10.1:
Compositional Semantics / 12.10.2:
Semantic Representation Transfer / 12.10.3:
Lexical Semantics / 12.11:
Beyond Formal Semantics / 13.1:
La langue et la parole / 13.1.1:
Language and the Structure of the World / 13.1.2:
Lexical Structures / 13.2:
Some Basic Terms and Concepts / 13.2.1:
Ontological Organization / 13.2.2:
Lexical Classes and Relations / 13.2.3:
Semantic Networks / 13.2.4:
Building a Lexicon / 13.3:
The Lexicon and Word Senses / 13.3.1:
Verb Models / 13.3.2:
Definitions / 13.3.3:
An Example of Exhaustive Lexical Organization: WordNet / 13.4:
Nouns / 13.4.1:
Adjectives / 13.4.2:
Verbs / 13.4.3:
Automatic Word Sense Disambiguation / 13.5:
Senses as Tags / 13.5.1:
Associating a Word with a Context / 13.5.2:
Guessing the Topic / 13.5.3:
Naive Bayes / 13.5.4:
Using Constraints on Verbs / 13.5.5:
Using Dictionary Definitions / 13.5.6:
An Unsupervised Algorithm to Tag Senses / 13.5.7:
Senses and Languages / 13.5.8:
Case Grammars / 13.6:
Cases in Latin / 13.6.1:
Cases and Thematic Roles / 13.6.2:
Parsing with Cases / 13.6.3:
Semantic Grammars / 13.6.4:
Extending Case Grammars / 13.7:
FrameNet / 13.7.1:
A Statistical Method to Identify Semantic Roles / 13.7.2:
An Example of Case Grammar Application: EVAR / 13.8:
EVAR's Ontology and Syntactic Classes / 13.8.1:
Cases in EVAR / 13.8.2:
Discourse / 13.9:
Discourse: A Minimalist Definition / 14.1:
A Description of Discourse / 14.2.1:
Discourse Entities / 14.2.2:
References: An Application-Oriented View / 14.3:
References and Noun Phrases / 14.3.1:
Finding Names - Proper Nouns / 14.3.2:
Coreference / 14.4:
Anaphora / 14.4.1:
Solving Coreferences in an Example / 14.4.2:
A Standard Coreference Annotation / 14.4.3:
References: A More Formal View / 14.5:
Generating Discourse Entities: The Existential Quantifier / 14.5.1:
Retrieving Discourse Entities: Definite Descriptions / 14.5.2:
Generating Discourse Entities: The Universal Quantifier / 14.5.3:
Centering: A Theory on Discourse Structure / 14.6:
Solving Coreferences / 14.7:
A Simplistic Method: Using Syntactic and Semantic Compatibility / 14.7.1:
Solving Coreferences with Shallow Grammatical Information / 14.7.2:
Salience in a Multimodal Context / 14.7.3:
Using a Machine-Learning Technique to Resolve Coreferences / 14.7.4:
More Complex Phenomena: Ellipses / 14.7.5:
Discourse and Rhetoric / 14.8:
Ancient Rhetoric: An Outline / 14.8.1:
Rhetorical Structure Theory / 14.8.2:
Types of Relations / 14.8.3:
Implementing Rhetorical Structure Theory / 14.8.4:
Events and Time / 14.9:
Events / 14.9.1:
Event Types / 14.9.2:
Temporal Representation of Events / 14.9.3:
Events and Tenses / 14.9.4:
TimeML, an Annotation Scheme for Time and Events / 14.10:
Dialogue / 14.11:
Why a Dialogue? / 15.1:
Simple Dialogue Systems / 15.3:
Dialogue Systems Based on Automata / 15.3.1:
Dialogue Modeling / 15.3.2:
Speech Acts: A Theory of Language Interaction / 15.4:
Speech Acts and Human-Machine Dialogue / 15.5:
Speech Acts as a Tagging Model / 15.5.1:
Speech Acts Tags Used in the SUNDIAL Project / 15.5.2:
Dialogue Parsing / 15.5.3:
Interpreting Speech Acts / 15.5.4:
EVAR: A Dialogue Application Using Speech Acts / 15.5.5:
Taking Beliefs and Intentions into Account / 15.6:
Representing Mental States / 15.6.1:
The STRIPS Planning Algorithm / 15.6.2:
Causality / 15.6.3:
An Introduction to Prolog / 15.7:
A Short Background / A.1:
Basic Features of Prolog / A.2:
Facts / A.2.1:
Terms / A.2.2:
Queries / A.2.3:
Logical Variables / A.2.4:
Shared Variables / A.2.5:
Data Types in Prolog / A.2.6:
Rules / A.2.7:
Running a Program / A.3:
Unification / A.4:
Substitution and Instances / A.4.1:
Terms and Unification / A.4.2:
The Herbrand Unification Algorithm / A.4.3:
Example / A.4.4:
The Occurs-Check / A.4.5:
Resolution / A.5:
Modus Ponens / A.5.1:
A Resolution Algorithm / A.5.2:
Derivation Trees and Backtracking / A.5.3:
Tracing and Debugging / A.6:
Cuts, Negation, and Related Predicates / A.7:
Cuts / A.7.1:
Negation / A.7.2:
The once/1 Predicate / A.7.3:
Lists / A.8:
Some List-Handling Predicates / A.9:
The member/2 Predicate / A.9.1:
The append/3 Predicate / A.9.2:
The delete/3 Predicate / A.9.3:
The intersection/3 Predicate / A.9.4:
The reverse/2 Predicate / A.9.5:
The Mode of an Argument / A.9.6:
Operators and Arithmetic / A.10:
Operators / A.10.1:
Arithmetic Operations / A.10.2:
Comparison Operators / A.10.3:
Lists and Arithmetic: The length/2 Predicate / A.10.4:
Lists and Comparison: The quicksort/2 Predicate / A.10.5:
Some Other Built-in Predicates / A.11:
Type Predicates / A.11.1:
Term Manipulation Predicates / A.11.2:
Handling Run-Time Errors and Exceptions / A.12:
Dynamically Accessing and Updating the Database / A.13:
Accessing a Clause: The clause/2 Predicate / A.13.1:
Dynamic and Static Predicates / A.13.2:
Adding a Clause: The asserta/1 and assertz/1 Predicates / A.13.3:
Removing Clauses: The retract/1 and abolish/2 Predicates / A.13.4:
Handling Unknown Predicates / A.13.5:
All-Solutions Predicates / A.14:
Fundamental Search Algorithms / A.15:
Representing the Graph / A.15.1:
Depth-First Search / A.15.2:
Breadth-First Search / A.15.3:
A* Search / A.15.4:
Input/Output / A.16:
Reading and Writing Characters with Edinburgh Prolog / A.16.1:
Reading and Writing Terms with Edinburgh Prolog / A.16.2:
Opening and Closing Files with Edinburgh Prolog / A.16.3:
Reading and Writing Characters with Standard Prolog / A.16.4:
Reading and Writing Terms with Standard Prolog / A.16.5:
Opening and Closing Files with Standard Prolog / A.16.6:
Writing Loops / A.16.7:
Developing Prolog Programs / A.17:
Presentation Style / A.17.1:
Improving Programs / A.17.2:
Index
References
24.

E-book

EB
Pierre M. Nugues, A. Bundy, Jörg Siekmann
Publication: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2006
Table of contents:
An Overview of Language Processing / 1:
Linguistics and Language Processing / 1.1:
Applications of Language Processing / 1.2:
The Different Domains of Language Processing / 1.3:
Phonetics / 1.4:
Lexicon and Morphology / 1.5:
Syntax / 1.6:
Syntax as Defined by Noam Chomsky / 1.6.1:
Syntax as Relations and Dependencies / 1.6.2:
Semantics / 1.7:
Discourse and Dialogue / 1.8:
Why Speech and Language Processing Are Difficult / 1.9:
Ambiguity / 1.9.1:
Models and Their Implementation / 1.9.2:
An Example of Language Technology in Action: the Persona Project / 1.10:
Overview of Persona / 1.10.1:
The Persona's Modules / 1.10.2:
Further Reading / 1.11:
Corpus Processing Tools / 2:
Corpora / 2.1:
Types of Corpora / 2.1.1:
Corpora and Lexicon Building / 2.1.2:
Corpora as Knowledge Sources for the Linguist / 2.1.3:
Finite-State Automata / 2.2:
A Description / 2.2.1:
Mathematical Definition of Finite-State Automata / 2.2.2:
Finite-State Automata in Prolog / 2.2.3:
Deterministic and Nondeterministic Automata / 2.2.4:
Building a Deterministic Automaton from a Nondeterministic One / 2.2.5:
Searching a String with a Finite-State Automaton / 2.2.6:
Operations on Finite-State Automata / 2.2.7:
Regular Expressions / 2.3:
Repetition Metacharacters / 2.3.1:
The Longest Match / 2.3.2:
Character Classes / 2.3.3:
Nonprintable Symbols or Positions / 2.3.4:
Union and Boolean Operators / 2.3.5:
Operator Combination and Precedence / 2.3.6:
Programming with Regular Expressions / 2.4:
Perl / 2.4.1:
Matching / 2.4.2:
Substitutions / 2.4.3:
Translating Characters / 2.4.4:
String Operators / 2.4.5:
Back References / 2.4.6:
Finding Concordances / 2.5:
Concordances in Prolog / 2.5.1:
Concordances in Perl / 2.5.2:
Approximate String Matching / 2.6:
Edit Operations / 2.6.1:
Minimum Edit Distance / 2.6.2:
Searching Edits in Prolog / 2.6.3:
Encoding, Entropy, and Annotation Schemes / 2.7:
Encoding Texts / 3.1:
Character Sets / 3.2:
Representing Characters / 3.2.1:
Unicode / 3.2.2:
The Unicode Encoding Schemes / 3.2.3:
Locales and Word Order / 3.3:
Presenting Time, Numerical Information, and Ordered Words / 3.3.1:
The Unicode Collation Algorithm / 3.3.2:
Markup Languages / 3.4:
A Brief Background / 3.4.1:
An Outline of XML / 3.4.2:
Writing a DTD / 3.4.3:
Writing an XML Document / 3.4.4:
Namespaces / 3.4.5:
Codes and Information Theory / 3.5:
Entropy / 3.5.1:
Huffman Encoding / 3.5.2:
Cross Entropy / 3.5.3:
Perplexity and Cross Perplexity / 3.5.4:
Entropy and Decision Trees / 3.6:
Decision Trees / 3.6.1:
Inducing Decision Trees Automatically / 3.6.2:
Counting Words / 3.7:
Counting Words and Word Sequences / 4.1:
Words and Tokens / 4.2:
What Is a Word? / 4.2.1:
Breaking a Text into Words: Tokenization / 4.2.2:
Tokenizing Texts / 4.3:
Tokenizing Texts in Prolog / 4.3.1:
Tokenizing Texts in Perl / 4.3.2:
N-grams / 4.4:
Some Definitions / 4.4.1:
Counting Unigrams in Prolog / 4.4.2:
Counting Unigrams with Perl / 4.4.3:
Counting Bigrams with Perl / 4.4.4:
Probabilistic Models of a Word Sequence / 4.5:
The Maximum Likelihood Estimation / 4.5.1:
Using ML Estimates with Nineteen Eighty-Four / 4.5.2:
Smoothing N-gram Probabilities / 4.6:
Sparse Data / 4.6.1:
Laplace's Rule / 4.6.2:
Good-Turing Estimation / 4.6.3:
Using N-grams of Variable Length / 4.7:
Linear Interpolation / 4.7.1:
Back-off / 4.7.2:
Quality of a Language Model / 4.8:
Intuitive Presentation / 4.8.1:
Entropy Rate / 4.8.2:
Perplexity / 4.8.3:
Collocations / 4.9:
Word Preference Measurements / 4.9.1:
Extracting Collocations with Perl / 4.9.2:
Application: Retrieval and Ranking of Documents on the Web / 4.10:
Words, Parts of Speech, and Morphology / 4.11:
Words / 5.1:
Parts of Speech / 5.1.1:
Features / 5.1.2:
Two Significant Parts of Speech: The Noun and the Verb / 5.1.3:
Lexicons / 5.2:
Encoding a Dictionary / 5.2.1:
Building a Trie in Prolog / 5.2.2:
Finding a Word in a Trie / 5.2.3:
Morphology / 5.3:
Morphemes / 5.3.1:
Morphs / 5.3.2:
Inflection and Derivation / 5.3.3:
Language Differences / 5.3.4:
Morphological Parsing / 5.4:
Two-Level Model of Morphology / 5.4.1:
Interpreting the Morphs / 5.4.2:
Finite-State Transducers / 5.4.3:
Conjugating a French Verb / 5.4.4:
Prolog Implementation / 5.4.5:
Operations on Finite-State Transducers / 5.4.6:
Morphological Rules / 5.5:
Two-Level Rules / 5.5.1:
Rules and Finite-State Transducers / 5.5.2:
Rule Composition: An Example with French Irregular Verbs / 5.5.3:
Application Examples / 5.6:
Part-of-Speech Tagging Using Rules / 5.7:
Resolving Part-of-Speech Ambiguity / 6.1:
A Manual Method / 6.1.1:
Which Method to Use to Automatically Assign Parts of Speech / 6.1.2:
Tagging with Rules / 6.2:
Brill's Tagger / 6.2.1:
Implementation in Prolog / 6.2.2:
Deriving Rules Automatically / 6.2.3:
Confusion Matrices / 6.2.4:
Unknown Words / 6.3:
Standardized Part-of-Speech Tagsets / 6.4:
Multilingual Part-of-Speech Tags / 6.4.1:
Parts of Speech for English / 6.4.2:
An Annotation Scheme for Swedish / 6.4.3:
Part-of-Speech Tagging Using Stochastic Techniques / 6.5:
The Noisy Channel Model / 7.1:
Presentation / 7.1.1:
The N-gram Approximation / 7.1.2:
Tagging a Sentence / 7.1.3:
The Viterbi Algorithm: An Intuitive Presentation / 7.1.4:
Markov Models / 7.2:
Markov Chains / 7.2.1:
Hidden Markov Models / 7.2.2:
Three Fundamental Algorithms to Solve Problems with HMMs / 7.2.3:
The Forward Procedure / 7.2.4:
Viterbi Algorithm / 7.2.5:
The Backward Procedure / 7.2.6:
The Forward-Backward Algorithm / 7.2.7:
Tagging with Decision Trees / 7.3:
An Application of the Noisy Channel Model: Spell Checking / 7.4:
A Second Application: Language Models for Machine Translation / 7.6:
Parallel Corpora / 7.6.1:
Alignment / 7.6.2:
Translation / 7.6.3:
Phrase-Structure Grammars in Prolog / 7.7:
Using Prolog to Write Phrase-Structure Grammars / 8.1:
Representing Chomsky's Syntactic Formalism in Prolog / 8.2:
Constituents / 8.2.1:
Tree Structures / 8.2.2:
Phrase-Structure Rules / 8.2.3:
The Definite Clause Grammar (DCG) Notation / 8.2.4:
Parsing with DCGs / 8.3:
Translating DCGs into Prolog Clauses / 8.3.1:
Parsing and Generation / 8.3.2:
Left-Recursive Rules / 8.3.3:
Parsing Ambiguity / 8.4:
Using Variables / 8.5:
Gender and Number Agreement / 8.5.1:
Obtaining the Syntactic Structure / 8.5.2:
Application: Tokenizing Texts Using DCG Rules / 8.6:
Word Breaking / 8.6.1:
Recognition of Sentence Boundaries / 8.6.2:
Semantic Representation / 8.7:
λ-Calculus / 8.7.1:
Embedding λ-Expressions into DCG Rules / 8.7.2:
Semantic Composition of Verbs / 8.7.3:
An Application of Phrase-Structure Grammars and a Worked Example / 8.8:
Partial Parsing / 8.9:
Is Syntax Necessary? / 9.1:
Word Spotting and Template Matching / 9.2:
ELIZA / 9.2.1:
Word Spotting in Prolog / 9.2.2:
Multiword Detection / 9.3:
Multiwords / 9.3.1:
A Standard Multiword Annotation / 9.3.2:
Detecting Multiwords with Rules / 9.3.3:
Running the Program / 9.3.4:
Noun Groups and Verb Groups / 9.4:
Groups Versus Recursive Phrases / 9.4.1:
DCG Rules to Detect Noun Groups / 9.4.2:
DCG Rules to Detect Verb Groups / 9.4.3:
Running the Rules / 9.4.4:
Group Detection as a Tagging Problem / 9.5:
Tagging Gaps / 9.5.1:
Tagging Words / 9.5.2:
Using Symbolic Rules / 9.5.3:
Using Statistical Tagging / 9.5.4:
Cascading Partial Parsers / 9.6:
Elementary Analysis of Grammatical Functions / 9.7:
Main Functions / 9.7.1:
Extracting Other Groups / 9.7.2:
An Annotation Scheme for Groups in French / 9.8:
Application: The FASTUS System / 9.9:
The Message Understanding Conferences / 9.9.1:
The Syntactic Layers of the FASTUS System / 9.9.2:
Evaluation of Information Extraction Systems / 9.9.3:
Syntactic Formalisms / 9.10:
Introduction / 10.1:
Chomsky's Grammar in Syntactic Structures / 10.2:
Constituency: A Formal Definition / 10.2.1:
Transformations / 10.2.2:
Transformations and Movements / 10.2.3:
Gap Threading / 10.2.4:
Gap Threading to Parse Relative Clauses / 10.2.5:
Standardized Phrase Categories for English / 10.3:
Unification-Based Grammars / 10.4:
Representing Features in Prolog / 10.4.1:
A Formalism for Features and Rules / 10.4.3:
Features Organization / 10.4.4:
Features and Unification / 10.4.5:
A Unification Algorithm for Feature Structures / 10.4.6:
Dependency Grammars / 10.5:
Properties of a Dependency Graph / 10.5.1:
Valence / 10.5.3:
Dependencies and Functions / 10.5.4:
Parsing Techniques / 10.6:
Bottom-up Parsing / 11.1:
The Shift-Reduce Algorithm / 11.2.1:
Implementing Shift-Reduce Parsing in Prolog / 11.2.2:
Differences Between Bottom-up and Top-down Parsing / 11.2.3:
Chart Parsing / 11.3:
Backtracking and Efficiency / 11.3.1:
Structure of a Chart / 11.3.2:
The Active Chart / 11.3.3:
Modules of an Earley Parser / 11.3.4:
The Earley Algorithm in Prolog / 11.3.5:
The Earley Parser to Handle Left-Recursive Rules and Empty Symbols / 11.3.6:
Probabilistic Parsing of Context-Free Grammars / 11.4:
A Description of PCFGs / 11.5:
The Bottom-up Chart / 11.5.1:
The Cocke-Younger-Kasami Algorithm in Prolog / 11.5.2:
Adding Probabilities to the CYK Parser / 11.5.3:
Parser Evaluation / 11.6:
Constituency-Based Evaluation / 11.6.1:
Dependency-Based Evaluation / 11.6.2:
Performance of PCFG Parsing / 11.6.3:
Parsing Dependencies / 11.7:
Dependency Rules / 11.7.1:
Extending the Shift-Reduce Algorithm to Parse Dependencies / 11.7.2:
Nivre's Parser in Prolog / 11.7.3:
Finding Dependencies Using Constraints / 11.7.4:
Parsing Dependencies Using Statistical Techniques / 11.7.5:
Semantics and Predicate Logic / 11.8:
Language Meaning and Logic: An Illustrative Example / 12.1:
Formal Semantics / 12.3:
First-Order Predicate Calculus to Represent the State of Affairs / 12.4:
Variables and Constants / 12.4.1:
Predicates / 12.4.2:
Querying the Universe of Discourse / 12.5:
Mapping Phrases onto Logical Formulas / 12.6:
Representing Nouns and Adjectives / 12.6.1:
Representing Noun Groups / 12.6.2:
Representing Verbs and Prepositions / 12.6.3:
The Case of Determiners / 12.7:
Determiners and Logic Quantifiers / 12.7.1:
Translating Sentences Using Quantifiers / 12.7.2:
A General Representation of Sentences / 12.7.3:
Compositionality to Translate Phrases to Logical Forms / 12.8:
Translating the Noun Phrase / 12.8.1:
Translating the Verb Phrase / 12.8.2:
Augmenting the Database and Answering Questions / 12.9:
Declarations / 12.9.1:
Questions with Existential and Universal Quantifiers / 12.9.2:
Prolog and Unknown Predicates / 12.9.3:
Other Determiners and Questions / 12.9.4:
Application: The Spoken Language Translator / 12.10:
Translating Spoken Sentences / 12.10.1:
Compositional Semantics / 12.10.2:
Semantic Representation Transfer / 12.10.3:
Lexical Semantics / 12.11:
Beyond Formal Semantics / 13.1:
La langue et la parole / 13.1.1:
Language and the Structure of the World / 13.1.2:
Lexical Structures / 13.2:
Some Basic Terms and Concepts / 13.2.1:
Ontological Organization / 13.2.2:
Lexical Classes and Relations / 13.2.3:
Semantic Networks / 13.2.4:
Building a Lexicon / 13.3:
The Lexicon and Word Senses / 13.3.1:
Verb Models / 13.3.2:
Definitions / 13.3.3:
An Example of Exhaustive Lexical Organization: WordNet / 13.4:
Nouns / 13.4.1:
Adjectives / 13.4.2:
Verbs / 13.4.3:
Automatic Word Sense Disambiguation / 13.5:
Senses as Tags / 13.5.1:
Associating a Word with a Context / 13.5.2:
Guessing the Topic / 13.5.3:
Naive Bayes / 13.5.4:
Using Constraints on Verbs / 13.5.5:
Using Dictionary Definitions / 13.5.6:
An Unsupervised Algorithm to Tag Senses / 13.5.7:
Senses and Languages / 13.5.8:
Case Grammars / 13.6:
Cases in Latin / 13.6.1:
Cases and Thematic Roles / 13.6.2:
Parsing with Cases / 13.6.3:
Semantic Grammars / 13.6.4:
Extending Case Grammars / 13.7:
FrameNet / 13.7.1:
A Statistical Method to Identify Semantic Roles / 13.7.2:
An Example of Case Grammar Application: EVAR / 13.8:
EVAR's Ontology and Syntactic Classes / 13.8.1:
Cases in EVAR / 13.8.2:
Discourse / 13.9:
Discourse: A Minimalist Definition / 14.1:
A Description of Discourse / 14.2.1:
Discourse Entities / 14.2.2:
References: An Application-Oriented View / 14.3:
References and Noun Phrases / 14.3.1:
Finding Names - Proper Nouns / 14.3.2:
Coreference / 14.4:
Anaphora / 14.4.1:
Solving Coreferences in an Example / 14.4.2:
A Standard Coreference Annotation / 14.4.3:
References: A More Formal View / 14.5:
Generating Discourse Entities: The Existential Quantifier / 14.5.1:
Retrieving Discourse Entities: Definite Descriptions / 14.5.2:
Generating Discourse Entities: The Universal Quantifier / 14.5.3:
Centering: A Theory on Discourse Structure / 14.6:
Solving Coreferences / 14.7:
A Simplistic Method: Using Syntactic and Semantic Compatibility / 14.7.1:
Solving Coreferences with Shallow Grammatical Information / 14.7.2:
Salience in a Multimodal Context / 14.7.3:
Using a Machine-Learning Technique to Resolve Coreferences / 14.7.4:
More Complex Phenomena: Ellipses / 14.7.5:
Discourse and Rhetoric / 14.8:
Ancient Rhetoric: An Outline / 14.8.1:
Rhetorical Structure Theory / 14.8.2:
Types of Relations / 14.8.3:
Implementing Rhetorical Structure Theory / 14.8.4:
Events and Time / 14.9:
Events / 14.9.1:
Event Types / 14.9.2:
Temporal Representation of Events / 14.9.3:
Events and Tenses / 14.9.4:
TimeML, an Annotation Scheme for Time and Events / 14.10:
Dialogue / 14.11:
Why a Dialogue? / 15.1:
Simple Dialogue Systems / 15.3:
Dialogue Systems Based on Automata / 15.3.1:
Dialogue Modeling / 15.3.2:
Speech Acts: A Theory of Language Interaction / 15.4:
Speech Acts and Human-Machine Dialogue / 15.5:
Speech Acts as a Tagging Model / 15.5.1:
Speech Acts Tags Used in the SUNDIAL Project / 15.5.2:
Dialogue Parsing / 15.5.3:
Interpreting Speech Acts / 15.5.4:
EVAR: A Dialogue Application Using Speech Acts / 15.5.5:
Taking Beliefs and Intentions into Account / 15.6:
Representing Mental States / 15.6.1:
The STRIPS Planning Algorithm / 15.6.2:
Causality / 15.6.3:
An Introduction to Prolog / 15.7:
A Short Background / A.1:
Basic Features of Prolog / A.2:
Facts / A.2.1:
Terms / A.2.2:
Queries / A.2.3:
Logical Variables / A.2.4:
Shared Variables / A.2.5:
Data Types in Prolog / A.2.6:
Rules / A.2.7:
Running a Program / A.3:
Unification / A.4:
Substitution and Instances / A.4.1:
Terms and Unification / A.4.2:
The Herbrand Unification Algorithm / A.4.3:
Example / A.4.4:
The Occurs-Check / A.4.5:
Resolution / A.5:
Modus Ponens / A.5.1:
A Resolution Algorithm / A.5.2:
Derivation Trees and Backtracking / A.5.3:
Tracing and Debugging / A.6:
Cuts, Negation, and Related Predicates / A.7:
Cuts / A.7.1:
Negation / A.7.2:
The once/1 Predicate / A.7.3:
Lists / A.8:
Some List-Handling Predicates / A.9:
The member/2 Predicate / A.9.1:
The append/3 Predicate / A.9.2:
The delete/3 Predicate / A.9.3:
The intersection/3 Predicate / A.9.4:
The reverse/2 Predicate / A.9.5:
The Mode of an Argument / A.9.6:
Operators and Arithmetic / A.10:
Operators / A.10.1:
Arithmetic Operations / A.10.2:
Comparison Operators / A.10.3:
Lists and Arithmetic: The length/2 Predicate / A.10.4:
Lists and Comparison: The quicksort/2 Predicate / A.10.5:
Some Other Built-in Predicates / A.11:
Type Predicates / A.11.1:
Term Manipulation Predicates / A.11.2:
Handling Run-Time Errors and Exceptions / A.12:
Dynamically Accessing and Updating the Database / A.13:
Accessing a Clause: The clause/2 Predicate / A.13.1:
Dynamic and Static Predicates / A.13.2:
Adding a Clause: The asserta/1 and assertz/1 Predicates / A.13.3:
Removing Clauses: The retract/1 and abolish/2 Predicates / A.13.4:
Handling Unknown Predicates / A.13.5:
All-Solutions Predicates / A.14:
Fundamental Search Algorithms / A.15:
Representing the Graph / A.15.1:
Depth-First Search / A.15.2:
Breadth-First Search / A.15.3:
A* Search / A.15.4:
Input/Output / A.16:
Reading and Writing Characters with Edinburgh Prolog / A.16.1:
Reading and Writing Terms with Edinburgh Prolog / A.16.2:
Opening and Closing Files with Edinburgh Prolog / A.16.3:
Reading and Writing Characters with Standard Prolog / A.16.4:
Reading and Writing Terms with Standard Prolog / A.16.5:
Opening and Closing Files with Standard Prolog / A.16.6:
Writing Loops / A.16.7:
Developing Prolog Programs / A.17:
Presentation Style / A.17.1:
Improving Programs / A.17.2:
Index
References
25.

E-book

EB
Giovanni Pezzulo, Martin V. Butz, Cristiano Castelfranchi, Rino Falcone, Jörg Siekmann
Publication: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Theory / Part I:
Introduction: Anticipation in Natural and Artificial Cognition / Giovanni Pezzulo ; Martin V. Butz ; Cristiano Castelfranchi ; Rino Falcone / 1:
Introduction / 1.1:
The Path to Anticipatory Cognitive Systems / 1.2:
Symbolic Behavior, Representation-Less Behavior, and Their Merge to Anticipatory Behavior / 1.2.1:
The Power of Anticipation: From Reactivity to Proactivity / 1.2.2:
The Anticipatory Approach to Cognitive Systems / 1.2.3:
The Unitary Nature of Anticipation / 1.2.4:
Anticipation in Living Organisms / 1.3:
Anticipatory Natural Cognition / 1.3.1:
Anticipatory Codes in the Brain / 1.3.2:
Simulative Theories of Cognition, and Their Unifying Nature / 1.3.3:
Conclusions / 1.4:
The Anticipatory Approach: Definitions and Taxonomies / 2:
Anticipatory Systems, Anticipation, and Anticipatory Behavior / 2.1:
Prediction vs. Anticipation / 2.2:
Predictive Capabilities / 2.2.1:
Anticipatory Capabilities / 2.2.2:
Anticipation and Goal-Oriented Behavior / 2.3:
The Anticipatory Structure of Goal-Oriented Behavior / 2.3.1:
Not All Anticipatory Behavior Is Goal-Oriented / 2.3.2:
Which Anticipations Permit Goal-Oriented Action? / 2.3.3:
The Hierarchical Organization of Anticipatory Goal-Oriented Action / 2.3.4:
Additional Elements of True Goal-Oriented Behavior / 2.3.5:
Anticipation and Learning / 2.4:
Learning to Predict / 2.4.1:
Bootstrapping Autonomous Cognitive Development: Surprise and Curiosity / 2.4.2:
From Willed to Automatic Control of Action and Vice Versa on the Basis of Surprise / 2.4.3:
Benefits of Anticipations in Cognitive Agents / 2.5:
Potentials for Anticipatory Systems / 3.1:
Potential Benefits of Anticipatory Mechanisms on Cognitive Functions / 3.2:
Effective, Context-Based Action Initiation / 3.2.1:
Faster and Smoother Behavior Execution / 3.2.2:
Improving Top-Down Attention / 3.2.3:
Improving Information Seeking / 3.2.4:
Improving Decision Making / 3.2.5:
Object Grounding, Categorization, and Ontologies / 3.2.6:
Social Abilities / 3.2.7:
Learning / 3.2.8:
Arising Challenges Due to Anticipations and Avoiding Them / 3.3:
Conclusion / 3.4:
Models, Architectures, and Applications / Part II:
Anticipation in Attention / Christian Balkenius ; Alexander Forster ; Birger Johansson ; Vin Thorsteinsdottir / 4:
Learning What to Look at / 4.1:
A Learning Saliency Map / 4.2.1:
Cue-Target Learning / 4.3:
Cueing by a Single Stimulus / 4.3.1:
Contextual Cueing / 4.3.2:
Fovea Based Solution / 4.3.3:
Attending to Moving Targets / 4.4:
Models of Smooth Pursuit / 4.4.1:
Engineering Approaches / 4.4.2:
The State Based Approach / 4.4.3:
The Prediction Approach / 4.4.4:
The Fovea Based Approach / 4.4.5:
Combining Bottom-Up and Top-Down Processes / 4.5:
Anticipatory, Goal-Directed Behavior / Oliver Herbort / 5:
A Brief History of Schemas / 5.1:
Schema Approaches / 5.2:
Symbolic Schemas for Policy Learning / 5.2.1:
Symbolic Schemas and Prediction for Selection / 5.2.2:
Neural-Based Planning / 5.2.3:
Neural Network-Based Dynamic Programming / 5.2.4:
Inverse Model Approaches / 5.3:
Inverse Models in Motor Learning and Control / 5.3.1:
Inverse Models and Schema Approaches / 5.3.2:
Advanced Structures / 5.4:
Prediction and Action / 5.4.1:
Coupled Forward-Inverse Models / 5.4.2:
Hierarchical Anticipatory Systems / 5.4.3:
Evaluation of Predictive and Anticipatory Capabilities / 5.5:
Schema-Based Systems / 5.5.1:
Discussion / 5.5.2:
Contrasting Predictive System Capabilities / 5.6.1:
Contrasting Anticipatory System Capabilities / 5.6.2:
Integration / 5.6.3:
Anticipation and Believability / Carlos Martinho ; Ana Paiva / 5.7:
Animation and Believability / 6.1:
Emotion and Exaggeration / 6.1.2:
Anticipation / 6.1.3:
Anticipation, Emotion, and Believability / 6.1.4:
Related Work / 6.2:
Oz Project / 6.2.1:
EMA / 6.2.2:
Duncan the Highland Terrier / 6.2.3:
Emotivector / 6.3:
Architecture / 6.3.1:
Anticipation Model / 6.3.2:
Salience Model / 6.3.3:
Sensation Model / 6.3.4:
Selection Model / 6.3.5:
Uncertainty / 6.3.6:
Aini, the Synthetic Flower / 6.4:
Emotivectors in Action / 6.4.1:
Evaluation / 6.4.2:
iCat, the Affective Game Buddy / 6.5:
Emotivector Integration in Agent Architectures / 6.5.1:
Anticipation and Emotions for Goal Directed Agents / Emiliano Lorini ; Michele Piunti ; Maria Miceli / 6.7:
Related Works in Affective Computing / 7.1:
Expectations and Surprise / 7.3:
A Typology of Expectations and Predictions / 7.3.1:
From the Typology of Expectations to the Typology of Surprise / 7.3.2:
Roles of Surprise in Cognitive Processing / 7.3.3:
Expectations and Emotions for Goal-Directed Agents / 7.4:
Expectations and Decision Making / 7.4.1:
Situated Agents and Affective States / 7.4.2:
Confidence of Predictions and Modulation of the Probability Function / 7.4.3:
A Reinforcement-Learning Model of Top-Down Attention Based on a Potential-Action Map / Dimitri Ognibene ; Gianluca Baldassarre / 7.4.4:
Methods / 8.1:
RGB Camera Input / 8.2.1:
Saliency Map and Action Selection / 8.2.2:
Fovea / 8.2.3:
Periphery Map / 8.2.4:
Inhibition-of-Return Map / 8.2.5:
Potential Action Map / 8.2.6:
Actor-Critic Model / 8.2.7:
Parameter Settings / 8.2.8:
The Tasks / 8.2.9:
Results / 8.3:
Learning and Performance of the Models / 8.3.1:
Bottom-Up Attention: Periphery Map and Inhibition-of-Return Map / 8.3.2:
Analysis of the Vote Maps / 8.3.3:
Capability of Learning to Stay, and of Staying, on the Target / 8.3.4:
Potential Action Map: An Action-Oriented Memory of Cue Information / 8.3.5:
Potential Action Map: Capacity to Integrate Multiple Sources of Information / 8.3.6:
Anticipation by Analogy / Boicho Kokinov ; Maurice Grinberg ; Georgi Petkov ; Kiril Kiryazov / 8.4:
The Anticipation by Analogy Scenario / 9.1:
Models of Analogy-Making / 9.3:
AMBR Model of Analogy-Making / 9.4:
Integrating Visual Perception and Motor Control in AMBR / 9.5:
Top-Down Perception / 9.5.1:
Attention / 9.5.2:
Transfer of the Solution / 9.5.3:
Action Execution / 9.5.4:
Running the Simulated Model and Comparing It with Human Data / 9.6:
Comparing with Human Data / 9.6.1:
Running the Real Robot Model in the Real World / 9.7:
Ikaros / 9.7.1:
AMBR2Robot / 9.7.2:
Tests / 9.7.3:
Mechanisms for Active Vision / 9.8:
Discussion and Conclusion / 9.9:
Anticipation in Coordination / Emilian Lalev / 10:
The Prisoner's Dilemma Game / 10.1:
Related Research / 10.2:
Fictitious Play / 10.2.1:
Strategic Teaching and Reputation Formation / 10.2.2:
Social Order and Coordination / 10.2.3:
Anticipation and Information Processing in Societies / 10.2.4:
Agent Architecture and Decision Making Model / 10.3:
The Model / 10.3.1:
Judgment and Decision Making / 10.3.2:
Game Simulations with Individual Agents: Comparison with Experimental Results / 10.4:
Comparison of the Model with Experimental Results / 10.4.1:
Multi-Agent Simulations / 10.5:
Agent Societies / 10.5.1:
Simulation Results and Discussions / 10.5.2:
Endowing Artificial Systems with Anticipatory Capabilities: Success Cases / 10.6:
Flexible Goal-Directed Arm Control: The SURE_REACH Architecture / 11.1:
Learning Cognitive Maps for Anticipatory Control: Time Growing Neural Gas / 11.3:
Learning Effective Directional Arm Control: The Evolutionary System XCSF / 11.4:
Anticipatory Target Motion Prediction / 11.5:
Anticipatory Spatial Attention with Saliency Maps / 11.6:
Behavior Prediction in a Group of Robots / 11.7:
Enhanced Adaptivity in a Predator-Prey Scenario / 11.8:
Adaptive Navigation and Control with Anticipation / 11.9:
Mental Experiments for Selecting Actions / 11.10:
Anticipations for Believable Behavior / 11.11:
Anticipatory Behavior in a Searching-for-an-Object Task / 11.12:
The Role of Anticipation in Cooperation and Coordination / 11.13:
Anticipatory Effects of Expectations and Emotions / 11.14:
On-Line and Off-Line Anticipation for Action Control / 11.15:
References / 11.16:
26.

E-book (EB)
Giovanni Pezzulo, Martin V. Butz, Cristiano Castelfranchi, Rino Falcone, Jörg Siekmann
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Theory / Part I:
Introduction: Anticipation in Natural and Artificial Cognition / Giovanni Pezzulo ; Martin V. Butz ; Cristiano Castelfranchi ; Rino Falcone / 1:
Introduction / 1.1:
The Path to Anticipatory Cognitive Systems / 1.2:
Symbolic Behavior, Representation-Less Behavior, and Their Merge to Anticipatory Behavior / 1.2.1:
The Power of Anticipation: From Reactivity to Proactivity / 1.2.2:
The Anticipatory Approach to Cognitive Systems / 1.2.3:
The Unitary Nature of Anticipation / 1.2.4:
Anticipation in Living Organisms / 1.3:
Anticipatory Natural Cognition / 1.3.1:
Anticipatory Codes in the Brain / 1.3.2:
Simulative Theories of Cognition, and Their Unifying Nature / 1.3.3:
Conclusions / 1.4:
The Anticipatory Approach: Definitions and Taxonomies / 2:
Anticipatory Systems, Anticipation, and Anticipatory Behavior / 2.1:
Prediction vs. Anticipation / 2.2:
Predictive Capabilities / 2.2.1:
Anticipatory Capabilities / 2.2.2:
Anticipation and Goal-Oriented Behavior / 2.3:
The Anticipatory Structure of Goal-Oriented Behavior / 2.3.1:
Not All Anticipatory Behavior Is Goal-Oriented / 2.3.2:
Which Anticipations Permit Goal-Oriented Action? / 2.3.3:
The Hierarchical Organization of Anticipatory Goal-Oriented Action / 2.3.4:
Additional Elements of True Goal-Oriented Behavior / 2.3.5:
Anticipation and Learning / 2.4:
Learning to Predict / 2.4.1:
Bootstrapping Autonomous Cognitive Development: Surprise and Curiosity / 2.4.2:
From Willed to Automatic Control of Action and Vice Versa on the Basis of Surprise / 2.4.3:
Benefits of Anticipations in Cognitive Agents / 2.5:
Potentials for Anticipatory Systems / 3.1:
Potential Benefits of Anticipatory Mechanisms on Cognitive Functions / 3.2:
Effective, Context-Based Action Initiation / 3.2.1:
Faster and Smoother Behavior Execution / 3.2.2:
Improving Top-Down Attention / 3.2.3:
Improving Information Seeking / 3.2.4:
Improving Decision Making / 3.2.5:
Object Grounding, Categorization, and Ontologies / 3.2.6:
Social Abilities / 3.2.7:
Learning / 3.2.8:
Arising Challenges Due to Anticipations and Avoiding Them / 3.3:
Conclusion / 3.4:
Models, Architectures, and Applications / Part II:
Anticipation in Attention / Christian Balkenius ; Alexander Forster ; Birger Johansson ; Vin Thorsteinsdottir / 4:
Learning What to Look at / 4.1:
A Learning Saliency Map / 4.2.1:
Cue-Target Learning / 4.3:
Cueing by a Single Stimulus / 4.3.1:
Contextual Cueing / 4.3.2:
Fovea Based Solution / 4.3.3:
Attending to Moving Targets / 4.4:
Models of Smooth Pursuit / 4.4.1:
Engineering Approaches / 4.4.2:
The State Based Approach / 4.4.3:
The Prediction Approach / 4.4.4:
The Fovea Based Approach / 4.4.5:
Combining Bottom-Up and Top-Down Processes / 4.5:
Anticipatory, Goal-Directed Behavior / Oliver Herbort / 5:
A Brief History of Schemas / 5.1:
Schema Approaches / 5.2:
Symbolic Schemas for Policy Learning / 5.2.1:
Symbolic Schemas and Prediction for Selection / 5.2.2:
Neural-Based Planning / 5.2.3:
Neural Network-Based Dynamic Programming / 5.2.4:
Inverse Model Approaches / 5.3:
Inverse Models in Motor Learning and Control / 5.3.1:
Inverse Models and Schema Approaches / 5.3.2:
Advanced Structures / 5.4:
Prediction and Action / 5.4.1:
Coupled Forward-Inverse Models / 5.4.2:
Hierarchical Anticipatory Systems / 5.4.3:
Evaluation of Predictive and Anticipatory Capabilities / 5.5:
Schema-Based Systems / 5.5.1:
Discussion / 5.5.2:
Contrasting Predictive System Capabilities / 5.6.1:
Contrasting Anticipatory System Capabilities / 5.6.2:
Integration / 5.6.3:
Anticipation and Believability / Carlos Martinho ; Ana Paiva / 5.7:
Animation and Believability / 6.1:
Emotion and Exaggeration / 6.1.2:
Anticipation / 6.1.3:
Anticipation, Emotion, and Believability / 6.1.4:
Related Work / 6.2:
Oz Project / 6.2.1:
EMA / 6.2.2:
Duncan the Highland Terrier / 6.2.3:
Emotivector / 6.3:
Architecture / 6.3.1:
Anticipation Model / 6.3.2:
Salience Model / 6.3.3:
Sensation Model / 6.3.4:
Selection Model / 6.3.5:
Uncertainty / 6.3.6:
Aini, the Synthetic Flower / 6.4:
Emotivectors in Action / 6.4.1:
Evaluation / 6.4.2:
iCat, the Affective Game Buddy / 6.5:
Emotivector Integration in Agent Architectures / 6.5.1:
Anticipation and Emotions for Goal Directed Agents / Emiliano Lorini ; Michele Piunti ; Maria Miceli / 6.7:
Related Works in Affective Computing / 7.1:
Expectations and Surprise / 7.3:
A Typology of Expectations and Predictions / 7.3.1:
From the Typology of Expectations to the Typology of Surprise / 7.3.2:
Roles of Surprise in Cognitive Processing / 7.3.3:
Expectations and Emotions for Goal-Directed Agents / 7.4:
Expectations and Decision Making / 7.4.1:
Situated Agents and Affective States / 7.4.2:
Confidence of Predictions and Modulation of the Probability Function / 7.4.3:
A Reinforcement-Learning Model of Top-Down Attention Based on a Potential-Action Map / Dimitri Ognibene ; Gianluca Baldassarre / 7.4.4:
Methods / 8.1:
RGB Camera Input / 8.2.1:
Saliency Map and Action Selection / 8.2.2:
Fovea / 8.2.3:
Periphery Map / 8.2.4:
Inhibition-of-Return Map / 8.2.5:
Potential Action Map / 8.2.6:
Actor-Critic Model / 8.2.7:
Parameter Settings / 8.2.8:
The Tasks / 8.2.9:
Results / 8.3:
Learning and Performance of the Models / 8.3.1:
Bottom-Up Attention: Periphery Map and Inhibition-of-Return Map / 8.3.2:
Analysis of the Vote Maps / 8.3.3:
Capability of Learning to Stay, and of Staying, on the Target / 8.3.4:
Potential Action Map: An Action-Oriented Memory of Cue Information / 8.3.5:
Potential Action Map: Capacity to Integrate Multiple Sources of Information / 8.3.6:
Anticipation by Analogy / Boicho Kokinov ; Maurice Grinberg ; Georgi Petkov ; Kiril Kiryazov / 8.4:
The Anticipation by Analogy Scenario / 9.1:
Models of Analogy-Making / 9.3:
AMBR Model of Analogy-Making / 9.4:
Integrating Visual Perception and Motor Control in AMBR / 9.5:
Top-Down Perception / 9.5.1:
Attention / 9.5.2:
Transfer of the Solution / 9.5.3:
Action Execution / 9.5.4:
Running the Simulated Model and Comparing It with Human Data / 9.6:
Comparing with Human Data / 9.6.1:
Running the Real Robot Model in the Real World / 9.7:
Ikaros / 9.7.1:
AMBR2Robot / 9.7.2:
Tests / 9.7.3:
Mechanisms for Active Vision / 9.8:
Discussion and Conclusion / 9.9:
Anticipation in Coordination / Emilian Lalev / 10:
The Prisoner's Dilemma Game / 10.1:
Related Research / 10.2:
Fictitious Play / 10.2.1:
Strategic Teaching and Reputation Formation / 10.2.2:
Social Order and Coordination / 10.2.3:
Anticipation and Information Processing in Societies / 10.2.4:
Agent Architecture and Decision Making Model / 10.3:
The Model / 10.3.1:
Judgment and Decision Making / 10.3.2:
Game Simulations with Individual Agents: Comparison with Experimental Results / 10.4:
Comparison of the Model with Experimental Results / 10.4.1:
Multi-Agent Simulations / 10.5:
Agent Societies / 10.5.1:
Simulation Results and Discussions / 10.5.2:
Endowing Artificial Systems with Anticipatory Capabilities: Success Cases / 10.6:
Flexible Goal-Directed Arm Control: The SURE_REACH Architecture / 11.1:
Learning Cognitive Maps for Anticipatory Control: Time Growing Neural Gas / 11.3:
Learning Effective Directional Arm Control: The Evolutionary System XCSF / 11.4:
Anticipatory Target Motion Prediction / 11.5:
Anticipatory Spatial Attention with Saliency Maps / 11.6:
Behavior Prediction in a Group of Robots / 11.7:
Enhanced Adaptivity in a Predator-Prey Scenario / 11.8:
Adaptive Navigation and Control with Anticipation / 11.9:
Mental Experiments for Selecting Actions / 11.10:
Anticipations for Believable Behavior / 11.11:
Anticipatory Behavior in a Searching-for-an-Object Task / 11.12:
The Role of Anticipation in Cooperation and Coordination / 11.13:
Anticipatory Effects of Expectations and Emotions / 11.14:
On-Line and Off-Line Anticipation for Action Control / 11.15:
References / 11.16:
27.

E-book (EB)
Gaurav Sukhatme
Publication info: Springer eBooks Computer Science, Springer US, 2009
Table of contents:
Recent Research in Autonomous Robots / Part I:
Mobile Robots for Polar Remote Sensing / Christopher M. Gifford ; Eric L. Akers ; Richard S. Stansbury ; Arvin Agah / 1:
Introduction / 1.1:
Polar Mobile Robots / 1.2:
Challenges and Survivability Issues for Polar Robotics / 1.2.1:
MARVIN I / 1.2.2:
MARVIN II / 1.2.3:
Software Architecture / 1.2.4:
North Greenland Ice Core Project (GRIP) Camp Operations / 1.2.5:
Summit Camp Operations / 1.2.6:
West Antarctic Ice Sheet (WAIS) Divide Camp Operations / 1.2.7:
Robotics-Based Approaches to Seismic Surveying / 1.3:
Related Work / 1.3.1:
Robotics-Based Approaches / 1.3.2:
Conclusion / 1.4:
Guidance and Control of Formation Flying Spacecraft / F. Y. Hadaegh ; G. Singh ; B. Acikmese ; D. P. Scharf ; M. Mandic / 2:
Modeling and Simulation / 2.1:
Guidance and Control Architectures / 2.3:
Formation State Estimation / 2.4:
Guidance and Control / 2.5:
Formulation of Optimal Path Planning Problem / 2.5.1:
Conclusions / 2.6:
Acknowledgement / 2.7:
Adaptive Sampling for Field Reconstruction With Multiple Mobile Robots / Bin Zhang ; Gaurav S. Sukhatme / 3:
Adaptive Sampling / 3.1:
Divide and Conquer / 3.4:
Discretization / 3.4.1:
Graph Partition / 3.4.2:
Path Planning for a Single Robot / 3.4.3:
Simulations / 3.5:
Conclusion and Future Work / 3.6:
Grasping Affordances: Learning to Connect Vision to Hand Action / Charles de Granville ; Di Wang ; Joshua Southerland ; Robert Platt, Jr. ; Andrew H. Fagg / 4:
Learning Models of 3D Object Appearance / 4.1:
Edgel Constellations for Describing 2D Object Appearance / 4.2.1:
Capturing Object Appearance in 3D / 4.2.2:
Learning Complete 3D Appearance Models / 4.2.3:
Data Collection and Preprocessing / 4.2.4:
Experimental Results / 4.2.5:
Learning Canonical Grasps for Objects / 4.3:
Modeling Hand Orientation / 4.3.1:
Modeling Hand Position / 4.3.2:
Modeling Finger Posture / 4.3.3:
Modeling Mixtures of Hand Postures / 4.3.4:
Data Collection / 4.3.5:
Discussion / 4.3.6:
Intelligent Robotics for Assistive Healthcare and Therapy / Ayanna M. Howard ; Sekou Remy ; Chung Hyuk Park ; Hae Won Park ; Douglas Brooks / 5:
Activities of Daily Living: Robot Learning from Human Teleoperation / 5.1:
Divided Force Guidance for Haptic Feedback / 5.2.1:
Learning through Haptically Guided Manipulation / 5.2.2:
Experiments / 5.2.3:
Child Therapy and Education: Robots in Interactive Play Scenarios / 5.3:
Defining Play Primitives / 5.3.1:
Physical Therapy: Robot Assistance via Patient Observation / 5.3.2:
Learning of Exercise Primitives / 5.4.1:
Learning of Exercise Behaviors / 5.4.2:
A New Direction in Human-Robot Interaction: A Lesson from Star Wars? / Gerard Jounghyun Kim / 5.4.3:
Indirect Human-Robot Interaction / 6.1:
Robot location/pose tracking / 6.2.1:
User/environment sensing / 6.2.2:
Flexible projection / 6.2.3:
Large display surface centered interaction design / 6.2.4:
Summary and Postscript / 6.3:
Neurorobotics Primer / M. Anthony Lewis ; Theresa J. Klein / 7:
Neurorobots and the Scientific Method / 7.1:
21st Century Robotics: Productizing Mythology / 7.1.2:
Computational Substrate / 7.1.3:
Neuromorphic Chips / 7.1.4:
Graphics Processing Units / 7.1.5:
Purpose of this Chapter / 7.1.6:
Classical Robotics / 7.2:
Configuration Space / 7.2.1:
Kinematics / 7.2.2:
Differential Motion / 7.2.3:
Statics / 7.2.4:
Redundancy / 7.2.5:
Dynamics / 7.2.6:
Trajectory Generation / 7.2.7:
A Pause to Reflect / 7.2.8:
Basic Neurocomputation / 7.3:
Information Flows into Dendrites and Out of Axons / 7.3.1:
The Neuron Cell is a Capacitor with a Decision Making Capability / 7.3.2:
Neural Models Capture the Basic Dynamics of the Cell Body and Throw Away Some Details / 7.3.3:
Numerical Integration / 7.3.4:
Reflexes and High Level Control / 7.3.5:
Building Neural Oscillators: Nature's Coordination and Trajectory Generation Mechanism:
Notable Systems / 7.4:
GPUs / 7.5:
Learning Inverse Dynamics by Gaussian Process Regression under the Multi-Task Learning Framework / Dit-Yan Yeung ; Yu Zhang / 7.6:
Appreciation and Dedication / 8.1:
Robotic Control / 8.2:
Kinematics and Dynamics:
Reasons Against Analytic Solutions / 8.2.2:
Insights from Human Arm Control / 8.2.3:
Learning and Control / 8.2.4:
Learning Inverse Dynamics / 8.3:
Recent Work / 8.3.1:
Learning Inverse Dynamics as a Regression Problem / 8.3.2:
Gaussian Process Regression / 8.4:
Brief Review / 8.4.1:
Gaussian Process Regression for Learning Inverse Dynamics / 8.4.2:
Multi-Task Gaussian Process Regression / 8.5:
Brief Review of Bonilla et al.'s Method (33) / 8.5.1:
Multi-Task Gaussian Process Regression for Learning Inverse Dynamics / 8.5.2:
Tributes and Recollections from Former Students / 8.6:
Professor George Albert Bekey / 9:
Personal Life / 9.1:
Research / 9.2:
Teaching and Students / 9.3:
Service to the University and the Profession / 9.4:
Recognition, Honors, and Awards / 9.5:
A Personal Tribute / 9.6:
Current History of the Bekey Tribe / H. Pete Schmid ; Monte Ung / 10:
Recollections and Tributes / Dan Antonelli ; Arun Bhadoria ; Willis G. Downing, Jr. ; Huan Liu ; Michael Merritt ; L. Warren Morrison / 11:
From Aerospace Engineering to Biomedical Engineering / 11.1:
The Final Oral Examination / 11.2:
Recent Work on Preventing Fractures caused by a Fall / 11.3:
Teacher, Mentor, and Friend / 11.4:
A Testimonial / 11.5:
Making it Look Easy / 11.6:
Solving Complex Problems Efficiently / 11.7:
References
Index
28.

E-book (EB)
Gaurav Sukhatme, Gaurav S. Sukhatme
Publication info: SpringerLink Books - AutoHoldings, Springer US, 2009
Table of contents:
Recent Research in Autonomous Robots / Part I:
Mobile Robots for Polar Remote Sensing / Christopher M. Gifford ; Eric L. Akers ; Richard S. Stansbury ; Arvin Agah / 1:
Introduction / 1.1:
Polar Mobile Robots / 1.2:
Challenges and Survivability Issues for Polar Robotics / 1.2.1:
MARVIN I / 1.2.2:
MARVIN II / 1.2.3:
Software Architecture / 1.2.4:
North Greenland Ice Core Project (GRIP) Camp Operations / 1.2.5:
Summit Camp Operations / 1.2.6:
West Antarctic Ice Sheet (WAIS) Divide Camp Operations / 1.2.7:
Robotics-Based Approaches to Seismic Surveying / 1.3:
Related Work / 1.3.1:
Robotics-Based Approaches / 1.3.2:
Conclusion / 1.4:
Guidance and Control of Formation Flying Spacecraft / F. Y. Hadaegh ; G. Singh ; B. Acikmese ; D. P. Scharf ; M. Mandic / 2:
Modeling and Simulation / 2.1:
Guidance and Control Architectures / 2.3:
Formation State Estimation / 2.4:
Guidance and Control / 2.5:
Formulation of Optimal Path Planning Problem / 2.5.1:
Conclusions / 2.6:
Acknowledgement / 2.7:
Adaptive Sampling for Field Reconstruction With Multiple Mobile Robots / Bin Zhang ; Gaurav S. Sukhatme / 3:
Adaptive Sampling / 3.1:
Divide and Conquer / 3.4:
Discretization / 3.4.1:
Graph Partition / 3.4.2:
Path Planning for a Single Robot / 3.4.3:
Simulations / 3.5:
Conclusion and Future Work / 3.6:
Grasping Affordances: Learning to Connect Vision to Hand Action / Charles de Granville ; Di Wang ; Joshua Southerland ; Robert Platt, Jr. ; Andrew H. Fagg / 4:
Learning Models of 3D Object Appearance / 4.1:
Edgel Constellations for Describing 2D Object Appearance / 4.2.1:
Capturing Object Appearance in 3D / 4.2.2:
Learning Complete 3D Appearance Models / 4.2.3:
Data Collection and Preprocessing / 4.2.4:
Experimental Results / 4.2.5:
Learning Canonical Grasps for Objects / 4.3:
Modeling Hand Orientation / 4.3.1:
Modeling Hand Position / 4.3.2:
Modeling Finger Posture / 4.3.3:
Modeling Mixtures of Hand Postures / 4.3.4:
Data Collection / 4.3.5:
Discussion / 4.3.6:
Intelligent Robotics for Assistive Healthcare and Therapy / Ayanna M. Howard ; Sekou Remy ; Chung Hyuk Park ; Hae Won Park ; Douglas Brooks / 5:
Activities of Daily Living: Robot Learning from Human Teleoperation / 5.1:
Divided Force Guidance for Haptic Feedback / 5.2.1:
Learning through Haptically Guided Manipulation / 5.2.2:
Experiments / 5.2.3:
Child Therapy and Education: Robots in Interactive Play Scenarios / 5.3:
Defining Play Primitives / 5.3.1:
Physical Therapy: Robot Assistance via Patient Observation / 5.3.2:
Learning of Exercise Primitives / 5.4.1:
Learning of Exercise Behaviors / 5.4.2:
A New Direction in Human-Robot Interaction: A Lesson from Star Wars? / Gerard Jounghyun Kim / 5.4.3:
Indirect Human-Robot Interaction / 6.1:
Robot location/pose tracking / 6.2.1:
User/environment sensing / 6.2.2:
Flexible projection / 6.2.3:
Large display surface centered interaction design / 6.2.4:
Summary and Postscript / 6.3:
Neurorobotics Primer / M. Anthony Lewis ; Theresa J. Klein / 7:
Neurorobots and the Scientific Method / 7.1:
21st Century Robotics: Productizing Mythology / 7.1.2:
Computational Substrate / 7.1.3:
Neuromorphic Chips / 7.1.4:
Graphics Processing Units / 7.1.5:
Purpose of this Chapter / 7.1.6:
Classical Robotics / 7.2:
Configuration Space / 7.2.1:
Kinematics / 7.2.2:
Differential Motion / 7.2.3:
Statics / 7.2.4:
Redundancy / 7.2.5:
Dynamics / 7.2.6:
Trajectory Generation / 7.2.7:
A Pause to Reflect / 7.2.8:
Basic Neurocomputation / 7.3:
Information Flows into Dendrites and Out of Axons / 7.3.1:
The Neuron Cell is a Capacitor with a Decision Making Capability / 7.3.2:
Neural Models Capture the Basic Dynamics of the Cell Body and Throw Away Some Details / 7.3.3:
Numerical Integration / 7.3.4:
Reflexes and High Level Control / 7.3.5:
Building Neural Oscillators: Nature's Coordination and Trajectory Generation Mechanism:
Notable Systems / 7.4:
GPUs / 7.5:
Learning Inverse Dynamics by Gaussian Process Regression under the Multi-Task Learning Framework / Dit-Yan Yeung ; Yu Zhang / 7.6:
Appreciation and Dedication / 8.1:
Robotic Control / 8.2:
Kinematics and Dynamics:
Reasons Against Analytic Solutions / 8.2.2:
Insights from Human Arm Control / 8.2.3:
Learning and Control / 8.2.4:
Learning Inverse Dynamics / 8.3:
Recent Work / 8.3.1:
Learning Inverse Dynamics as a Regression Problem / 8.3.2:
Gaussian Process Regression / 8.4:
Brief Review / 8.4.1:
Gaussian Process Regression for Learning Inverse Dynamics / 8.4.2:
Multi-Task Gaussian Process Regression / 8.5:
Brief Review of Bonilla et al.'s Method (33) / 8.5.1:
Multi-Task Gaussian Process Regression for Learning Inverse Dynamics / 8.5.2:
Tributes and Recollections from Former Students / 8.6:
Professor George Albert Bekey / 9:
Personal Life / 9.1:
Research / 9.2:
Teaching and Students / 9.3:
Service to the University and the Profession / 9.4:
Recognition, Honors, and Awards / 9.5:
A Personal Tribute / 9.6:
Current History of the Bekey Tribe / H. Pete Schmid ; Monte Ung / 10:
Recollections and Tributes / Dan Antonelli ; Arun Bhadoria ; Willis G. Downing, Jr. ; Huan Liu ; Michael Merritt ; L. Warren Morrison / 11:
From Aerospace Engineering to Biomedical Engineering / 11.1:
The Final Oral Examination / 11.2:
Recent Work on Preventing Fractures caused by a Fall / 11.3:
Teacher, Mentor, and Friend / 11.4:
A Testimonial / 11.5:
Making it Look Easy / 11.6:
Solving Complex Problems Efficiently / 11.7:
References
Index
29.

E-book (EB)
Muddassar Farooq, Th Bäck, A. E. Eiben, G. Rozenberg
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2009
Table of contents:
Introduction / 1:
Motivation of the Work / 1.1:
Problem Statement / 1.2:
Hypotheses / 1.2.1:
An Engineering Approach to Nature-Inspired Routing Protocols / 1.3:
The Scientific Contributions of the Work / 1.4:
A Simple, Distributed, Decentralized Multi-Agent System / 1.4.1:
A Comprehensive Routing System / 1.4.2:
An Empirical Comprehensive Performance Evaluation Framework / 1.4.3:
A Scalability Framework for (Nature-Inspired) Agent-Based Routing Protocols / 1.4.4:
Protocol Engineering of Nature-Inspired Routing Protocols / 1.4.5:
A Nature-Inspired Linux Router / 1.4.6:
The Protocol Validation Framework / 1.4.7:
The Formal Framework for Nature-Inspired Protocols / 1.4.8:
A Simple, Efficient, and Scalable Nature-Inspired Security Framework / 1.4.9:
Emerging Mobile and Wireless Sensor Ad Hoc Networks / 1.4.10:
Organization of the Book / 1.5:
A Comprehensive Survey of Nature-Inspired Routing Protocols / 2:
Organization of the Chapter / 2.1:
Network Routing Algorithms / 2.2:
Features Landscape of a Modern Routing Algorithm / 2.2.1:
Taxonomy of Routing Algorithms / 2.2.2:
Ant Colony Optimization (ACO) Routing Algorithms for Fixed Networks / 2.3:
Important Elements of ACO in Routing / 2.3.1:
Ant-Based Control (ABC) for Circuit-Switched Networks / 2.3.2:
Ant-Based Control (ABC) for Packet-Switched Networks / 2.3.3:
AntNet / 2.3.4:
Ant Colony Routing (ACR) and AntNet+SELA QoS-Aware Routing / 2.3.5:
A Brief History of Research in AntNet / 2.3.6:
Evolutionary Routing Algorithms for Fixed Networks / 2.4:
Important Elements of EA in Routing / 2.4.1:
GARA / 2.4.2:
ASGA and SynthECA / 2.4.3:
DGA / 2.4.4:
Related Work on Routing Algorithms for Fixed Networks / 2.5:
Artificial Intelligence Community / 2.5.1:
Networking Community / 2.5.2:
Summary / 2.6:
From The Wisdom of the Hive to Routing in Telecommunication Networks / 3:
An Agent-Based Investigation of a Honeybee Colony / 3.1:
Labor Management / 3.2.1:
The Communication Network of a Honeybee Colony / 3.2.2:
Reinforcement Learning / 3.2.3:
Distributed Coordination and Planning / 3.2.4:
Energy-Efficient Foraging / 3.2.5:
Stochastic Selection of Flower Sites / 3.2.6:
Group Organization / 3.2.7:
BeeHive: The Mapping of Concepts from Nature to Networks / 3.3:
The Bee Agent Model / 3.4:
Estimation Model of Agents / 3.4.1:
Goodness of a Neighbor / 3.4.2:
Communication Paradigm of Agents / 3.4.3:
Packet-Switching Algorithm / 3.4.4:
BeeHive Algorithm / 3.5:
The Performance Evaluation Framework for Nature-Inspired Routing Algorithms / 3.6:
Routing Algorithms Used for Comparison / 3.7:
OSPF / 3.7.1:
Daemon / 3.7.4:
Simulation Environment for BeeHive / 3.8:
simpleNet / 3.8.1:
NTTNet / 3.8.2:
Node150 / 3.8.3:
Discussion of the Results from the Experiments / 3.9:
Congestion Avoidance Behavior / 3.9.1:
Queue Management Behavior / 3.9.2:
Hot Spots / 3.9.3:
Router Crash Experiments / 3.9.4:
Bursty Traffic Generator / 3.9.5:
Sessionless Network Traffic / 3.9.6:
Size of Routing Table / 3.9.7:
A Scalability Framework for Nature-Inspired Routing Algorithms / 3.10:
Existing Work on Scalability Analysis / 4.1:
The Scalability Model for a Routing Algorithm / 4.1.2:
Cost Model / 4.2.1:
Power Model of an Algorithm / 4.2.2:
Scalability Metric for a Routing Algorithm / 4.2.3:
Simulation Environment for Scalability Analysis / 4.3:
Node350 / 4.3.1:
Node650 / 4.3.5:
Node1050 / 4.3.6:
Throughput and Packet Delivery Ratio / 4.4:
Packet Delay / 4.4.2:
Control Overhead and Suboptimal Overhead / 4.4.3:
Agent and Packet Processing Complexity / 4.4.4:
Routing Table Size / 4.4.5:
Investigation of the Behavior of AntNet / 4.4.6:
Towards an Empirically Founded Scalability Model for Routing Protocols / 4.5:
Scalability Matrix and Scalability Analysis / 4.5.1:
Scalability Analysis of BeeHive / 4.5.2:
Scalability Analysis of AntNet / 4.5.3:
Scalability Analysis of OSPF / 4.5.4:
BeeHive in Real Networks of Linux Routers / 4.6:
Engineering of Nature-Inspired Routing Protocols / 5.1:
Structural Design of a Routing Framework / 5.2.1:
Structural Semantics of the Network Stack / 5.2.2:
System Design Issues / 5.2.3:
Natural Routing Framework: Design and Implementation / 5.3:
Algorithm-Independent Framework / 5.3.1:
Algorithmic-Dependent BeeHive Module / 5.3.2:
Protocol Verification Framework / 5.4:
The Motivation Behind the Design and Structure of Experiments / 5.5:
Quantum Traffic Engineering / 5.6:
Real-World Applications Traffic Engineering / 5.6.2:
Hybrid Traffic Engineering / 5.6.3:
A Formal Framework for Analyzing the Behavior of BeeHive / 5.7:
Goodness / 6.1:
Analytical Model / 6.3:
Node Traffic / 6.3.1:
Link Flows / 6.3.2:
Calculation of Delays / 6.3.3:
Throughput / 6.3.4:
Empirical Verification of the Formal Model / 6.4:
Example 1 / 6.4.1:
Example 2 / 6.4.2:
An Efficient Nature-Inspired Security Framework for BeeHive / 6.5:
Robustness and Security Analysis of a Routing Protocol / 7.1:
Security Threats to Nature-Inspired Routing Protocols / 7.2.1:
Existing Works on Security of Routing Protocols / 7.2.2:
BeeHiveGuard: A Digital Signature-Based Security Framework / 7.3:
Agent Integrity / 7.3.1:
Routing Information Integrity / 7.3.2:
Architecture of BeeHiveGuard / 7.3.3:
BeeHiveAIS: an Immune-Inspired Security Framework for BeeHive / 7.4:
Artificial Immune Systems (AISs) / 7.4.1:
Behavioral Analysis of BeeHive for Designing an AIS / 7.4.2:
The AIS Model of BeeHiveAIS / 7.4.3:
Top-Level BeeHiveAIS / 7.4.4:
Simulation Models of Our Security Frameworks / 7.5:
Attack Scenarios on Simple Topologies / 7.5.1:
Analysis of Attacks and Effectiveness of Security Frameworks / 7.5.2:
Bee-Inspired Routing Protocols for Mobile Ad Hoc and Sensor Networks / 7.5.3:
Existing Works on Nature-Inspired MANET Routing Protocols / 8.1:
Bee Agent Model / 8.1.2:
Packers / 8.2.1:
Scouts / 8.2.2:
Foragers / 8.2.3:
Beeswarm / 8.2.4:
Architecture of BeeAdHoc / 8.3:
Packing Floor / 8.3.1:
Entrance / 8.3.2:
Dance Floor / 8.3.3:
Simulation Framework / 8.4:
Metrics / 8.4.1:
Node Mobility Behavior / 8.4.2:
BeeAdHoc in Real-World MANETs / 8.5:
A Performance Evaluation Framework for Real MANETs in Linux / 8.5.1:
Results of Experiments / 8.6:
Security Threats in BeeAdHoc / 8.7:
Challenges for Routing Protocols in Ad Hoc Sensor Networks / 8.8:
Existing Works on Routing Protocols for Wireless Sensor Networks / 8.8.1:
BeeSensor: Architecture and Working / 8.9:
BeeSensor Agent's Model / 8.9.1:
Protocol Description / 8.9.2:
A Performance Evaluation Framework for Nature-Inspired Routing Protocols for WSNs / 8.10:
Results / 8.10.1:
Conclusion and Future Work / 8.12:
Conclusion / 9.1:
Future Research / 9.2:
Quality of Service (QoS) Routing / 9.2.1:
Cyclic Paths / 9.2.2:
Intelligent and Knowledgeable Network Engineering / 9.2.3:
Bee Colony Metaheuristic / 9.2.4:
Natural Engineering: The Need for a Distinct Discipline / 9.3:
References
Index
30.

E-book (EB)
Muddassar Farooq, Th Bäck, A. E. Eiben, G. Rozenberg
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2009
Table of contents:
Introduction / 1:
Motivation of the Work / 1.1:
Problem Statement / 1.2:
Hypotheses / 1.2.1:
An Engineering Approach to Nature-Inspired Routing Protocols / 1.3:
The Scientific Contributions of the Work / 1.4:
A Simple, Distributed, Decentralized Multi-Agent System / 1.4.1:
A Comprehensive Routing System / 1.4.2:
An Empirical Comprehensive Performance Evaluation Framework / 1.4.3:
A Scalability Framework for (Nature-Inspired) Agent-Based Routing Protocols / 1.4.4:
Protocol Engineering of Nature-Inspired Routing Protocols / 1.4.5:
A Nature-Inspired Linux Router / 1.4.6:
The Protocol Validation Framework / 1.4.7:
The Formal Framework for Nature-Inspired Protocols / 1.4.8:
A Simple, Efficient, and Scalable Nature-Inspired Security Framework / 1.4.9:
Emerging Mobile and Wireless Sensors Ad Hoc Networks / 1.4.10:
Organization of the Book / 1.5:
A Comprehensive Survey of Nature-Inspired Routing Protocols / 2:
Organization of the Chapter / 2.1:
Network Routing Algorithms / 2.2:
Features Landscape of a Modern Routing Algorithm / 2.2.1:
Taxonomy of Routing Algorithms / 2.2.2:
Ant Colony Optimization (ACO) Routing Algorithms for Fixed Networks / 2.3:
Important Elements of ACO in Routing / 2.3.1:
Ant-Based Control (ABC) for Circuit-Switched Networks / 2.3.2:
Ant-Based Control (ABC) for Packet-Switched Networks / 2.3.3:
AntNet / 2.3.4:
Ant Colony Routing (ACR) and AntNet+SELA QoS-Aware Routing / 2.3.5:
A Brief History of Research in AntNet / 2.3.6:
Evolutionary Routing Algorithms for Fixed Networks / 2.4:
Important Elements of EA in Routing / 2.4.1:
GARA / 2.4.2:
ASGA and SynthECA / 2.4.3:
DGA / 2.4.4:
Related Work on Routing Algorithms for Fixed Networks / 2.5:
Artificial Intelligence Community / 2.5.1:
Networking Community / 2.5.2:
Summary / 2.6:
From The Wisdom of the Hive to Routing in Telecommunication Networks / 3:
An Agent-Based Investigation of a Honeybee Colony / 3.1:
Labor Management / 3.2.1:
The Communication Network of a Honeybee Colony / 3.2.2:
Reinforcement Learning / 3.2.3:
Distributed Coordination and Planning / 3.2.4:
Energy-Efficient Foraging / 3.2.5:
Stochastic Selection of Flower Sites / 3.2.6:
Group Organization / 3.2.7:
BeeHive: The Mapping of Concepts from Nature to Networks / 3.3:
The Bee Agent Model / 3.4:
Estimation Model of Agents / 3.4.1:
Goodness of a Neighbor / 3.4.2:
Communication Paradigm of Agents / 3.4.3:
Packet-Switching Algorithm / 3.4.4:
BeeHive Algorithm / 3.5:
The Performance Evaluation Framework for Nature-Inspired Routing Algorithms / 3.6:
Routing Algorithms Used for Comparison / 3.7:
OSPF / 3.7.1:
Daemon / 3.7.4:
Simulation Environment for BeeHive / 3.8:
simpleNet / 3.8.1:
NTTNet / 3.8.2:
Node150 / 3.8.3:
Discussion of the Results from the Experiments / 3.9:
Congestion Avoidance Behavior / 3.9.1:
Queue Management Behavior / 3.9.2:
Hot Spots / 3.9.3:
Router Crash Experiments / 3.9.4:
Bursty Traffic Generator / 3.9.5:
Sessionless Network Traffic / 3.9.6:
Size of Routing Table / 3.9.7:
A Scalability Framework for Nature-Inspired Routing Algorithms / 3.10:
Existing Work on Scalability Analysis / 4.1:
The Scalability Model for a Routing Algorithm / 4.1.2:
Cost Model / 4.2.1:
Power Model of an Algorithm / 4.2.2:
Scalability Metric for a Routing Algorithm / 4.2.3:
Simulation Environment for Scalability Analysis / 4.3:
Node350 / 4.3.1:
Node650 / 4.3.5:
Node1050 / 4.3.6:
Throughput and Packet Delivery Ratio / 4.4:
Packet Delay / 4.4.2:
Control Overhead and Suboptimal Overhead / 4.4.3:
Agent and Packet Processing Complexity / 4.4.4:
Routing Table Size / 4.4.5:
Investigation of the Behavior of AntNet / 4.4.6:
Towards an Empirically Founded Scalability Model for Routing Protocols / 4.5:
Scalability Matrix and Scalability Analysis / 4.5.1:
Scalability Analysis of BeeHive / 4.5.2:
Scalability Analysis of AntNet / 4.5.3:
Scalability Analysis of OSPF / 4.5.4:
BeeHive in Real Networks of Linux Routers / 4.6:
Engineering of Nature-Inspired Routing Protocols / 5.1:
Structural Design of a Routing Framework / 5.2.1:
Structural Semantics of the Network Stack / 5.2.2:
System Design Issues / 5.2.3:
Natural Routing Framework: Design and Implementation / 5.3:
Algorithm-Independent Framework / 5.3.1:
Algorithmic-Dependent BeeHive Module / 5.3.2:
Protocol Verification Framework / 5.4:
The Motivation Behind the Design and Structure of Experiments / 5.5:
Quantum Traffic Engineering / 5.6:
Real-World Applications Traffic Engineering / 5.6.2:
Hybrid Traffic Engineering / 5.6.3:
A Formal Framework for Analyzing the Behavior of BeeHive / 5.7:
Goodness / 6.1:
Analytical Model / 6.3:
Node Traffic / 6.3.1:
Link Flows / 6.3.2:
Calculation of Delays / 6.3.3:
Throughput / 6.3.4:
Empirical Verification of the Formal Model / 6.4:
Example 1 / 6.4.1:
Example 2 / 6.4.2:
An Efficient Nature-Inspired Security Framework for BeeHive / 6.5:
Robustness and Security Analysis of a Routing Protocol / 7.1:
Security Threats to Nature-Inspired Routing Protocols / 7.2.1:
Existing Works on Security of Routing Protocols / 7.2.2:
BeeHiveGuard: A Digital Signature-Based Security Framework / 7.3:
Agent Integrity / 7.3.1:
Routing Information Integrity / 7.3.2:
Architecture of BeeHiveGuard / 7.3.3:
BeeHiveAIS: an Immune-Inspired Security Framework for BeeHive / 7.4:
Artificial Immune Systems (AISs) / 7.4.1:
Behavioral Analysis of BeeHive for Designing an AIS / 7.4.2:
The AIS Model of BeeHiveAIS / 7.4.3:
Top-Level BeeHiveAIS / 7.4.4:
Simulation Models of Our Security Frameworks / 7.5:
Attack Scenarios on Simple Topologies / 7.5.1:
Analysis of Attacks and Effectiveness of Security Frameworks / 7.5.2:
Bee-Inspired Routing Protocols for Mobile Ad Hoc and Sensor Networks / 7.5.3:
Existing Works on Nature-Inspired MANET Routing Protocols / 8.1:
Bee Agent Model / 8.1.2:
Packers / 8.2.1:
Scouts / 8.2.2:
Foragers / 8.2.3:
Beeswarm / 8.2.4:
Architecture of BeeAdHoc / 8.3:
Packing Floor / 8.3.1:
Entrance / 8.3.2:
Dance Floor / 8.3.3:
Simulation Framework / 8.4:
Metrics / 8.4.1:
Node Mobility Behavior / 8.4.2:
BeeAdHoc in Real-World MANETs / 8.5:
A Performance Evaluation Framework for Real MANETs in Linux / 8.5.1:
Results of Experiments / 8.6:
Security Threats in BeeAdHoc / 8.7:
Challenges for Routing Protocols in Ad Hoc Sensor Networks / 8.8:
Existing Works on Routing Protocols for Wireless Sensor Networks / 8.8.1:
BeeSensor: Architecture and Working / 8.9:
BeeSensor Agent's Model / 8.9.1:
Protocol Description / 8.9.2:
A Performance Evaluation Framework for Nature-Inspired Routing Protocols for WSNs / 8.10:
Results / 8.10.1:
Conclusion and Future Work / 8.12:
Conclusion / 9.1:
Future Research / 9.2:
Quality of Service (QoS) Routing / 9.2.1:
Cyclic Paths / 9.2.2:
Intelligent and Knowledgeable Network Engineering / 9.2.3:
Bee Colony Metaheuristic / 9.2.4:
Natural Engineering: The Need for a Distinct Discipline / 9.3:
References
Index
31.

eBook

EB
James P. Gunderson, Louise F. Gunderson
Publication: Springer eBooks Computer Science, Springer US, 2009
Table of contents:
Introduction / 1:
Bridging the Gap / 1.1:
Bidirectional Mapping / 1.1.1:
Reification and Preafference in Biological Entities / 1.2:
More Advanced Brains / 1.3:
What This Book Is and What It Is Not / 1.4:
Structure of the Book / 1.5:
A Note on Typefaces and Terminology / 1.6:
Anthropomorphization / 1.6.1:
Some background material on probability and biology / 2:
Layout / 2.1:
Probability in the Real World / 2.2:
Why a Biologically Principled Argument? / 2.3:
Biological Principles / 2.3.1:
What Is a Biologically Principled Argument? / 2.4:
Biology Is an Observational Science / 2.4.1:
Life Has Structure / 2.4.2:
The Theory of Evolution Explains the Observed Diversity of Life / 2.4.3:
So Why Is Our Model Biologically Principled? / 2.5:
Why Not Just Use Expected Value? / 2.5.1:
Using Cognition and Physiology to Build a Cognitive Model / 3:
Reification in Biological Entities / 3.1:
Recognition / 3.1.1:
Preafference / 3.1.2:
Biological Storage / 3.2:
Explicit Memory / 3.2.1:
Emotion / 3.3:
Emotion as mediator / 3.3.1:
Representation / 4:
Representing Features of the World / 4.1:
Representing Goals / 4.2:
Representing Actions in the World / 4.3:
Enabling Conditions / 4.3.1:
Outcomes / 4.3.2:
Representing Likelihoods / 4.3.3:
Exogenous Events / 4.4:
Perception/Action System / 5:
Robot as Perception/Action System / 5.1:
Robot as Body / 5.1.1:
Robot as Sensor / 5.1.2:
Robot as Agent of Change / 5.1.3:
Low Level Control Loop - Procedural Memory / 5.1.4:
System Safety and Routine Actions / 5.1.5:
Examples of Perception/Action Systems / 5.2:
Fred - a simple test robot / 5.2.1:
Basil / 5.2.2:
Summary of Perception/Action Systems / 5.3:
Design of a Reification Engine / 6:
Model Selection Criteria / 6.1:
Judgment Analysis / 6.2:
Designing the Reification Engine / 6.3:
Bridging the Sensor to Symbol Gap / 7:
Supporting Bidirectional Mapping / 7.1:
A Third Approach / 7.1.1:
Reification Architecture / 7.2:
PerCepts and Reification / 7.3:
PerCept Data / 7.3.1:
PerCept Function / 7.3.2:
Mental Model / 7.4:
Current World State / 7.5:
Reification functionality / 7.6:
Initialization / 7.6.1:
Mapping the World onto its Model - Recognition / 7.6.2:
Projecting the Model onto the World - Preafference / 7.6.3:
Updating the Current World State / 7.6.4:
Wrapping Up Reification / 7.7:
Working Memory and the Construction of Personal Experiences / 8:
Transient Memory / 8.1:
Working Memory and the Current World State / 8.1.1:
Internal State / 8.1.2:
Episodic Memory / 8.2:
Emotive Tags / 8.2.1:
Memory Services / 8.3:
Providing Memory Services to the Reification Process / 8.4:
Memory, What Was That Again? / 8.5:
Semantic Memory and the Personal Rough Ontology / 9:
Semantic Memory / 9.1:
What is a Personal Rough Ontology? / 9.1.1:
Building Semantic Memory / 9.2:
Structure of the Ontology / 9.2.1:
The nodes in the multi-graph / 9.2.2:
Relationships, the Edges of the Graph / 9.2.3:
A Note on Representing Probabilities / 9.2.4:
Persistent Storage in the Personal Rough Ontology / 9.3:
Transient versus Persistent Knowledge / 9.4:
Extracting Problems for the Deliberative System / 9.5:
Focusing Attention by Finding Sub-Ontologies / 9.6:
Weighted Transitivity / 9.6.1:
Deliberative System / 10:
Deliberation / 10.1:
Reasoning About the Present / 10.2:
Sense-Symbols from the Reification Engine / 10.2.1:
Symbols from the Ontology / 10.2.2:
Reasoning with WorldSets / 10.2.3:
Choosing the Future / 10.3:
Planning as Search / 10.3.1:
Adapting to Failure / 10.3.2:
Plan Evaluation and Selection / 10.4:
Acquiring Distributions / 10.4.1:
Simulator Fidelity / 10.4.2:
Summary / 10.5:
Putting it All Together / 11:
How it Fits Together / 11.1:
Goals and Environment / 11.2:
Knowledge Sources / 11.3:
Ontological Knowledge / 11.3.1:
Reification Knowledge / 11.3.2:
Perception/Action Knowledge / 11.3.3:
The process / 11.4:
Perception/Action / 11.4.1:
Reification / 11.4.2:
Execution / 11.4.3:
Execution, Reification and Action / 11.4.4:
Perception/Action - Reflex / 11.4.6:
Execution Failure / 11.4.7:
Back Up to Deliberation / 11.4.8:
Procedural Memory and Localization / 11.4.9:
A Few Notes About the General Flow / 11.5:
Testing / 12:
Testing the Robot, or How Does One Test an Embedded System? / 12.1:
eXtreme Programming / 12.2:
Methodology for Testing Embodied Systems / 12.3:
Benefits of Partitioning the Tests / 12.3.1:
General Testing Guidelines / 12.4:
General Partitioning Guidelines / 12.4.1:
Testing in the lab / 12.5:
Hardware / 12.5.1:
Static Tests / 12.5.2:
Dynamic tests / 12.5.3:
Formal System Tests - Testing In The Real World / 12.6:
Testing Recognition / 12.6.1:
Testing Preafference / 12.6.2:
Testing Self-Localization / 12.6.3:
Where do we go from here / 12.7:
A Stopping Point / 13.1:
Next Steps / 13.2:
Adding Learning to the Model / 13.2.1:
Adding Additional Data Sources / 13.2.2:
Porting the Brain into New Bodies / 13.2.3:
Glossary
References
Index
32.

eBook

EB
Malte Helmert, Jörg Siekmann
Publication: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Planning Benchmarks / Part I:
The Role of Benchmarks / 1:
Evaluating Planner Performance / 1.1:
Worst-Case Evaluation / 1.1.1:
Average-Case Evaluation / 1.1.2:
Planning Benchmarks Are Important / 1.2:
Theoretical Analyses of Planning Benchmarks / 1.3:
Why Theoretical Analyses Are Useful / 1.3.1:
Published Results on Benchmark Complexity / 1.3.2:
Standard Benchmarks / 1.4:
Summary and Overview / 1.5:
Defining Planning Domains / 2:
Optimization Problems / 2.1:
Minimization Problems / 2.1.1:
Approximation Algorithms / 2.1.2:
Approximation Classes / 2.1.3:
Reductions / 2.1.4:
Formalizing Planning Domains / 2.2:
General Results and Reductions / 2.3:
Upper Bounds / 2.3.1:
Shortest Plan Length / 2.3.2:
Approximation Classes of Limited Interest / 2.3.3:
Relating Planning and (Bounded) Plan Existence / 2.3.4:
Generalization and Specialization / 2.3.5:
The Benchmark Suite / 3:
Defining the Competition Domains / 3.1:
IPC1 Domains / 3.2:
IPC2 Domains / 3.2.2:
IPC3 Domains / 3.2.3:
IPC4 Domains / 3.2.4:
Domains and Domain Families / 3.3:
Transportation and Route Planning / 4:
Transport and Route / 4.1:
The Transport Domain / 4.1.1:
The Route Domain / 4.1.2:
Special Cases and Hierarchy / 4.1.3:
General Results / 4.2:
Plan Existence / 4.3:
Hardness of Optimization / 4.4:
Constant Factor Approximation / 4.5:
Hardness of Constant Factor Approximation / 4.6:
Summary / 4.7:
Beyond Transport and Route / 4.8:
IPC Domains: Transportation and Route Planning / 5:
Gripper / 5.1:
Mystery and Mystery Prime / 5.2:
Logistics / 5.3:
Zenotravel / 5.4:
Depots / 5.5:
Miconic-10 / 5.6:
Rovers / 5.7:
Grid / 5.8:
Driverlog / 5.9:
Airport / 5.10:
IPC Domains: Others / 5.11:
Assembly / 6.1:
Blocksworld / 6.2:
Freecell / 6.3:
Movie / 6.4:
Pipesworld / 6.5:
Promela / 6.6:
PSR / 6.7:
Satellite / 6.8:
Schedule / 6.9:
Conclusions / 6.10:
Ten Conclusions / 7.1:
Going Further / 7.2:
Fast Downward / Part II:
Solving Planning Tasks Hierarchically / 8:
Introduction / 8.1:
Related Work / 8.2:
Causal Graphs and Abstraction / 8.2.1:
Causal Graphs and Unary STRIPS Operators / 8.2.2:
Multi-Valued Planning Tasks / 8.2.3:
Architecture and Overview / 8.3:
Translation / 9:
PDDL and Multi-valued Planning Tasks / 9.1:
Translation Overview / 9.2:
Normalization / 9.3:
Compiling Away Types / 9.3.1:
Simplifying Conditions / 9.3.2:
Simplifying Effects / 9.3.3:
Normalization Result / 9.3.4:
Invariant Synthesis / 9.4:
Initial Candidates / 9.4.1:
Proving Invariance / 9.4.2:
Refining Failed Candidates / 9.4.3:
Examples / 9.4.4:
Grounding / 9.4.5:
Overview of Horn Exploration / 9.5.1:
Generating the Logic Program / 9.5.2:
Translating the Logic Program to Normal Form / 9.5.3:
Computing the Canonical Model / 9.5.4:
Axiom and Operator Instantiation / 9.5.5:
Multi-valued Planning Task Generation / 9.6:
Variable Selection / 9.6.1:
Converting the Initial State / 9.6.2:
Converting Operator Effects / 9.6.3:
Converting Conditions / 9.6.4:
Computing Axiom Layers / 9.6.5:
Generating the Output / 9.6.6:
Performance Notes / 9.7:
Relative Performance Compared to MIPS Translator / 9.7.1:
Absolute Performance / 9.7.2:
Knowledge Compilation / 10:
Overview / 10.1:
Domain Transition Graphs / 10.2:
Causal Graphs / 10.3:
Acyclic Causal Graphs / 10.3.1:
Generating and Pruning Causal Graphs / 10.3.2:
Causal Graph Examples / 10.3.3:
Successor Generators and Axiom Evaluators / 10.4:
Successor Generators / 10.4.1:
Axiom Evaluators / 10.4.2:
Search / 11:
The Causal Graph Heuristic / 11.1:
Conceptual View of the Causal Graph Heuristic / 11.2.1:
Computation of the Causal Graph Heuristic / 11.2.2:
States with Infinite Heuristic Value / 11.2.3:
Helpful Transitions / 11.2.4:
The FF Heuristic / 11.3:
Greedy Best-First Search in Fast Downward / 11.4:
Preferred Operators / 11.4.1:
Deferred Heuristic Evaluation / 11.4.2:
Multi-heuristic Best-First Search / 11.5:
Focused Iterative-Broadening Search / 11.6:
Experiments / 12:
Experiment Design / 12.1:
Benchmark Set / 12.1.1:
Experiment Setup / 12.1.2:
Translation and Knowledge Compilation vs. Search / 12.1.3:
STRIPS Domains from IPC1-3 / 12.2:
ADL Domains from IPC1-3 / 12.3:
Domains from IPC4 / 12.4:
Conclusions from the Experiment / 12.5:
Discussion / 13:
Major Contributors / 13.1:
Multi-valued Representations / 13.2.1:
Task Decomposition Heuristics / 13.2.2:
Minor Contributions / 13.3:
References / 13.4:
Index
33.

eBook

EB
James P. Gunderson, Louise F. Gunderson
Publication: SpringerLink Books - AutoHoldings, Springer US, 2009
Table of contents:
Introduction / 1:
Bridging the Gap / 1.1:
Bidirectional Mapping / 1.1.1:
Reification and Preafference in Biological Entities / 1.2:
More Advanced Brains / 1.3:
What This Book Is and What It Is Not / 1.4:
Structure of the Book / 1.5:
A Note on Typefaces and Terminology / 1.6:
Anthropomorphization / 1.6.1:
Some background material on probability and biology / 2:
Layout / 2.1:
Probability in the Real World / 2.2:
Why a Biologically Principled Argument? / 2.3:
Biological Principles / 2.3.1:
What Is a Biologically Principled Argument? / 2.4:
Biology Is an Observational Science / 2.4.1:
Life Has Structure / 2.4.2:
The Theory of Evolution Explains the Observed Diversity of Life / 2.4.3:
So Why Is Our Model Biologically Principled? / 2.5:
Why Not Just Use Expected Value? / 2.5.1:
Using Cognition and Physiology to Build a Cognitive Model / 3:
Reification in Biological Entities / 3.1:
Recognition / 3.1.1:
Preafference / 3.1.2:
Biological Storage / 3.2:
Explicit Memory / 3.2.1:
Emotion / 3.3:
Emotion as mediator / 3.3.1:
Representation / 4:
Representing Features of the World / 4.1:
Representing Goals / 4.2:
Representing Actions in the World / 4.3:
Enabling Conditions / 4.3.1:
Outcomes / 4.3.2:
Representing Likelihoods / 4.3.3:
Exogenous Events / 4.4:
Perception/Action System / 5:
Robot as Perception/Action System / 5.1:
Robot as Body / 5.1.1:
Robot as Sensor / 5.1.2:
Robot as Agent of Change / 5.1.3:
Low Level Control Loop - Procedural Memory / 5.1.4:
System Safety and Routine Actions / 5.1.5:
Examples of Perception/Action Systems / 5.2:
Fred - a simple test robot / 5.2.1:
Basil / 5.2.2:
Summary of Perception/Action Systems / 5.3:
Design of a Reification Engine / 6:
Model Selection Criteria / 6.1:
Judgment Analysis / 6.2:
Designing the Reification Engine / 6.3:
Bridging the Sensor to Symbol Gap / 7:
Supporting Bidirectional Mapping / 7.1:
A Third Approach / 7.1.1:
Reification Architecture / 7.2:
PerCepts and Reification / 7.3:
PerCept Data / 7.3.1:
PerCept Function / 7.3.2:
Mental Model / 7.4:
Current World State / 7.5:
Reification functionality / 7.6:
Initialization / 7.6.1:
Mapping the World onto its Model - Recognition / 7.6.2:
Projecting the Model onto the World - Preafference / 7.6.3:
Updating the Current World State / 7.6.4:
Wrapping Up Reification / 7.7:
Working Memory and the Construction of Personal Experiences / 8:
Transient Memory / 8.1:
Working Memory and the Current World State / 8.1.1:
Internal State / 8.1.2:
Episodic Memory / 8.2:
Emotive Tags / 8.2.1:
Memory Services / 8.3:
Providing Memory Services to the Reification Process / 8.4:
Memory, What Was That Again? / 8.5:
Semantic Memory and the Personal Rough Ontology / 9:
Semantic Memory / 9.1:
What is a Personal Rough Ontology? / 9.1.1:
Building Semantic Memory / 9.2:
Structure of the Ontology / 9.2.1:
The nodes in the multi-graph / 9.2.2:
Relationships, the Edges of the Graph / 9.2.3:
A Note on Representing Probabilities / 9.2.4:
Persistent Storage in the Personal Rough Ontology / 9.3:
Transient versus Persistent Knowledge / 9.4:
Extracting Problems for the Deliberative System / 9.5:
Focusing Attention by Finding Sub-Ontologies / 9.6:
Weighted Transitivity / 9.6.1:
Deliberative System / 10:
Deliberation / 10.1:
Reasoning About the Present / 10.2:
Sense-Symbols from the Reification Engine / 10.2.1:
Symbols from the Ontology / 10.2.2:
Reasoning with WorldSets / 10.2.3:
Choosing the Future / 10.3:
Planning as Search / 10.3.1:
Adapting to Failure / 10.3.2:
Plan Evaluation and Selection / 10.4:
Acquiring Distributions / 10.4.1:
Simulator Fidelity / 10.4.2:
Summary / 10.5:
Putting it All Together / 11:
How it Fits Together / 11.1:
Goals and Environment / 11.2:
Knowledge Sources / 11.3:
Ontological Knowledge / 11.3.1:
Reification Knowledge / 11.3.2:
Perception/Action Knowledge / 11.3.3:
The process / 11.4:
Perception/Action / 11.4.1:
Reification / 11.4.2:
Execution / 11.4.3:
Execution, Reification and Action / 11.4.4:
Perception/Action - Reflex / 11.4.6:
Execution Failure / 11.4.7:
Back Up to Deliberation / 11.4.8:
Procedural Memory and Localization / 11.4.9:
A Few Notes About the General Flow / 11.5:
Testing / 12:
Testing the Robot, or How Does One Test an Embedded System? / 12.1:
eXtreme Programming / 12.2:
Methodology for Testing Embodied Systems / 12.3:
Benefits of Partitioning the Tests / 12.3.1:
General Testing Guidelines / 12.4:
General Partitioning Guidelines / 12.4.1:
Testing in the lab / 12.5:
Hardware / 12.5.1:
Static Tests / 12.5.2:
Dynamic tests / 12.5.3:
Formal System Tests - Testing In The Real World / 12.6:
Testing Recognition / 12.6.1:
Testing Preafference / 12.6.2:
Testing Self-Localization / 12.6.3:
Where do we go from here / 12.7:
A Stopping Point / 13.1:
Next Steps / 13.2:
Adding Learning to the Model / 13.2.1:
Adding Additional Data Sources / 13.2.2:
Porting the Brain into New Bodies / 13.2.3:
Glossary
References
Index
34.

eBook

EB
Malte Helmert, Jörg Siekmann
Publication: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Planning Benchmarks / Part I:
The Role of Benchmarks / 1:
Evaluating Planner Performance / 1.1:
Worst-Case Evaluation / 1.1.1:
Average-Case Evaluation / 1.1.2:
Planning Benchmarks Are Important / 1.2:
Theoretical Analyses of Planning Benchmarks / 1.3:
Why Theoretical Analyses Are Useful / 1.3.1:
Published Results on Benchmark Complexity / 1.3.2:
Standard Benchmarks / 1.4:
Summary and Overview / 1.5:
Defining Planning Domains / 2:
Optimization Problems / 2.1:
Minimization Problems / 2.1.1:
Approximation Algorithms / 2.1.2:
Approximation Classes / 2.1.3:
Reductions / 2.1.4:
Formalizing Planning Domains / 2.2:
General Results and Reductions / 2.3:
Upper Bounds / 2.3.1:
Shortest Plan Length / 2.3.2:
Approximation Classes of Limited Interest / 2.3.3:
Relating Planning and (Bounded) Plan Existence / 2.3.4:
Generalization and Specialization / 2.3.5:
The Benchmark Suite / 3:
Defining the Competition Domains / 3.1:
IPC1 Domains / 3.2:
IPC2 Domains / 3.2.2:
IPC3 Domains / 3.2.3:
IPC4 Domains / 3.2.4:
Domains and Domain Families / 3.3:
Transportation and Route Planning / 4:
Transport and Route / 4.1:
The Transport Domain / 4.1.1:
The Route Domain / 4.1.2:
Special Cases and Hierarchy / 4.1.3:
General Results / 4.2:
Plan Existence / 4.3:
Hardness of Optimization / 4.4:
Constant Factor Approximation / 4.5:
Hardness of Constant Factor Approximation / 4.6:
Summary / 4.7:
Beyond Transport and Route / 4.8:
IPC Domains: Transportation and Route Planning / 5:
Gripper / 5.1:
Mystery and Mystery Prime / 5.2:
Logistics / 5.3:
Zenotravel / 5.4:
Depots / 5.5:
Miconic-10 / 5.6:
Rovers / 5.7:
Grid / 5.8:
Driverlog / 5.9:
Airport / 5.10:
IPC Domains: Others / 5.11:
Assembly / 6.1:
Blocksworld / 6.2:
Freecell / 6.3:
Movie / 6.4:
Pipesworld / 6.5:
Promela / 6.6:
PSR / 6.7:
Satellite / 6.8:
Schedule / 6.9:
Conclusions / 6.10:
Ten Conclusions / 7.1:
Going Further / 7.2:
Fast Downward / Part II:
Solving Planning Tasks Hierarchically / 8:
Introduction / 8.1:
Related Work / 8.2:
Causal Graphs and Abstraction / 8.2.1:
Causal Graphs and Unary STRIPS Operators / 8.2.2:
Multi-Valued Planning Tasks / 8.2.3:
Architecture and Overview / 8.3:
Translation / 9:
PDDL and Multi-valued Planning Tasks / 9.1:
Translation Overview / 9.2:
Normalization / 9.3:
Compiling Away Types / 9.3.1:
Simplifying Conditions / 9.3.2:
Simplifying Effects / 9.3.3:
Normalization Result / 9.3.4:
Invariant Synthesis / 9.4:
Initial Candidates / 9.4.1:
Proving Invariance / 9.4.2:
Refining Failed Candidates / 9.4.3:
Examples / 9.4.4:
Grounding / 9.4.5:
Overview of Horn Exploration / 9.5.1:
Generating the Logic Program / 9.5.2:
Translating the Logic Program to Normal Form / 9.5.3:
Computing the Canonical Model / 9.5.4:
Axiom and Operator Instantiation / 9.5.5:
Multi-valued Planning Task Generation / 9.6:
Variable Selection / 9.6.1:
Converting the Initial State / 9.6.2:
Converting Operator Effects / 9.6.3:
Converting Conditions / 9.6.4:
Computing Axiom Layers / 9.6.5:
Generating the Output / 9.6.6:
Performance Notes / 9.7:
Relative Performance Compared to MIPS Translator / 9.7.1:
Absolute Performance / 9.7.2:
Knowledge Compilation / 10:
Overview / 10.1:
Domain Transition Graphs / 10.2:
Causal Graphs / 10.3:
Acyclic Causal Graphs / 10.3.1:
Generating and Pruning Causal Graphs / 10.3.2:
Causal Graph Examples / 10.3.3:
Successor Generators and Axiom Evaluators / 10.4:
Successor Generators / 10.4.1:
Axiom Evaluators / 10.4.2:
Search / 11:
The Causal Graph Heuristic / 11.1:
Conceptual View of the Causal Graph Heuristic / 11.2.1:
Computation of the Causal Graph Heuristic / 11.2.2:
States with Infinite Heuristic Value / 11.2.3:
Helpful Transitions / 11.2.4:
The FF Heuristic / 11.3:
Greedy Best-First Search in Fast Downward / 11.4:
Preferred Operators / 11.4.1:
Deferred Heuristic Evaluation / 11.4.2:
Multi-heuristic Best-First Search / 11.5:
Focused Iterative-Broadening Search / 11.6:
Experiments / 12:
Experiment Design / 12.1:
Benchmark Set / 12.1.1:
Experiment Setup / 12.1.2:
Translation and Knowledge Compilation vs. Search / 12.1.3:
STRIPS Domains from IPC1-3 / 12.2:
ADL Domains from IPC1-3 / 12.3:
Domains from IPC4 / 12.4:
Conclusions from the Experiment / 12.5:
Discussion / 13:
Major Contributors / 13.1:
Multi-valued Representations / 13.2.1:
Task Decomposition Heuristics / 13.2.2:
Minor Contributions / 13.3:
References / 13.4:
Index
35.

eBook

EB
Wolfgang Ertel
Publication: Springer eBooks Computer Science, Springer London, 2011
Table of contents:
Introduction / 1:
What Is Artificial Intelligence? / 1.1:
Brain Science and Problem Solving / 1.1.1:
The Turing Test and Chatterbots / 1.1.2:
The History of AI / 1.2:
The First Beginnings / 1.2.1:
Logic Solves (Almost) All Problems / 1.2.2:
The New Connectionism / 1.2.3:
Reasoning Under Uncertainty / 1.2.4:
Distributed, Autonomous and Learning Agents / 1.2.5:
AI Grows up / 1.2.6:
Agents / 1.3:
Knowledge-Based Systems / 1.4:
Exercises / 1.5:
Propositional Logic / 2:
Syntax / 2.1:
Semantics / 2.2:
Proof Systems / 2.3:
Resolution / 2.4:
Horn Clauses / 2.5:
Computability and Complexity / 2.6:
Applications and Limitations / 2.7:
First-order Predicate Logic / 2.8:
Equality / 3.1:
Quantifiers and Normal Forms / 3.3:
Proof Calculi / 3.4:
Resolution Strategies / 3.5:
Automated Theorem Provers / 3.5.2:
Mathematical Examples / 3.7:
Applications / 3.8:
Summary / 3.9:
Limitations of Logic / 3.10:
The Search Space Problem / 4.1:
Decidability and Incompleteness / 4.2:
The Flying Penguin / 4.3:
Modeling Uncertainty / 4.4:
Logic Programming with Prolog / 4.5:
Prolog Systems and Implementations / 5.1:
Simple Examples / 5.2:
Execution Control and Procedural Elements / 5.3:
Lists / 5.4:
Self-modifying Programs / 5.5:
A Planning Example / 5.6:
Constraint Logic Programming / 5.7:
Search, Games and Problem Solving / 5.8:
Uninformed Search / 6.1:
Breadth-First Search / 6.2.1:
Depth-First Search / 6.2.2:
Iterative Deepening / 6.2.3:
Comparison / 6.2.4:
Heuristic Search / 6.3:
Greedy Search / 6.3.1:
A*-Search / 6.3.2:
IDA*-Search / 6.3.3:
Empirical Comparison of the Search Algorithms / 6.3.4:
Games with Opponents / 6.3.5:
Minimax Search / 6.4.1:
Alpha-Beta-Pruning / 6.4.2:
Non-deterministic Games / 6.4.3:
Heuristic Evaluation Functions / 6.5:
Learning of Heuristics / 6.5.1:
State of the Art / 6.6:
Reasoning with Uncertainty / 6.7:
Computing with Probabilities / 7.1:
Conditional Probability / 7.1.1:
The Principle of Maximum Entropy / 7.2:
An Inference Rule for Probabilities / 7.2.1:
Maximum Entropy Without Explicit Constraints / 7.2.2:
Conditional Probability Versus Material Implication / 7.2.3:
MaxEnt-Systems / 7.2.4:
The Tweety Example / 7.2.5:
Lexmed, an Expert System for Diagnosing Appendicitis / 7.3:
Appendicitis Diagnosis with Formal Methods / 7.3.1:
Hybrid Probabilistic Knowledge Base / 7.3.2:
Application of Lexmed / 7.3.3:
Function of Lexmed / 7.3.4:
Risk Management Using the Cost Matrix / 7.3.5:
Performance / 7.3.6:
Application Areas and Experiences / 7.3.7:
Reasoning with Bayesian Networks / 7.4:
Independent Variables / 7.4.1:
Graphical Representation of Knowledge as a Bayesian Network / 7.4.2:
Conditional Independence / 7.4.3:
Practical Application / 7.4.4:
Software for Bayesian Networks / 7.4.5:
Development of Bayesian Networks / 7.4.6:
Semantics of Bayesian Networks / 7.4.7:
Machine Learning and Data Mining / 7.5:
Data Analysis / 8.1:
The Perceptron, a Linear Classifier / 8.2:
The Learning Rule / 8.2.1:
Optimization and Outlook / 8.2.2:
The Nearest Neighbor Method / 8.3:
Two Classes, Many Classes, Approximation / 8.3.1:
Distance Is Relevant / 8.3.2:
Computation Times / 8.3.3:
Summary and Outlook / 8.3.4:
Case-Based Reasoning / 8.3.5:
Decision Tree Learning / 8.4:
A Simple Example / 8.4.1:
Entropy as a Metric for Information Content / 8.4.2:
Information Gain / 8.4.3:
Application of C4.5 / 8.4.4:
Learning of Appendicitis Diagnosis / 8.4.5:
Continuous Attributes / 8.4.6:
PruningùCutting the Tree / 8.4.7:
Missing Values / 8.4.8:
Learning of Bayesian Networks / 8.4.9:
Learning the Network Structure / 8.5.1:
The Naive Bayes Classifier / 8.6:
Text Classification with Naive Bayes / 8.6.1:
Clustering / 8.7:
Distance Metrics / 8.7.1:
k-Means and the EM Algorithm / 8.7.2:
Hierarchical Clustering / 8.7.3:
Data Mining in Practice / 8.8:
The Data Mining Tool KNIME / 8.8.1:
The Perceptron / 8.9:
Nearest Neighbor Method / 8.10.3:
Decision Trees / 8.10.4:
Data Mining / 8.10.5:
Neural Networks / 9:
From Biology to Simulation / 9.1:
The Mathematical Model / 9.1.1:
Hopfield Networks / 9.2:
Application to a Pattern Recognition Example / 9.2.1:
Analysis / 9.2.2:
Neural Associative Memory / 9.2.3:
Correlation Matrix Memory / 9.3.1:
The Pseudoinverse / 9.3.2:
The Binary Hebb Rule / 9.3.3:
A Spelling Correction Program / 9.3.4:
Linear Networks with Minimal Errors / 9.4:
Least Squares Method / 9.4.1:
Application to the Appendicitis Data / 9.4.2:
The Delta Rule / 9.4.3:
Comparison to the Perceptron / 9.4.4:
The Backpropagation Algorithm / 9.5:
Nettalk: A Network Learns to Speak / 9.5.1:
Learning of Heuristics for Theorem Provers / 9.5.2:
Problems and Improvements / 9.5.3:
Support Vector Machines / 9.6:
Backpropagation / 9.7:
Reinforcement Learning / 9.9.5:
The Task / 10.1:
Uninformed Combinatorial Search / 10.3:
Value Iteration and Dynamic Programming / 10.4:
A Learning Walking Robot and Its Simulation / 10.5:
Q-Learning / 10.6:
Q-Learning in a Nondeterministic Environment / 10.6.1:
Exploration and Exploitation / 10.7:
Approximation, Generalization and Convergence / 10.8:
Curse of Dimensionality / 10.9:
Solutions for the Exercises / 10.11:
First-Order Predicate Logic / 11.1:
Prolog / 11.4:
References / 11.6:
Index
36.

eBook

EB
Eyke Hüllermeier
Publication: Springer eBooks Computer Science, Springer Netherlands, 2007
Table of contents:
Dedication
Foreword
Preface
Notation
Introduction / 1:
Similarity and case-based reasoning / 1.1:
Objective of this book / 1.2:
Making case-based inference more reliable / 1.2.1:
The important role of models / 1.2.2:
Formal models of case-based inference / 1.2.3:
Overview / 1.3:
Similarity and Case-Based Inference / 2:
Model-based and instance-based approaches / 2.1:
Model-based approaches / 2.1.1:
Instance-based approaches / 2.1.2:
Knowledge representation / 2.1.3:
Performance in generalization / 2.1.4:
Computational complexity / 2.1.5:
Similarity-based methods / 2.2:
Nearest neighbor (NN) estimation / 2.2.1:
Instance-based learning / 2.2.2:
Case-based reasoning / 2.2.3:
The concept of similarity / 2.3:
Similarity in case-based reasoning / 2.3.1:
Similarity and fuzzy sets / 2.3.2:
Aggregation of local similarity measures / 2.3.3:
Case-based inference / 2.4:
Deterministic inference problems / 2.4.1:
Non-deterministic inference problems / 2.4.2:
Summary and remarks / 2.4.3:
Constraint-Based Modeling of Case-Based Inference / 3:
Basic concepts / 3.1:
Similarity profiles and hypotheses / 3.1.1:
Generalized similarity profiles / 3.1.2:
Constraint-based inference / 3.2:
A constraint-based inference scheme / 3.2.1:
Non-deterministic problems / 3.2.2:
Case-based approximation / 3.3:
Properties of case-based approximation / 3.3.1:
Local similarity profiles / 3.3.2:
Learning similarity hypotheses / 3.4:
The learning task / 3.4.1:
A learning algorithm / 3.4.2:
Properties of case-based learning / 3.4.3:
Experimental results / 3.4.4:
Application to statistical inference / 3.5:
Case-based parameter estimation / 3.5.1:
Case-based prior elicitation / 3.5.2:
Probabilistic Modeling of Case-Based Inference / 3.6:
Basic probabilistic concepts / 4.1:
Probabilistic similarity profiles and hypotheses / 4.1.1:
Generalized probabilistic profiles / 4.1.2:
Case-based inference, probabilistic reasoning, and statistical inference / 4.2:
Learning probabilistic similarity hypotheses / 4.3:
Simple hypotheses and credible case-based inference / 4.3.1:
Extended case-based learning / 4.3.2:
Experiments with regression and label ranking / 4.4:
Regression: artificial data / 4.4.1:
Regression: real-world data / 4.4.2:
Label ranking / 4.4.3:
Case-based inference as evidential reasoning / 4.5:
Transformation of probabilistic evidence / 4.5.1:
Inference from individual cases / 4.5.2:
Combining evidence from several cases / 4.5.3:
Assessment of cases / 4.6:
Similarity-weighted approximation / 4.6.1:
More general criteria / 4.6.2:
Assessment of individual cases / 4.6.3:
Complex similarity hypotheses / 4.7:
Inference schemes of higher order / 4.7.1:
Partially admissible profiles / 4.7.2:
Approximate probabilistic inference / 4.8:
Generalized uncertainty measures and profiles / 4.8.1:
An approximate inference scheme / 4.8.2:
Fuzzy Set-Based Modeling of Case-Based Inference I / 4.9:
Background on possibility theory / 5.1:
Possibility distributions as generalized constraints / 5.1.1:
Possibility as evidential support / 5.1.2:
Fuzzy rule-based modeling of the CBI hypothesis / 5.2:
Possibility rules / 5.2.1:
Modeling the CBI hypothesis / 5.2.2:
Generalized possibilistic prediction / 5.3:
Control of compensation and accumulation of support / 5.3.1:
Possibilistic support and weighted NN estimation / 5.3.2:
Upper and lower possibility bounds / 5.3.3:
Fuzzy logical evaluation / 5.3.4:
Comparison of extrapolation principles / 5.3.5:
From predictions to decisions / 5.3.6:
An illustrative example / 5.3.7:
Complexity issues / 5.3.8:
Extensions of the basic model / 5.4:
Dealing with incomplete information / 5.4.1:
Discounting noisy and atypical instances / 5.4.2:
From instances to rules / 5.4.3:
Modified possibility rules / 5.4.4:
Combination of several rules / 5.4.5:
Locally restricted extrapolation / 5.4.6:
Incorporation of background knowledge / 5.4.7:
Experimental studies / 5.5:
Preliminaries / 5.5.1:
Classification accuracy / 5.5.2:
Statistical assumptions and robustness / 5.5.3:
Variation of the aggregation operator / 5.5.4:
Representation of uncertainty / 5.5.5:
Calibration of CBI models / 5.6:
Relations to other fields / 5.7:
Fuzzy and possibilistic data analysis / 5.7.1:
Fuzzy set-based approximate reasoning / 5.7.2:
Fuzzy Set-Based Modeling of Case-Based Inference II / 5.8:
Gradual inference rules / 6.1:
The basic model / 6.1.1:
Modification of gradual rules / 6.1.2:
Certainty rules / 6.2:
Cases as information sources / 6.3:
A probabilistic model / 6.3.1:
Combination of information sources / 6.3.2:
Exceptionality and assessment of cases / 6.4:
Local rules / 6.5:
Case-Based Decision Making / 6.6:
Case-based decision theory / 7.1:
Nearest Neighbor decisions / 7.2:
Nearest Neighbor classification and decision making / 7.2.1:
Nearest Neighbor decision rules / 7.2.2:
An axiomatic characterization / 7.2.3:
Fuzzy modeling of case-based decisions / 7.3:
Basic measures for act evaluation / 7.3.1:
Modification of the basic measures / 7.3.2:
Interpretation of the decision criteria / 7.3.3:
Fuzzy quantification in act evaluation / 7.4:
A CBI framework of CBDM / 7.5:
Generalized decision-theoretic setups / 7.5.1:
Decision making using belief functions / 7.5.2:
Possibilistic decision making / 7.5.3:
CBDM models: A discussion of selected issues / 7.6:
The relation between similarity, preference, and belief / 7.6.1:
The effect of observed cases / 7.6.2:
Dynamic aspects of decision making / 7.6.3:
Experience-based decision making / 7.7:
Compiled decision models / 7.7.1:
Satisficing decision trees / 7.7.2:
Experimental evaluation / 7.7.3:
Conclusions and Outlook / 7.8:
Possibilistic Dominance in Qualitative Decisions / A:
Implication-Based Fuzzy Rules as Randomized Gradual Rules / B:
Implication-based fuzzy rules / B.1:
Gradual rules / B.1.1:
Other implication-based rules / B.1.2:
Randomized gradual rules / B.2:
A probabilistic representation of implication-based fuzzy rules / B.3:
Similarity-Based Reasoning as Logical Inference / C:
Simulation Results of Section 3.4.4 / D:
Experimental Results of Section 5.5.4 / E:
Simulation Results of Section 7.4 / F:
Computation of an Extended Splitting Measure / G:
Experimental Results of Section 7.7.2 / H:
References
Index
37.

eBook

EB
Eyke Hüllermeier
Publication information: SpringerLink Books - AutoHoldings, Springer Netherlands, 2007
38.

eBook

EB
Raymond S. T. Lee, Toru Ishida, Nicholas R. Jennings
Publication information: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2006
Table of contents:
Introduction / 1:
The Coming of the Age of Intelligent Agents / 1.1:
The Structure of This Book / 1.2:
Outline of Each Chapter / 1.3:
Readers of This Book / 1.4:
Concluding Remarks / 1.5:
Concepts and Theories / Part I:
The Search for Human Intelligence / 2:
What Is Intelligence? / 2.1:
The Philosophical View on Intelligence / 2.2:
Introduction - The Search for Intelligence and Ultimate Knowledge / 2.2.1:
The Traditional Philosophical View of Knowledge - Belief, Truth and Justification / 2.2.2:
Rationalistic Versus Empiristic View of Knowledge / 2.2.3:
Kant's Critique of Pure Reason and the Theory of Knowledge / 2.2.4:
Russell's View of Knowledge / 2.2.5:
Krishnamurti's The Awakening of Intelligence - Thought Versus Intelligence / 2.2.6:
Lee's Theory on Knowledge and Intelligence - The Unification Theory of Senses and Experiences / 2.2.7:
The Cognitive-Scientific View on Intelligence / 2.3:
The Cognitive-Scientific Definition of Intelligence / 2.3.1:
Spearman's Model of the Nature of Intelligence / 2.3.2:
Piaget's Psychology of Intelligence / 2.3.3:
Major Approaches of Intelligence - From Psychometric Approach to Latest Studies / 2.3.4:
Gardner's Theory on Multiple Intelligence / 2.3.5:
Lee's Unification Theory of Senses and Experiences - The Psychological Interpretation / 2.3.6:
The Neuroscience and Neurophysiology View on Intelligence / 2.4:
The Major Challenges of Mind Science (The Exploration of the Mind from the Neuroscience Perspective) / 2.4.1:
A Brief History - The Search for Intelligence in Neuroscience / 2.4.2:
Contemporary Research in Mind Science - From Neural Oscillators to the "Chaos in the Brain" / 2.4.3:
The Neuroscientific and Neurophysiological Implications of the Unification Theory of Senses and Experiences / 2.4.4:
Summary / 2.4.5:
From AI to IA - The Emergence of Agent Technology / 2.5:
What is AI? / 3.1:
A Brief History of AI / 3.2:
The Dartmouth Meeting (1956) - The Birth of AI / 3.2.1:
The Turing Test - A Prelude of AI / 3.2.2:
Strong Versus Weak AI / 3.2.3:
Searle's Chinese Room Thought Experiment / 3.2.4:
Development of AI in the Late 1970s / 3.2.5:
The "Reincarnation" of Neural Networks in the Late 1980s / 3.2.6:
The Birth of IAs in the Late 1990s / 3.2.7:
An Overview of the Classification of AI Technologies / 3.3:
AI - Where to Go? / 3.4:
The Coming of the Age of IAs / 3.5:
What Is an IA? - A "Right" Place to Start / 3.5.1:
The Emergence of Agent Technology - The Idea of Portable Intelligence / 3.5.2:
The Ten Basic Requirements of IAs / 3.6:
The Contemporary Variety of IAs / 3.7:
The Conceptual Model of IAs / 3.8:
The BFI Agent Intellectual Conceptual Model / 3.8.1:
The Agent Development Conceptual Model (GIA vs. TIA) / 3.8.2:
Major Challenges and Threats of Agent Technology / 3.9:
AI Techniques for Agent Construction / 3.10:
The World of Fuzziness, Chaos, and Uncertainty / 4.1:
Fuzzy Logic / 4.2:
What is Fuzzy Logic? / 4.2.1:
Fuzzy Theory and the Uncertainty Principle / 4.2.2:
Fuzzy Logic - A Structural Overview / 4.2.3:
Fuzzy Reasoning - A Case Study on Fuzzy Air-conditioning Control System / 4.2.4:
Applications of Fuzzy Logic / 4.2.5:
Neural Networks - the "Brain" of IAs / 4.3:
Neural Networks - Background / 4.3.1:
ANN Architecture / 4.3.2:
Classification of Neural Networks / 4.3.3:
Associative Memory Neural Networks: Auto-associative Networks / 4.3.4:
Hopfield Networks / 4.3.5:
Multilayer Feedforward Backpropagation Networks (FFBPNs) / 4.3.6:
Neural Networks - Where to Go? / 4.3.7:
Genetic Algorithms - the Nature of Evolution / 4.4:
Genetic Algorithms - Basic Principle / 4.4.1:
Population Initialization / 4.4.2:
Fitness Evaluation / 4.4.3:
Parent Selection Scheme / 4.4.4:
Crossover and Mutation / 4.4.5:
Implementation of GAs / 4.4.6:
Hybridization of GA with Neural Networks / 4.4.7:
Chaos Theory - The World of Nonlinear Dynamics / 4.5:
Chaos Theory - The Study of Nonlinear Dynamics / 4.5.1:
Battle Between Two Worlds: Deterministic Versus Probabilistic / 4.5.2:
A Snapshot of Chaos Theory / 4.5.3:
Characteristics of Chaos Systems / 4.5.4:
Chaos Theory Versus Uncertainty Principle / 4.5.5:
Current Work on Chaos Theory / 4.5.6:
Chaotic Neural Networks and the Lee-Oscillator / 4.6:
Chaotic Neural Oscillators - An Overview / 4.6.1:
The Lee-Oscillator / 4.6.2:
The Lee-Associator / 4.6.3:
System Implementation and Experimental Results / 4.6.4:
Progressive Memory Recalling Scheme of the Lee-Associator and Its Biological and Psychological Implications / 4.6.5:
Related Work / 4.6.6:
Conclusion / 4.6.7:
Further Reading / 4.7:
Applications of Intelligent Agents Using iJADK / Part II:
The Design and Implementation of an Intelligent Agent-Based System Using iJADK / 5:
iJADE - System Framework / 5.1:
iJADE Architecture / 5.2.1:
Application Layer / 5.2.2:
Conscious (Intelligent) Layer / 5.2.3:
Technology Layer / 5.2.4:
Supporting Layer / 5.2.5:
iJADK Architecture / 5.3:
Introduction to iJADK / 5.3.1:
Basic Components of iJADK / 5.3.2:
Internal Operations of iJADK / 5.3.3:
Agent Programming Over the iJADK Platform / 5.4:
User Interface / 5.4.1:
Agent Class / 5.4.2:
LifeCycleManager / 5.4.3:
RuntimeAgent / 5.4.4:
Sample iJADE Agents / 5.5:
HelloWorldAgent / 5.5.1:
HelloWorldAgent2 / 5.5.2:
TalkAgent / 5.5.3:
Latest Works of iJADE / 5.6:
iJADE WShopper - Intelligent Mobile Shopping Based on Fuzzy-Neuro Shopping Agents / 5.7:
WAP Technology / 6.1:
WAP Technology - From Web to MEB / 6.2.1:
Constraints for Contemporary WAP Technology on MEB / 6.2.2:
iJADE WShopper - System Framework / 6.3:
iJADE WShopper - System Overview / 6.3.1:
iJADE WShopper for M-shopping - System Components / 6.3.2:
Experimental Results / 6.4:
The RTT Test / 6.4.1:
The PS Test / 6.4.3:
The iWSAS Test / 6.4.4:
Migration to the J2ME Platform / 6.5:
Incorporate Other AI Capabilities in the Shopper Agents - iJADE Negotiator / 6.6.2:
iJADE WeatherMAN - A Weather Forecasting Agent Using the Fuzzy Neural Network Model / 7:
Weather Prediction Using a Fuzzy-Neuro Model / 7.1:
iJADE WeatherMAN - System Overview / 7.3:
User Requirement Definition Scheme (URDS) and Weather Reporting Scheme (WRS) / 7.3.1:
Data Collection Scheme (DCS) / 7.3.2:
Variable Selection and Transformation Scheme (VSTS) / 7.3.3:
Fuzzy-Neuro Training and Prediction Scheme (FNTPS) / 7.3.4:
iJADE WeatherMAN - System Implementation / 7.4:
iJADE WeatherMAN Weather Site / 7.4.1:
Central Agent/Test Agent / 7.4.2:
iJADE WeatherMan Place / 7.4.3:
iJADE WeatherMan Agent / 7.4.4:
iJADE Weather Forecaster Place / 7.4.5:
iJADE Forecaster Agent / 7.4.6:
Evaluation Considerations / 7.5:
Average Classification Rate / 7.5.2:
Model Performance / 7.5.3:
The HKO Forecast / 7.5.4:
Future Work / 7.6:
iJADE Stock Advisor - An Intelligent Agent-Based Stock Prediction System Using the Hybrid RBF Recurrent Network / 8:
Stock Advisory and Prediction System - A General Overview / 8.1:
Stochastic Indicator / 8.2.1:
Relative Strength Index (RSI) / 8.2.2:
Money Flow / 8.2.3:
Moving Average / 8.2.4:
Support and Resistance Lines (Trendlines) / 8.2.5:
Trend Generalization / 8.2.6:
iJADE Stock Advisor - System Framework / 8.3:
iJADE Stock Advisor - System Overview / 8.3.1:
Stock Prediction Using the HRBF model / 8.3.2:
Parameter Selection Scheme in HRBFN / 8.4:
Round-Trip-Time (RTT) Test / 8.4.2:
Long- and Short-Term Prediction, Window Size Evaluation Test / 8.4.3:
Stock Prediction Performance Test / 8.4.4:
iJADE Surveillant - A Multi-resolution Neuro-oscillatory Agent-Based Surveillance System / 8.5:
Surveillance System - An Overview / 9.1:
Background / 9.2.1:
Scene Analysis / 9.2.2:
Human Face Recognition / 9.2.3:
Supporting Technologies / 9.3:
MPEG-7 - System Overview / 9.3.1:
MPEG-7 Model / 9.3.2:
The Latest MPEG-7 Development Work on Visual Object Modeling / 9.3.3:
iJADE Surveillant - System Overview / 9.4:
iJADE Surveillant - System Architecture / 9.4.1:
Automatic Multi-resolution Scene Segmentation Scheme Using the CNOW Model / 9.4.2:
Automatic Human Face Detection and Contour Features Extraction Using the ACM / 9.4.3:
Invariant Human Face Recognition Using the EGDLM / 9.4.4:
System Implementation / 9.5:
Automatic Color Scene Segmentation Scheme / 9.5.1:
Invariant Human Face Recognition Scheme / 9.5.2:
Facial Pattern Occlusion and Distortion Test / 9.5.3:
Performance Analysis / 9.5.4:
iJADE Negotiator - An Intelligent Fuzzy Agent-Based Negotiation System for Internet Shopping / 9.6:
Negotiation Systems - An Overview / 10.1:
iJADE Negotiator - System Architecture / 10.3:
iJADE Negotiator - System Overview / 10.3.1:
iJADE Negotiator - Main Functional Modules / 10.3.2:
iJADE Negotiator - Intelligent Negotiation Strategy and Negotiation Protocol / 10.3.3:
iJADE Negotiator - System Implementation / 10.4:
Future Agent Technology - Modern Ontology and Ontological Agent Technologies (OAT) / 10.4.1:
What Is Ontology? / 11.1:
Ontology - Theories of Existence / 11.1.1:
Universals Versus Particulars / 11.1.2:
Ontology - The World of Universals / 11.1.3:
Ontological View of the Nature of Existence / 11.1.4:
Impact of Ontology on Modern AI / 11.1.5:
Modern Ontology and Ontological Agents / 11.2:
The Theoretical Foundation of OAT - Conceptualization Theory / 11.2.1:
Characteristics of Ontological Agents / 11.2.2:
Potential Applications of OAT / 11.2.3:
Cogito iJADE Project / 11.2.4:
Cogito iJADE - A New Era of Self-aware IAs / 11.3.1:
Cogito iJADE - A System Overview / 11.3.2:
Latest Works of Cogito Agents / 11.3.3:
Agent Technology - The Future / 11.4:
iJADK 2.0 API / Appendix:
References
Index
About the Author
39.

eBook

EB
Raymond S. T. Lee, Toru Ishida, Nicholas R. Jennings, Katia Sycara
Publication information: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2006
40.

eBook

EB
Ernesto Sanchez, Massimiliano Schillaci, Giovanni Squillero
Publication information: Springer eBooks Computer Science, Springer US, 2011
Table of contents:
Evolutionary computation / 1:
Natural and artificial evolution / 1.1:
The classical paradigms / 1.2:
Genetic programming / 1.3:
Why yet another evolutionary optimizer? / 2:
Background / 2.1:
Where to draw the lines / 2.2:
Individuals / 2.3:
Problem specification / 2.4:
Coding Techniques / 2.5:
The µGp architecture / 3:
Conceptual design / 3.1:
The evolutionary core / 3.2:
Evolutionary Operators / 3.2.1:
Population / 3.2.2:
The Evolutionary Cycle / 3.3:
Genetic operator selection / 3.3.1:
Parents selection / 3.3.2:
Offspring Generation / 3.3.3:
Individual Evaluation and Slaughtering / 3.3.4:
Termination and Aging / 3.3.5:
Advanced features / 4:
Self adaptation for exploration or exploitation / 4.1:
Self-adaptation inertia / 4.1.1:
Operator strength / 4.1.2:
Tournament size / 4.1.3:
Escaping local optimums / 4.2:
Operator activation probability / 4.2.1:
Tuning the elitism / 4.2.2:
Preserving diversity / 4.3:
Clone detection, scaling and extermination / 4.3.1:
Entropy and delta-entropy computation / 4.3.2:
Fitness holes
Population topology and multiple populations
Coping with the real problems / 4.4:
Parallel fitness evaluation / 4.4.1:
Multiple fitness / 4.4.2:
Performing an evolutionary run / 5:
Robot Pathfinder / 5.1:
µGp Settings / 5.2:
Population Settings / 5.3:
Library of Constraints / 5.4:
Launching the experiment / 5.5:
µGp Extractor / 5.6:
Command line syntax / 6:
Starting a run / 6.1:
Controlling messages to the user / 6.2:
Getting help and information / 6.3:
Controlling logging / 6.4:
Controlling recovery / 6.5:
Controlling evolution / 6.6:
Controlling evaluation / 6.7:
Syntax of the settings file / 7:
Syntax of the population parameters file / 7.1:
Strategy parameters / 8.1:
Base parameters / 8.1.1:
Parameters for self adaptation / 8.1.2:
Other parameters / 8.1.3:
Syntax of the external constraints file / 9:
Purposes of the constraints / 9.1:
Organization of constraints and hierarchy / 9.2:
Specifying the structure of the individual / 9.3:
Specifying the contents of the individual / 9.4:
Writing a compliant evaluator / 10:
Information from µGp to the fitness evaluator / 10.1:
Expected fitness format / 10.2:
Good Examples / 10.2.1:
Bad Examples / 10.2.2:
Implementation details / 11:
Design principles / 11.1:
Architectural choices / 11.2:
The Graph library / 11.2.1:
The Evolutionary Core library / 11.2.2:
Front end / 11.2.3:
Code organization and class model / 11.3:
Examples and applications / 12:
Classical one-max / 12.1:
Fitness evaluator / 12.1.1:
Constraints / 12.1.2:
Population settings / 12.1.3:
µGp settings / 12.1.4:
Running / 12.1.5:
Values of parameters and their influence on the evolution: Arithmetic expressions / 12.2:
De Jong 3 / 12.2.1:
De Jong 4-Modified / 12.2.2:
Carrom / 12.2.3:
Complex individuals' structures and evaluation: Bit-counting in Assembly / 12.3:
Assembly individuals representation / 12.3.1:
Evaluator / 12.3.2:
Argument and option synopsis / 12.3.3:
External constraints synopsis
References
41.

eBook

EB
Ernesto Sanchez, Massimiliano Schillaci, Giovanni Squillero
Publication info: SpringerLink Books - AutoHoldings, Springer US, 2011
Table of contents:
Evolutionary computation / 1:
Natural and artificial evolution / 1.1:
The classical paradigms / 1.2:
Genetic programming / 1.3:
Why yet another one evolutionary optimizer? / 2:
Background / 2.1:
Where to draw the lines / 2.2:
Individuals / 2.3:
Problem specification / 2.4:
Coding Techniques / 2.5:
The µGp architecture / 3:
Conceptual design / 3.1:
The evolutionary core / 3.2:
Evolutionary Operators / 3.2.1:
Population / 3.2.2:
The Evolutionary Cycle / 3.3:
Genetic operator selection / 3.3.1:
Parents selection / 3.3.2:
Offspring Generation / 3.3.3:
Individual Evaluation and Slaughtering / 3.3.4:
Termination and Aging / 3.3.5:
Advanced features / 4:
Self adaptation for exploration or exploitation / 4.1:
Self-adaptation inertia / 4.1.1:
Operator strength / 4.1.2:
Tournament size / 4.1.3:
Escaping local optimums / 4.2:
Operator activation probability / 4.2.1:
Tuning the elitism / 4.2.2:
Preserving diversity / 4.3:
Clone detection, scaling and extermination / 4.3.1:
Entropy and delta-entropy computation / 4.3.2:
Fitness holes
Population topology and multiple populations
Coping with the real problems / 4.4:
Parallel fitness evaluation / 4.4.1:
Multiple fitness / 4.4.2:
Performing an evolutionary run / 5:
Robot Pathfinder / 5.1:
µGp Settings / 5.2:
Population Settings / 5.3:
Library of Constraints / 5.4:
Launching the experiment / 5.5:
µGp Extractor / 5.6:
Command line syntax / 6:
Starting a run / 6.1:
Controlling messages to the user / 6.2:
Getting help and information / 6.3:
Controlling logging / 6.4:
Controlling recovery / 6.5:
Controlling evolution / 6.6:
Controlling evaluation / 6.7:
Syntax of the settings file / 7:
Syntax of the population parameters file / 7.1:
Strategy parameters / 8.1:
Base parameters / 8.1.1:
Parameters for self adaptation / 8.1.2:
Other parameters / 8.1.3:
Syntax of the external constraints file / 9:
Purposes of the constraints / 9.1:
Organization of constraints and hierarchy / 9.2:
Specifying the structure of the individual / 9.3:
Specifying the contents of the individual / 9.4:
Writing a compliant evaluator / 10:
Information from µGp to the fitness evaluator / 10.1:
Expected fitness format / 10.2:
Good Examples / 10.2.1:
Bad Examples / 10.2.2:
Implementation details / 11:
Design principles / 11.1:
Architectural choices / 11.2:
The Graph library / 11.2.1:
The Evolutionary Core library / 11.2.2:
Front end / 11.2.3:
Code organization and class model / 11.3:
Examples and applications / 12:
Classical one-max / 12.1:
Fitness evaluator / 12.1.1:
Constraints / 12.1.2:
Population settings / 12.1.3:
µGp settings / 12.1.4:
Running / 12.1.5:
Values of parameters and their influence on the evolution: Arithmetic expressions / 12.2:
De Jong 3 / 12.2.1:
De Jong 4-Modified / 12.2.2:
Carrom / 12.2.3:
Complex individuals' structures and evaluation: Bit-counting in Assembly / 12.3:
Assembly individuals representation / 12.3.1:
Evaluator / 12.3.2:
Argument and option synopsis / 12.3.3:
External constraints synopsis
References
42.

eBook

EB
Michael R. Berthold, Christian Borgelt, Frank Höppner, Frank Klawonn
Publication info: Springer eBooks Computer Science, Springer London, 2010
Table of contents:
Introduction / 1:
Motivation / 1.1:
Data and Knowledge / 1.1.1:
Tycho Brahe and Johannes Kepler / 1.1.2:
Intelligent Data Analysis / 1.1.3:
The Data Analysis Process / 1.2:
Methods, Tasks, and Tools / 1.3:
How to Read This Book / 1.4:
References
Practical Data Analysis: An Example / 2:
The Setup / 2.1:
Data Understanding and Pattern Finding / 2.2:
Explanation Finding / 2.3:
Predicting the Future / 2.4:
Concluding Remarks / 2.5:
Project Understanding / 3:
Determine the Project Objective / 3.1:
Assess the Situation / 3.2:
Determine Analysis Goals / 3.3:
Further Reading / 3.4:
Data Understanding / 4:
Attribute Understanding / 4.1:
Data Quality / 4.2:
Data Visualization / 4.3:
Methods for One and Two Attributes / 4.3.1:
Methods for Higher-Dimensional Data / 4.3.2:
Correlation Analysis / 4.4:
Outlier Detection / 4.5:
Outlier Detection for Single Attributes / 4.5.1:
Outlier Detection for Multidimensional Data / 4.5.2:
Missing Values / 4.6:
A Checklist for Data Understanding / 4.7:
Data Understanding in Practice / 4.8:
Data Understanding in KNIME / 4.8.1:
Data Understanding in R / 4.8.2:
Principles of Modeling / 5:
Model Classes / 5.1:
Fitting Criteria and Score Functions / 5.2:
Error Functions for Classification Problems / 5.2.1:
Measures of Interestingness / 5.2.2:
Algorithms for Model Fitting / 5.3:
Closed Form Solutions / 5.3.1:
Gradient Method / 5.3.2:
Combinatorial Optimization / 5.3.3:
Random Search, Greedy Strategies, and Other Heuristics / 5.3.4:
Types of Errors / 5.4:
Experimental Error / 5.4.1:
Sample Error / 5.4.2:
Model Error / 5.4.3:
Algorithmic Error / 5.4.4:
Machine Learning Bias and Variance / 5.4.5:
Learning Without Bias? / 5.4.6:
Model Validation / 5.5:
Training and Test Data / 5.5.1:
Cross-Validation / 5.5.2:
Bootstrapping / 5.5.3:
Measures for Model Complexity / 5.5.4:
Model Errors and Validation in Practice / 5.6:
Errors and Validation in KNIME / 5.6.1:
Validation in R / 5.6.2:
Data Preparation / 5.7:
Select Data / 6.1:
Feature Selection / 6.1.1:
Dimensionality Reduction / 6.1.2:
Record Selection / 6.1.3:
Clean Data / 6.2:
Improve Data Quality / 6.2.1:
Construct Data / 6.2.2:
Provide Operability / 6.3.1:
Assure Impartiality / 6.3.2:
Maximize Efficiency / 6.3.3:
Complex Data Types / 6.4:
Data Integration / 6.5:
Vertical Data Integration / 6.5.1:
Horizontal Data Integration / 6.5.2:
Data Preparation in Practice / 6.6:
Data Preparation in KNIME / 6.6.1:
Data Preparation in R / 6.6.2:
Finding Patterns / 7:
Hierarchical Clustering / 7.1:
Overview / 7.1.1:
Construction / 7.1.2:
Variations and Issues / 7.1.3:
Notion of (Dis-)Similarity / 7.2:
Prototype-and Model-Based Clustering / 7.3:
Density-Based Clustering / 7.3.1:
Self-organizing Maps / 7.4.1:
Frequent Pattern Mining and Association Rules / 7.5.1:
Deviation Analysis / 7.6.1:
Finding Patterns in Practice / 7.7.1:
Finding Patterns with KNIME / 7.8.1:
Finding Patterns in R / 7.8.2:
Finding Explanations / 7.9:
Decision Trees / 8.1:
Bayes Classifiers / 8.1.1:
Regression / 8.2.1:
Two Class Problems / 8.3.1:
Rule learning / 8.4:
Propositional Rules / 8.4.1:
Inductive Logic Programming or First-Order Rules / 8.4.2:
Finding Explanations in Practice / 8.5:
Finding Explanations with KNIME / 8.5.1:
Using Explanations with R / 8.5.2:
Finding Predictors / 8.6:
Nearest-Neighbor Predictors / 9.1:
Artificial Neural Networks / 9.1.1:
Support Vector Machines / 9.2.1:
Ensemble Methods / 9.3.1:
Finding Predictors in Practice / 9.4.1:
Finding Predictors with KNIME / 9.5.1:
Using Predictors in R / 9.5.2:
Evaluation and Deployment / 10:
Evaluation / 10.1:
Deployment and Monitoring / 10.2:
Statistics / A:
Terms and Notation / A.1:
Descriptive Statistics / A.2:
Tabular Representations / A.2.1:
Graphical Representations / A.2.2:
Characteristic Measures for One-Dimensional Data / A.2.3:
Characteristic Measures for Multidimensional Data / A.2.4:
Principal Component Analysis / A.2.5:
Probability Theory / A.3:
Probability / A.3.1:
Basic Methods and Theorems / A.3.2:
Random Variables / A.3.3:
Characteristic Measures of Random Variables / A.3.4:
Some Special Distributions / A.3.5:
Inferential Statistics / A.4:
Random Samples / A.4.1:
Parameter Estimation / A.4.2:
Hypothesis Testing / A.4.3:
The R Project / B:
Installation and Overview / B.1:
Reading Files and R Objects / B.2:
R Functions and Commands / B.3:
Libraries/Packages / B.4:
R Workspace / B.5:
Finding Help / B.6:
Knime / B.7:
Building Workflows / C.1:
Example Flow / C.3:
R Integration / C.4:
Index / Appendix A:
43.

eBook

EB
Michael R. Berthold, Christian Borgelt, Frank Höppner, Frank Klawonn
Publication info: SpringerLink Books - AutoHoldings, Springer London, 2010
Table of contents:
Introduction / 1:
Motivation / 1.1:
Data and Knowledge / 1.1.1:
Tycho Brahe and Johannes Kepler / 1.1.2:
Intelligent Data Analysis / 1.1.3:
The Data Analysis Process / 1.2:
Methods, Tasks, and Tools / 1.3:
How to Read This Book / 1.4:
References
Practical Data Analysis: An Example / 2:
The Setup / 2.1:
Data Understanding and Pattern Finding / 2.2:
Explanation Finding / 2.3:
Predicting the Future / 2.4:
Concluding Remarks / 2.5:
Project Understanding / 3:
Determine the Project Objective / 3.1:
Assess the Situation / 3.2:
Determine Analysis Goals / 3.3:
Further Reading / 3.4:
Data Understanding / 4:
Attribute Understanding / 4.1:
Data Quality / 4.2:
Data Visualization / 4.3:
Methods for One and Two Attributes / 4.3.1:
Methods for Higher-Dimensional Data / 4.3.2:
Correlation Analysis / 4.4:
Outlier Detection / 4.5:
Outlier Detection for Single Attributes / 4.5.1:
Outlier Detection for Multidimensional Data / 4.5.2:
Missing Values / 4.6:
A Checklist for Data Understanding / 4.7:
Data Understanding in Practice / 4.8:
Data Understanding in KNIME / 4.8.1:
Data Understanding in R / 4.8.2:
Principles of Modeling / 5:
Model Classes / 5.1:
Fitting Criteria and Score Functions / 5.2:
Error Functions for Classification Problems / 5.2.1:
Measures of Interestingness / 5.2.2:
Algorithms for Model Fitting / 5.3:
Closed Form Solutions / 5.3.1:
Gradient Method / 5.3.2:
Combinatorial Optimization / 5.3.3:
Random Search, Greedy Strategies, and Other Heuristics / 5.3.4:
Types of Errors / 5.4:
Experimental Error / 5.4.1:
Sample Error / 5.4.2:
Model Error / 5.4.3:
Algorithmic Error / 5.4.4:
Machine Learning Bias and Variance / 5.4.5:
Learning Without Bias? / 5.4.6:
Model Validation / 5.5:
Training and Test Data / 5.5.1:
Cross-Validation / 5.5.2:
Bootstrapping / 5.5.3:
Measures for Model Complexity / 5.5.4:
Model Errors and Validation in Practice / 5.6:
Errors and Validation in KNIME / 5.6.1:
Validation in R / 5.6.2:
Data Preparation / 5.7:
Select Data / 6.1:
Feature Selection / 6.1.1:
Dimensionality Reduction / 6.1.2:
Record Selection / 6.1.3:
Clean Data / 6.2:
Improve Data Quality / 6.2.1:
Construct Data / 6.2.2:
Provide Operability / 6.3.1:
Assure Impartiality / 6.3.2:
Maximize Efficiency / 6.3.3:
Complex Data Types / 6.4:
Data Integration / 6.5:
Vertical Data Integration / 6.5.1:
Horizontal Data Integration / 6.5.2:
Data Preparation in Practice / 6.6:
Data Preparation in KNIME / 6.6.1:
Data Preparation in R / 6.6.2:
Finding Patterns / 7:
Hierarchical Clustering / 7.1:
Overview / 7.1.1:
Construction / 7.1.2:
Variations and Issues / 7.1.3:
Notion of (Dis-)Similarity / 7.2:
Prototype-and Model-Based Clustering / 7.3:
Density-Based Clustering / 7.3.1:
Self-organizing Maps / 7.4.1:
Frequent Pattern Mining and Association Rules / 7.5.1:
Deviation Analysis / 7.6.1:
Finding Patterns in Practice / 7.7.1:
Finding Patterns with KNIME / 7.8.1:
Finding Patterns in R / 7.8.2:
Finding Explanations / 7.9:
Decision Trees / 8.1:
Bayes Classifiers / 8.1.1:
Regression / 8.2.1:
Two Class Problems / 8.3.1:
Rule learning / 8.4:
Propositional Rules / 8.4.1:
Inductive Logic Programming or First-Order Rules / 8.4.2:
Finding Explanations in Practice / 8.5:
Finding Explanations with KNIME / 8.5.1:
Using Explanations with R / 8.5.2:
Finding Predictors / 8.6:
Nearest-Neighbor Predictors / 9.1:
Artificial Neural Networks / 9.1.1:
Support Vector Machines / 9.2.1:
Ensemble Methods / 9.3.1:
Finding Predictors in Practice / 9.4.1:
Finding Predictors with KNIME / 9.5.1:
Using Predictors in R / 9.5.2:
Evaluation and Deployment / 10:
Evaluation / 10.1:
Deployment and Monitoring / 10.2:
Statistics / A:
Terms and Notation / A.1:
Descriptive Statistics / A.2:
Tabular Representations / A.2.1:
Graphical Representations / A.2.2:
Characteristic Measures for One-Dimensional Data / A.2.3:
Characteristic Measures for Multidimensional Data / A.2.4:
Principal Component Analysis / A.2.5:
Probability Theory / A.3:
Probability / A.3.1:
Basic Methods and Theorems / A.3.2:
Random Variables / A.3.3:
Characteristic Measures of Random Variables / A.3.4:
Some Special Distributions / A.3.5:
Inferential Statistics / A.4:
Random Samples / A.4.1:
Parameter Estimation / A.4.2:
Hypothesis Testing / A.4.3:
The R Project / B:
Installation and Overview / B.1:
Reading Files and R Objects / B.2:
R Functions and Commands / B.3:
Libraries/Packages / B.4:
R Workspace / B.5:
Finding Help / B.6:
Knime / B.7:
Building Workflows / C.1:
Example Flow / C.3:
R Integration / C.4:
Index / Appendix A:
44.

eBook

EB
Zbigniew Michalewicz, Martin Schmidt
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2006
Table of contents:
Complex Business Problems / Part I:
Introduction / 1:
Characteristics of Complex Business Problems / 2:
Number of Possible Solutions / 2.1:
Time-Changing Environment / 2.2:
Problem-Specific Constraints / 2.3:
Multi-objective Problems / 2.4:
Modeling the Problem / 2.5:
A Real-World Example / 2.6:
An Extended Example: Car Distribution / 3:
Basic Terminology / 3.1:
Off-lease Cars / 3.2:
The Problem / 3.3:
Transportation / 3.4:
Volume Effect / 3.5:
Price Depreciation and Inventory / 3.6:
Dynamic Market Changes / 3.7:
The Solution / 3.8:
Adaptive Business Intelligence / 4:
Data Mining / 4.1:
Prediction / 4.2:
Optimization / 4.3:
Adaptability / 4.4:
The Structure of an Adaptive Business Intelligence System / 4.5:
Prediction and Optimization / Part II:
Prediction Methods and Models / 5:
Data Preparation / 5.1:
Different Prediction Methods / 5.2:
Mathematical Methods / 5.2.1:
Distance Methods / 5.2.2:
Logic Methods / 5.2.3:
Modern Heuristic Methods / 5.2.4:
Additional Considerations / 5.2.5:
Evaluation of Models / 5.3:
Recommended Reading / 5.4:
Modern Optimization Techniques / 6:
Overview / 6.1:
Local Optimization Techniques / 6.2:
Stochastic Hill Climber / 6.3:
Simulated Annealing / 6.4:
Tabu Search / 6.5:
Evolutionary Algorithms / 6.6:
Constraint Handling / 6.7:
Additional Issues / 6.8:
Fuzzy Logic / 6.9:
Fuzzifier / 7.1:
Inference System / 7.3:
Defuzzifier / 7.4:
Tuning the Membership Functions and Rule Base / 7.5:
Artificial Neural Networks / 7.6:
Node Input and Output / 8.1:
Different Types of Networks / 8.3:
Feed-Forward Neural Networks / 8.3.1:
Recurrent Neural Networks / 8.3.2:
Learning Methods / 8.4:
Supervised Learning / 8.4.1:
Unsupervised Learning / 8.4.2:
Data Representation / 8.5:
Other Methods and Techniques / 8.6:
Genetic Programming / 9.1:
Ant Systems and Swarm Intelligence / 9.2:
Agent-Based Modeling / 9.3:
Co-evolution / 9.4:
Hybrid Systems and Adaptability / 9.5:
Hybrid Systems for Prediction / 10.1:
Hybrid Systems for Optimization / 10.2:
Car Distribution System / 10.3:
Graphical User Interface / 11.1:
Reporting / 11.2.1:
Prediction Module / 11.3:
Optimization Module / 11.4:
Adaptability Module / 11.5:
Validation / 11.6:
Applying Adaptive Business Intelligence / 12:
Marketing Campaigns / 12.1:
Manufacturing / 12.2:
Investment Strategies / 12.3:
Emergency Response Services / 12.4:
Credit Card Fraud / 12.5:
Conclusion / 13:
Index
45.

eBook

EB
Zbigniew Michalewicz, Martin Schmidt, Constantin Chiriac, Matthew Michalewicz
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2006
Table of contents:
Complex Business Problems / Part I:
Introduction / 1:
Characteristics of Complex Business Problems / 2:
Number of Possible Solutions / 2.1:
Time-Changing Environment / 2.2:
Problem-Specific Constraints / 2.3:
Multi-objective Problems / 2.4:
Modeling the Problem / 2.5:
A Real-World Example / 2.6:
An Extended Example: Car Distribution / 3:
Basic Terminology / 3.1:
Off-lease Cars / 3.2:
The Problem / 3.3:
Transportation / 3.4:
Volume Effect / 3.5:
Price Depreciation and Inventory / 3.6:
Dynamic Market Changes / 3.7:
The Solution / 3.8:
Adaptive Business Intelligence / 4:
Data Mining / 4.1:
Prediction / 4.2:
Optimization / 4.3:
Adaptability / 4.4:
The Structure of an Adaptive Business Intelligence System / 4.5:
Prediction and Optimization / Part II:
Prediction Methods and Models / 5:
Data Preparation / 5.1:
Different Prediction Methods / 5.2:
Mathematical Methods / 5.2.1:
Distance Methods / 5.2.2:
Logic Methods / 5.2.3:
Modern Heuristic Methods / 5.2.4:
Additional Considerations / 5.2.5:
Evaluation of Models / 5.3:
Recommended Reading / 5.4:
Modern Optimization Techniques / 6:
Overview / 6.1:
Local Optimization Techniques / 6.2:
Stochastic Hill Climber / 6.3:
Simulated Annealing / 6.4:
Tabu Search / 6.5:
Evolutionary Algorithms / 6.6:
Constraint Handling / 6.7:
Additional Issues / 6.8:
Fuzzy Logic / 6.9:
Fuzzifier / 7.1:
Inference System / 7.3:
Defuzzifier / 7.4:
Tuning the Membership Functions and Rule Base / 7.5:
Artificial Neural Networks / 7.6:
Node Input and Output / 8.1:
Different Types of Networks / 8.3:
Feed-Forward Neural Networks / 8.3.1:
Recurrent Neural Networks / 8.3.2:
Learning Methods / 8.4:
Supervised Learning / 8.4.1:
Unsupervised Learning / 8.4.2:
Data Representation / 8.5:
Other Methods and Techniques / 8.6:
Genetic Programming / 9.1:
Ant Systems and Swarm Intelligence / 9.2:
Agent-Based Modeling / 9.3:
Co-evolution / 9.4:
Hybrid Systems and Adaptability / 9.5:
Hybrid Systems for Prediction / 10.1:
Hybrid Systems for Optimization / 10.2:
Car Distribution System / 10.3:
Graphical User Interface / 11.1:
Reporting / 11.2.1:
Prediction Module / 11.3:
Optimization Module / 11.4:
Adaptability Module / 11.5:
Validation / 11.6:
Applying Adaptive Business Intelligence / 12:
Marketing Campaigns / 12.1:
Manufacturing / 12.2:
Investment Strategies / 12.3:
Emergency Response Services / 12.4:
Credit Card Fraud / 12.5:
Conclusion / 13:
Index
46.

eBook

EB
Luc De Raedt, Jörg Siekmann
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
Structure Activity Relationship Prediction / 1.2.1:
A Web Mining Example / 1.2.2:
A Language Learning Example / 1.2.3:
How Does Relational and Logical Learning Work? / 1.3:
A Brief History / 1.4:
An Introduction to Logic / 2:
A Relational Database Example / 2.1:
The Syntax of Clausal Logic / 2.2:
The Semantics of Clausal Logic - Model Theory / 2.3:
Inference with Clausal Logic - Proof Theory / 2.4:
Prolog and SLD-resolution / 2.5:
Historical and Bibliographic Remarks / 2.6:
An Introduction to Learning and Search / 3:
Representing Hypotheses and Instances / 3.1:
Boolean Data / 3.2:
Machine Learning / 3.3:
Data Mining / 3.4:
A Generate-and-Test Algorithm / 3.5:
Structuring the Search Space / 3.6:
Monotonicity / 3.7:
Borders / 3.8:
Refinement Operators / 3.9:
A Generic Algorithm for Mining and Learning / 3.10:
A Complete General-to-Specific Algorithm / 3.11:
A Heuristic General-to-Specific Algorithm / 3.12:
A Branch-and-Bound Algorithm / 3.13:
A Specific-to-General Algorithm / 3.14:
Working with Borders* / 3.15:
Computing a Single Border / 3.15.1:
Computing Two Borders / 3.15.2:
Computing Two Borders Incrementally / 3.15.3:
Operations on Borders / 3.15.4:
Conclusions / 3.16:
Bibliographical Notes / 3.17:
Representations for Mining and Learning / 4:
Representing Data and Hypotheses / 4.1:
Attribute-Value Learning / 4.2:
Multiple-Instance Learning: Dealing With Sets / 4.3:
Relational Learning / 4.4:
Logic Programs / 4.5:
Sequences, Lists, and Grammars / 4.6:
Trees and Terms / 4.7:
Graphs / 4.8:
Background Knowledge / 4.9:
Designing It Yourself / 4.10:
A Hierarchy of Representations* / 4.11:
From AV to BL / 4.11.1:
From MI to AV / 4.11.2:
From RL to MI / 4.11.3:
From LP to RL / 4.11.4:
Propositionalization / 4.12:
A Table-Based Approach / 4.12.1:
A Query-Based Approach / 4.12.2:
Aggregation / 4.13:
Historical and Bibliographical Remarks / 4.14:
Generality and Logical Entailment / 5:
Generality and Logical Entailment Coincide / 5.1:
Propositional Subsumption / 5.2:
Subsumption in Logical Atoms / 5.3:
Specialization Operators / 5.3.1:
Generalization Operators* / 5.3.2:
Computing the lgg and the glb / 5.3.3:
[Theta]-Subsumption / 5.4:
Soundness and Completeness / 5.4.1:
Deciding [Theta]-Subsumption / 5.4.2:
Equivalence Classes / 5.4.3:
Variants of [Theta]-Subsumption* / 5.5:
Object Identity* / 5.5.1:
Inverse Implication* / 5.5.2:
Using Background Knowledge / 5.6:
Saturation and Bottom Clauses / 5.6.1:
Relative Least General Generalization* / 5.6.2:
Semantic Refinement* / 5.6.3:
Aggregation* / 5.7:
Inverse Resolution / 5.8:
A Note on Graphs, Trees, and Sequences / 5.9:
Bibliographic Notes / 5.10:
The Upgrading Story / 6:
Motivation for a Methodology / 6.1:
Methodological Issues / 6.2:
Representing the Examples / 6.2.1:
Representing the Hypotheses / 6.2.2:
Adapting the Algorithm / 6.2.3:
Adding Features / 6.2.4:
Case Study 1: Rule Learning and Foil / 6.3:
Foil's Problem Setting / 6.3.1:
Foil's Algorithm / 6.3.2:
Case Study 2: Decision Tree Learning and Tilde / 6.4:
The Problem Setting / 6.4.1:
Inducing Logical Decision Trees / 6.4.2:
Case Study 3: Frequent Item-Set Mining and Warmr / 6.5:
Relational Association Rules and Local Patterns / 6.5.1:
Computing Frequent Queries / 6.5.2:
Language Bias / 6.6:
Syntactic Bias / 6.6.1:
Semantic Bias / 6.6.2:
Inducing Theories / 6.7:
Introduction to Theory Revision / 7.1:
Theories and Model Inference / 7.1.1:
Theory Revision / 7.1.2:
Overview of the Rest of This Chapter / 7.1.3:
Towards Abductive Logic Programming / 7.2:
Abduction / 7.2.1:
Integrity Constraints / 7.2.2:
Abductive Logic Programming / 7.2.3:
Shapiro's Theory Revision System / 7.3:
Interaction / 7.3.1:
The Model Inference System / 7.3.2:
Two Propositional Theory Revision Systems* / 7.4:
Learning a Propositional Horn Theory Efficiently / 7.4.1:
Heuristic Search in Theory Revision / 7.4.2:
Inducing Constraints / 7.5:
Problem Specification / 7.5.1:
An Algorithm for Inducing Integrity Constraints / 7.5.2:
Probabilistic Logic Learning / 7.6:
Probability Theory Review / 8.1:
Probabilistic Logics / 8.2:
Probabilities on Interpretations / 8.2.1:
Probabilities on Proofs / 8.2.2:
Probabilistic Learning / 8.3:
Parameter Estimation / 8.3.1:
Structure Learning / 8.3.2:
First-Order Probabilistic Logics / 8.4:
Probabilistic Interpretations / 8.4.1:
Probabilistic Proofs / 8.4.2:
Learning from Interpretations / 8.5:
Learning from Entailment / 8.5.2:
Learning from Proof Trees and Traces / 8.5.3:
Relational Reinforcement Learning* / 8.6:
Markov Decision Processes / 8.6.1:
Solving Markov Decision Processes / 8.6.2:
Relational Markov Decision Processes / 8.6.3:
Solving Relational Markov Decision Processes / 8.6.4:
Kernels and Distances for Structured Data / 8.7:
A Simple Kernel and Distance / 9.1:
Kernel Methods / 9.2:
The Max Margin Approach / 9.2.1:
Support Vector Machines / 9.2.2:
The Kernel Trick / 9.2.3:
Distance-Based Learning / 9.3:
Distance Functions / 9.3.1:
The k-Nearest Neighbor Algorithm / 9.3.2:
The k-Means Algorithm / 9.3.3:
Kernels for Structured Data / 9.4:
Convolution and Decomposition / 9.4.1:
Vectors and Tuples / 9.4.2:
Sets and Multi-sets / 9.4.3:
Strings / 9.4.4:
Trees and Atoms / 9.4.5:
Graph Kernels* / 9.4.6:
Distances and Metrics / 9.5:
Generalization and Metrics / 9.5.1:
Sets / 9.5.2:
Atoms and Trees / 9.5.4:
Relational Kernels and Distances / 9.5.6:
Bibliographical and Historical Notes / 9.7:
Computational Aspects of Logical and Relational Learning / 10:
Efficiency of Relational Learning / 10.1:
Coverage as [theta]-Subsumption / 10.1.1:
[theta]-Subsumption Empirically / 10.1.2:
Optimizing the Learner for [theta]-subsumption / 10.1.3:
Computational Learning Theory* / 10.2:
Notions of Learnability / 10.2.1:
Positive Results / 10.2.2:
Negative Results / 10.2.3:
Historical and Bibliographic Notes / 10.3:
Lessons Learned / 11:
A Hierarchy of Representations / 11.1:
From Upgrading to Downgrading / 11.2:
Propositionalization and Aggregation / 11.3:
Learning Tasks / 11.4:
Operators and Generality / 11.5:
Unification and Variables / 11.6:
Three Learning Settings / 11.7:
Knowledge and Background Knowledge / 11.8:
Applications / 11.9:
References
Author Index
Index
47.

eBook

EB
Yoshinori Kuno, Dorothy Monekosso, Paolo Remagnino
Publication info: Springer eBooks Computer Science, Springer London, 2009
Table of contents:
Preface
List of Contributors
Intelligent Environments: Methods, Algorithms and Applications / Dorothy N. Monekosso ; Paolo Remagnino ; Yoshinori Kuno / 1:
Intelligent Environments / 1.1:
What Is An Intelligent Environment? / 1.1.1:
How Is An Intelligent Environment Built? / 1.1.2:
Technology for Intelligent Environments / 1.2:
Research Projects / 1.3:
Private Spaces / 1.3.1:
Public Spaces / 1.3.2:
Middleware / 1.3.3:
Chapter Themes in This Collection / 1.4:
Conclusion / 1.5:
References
A Pervasive Sensor System for Evidence-Based Nursing Care Support / Toshio Hori ; Yoshifumi Nishida ; Shin'ichi Murakami / 2:
Introduction / 2.1:
Evidence-Based Nursing Care Support / 2.2:
Background of the Project / 2.2.1:
Concept of Evidence-Based Nursing Care Support / 2.2.2:
Initial Goal of the Project: Falls Prevention / 2.2.3:
Second Goal of the Project: Obtaining ADL of Inhabitants / 2.2.4:
Related Work / 2.3:
Overview and Implementations of the System / 2.4:
Overview of the Evidence-Based Nursing Care Support System / 2.4.1:
System Implementations / 2.4.2:
Experiments and Analyses / 2.5:
Tracking a Wheelchair for Falls Prevention / 2.5.1:
Activity Transition Diagram: Transition of Activities in One Day / 2.5.2:
Quantitative Evaluation of Daily Activities / 2.5.3:
Probability of "Toilet" Activity / 2.5.4:
Discussion of the Experimental Results / 2.5.5:
Prospect of the Evidence-Based Nursing Care Support System / 2.6:
Conclusions / 2.7:
Anomalous Behavior Detection: Supporting Independent Living / 3:
Related work / 3.1:
Methodology / 3.3:
Unsupervised Classification Techniques / 3.3.1:
Using HMM to Model Behavior / 3.3.2:
Experimental Setup and Data Collection / 3.4:
Noisy Data: Sources of Error / 3.4.1:
Learning activities / 3.4.2:
Experimental Results / 3.5:
Instance Class Annotation / 3.5.1:
Data Preprocessing / 3.5.2:
Models: Unsupervised Classification: Clustering and Allocation of Activities to Clusters / 3.5.3:
Behaviors: Discovering Patterns in Activities / 3.5.4:
Behaviors: Discovering Anomalous Patterns of Activity / 3.5.5:
Discussion / 3.6:
Sequential Pattern Mining for Cooking-Support Robot / Yasushi Nakauchi / 3.7:
System Design / 4.1:
Inference from Series of Human Actions / 4.2.1:
Time Sequence Data Mining / 4.2.2:
Human Behavior Inference Algorithm / 4.2.3:
Activity Support of Human / 4.2.4:
Implementation / 4.3:
IC Tag System / 4.3.1:
Inference of Human's Next Action / 4.3.2:
Cooking Support Interface / 4.3.3:
Robotic, Sensory and Problem-Solving Ingredients for the Future Home / Amedeo Cesta ; Luca Iocchi ; G. Riccardo Leone ; Daniele Nardi ; Federico Pecora ; Riccardo Rasconi / 4.4:
Components of the Multiagent System / 5.1:
The Robotic Platform Mobility Subsystem / 5.2:
The Interaction Manager / 5.3:
Environmental Sensors for People Tracking and Posture Recognition / 5.4:
Monitoring Activities of Daily Living / 5.5:
Schedule Representation and Execution Monitoring / 5.5.1:
Constraint Management in the RoboCare Context / 5.5.2:
From Constraint Violations to Verbal Interaction / 5.5.3:
Multiagent Coordination Infrastructure / 5.6:
Casting the MAC Problem to DCOP / 5.6.1:
Cooperatively Solving the MAC Problem / 5.6.2:
Ubiquitous Stereo Vision for Human Sensing / Ikushi Yoda ; Katsuhiko Sakaue / 5.7:
Ubiquitous Stereo Vision / 6.1:
Concept of Ubiquitous Stereo Vision / 6.2.1:
Server-Client Model for USV / 6.2.2:
Real Utilization Cases / 6.2.3:
Hierarchical Utilization of 3D Data and Personal Recognition / 6.3:
Acquisition of 3D Range Information / 6.3.1:
Projection to Floor Plane / 6.3.2:
Recognition of Multiple Persons and Interface / 6.4:
Pose Recognition for Multiple People / 6.4.1:
Personal Identification / 6.4.2:
Interface for Space Control / 6.4.3:
Human Monitoring in Open Space (Safety Management Application) / 6.5:
Monitoring Railroad Crossing / 6.5.1:
Station Platform Edge Safety Management / 6.5.2:
Monitoring Huge Space / 6.5.3:
Conclusion and Future Work / 6.6:
Augmenting Professional Training, an Ambient Intelligence Approach / B. Zhan ; D.N. Monekosso ; S. Rush ; P. Remagnino ; S.A. Velastin / 7:
Color Tracking of People / 7.1:
Counting People by Spatial Relationship Analysis / 7.3:
Simple People Counting Algorithm / 7.3.1:
Graphs of Blobs / 7.3.2:
Estimation of Distance Between Blobs / 7.3.3:
Temporal Pyramid for Distance Estimation / 7.3.4:
Probabilistic Estimation of Groupings / 7.3.5:
Grouping Blobs / 7.3.6:
Stereo Omnidirectional System (SOS) and Its Applications / Yutaka Satoh ; Katsuhiko Sakaue / 7.4:
System Configuration / 8.1:
Image Integration / 8.3:
Generation of Stable Images at Arbitrary Rotation / 8.4:
An Example Application: Intelligent Electric Wheelchair / 8.5:
Overview / 8.5.1:
Obstacle Detection / 8.5.2:
Gesture/Posture Detection / 8.5.4:
Video Analysis for Ambient Intelligence in Urban Environments / Andrea Prati ; Rita Cucchiara / 8.6:
Visual Data for Urban AmI / 9.1:
Video Surveillance in Urban Environment / 9.2.1:
The LAICA Project / 9.2.2:
Automatic Video Processing for People Tracking / 9.3:
People Detection and Tracking from Single Static Camera / 9.3.1:
People Detection and Tracking from Distributed Cameras / 9.3.2:
People Detection and Tracking from Moving Cameras / 9.3.3:
Privacy and Ethical Issues / 9.4:
From Monomodal to Multimodal: Affect Recognition Using Visual Modalities / Hatice Gunes ; Massimo Piccardi / 10:
Organization of the Chapter / 10.1:
From Monomodal to Multimodal: Changes and Challenges / 10.3:
Background Research / 10.3.1:
Data Collection / 10.3.2:
Data Annotation / 10.3.3:
Synchrony/Asynchrony Between Modalities / 10.3.4:
Data Integration/Fusion / 10.3.5:
Information Complementarity/Redundancy / 10.3.6:
Information Content of Modalities / 10.3.7:
Monomodal Systems Recognizing Affective Face or Body Movement / 10.4:
Multimodal Systems Recognizing Affect from Face and Body Movement / 10.5:
Project 1: Multimodal Affect Analysis for Future Cars / 10.5.1:
Project 2: Emotion Analysis in Man-Machine Interaction Systems / 10.5.2:
Project 3: Multimodal Affect Recognition in Learning Environments / 10.5.3:
Project 4: FABO-Fusing Face and Body Gestures for Bimodal Emotion Recognition / 10.5.4:
Multimodal Affect Systems: The Future / 10.6:
Importance of Vision in Human-Robot Communication: Understanding Speech Using Robot Vision and Demonstrating Proper Actions to Human Vision / Michie Kawashima ; Keiichi Yamazaki ; Akiko Yamazaki / 11:
Understanding Simplified Utterances Using Robot Vision / 11.1:
Inexplicit Utterances / 11.2.1:
Information Obtained by Vision / 11.2.2:
Language Processing / 11.2.3:
Vision Processing / 11.2.4:
Synchronization Between Speech and Vision / 11.2.5:
Experiments / 11.2.6:
Communicative Head Gestures for Museum Guide Robots / 11.3:
Observations from Guide-Visitor Interaction / 11.3.1:
Prototype Museum Guide Robot / 11.3.2:
Experiments at a Museum / 11.3.3:
Index / 11.4:
48.

eBook

EB
Yoshinori Kuno, Dorothy Monekosso, Paolo Remagnino
Publication info: SpringerLink Books - AutoHoldings, Springer London, 2009
Table of contents:
Preface
List of Contributors
Intelligent Environments: Methods, Algorithms and Applications / Dorothy N. Monekosso ; Paolo Remagnino ; Yoshinori Kuno / 1:
Intelligent Environments / 1.1:
What Is An Intelligent Environment? / 1.1.1:
How Is An Intelligent Environment Built? / 1.1.2:
Technology for Intelligent Environments / 1.2:
Research Projects / 1.3:
Private Spaces / 1.3.1:
Public Spaces / 1.3.2:
Middleware / 1.3.3:
Chapter Themes in This Collection / 1.4:
Conclusion / 1.5:
References
A Pervasive Sensor System for Evidence-Based Nursing Care Support / Toshio Hori ; Yoshifumi Nishida ; Shin'ichi Murakami / 2:
Introduction / 2.1:
Evidence-Based Nursing Care Support / 2.2:
Background of the Project / 2.2.1:
Concept of Evidence-Based Nursing Care Support / 2.2.2:
Initial Goal of the Project: Falls Prevention / 2.2.3:
Second Goal of the Project: Obtaining ADL of Inhabitants / 2.2.4:
Related Work / 2.3:
Overview and Implementations of the System / 2.4:
Overview of the Evidence-Based Nursing Care Support System / 2.4.1:
System Implementations / 2.4.2:
Experiments and Analyses / 2.5:
Tracking a Wheelchair for Falls Prevention / 2.5.1:
Activity Transition Diagram: Transition of Activities in One Day / 2.5.2:
Quantitative Evaluation of Daily Activities / 2.5.3:
Probability of "Toilet" Activity / 2.5.4:
Discussion of the Experimental Results / 2.5.5:
Prospect of the Evidence-Based Nursing Care Support System / 2.6:
Conclusions / 2.7:
Anomalous Behavior Detection: Supporting Independent Living / 3:
Related Work / 3.1:
Methodology / 3.3:
Unsupervised Classification Techniques / 3.3.1:
Using HMM to Model Behavior / 3.3.2:
Experimental Setup and Data Collection / 3.4:
Noisy Data: Sources of Error / 3.4.1:
Learning Activities / 3.4.2:
Experimental Results / 3.5:
Instance Class Annotation / 3.5.1:
Data Preprocessing / 3.5.2:
Models: Unsupervised Classification: Clustering and Allocation of Activities to Clusters / 3.5.3:
Behaviors: Discovering Patterns in Activities / 3.5.4:
Behaviors: Discovering Anomalous Patterns of Activity / 3.5.5:
Discussion / 3.6:
Sequential Pattern Mining for Cooking-Support Robot / Yasushi Nakauchi / 3.7:
System Design / 4.1:
Inference from Series of Human Actions / 4.2.1:
Time Sequence Data Mining / 4.2.2:
Human Behavior Inference Algorithm / 4.2.3:
Activity Support of Human / 4.2.4:
Implementation / 4.3:
IC Tag System / 4.3.1:
Inference of Human's Next Action / 4.3.2:
Cooking Support Interface / 4.3.3:
Robotic, Sensory and Problem-Solving Ingredients for the Future Home / Amedeo Cesta ; Luca Iocchi ; G. Riccardo Leone ; Daniele Nardi ; Federico Pecora ; Riccardo Rasconi / 4.4:
Components of the Multiagent System / 5.1:
The Robotic Platform Mobility Subsystem / 5.2:
The Interaction Manager / 5.3:
Environmental Sensors for People Tracking and Posture Recognition / 5.4:
Monitoring Activities of Daily Living / 5.5:
Schedule Representation and Execution Monitoring / 5.5.1:
Constraint Management in the RoboCare Context / 5.5.2:
From Constraint Violations to Verbal Interaction / 5.5.3:
Multiagent Coordination Infrastructure / 5.6:
Casting the MAC Problem to DCOP / 5.6.1:
Cooperatively Solving the MAC Problem / 5.6.2:
Ubiquitous Stereo Vision for Human Sensing / Ikushi Yoda ; Katsuhiko Sakaue / 5.7:
Ubiquitous Stereo Vision / 6.1:
Concept of Ubiquitous Stereo Vision / 6.2.1:
Server-Client Model for USV / 6.2.2:
Real Utilization Cases / 6.2.3:
Hierarchical Utilization of 3D Data and Personal Recognition / 6.3:
Acquisition of 3D Range Information / 6.3.1:
Projection to Floor Plane / 6.3.2:
Recognition of Multiple Persons and Interface / 6.4:
Pose Recognition for Multiple People / 6.4.1:
Personal Identification / 6.4.2:
Interface for Space Control / 6.4.3:
Human Monitoring in Open Space (Safety Management Application) / 6.5:
Monitoring Railroad Crossing / 6.5.1:
Station Platform Edge Safety Management / 6.5.2:
Monitoring Huge Space / 6.5.3:
Conclusion and Future Work / 6.6:
Augmenting Professional Training, an Ambient Intelligence Approach / B. Zhan ; D.N. Monekosso ; S. Rush ; P. Remagnino ; S.A. Velastin / 7:
Color Tracking of People / 7.1:
Counting People by Spatial Relationship Analysis / 7.3:
Simple People Counting Algorithm / 7.3.1:
Graphs of Blobs / 7.3.2:
Estimation of Distance Between Blobs / 7.3.3:
Temporal Pyramid for Distance Estimation / 7.3.4:
Probabilistic Estimation of Groupings / 7.3.5:
Grouping Blobs / 7.3.6:
Stereo Omnidirectional System (SOS) and Its Applications / Yutaka Satoh ; Katsuhiko Sakaue / 7.4:
System Configuration / 8.1:
Image Integration / 8.3:
Generation of Stable Images at Arbitrary Rotation / 8.4:
An Example Application: Intelligent Electric Wheelchair / 8.5:
Overview / 8.5.1:
Obstacle Detection / 8.5.2:
Gesture/Posture Detection / 8.5.4:
Video Analysis for Ambient Intelligence in Urban Environments / Andrea Prati ; Rita Cucchiara / 8.6:
Visual Data for Urban AmI / 9.1:
Video Surveillance in Urban Environment / 9.2.1:
The LAICA Project / 9.2.2:
Automatic Video Processing for People Tracking / 9.3:
People Detection and Tracking from Single Static Camera / 9.3.1:
People Detection and Tracking from Distributed Cameras / 9.3.2:
People Detection and Tracking from Moving Cameras / 9.3.3:
Privacy and Ethical Issues / 9.4:
From Monomodal to Multimodal: Affect Recognition Using Visual Modalities / Hatice Gunes ; Massimo Piccardi / 10:
Organization of the Chapter / 10.1:
From Monomodal to Multimodal: Changes and Challenges / 10.3:
Background Research / 10.3.1:
Data Collection / 10.3.2:
Data Annotation / 10.3.3:
Synchrony/Asynchrony Between Modalities / 10.3.4:
Data Integration/Fusion / 10.3.5:
Information Complementarity/Redundancy / 10.3.6:
Information Content of Modalities / 10.3.7:
Monomodal Systems Recognizing Affective Face or Body Movement / 10.4:
Multimodal Systems Recognizing Affect from Face and Body Movement / 10.5:
Project 1: Multimodal Affect Analysis for Future Cars / 10.5.1:
Project 2: Emotion Analysis in Man-Machine Interaction Systems / 10.5.2:
Project 3: Multimodal Affect Recognition in Learning Environments / 10.5.3:
Project 4: FABO-Fusing Face and Body Gestures for Bimodal Emotion Recognition / 10.5.4:
Multimodal Affect Systems: The Future / 10.6:
Importance of Vision in Human-Robot Communication: Understanding Speech Using Robot Vision and Demonstrating Proper Actions to Human Vision / Michie Kawashima ; Keiichi Yamazaki ; Akiko Yamazaki / 11:
Understanding Simplified Utterances Using Robot Vision / 11.1:
Inexplicit Utterances / 11.2.1:
Information Obtained by Vision / 11.2.2:
Language Processing / 11.2.3:
Vision Processing / 11.2.4:
Synchronization Between Speech and Vision / 11.2.5:
Experiments / 11.2.6:
Communicative Head Gestures for Museum Guide Robots / 11.3:
Observations from Guide-Visitor Interaction / 11.3.1:
Prototype Museum Guide Robot / 11.3.2:
Experiments at a Museum / 11.3.3:
Index / 11.4:
49.

eBook

EB
Luc De Raedt, Jörg Siekmann, A. Bundy
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
Structure Activity Relationship Prediction / 1.2.1:
A Web Mining Example / 1.2.2:
A Language Learning Example / 1.2.3:
How Does Relational and Logical Learning Work? / 1.3:
A Brief History / 1.4:
An Introduction to Logic / 2:
A Relational Database Example / 2.1:
The Syntax of Clausal Logic / 2.2:
The Semantics of Clausal Logic - Model Theory / 2.3:
Inference with Clausal Logic - Proof Theory / 2.4:
Prolog and SLD-resolution / 2.5:
Historical and Bibliographic Remarks / 2.6:
An Introduction to Learning and Search / 3:
Representing Hypotheses and Instances / 3.1:
Boolean Data / 3.2:
Machine Learning / 3.3:
Data Mining / 3.4:
A Generate-and-Test Algorithm / 3.5:
Structuring the Search Space / 3.6:
Monotonicity / 3.7:
Borders / 3.8:
Refinement Operators / 3.9:
A Generic Algorithm for Mining and Learning / 3.10:
A Complete General-to-Specific Algorithm / 3.11:
A Heuristic General-to-Specific Algorithm / 3.12:
A Branch-and-Bound Algorithm / 3.13:
A Specific-to-General Algorithm / 3.14:
Working with Borders* / 3.15:
Computing a Single Border / 3.15.1:
Computing Two Borders / 3.15.2:
Computing Two Borders Incrementally / 3.15.3:
Operations on Borders / 3.15.4:
Conclusions / 3.16:
Bibliographical Notes / 3.17:
Representations for Mining and Learning / 4:
Representing Data and Hypotheses / 4.1:
Attribute-Value Learning / 4.2:
Multiple-Instance Learning: Dealing With Sets / 4.3:
Relational Learning / 4.4:
Logic Programs / 4.5:
Sequences, Lists, and Grammars / 4.6:
Trees and Terms / 4.7:
Graphs / 4.8:
Background Knowledge / 4.9:
Designing It Yourself / 4.10:
A Hierarchy of Representations* / 4.11:
From AV to BL / 4.11.1:
From MI to AV / 4.11.2:
From RL to MI / 4.11.3:
From LP to RL / 4.11.4:
Propositionalization / 4.12:
A Table-Based Approach / 4.12.1:
A Query-Based Approach / 4.12.2:
Aggregation / 4.13:
Historical and Bibliographical Remarks / 4.14:
Generality and Logical Entailment / 5:
Generality and Logical Entailment Coincide / 5.1:
Propositional Subsumption / 5.2:
Subsumption in Logical Atoms / 5.3:
Specialization Operators / 5.3.1:
Generalization Operators* / 5.3.2:
Computing the lgg and the glb / 5.3.3:
θ-Subsumption / 5.4:
Soundness and Completeness / 5.4.1:
Deciding θ-Subsumption / 5.4.2:
Equivalence Classes / 5.4.3:
Variants of θ-Subsumption* / 5.5:
Object Identity* / 5.5.1:
Inverse Implication* / 5.5.2:
Using Background Knowledge / 5.6:
Saturation and Bottom Clauses / 5.6.1:
Relative Least General Generalization* / 5.6.2:
Semantic Refinement* / 5.6.3:
Aggregation* / 5.7:
Inverse Resolution / 5.8:
A Note on Graphs, Trees, and Sequences / 5.9:
Bibliographic Notes / 5.10:
The Upgrading Story / 6:
Motivation for a Methodology / 6.1:
Methodological Issues / 6.2:
Representing the Examples / 6.2.1:
Representing the Hypotheses / 6.2.2:
Adapting the Algorithm / 6.2.3:
Adding Features / 6.2.4:
Case Study 1: Rule Learning and Foil / 6.3:
Foil's Problem Setting / 6.3.1:
Foil's Algorithm / 6.3.2:
Case Study 2: Decision Tree Learning and Tilde / 6.4:
The Problem Setting / 6.4.1:
Inducing Logical Decision Trees / 6.4.2:
Case Study 3: Frequent Item-Set Mining and Warmr / 6.5:
Relational Association Rules and Local Patterns / 6.5.1:
Computing Frequent Queries / 6.5.2:
Language Bias / 6.6:
Syntactic Bias / 6.6.1:
Semantic Bias / 6.6.2:
Inducing Theories / 6.7:
Introduction to Theory Revision / 7.1:
Theories and Model Inference / 7.1.1:
Theory Revision / 7.1.2:
Overview of the Rest of This Chapter / 7.1.3:
Towards Abductive Logic Programming / 7.2:
Abduction / 7.2.1:
Integrity Constraints / 7.2.2:
Abductive Logic Programming / 7.2.3:
Shapiro's Theory Revision System / 7.3:
Interaction / 7.3.1:
The Model Inference System / 7.3.2:
Two Propositional Theory Revision Systems* / 7.4:
Learning a Propositional Horn Theory Efficiently / 7.4.1:
Heuristic Search in Theory Revision / 7.4.2:
Inducing Constraints / 7.5:
Problem Specification / 7.5.1:
An Algorithm for Inducing Integrity Constraints / 7.5.2:
Probabilistic Logic Learning / 7.6:
Probability Theory Review / 8.1:
Probabilistic Logics / 8.2:
Probabilities on Interpretations / 8.2.1:
Probabilities on Proofs / 8.2.2:
Probabilistic Learning / 8.3:
Parameter Estimation / 8.3.1:
Structure Learning / 8.3.2:
First-Order Probabilistic Logics / 8.4:
Probabilistic Interpretations / 8.4.1:
Probabilistic Proofs / 8.4.2:
Learning from Interpretations / 8.5:
Learning from Entailment / 8.5.2:
Learning from Proof Trees and Traces / 8.5.3:
Relational Reinforcement Learning* / 8.6:
Markov Decision Processes / 8.6.1:
Solving Markov Decision Processes / 8.6.2:
Relational Markov Decision Processes / 8.6.3:
Solving Relational Markov Decision Processes / 8.6.4:
Kernels and Distances for Structured Data / 8.7:
A Simple Kernel and Distance / 9.1:
Kernel Methods / 9.2:
The Max Margin Approach / 9.2.1:
Support Vector Machines / 9.2.2:
The Kernel Trick / 9.2.3:
Distance-Based Learning / 9.3:
Distance Functions / 9.3.1:
The k-Nearest Neighbor Algorithm / 9.3.2:
The k-Means Algorithm / 9.3.3:
Kernels for Structured Data / 9.4:
Convolution and Decomposition / 9.4.1:
Vectors and Tuples / 9.4.2:
Sets and Multi-sets / 9.4.3:
Strings / 9.4.4:
Trees and Atoms / 9.4.5:
Graph Kernels* / 9.4.6:
Distances and Metrics / 9.5:
Generalization and Metrics / 9.5.1:
Sets / 9.5.2:
Atoms and Trees / 9.5.4:
Relational Kernels and Distances / 9.5.6:
Bibliographical and Historical Notes / 9.7:
Computational Aspects of Logical and Relational Learning / 10:
Efficiency of Relational Learning / 10.1:
Coverage as θ-Subsumption / 10.1.1:
θ-Subsumption Empirically / 10.1.2:
Optimizing the Learner for θ-Subsumption / 10.1.3:
Computational Learning Theory* / 10.2:
Notions of Learnability / 10.2.1:
Positive Results / 10.2.2:
Negative Results / 10.2.3:
Historical and Bibliographic Notes / 10.3:
Lessons Learned / 11:
A Hierarchy of Representations / 11.1:
From Upgrading to Downgrading / 11.2:
Propositionalization and Aggregation / 11.3:
Learning Tasks / 11.4:
Operators and Generality / 11.5:
Unification and Variables / 11.6:
Three Learning Settings / 11.7:
Knowledge and Background Knowledge / 11.8:
Applications / 11.9:
References
Author Index
Index
50.

eBook

EB
Toshinori Munakata, David Gries, Fred B. Schneider
Publication info: Springer eBooks Computer Science, Springer London, 2008
Table of contents:
Preface
Introduction / 1:
An Overview of the Field of Artificial Intelligence / 1.1:
An Overview of the Areas Covered in this Book / 1.2:
Neural Networks: Fundamentals and the Backpropagation Model / 2:
What is a Neural Network? / 2.1:
A Neuron / 2.2:
Basic Idea of the Backpropagation Model / 2.3:
Details of the Backpropagation Model / 2.4:
A Cookbook Recipe to Implement the Backpropagation Model / 2.5:
Additional Technical Remarks on the Backpropagation Model / 2.6:
Simple Perceptrons / 2.7:
Applications of the Backpropagation Model / 2.8:
General Remarks on Neural Networks / 2.9:
Neural Networks: Other Models / 3:
Prelude / 3.1:
Associative Memory / 3.2:
Hopfield Networks / 3.3:
The Hopfield-Tank Model for Optimization Problems: The Basics / 3.4:
One-Dimensional Layout / 3.4.1:
Two-Dimensional Layout / 3.4.2:
The Hopfield-Tank Model for Optimization Problems: Applications / 3.5:
The N-Queen Problem / 3.5.1:
A General Guideline to Apply the Hopfield-Tank Model to Optimization Problems / 3.5.2:
Traveling Salesman Problem (TSP) / 3.5.3:
The Kohonen Model / 3.6:
Simulated Annealing / 3.7:
Boltzmann Machines / 3.8:
An Overview / 3.8.1:
Unsupervised Learning by the Boltzmann Machine: The Basic Architecture / 3.8.2:
Unsupervised Learning by the Boltzmann Machine: Algorithms / 3.8.3:
Appendix. Derivation of Delta-Weights / 3.8.4:
Genetic Algorithms and Evolutionary Computing / 4:
What are Genetic Algorithms and Evolutionary Computing? / 4.1:
Fundamentals of Genetic Algorithms / 4.2:
A Simple Illustration of Genetic Algorithms / 4.3:
A Machine Learning Example: Input-to-Output Mapping / 4.4:
A Hard Optimization Example: the Traveling Salesman Problem (TSP) / 4.5:
Schemata / 4.6:
Changes of Schemata Over Generations / 4.6.1:
Example of Schema Processing / 4.6.2:
Genetic Programming / 4.7:
Additional Remarks / 4.8:
Fuzzy Systems / 5:
Fundamentals of Fuzzy Sets / 5.1:
What is a Fuzzy Set? / 5.2.1:
Basic Fuzzy Set Relations / 5.2.2:
Basic Fuzzy Set Operations and Their Properties / 5.2.3:
Operations Unique to Fuzzy Sets / 5.2.4:
Fuzzy Relations / 5.3:
Ordinary (Nonfuzzy) Relations / 5.3.1:
Fuzzy Relations Defined on Ordinary Sets / 5.3.2:
Fuzzy Relations Derived from Fuzzy Sets / 5.3.3:
Fuzzy Logic / 5.4:
Ordinary Set Theory and Ordinary Logic / 5.4.1:
Fuzzy Logic Fundamentals / 5.4.2:
Fuzzy Control / 5.5:
Fuzzy Control Basics / 5.5.1:
Case Study: Controlling Temperature with a Variable Heat Source / 5.5.2:
Extended Fuzzy if-then Rules Tables / 5.5.3:
A Note on Fuzzy Control Expert Systems / 5.5.4:
Hybrid Systems / 5.6:
Fundamental Issues / 5.7:
Rough Sets / 5.8:
Review of Ordinary Sets and Relations / 6.1:
Information Tables and Attributes / 6.3:
Approximation Spaces / 6.4:
Knowledge Representation Systems / 6.5:
More on the Basics of Rough Sets / 6.6:
Case Study and Comparisons with Other Techniques / 6.7:
Rough Sets Applied to the Case Study / 6.8.1:
ID3 Approach and the Case Study / 6.8.2:
Comparisons with Other Techniques / 6.8.3:
Chaos / 7:
What is Chaos? / 7.1:
Representing Dynamical Systems / 7.2:
Discrete Dynamical Systems / 7.2.1:
Continuous Dynamical Systems / 7.2.2:
State and Phase Spaces / 7.3:
Trajectory, Orbit and Flow / 7.3.1:
Cobwebs / 7.3.2:
Equilibrium Solutions and Stability / 7.4:
Attractors / 7.5:
Fixed-point attractors / 7.5.1:
Periodic attractors / 7.5.2:
Quasi-periodic attractors / 7.5.3:
Chaotic attractors / 7.5.4:
Bifurcations / 7.6:
Fractals / 7.7:
Applications of Chaos / 7.8:
Index