1.

eBook

EB
Gian Piero Zarri
Publication information: Springer eBooks Computer Science, Springer London, 2009
Table of contents:
Basic Principles / 1:
Narrative Information in an NKRL Context / 1.1:
Narratology and NKRL / 1.1.1:
The Notion of "Event" in an NKRL Context / 1.1.2:
Knowledge Representation and NKRL / 1.2:
"Standard" Ontologies and the "n-ary" Problem / 1.2.1:
A Plain "n-ary" Solution and Some Related Problems / 1.2.2:
In the Guise of Winding Up / 1.3:
The Knowledge Representation Strategy / 2:
Architecture of NKRL: the Four "Components" / 2.1:
The Data Structures of the Four Components / 2.2:
Definitional/Enumerative Data Structures / 2.2.1:
Descriptive/Factual Data Structures / 2.2.2:
Second-order Structures / 2.3:
The Completive Construction / 2.3.1:
Binding Occurrences / 2.3.2:
The Semantic and Ontological Contents / 2.4:
The Organization of the HClass Hierarchy / 3.1:
General Notions about Ontologies / 3.1.1:
HClass Architecture / 3.1.2:
The Organization of the HTemp Hierarchy / 3.2:
Recent Examples of "Structured" Ontological Systems / 3.2.1:
Main Features of Some Specific HTemp Structures / 3.2.2:
The Query and Inference Procedures / 3.3:
"Search Patterns" and Low-level Inferences / 4.1:
The Algorithmic Structure of Fum / 4.1.1:
Temporal Information and Indexing / 4.1.2:
High-level Inference Procedures / 4.2:
General Remarks about Some Reasoning Paradigms / 4.2.1:
Hypothesis Rules / 4.2.2:
Transformation Rules / 4.2.3:
Integrating the Two Main Inferencing Modes of NKRL / 4.2.4:
Inference Rules and Internet Filtering / 4.2.5:
Conclusion / 4.3:
Technological Enhancements / 5.1:
Theoretical Enhancements / 5.2:
Appendix A
Appendix B
References
Index
2.

eBook

EB
Gian Piero Zarri
Publication information: SpringerLink Books - AutoHoldings, Springer London, 2009
Table of contents: (identical to the preceding record)
3.

eBook

EB
Kathryn E. Merrick, Mary Lou Maher
Publication information: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2009
Table of contents:
Non-Player Characters and Reinforcement Learning / Part I:
Non-Player Characters in Multiuser Games / 1:
Types of Multiuser Games / 1.1:
Massively Multiplayer Online Role-Playing Games / 1.1.1:
Multiuser Simulation Games / 1.1.2:
Open-Ended Virtual Worlds / 1.1.3:
Character Roles in Multiuser Games / 1.2:
Existing Artificial Intelligence Techniques for Non-Player Characters in Multiuser Games / 1.3:
Reflexive Agents / 1.3.1:
Learning Agents / 1.3.2:
Evolutionary Agents / 1.3.3:
Smart Terrain / 1.3.4:
Summary / 1.4:
References / 1.5:
Motivation in Natural and Artificial Agents / 2:
Defining Motivation / 2.1:
Biological Theories of Motivation / 2.2:
Drive Theory / 2.2.1:
Motivational State Theory / 2.2.2:
Arousal / 2.2.3:
Cognitive Theories of Motivation / 2.3:
Curiosity / 2.3.1:
Operant Theory / 2.3.2:
Incentive / 2.3.3:
Achievement Motivation / 2.3.4:
Attribution Theory / 2.3.5:
Intrinsic Motivation / 2.3.6:
Social Theories of Motivation / 2.4:
Conformity / 2.4.1:
Cultural Effect / 2.4.2:
Evolution / 2.4.3:
Combined Motivation Theories / 2.5:
Maslow's Hierarchy of Needs / 2.5.1:
Existence Relatedness Growth Theory / 2.5.2:
Towards Motivated Reinforcement Learning / 2.6:
Defining Reinforcement Learning / 3.1:
Dynamic Programming / 3.1.1:
Monte Carlo Methods / 3.1.2:
Temporal Difference Learning / 3.1.3:
Reinforcement Learning in Complex Environments / 3.2:
Partially Observable Environments / 3.2.1:
Function Approximation / 3.2.2:
Hierarchical Reinforcement Learning / 3.2.3:
Motivated Reinforcement Learning / 3.3:
Using a Motivation Signal in Addition to a Reward Signal / 3.3.1:
Using a Motivation Signal Instead of a Reward Signal / 3.3.2:
Comparing the Behaviour of Learning Agents / 3.4:
Player Satisfaction / 4.1:
Psychological Flow / 4.1.1:
Structural Flow / 4.1.2:
Formalising Non-Player Character Behaviour / 4.2:
Models of Optimality for Reinforcement Learning / 4.2.1:
Characteristics of Motivated Reinforcement Learning / 4.2.2:
Comparing Motivated Reinforcement Learning Agents / 4.3:
Statistical Model for Identifying Learned Tasks / 4.3.1:
Behavioural Variety / 4.3.2:
Behavioural Complexity / 4.3.3:
Developing Curious Characters Using Motivated Reinforcement Learning / 4.4:
Curiosity, Motivation and Attention Focus / 5:
Agents in Complex, Dynamic Environments / 5.1:
States / 5.1.1:
Actions / 5.1.2:
Reward and Motivation / 5.1.3:
Motivation and Attention Focus / 5.2:
Observations / 5.2.1:
Events / 5.2.2:
Tasks and Task Selection / 5.2.3:
Experience-Based Reward as Cognitive Motivation / 5.2.4:
Arbitration Functions / 5.2.5:
A General Experience-Based Motivation Function / 5.2.6:
Curiosity as Motivation for Support Characters / 5.3:
Curiosity as Interesting Events / 5.3.1:
Curiosity as Interesting and Competence / 5.3.2:
Motivated Reinforcement Learning Agents / 5.4:
A General Motivated Reinforcement Learning Model / 6.1:
Algorithms for Motivated Reinforcement Learning / 6.2:
Motivated Flat Reinforcement Learning / 6.2.1:
Motivated Multioption Reinforcement Learning / 6.2.2:
Motivated Hierarchical Reinforcement Learning / 6.2.3:
Curious Characters in Games / 6.3:
Curious Characters for Multiuser Games / 7:
Motivated Reinforcement Learning for Support Characters in Massively Multiplayer Online Role-Playing Games / 7.1:
Character Behaviour in Small-Scale, Isolated Game Locations / 7.2:
Case Studies of Individual Characters / 7.2.1:
General Trends in Character Behaviour / 7.2.2:
Curious Characters for Games in Complex, Dynamic Environments / 7.3:
Designing Characters That Can Multitask / 8.1:
Designing Characters for Complex Tasks / 8.1.1:
Games That Change While Characters Are Learning / 8.2.1:
Curious Characters for Games in Second Life / 8.3.1:
Motivated Reinforcement Learning in Open-Ended Simulation Games / 9.1:
Game Design / 9.1.1:
Character Design / 9.1.2:
Evaluating Character Behaviour in Response to Game Play Sequences / 9.2:
Discussion / 9.2.1:
Future / 9.3:
Towards the Future / 10:
Using Motivated Reinforcement Learning in Non-Player Characters / 10.1:
Other Gaming Applications for Motivated Reinforcement Learning / 10.2:
Dynamic Difficulty Adjustment / 10.2.1:
Procedural Content Generation / 10.2.2:
Beyond Curiosity / 10.3:
Biological Models of Motivation / 10.3.1:
Cognitive Models of Motivation / 10.3.2:
Social Models of Motivation / 10.3.3:
Combined Models of Motivation / 10.3.4:
New Models of Motivated Learning / 10.4:
Motivated Supervised Learning / 10.4.1:
Motivated Unsupervised Learning / 10.4.2:
Evaluating the Behaviour of Motivated Learning Agents / 10.5:
Concluding Remarks / 10.6:
Index / 10.7:
4.

eBook

EB
Kathryn E. Merrick, Mary Lou Maher
Publication information: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2009
Table of contents: (identical to the preceding record)
5.

eBook

EB
Milan Studený, Michael Jordan, Frank P. Kelly, Jon Kleinberg, Bernhard Schölkopf, Ian Witten
Publication information: Springer eBooks Computer Science, Springer London, 2005
Table of contents:
Introduction / 1:
Motivational thoughts / 1.1:
Goals of the monograph / 1.2:
Structure of the book / 1.3:
Basic Concepts / 2:
Conditional independence / 2.1:
Semi-graphoid properties / 2.2:
Formal independence models / 2.2.1:
Semi-graphoids / 2.2.2:
Elementary independence statements / 2.2.3:
Problem of axiomatic characterization / 2.2.4:
Classes of probability measures / 2.3:
Marginally continuous measures / 2.3.1:
Factorizable measures / 2.3.2:
Multiinformation and conditional product / 2.3.3:
Properties of multiinformation function / 2.3.4:
Positive measures / 2.3.5:
Gaussian measures / 2.3.6:
Basic construction / 2.3.7:
Imsets / 2.4:
Graphical Methods / 3:
Undirected graphs / 3.1:
Acyclic directed graphs / 3.2:
Classic chain graphs / 3.3:
Within classic graphical models / 3.4:
Decomposable models / 3.4.1:
Recursive causal graphs / 3.4.2:
Lattice conditional independence models / 3.4.3:
Bubble graphs / 3.4.4:
Advanced graphical models / 3.5:
General directed graphs / 3.5.1:
Reciprocal graphs / 3.5.2:
Joint-response chain graphs / 3.5.3:
Covariance graphs / 3.5.4:
Alternative chain graphs / 3.5.5:
Annotated graphs / 3.5.6:
Hidden variables / 3.5.7:
Ancestral graphs / 3.5.8:
MC graphs / 3.5.9:
Incompleteness of graphical approaches / 3.6:
Structural Imsets: Fundamentals / 4:
Basic class of distributions / 4.1:
Discrete measures / 4.1.1:
Regular Gaussian measures / 4.1.2:
Conditional Gaussian measures / 4.1.3:
Classes of structural imsets / 4.2:
Elementary imsets / 4.2.1:
Semi-elementary and combinatorial imsets / 4.2.2:
Structural imsets / 4.2.3:
Product formula induced by a structural imset / 4.3:
Examples of reference systems of measures / 4.3.1:
Topological assumptions / 4.3.2:
Markov condition / 4.4:
Semi-graphoid induced by a structural imset / 4.4.1:
Markovian measures / 4.4.2:
Equivalence result / 4.5:
Description of Probabilistic Models / 5:
Supermodular set functions / 5.1:
Semi-graphoid produced by a supermodular function / 5.1.1:
Quantitative equivalence of supermodular functions / 5.1.2:
Skeletal supermodular functions / 5.2:
Skeleton / 5.2.1:
Significance of skeletal imsets / 5.2.2:
Description of models by structural imsets / 5.3:
Galois connection / 5.4:
Formal concept analysis / 5.4.1:
Lattice of structural models / 5.4.2:
Equivalence and Implication / 6:
Two concepts of equivalence / 6.1:
Independence and Markov equivalence / 6.1.1:
Independence implication / 6.2:
Direct characterization of independence implication / 6.2.1:
Skeletal characterization of independence implication / 6.2.2:
Testing independence implication / 6.3:
Testing structural imsets / 6.3.1:
Grade / 6.3.2:
Invariants of independence equivalence / 6.4:
Adaptation to a distribution framework / 6.5:
The Problem of Representative Choice / 7:
Baricentral imsets / 7.1:
Standard imsets / 7.2:
Translation of DAG models / 7.2.1:
Translation of decomposable models / 7.2.2:
Imsets of the smallest degree / 7.3:
Decomposition implication / 7.3.1:
Minimal generators / 7.3.2:
Span / 7.4:
Determining and unimarginal classes / 7.4.1:
Imsets with the least lower class / 7.4.2:
Exclusivity of standard imsets / 7.4.3:
Dual description / 7.5:
Coportraits / 7.5.1:
Dual baricentral imsets and global view / 7.5.2:
Learning / 8:
Two approaches to learning / 8.1:
Quality criteria / 8.2:
Criteria for learning DAG models / 8.2.1:
Score equivalent criteria / 8.2.2:
Decomposable criteria / 8.2.3:
Regular criteria / 8.2.4:
Inclusion neighborhood / 8.3:
Standard imsets and learning / 8.4:
Inclusion neighborhood characterization / 8.4.1:
Regular criteria and standard imsets / 8.4.2:
Open Problems / 9:
Theoretical problems / 9.1:
Miscellaneous topics / 9.1.1:
Classification of skeletal imsets / 9.1.2:
Operations with structural models / 9.2:
Reductive operations / 9.2.1:
Expansive operations / 9.2.2:
Cumulative operations / 9.2.3:
Decomposition of structural models / 9.2.4:
Implementation tasks / 9.3:
Interpretation and learning tasks / 9.4:
Meaningful description of structural models / 9.4.1:
Tasks concerning distribution frameworks / 9.4.2:
Learning tasks / 9.4.3:
Appendix / A:
Classes of sets / A.1:
Posets and lattices / A.2:
Graphs / A.3:
Topological concepts / A.4:
Finite-dimensional subspaces and convex cones / A.5:
Linear subspaces / A.5.1:
Convex sets and cones / A.5.2:
Measure-theoretical concepts / A.6:
Measure and integral / A.6.1:
Basic measure-theoretical results / A.6.2:
Information-theoretical concepts / A.6.3:
Conditional probability / A.6.4:
Conditional independence in terms of σ-algebras / A.7:
Concepts from multivariate analysis / A.8:
Matrices / A.8.1:
Statistical characteristics of probability measures / A.8.2:
Multivariate Gaussian distributions / A.8.3:
Elementary statistical concepts / A.9:
Empirical concepts / A.9.1:
Statistical conception / A.9.2:
Likelihood function / A.9.3:
Testing statistical hypotheses / A.9.4:
Distribution framework / A.9.5:
List of Notation
List of Lemmas, Propositions, etc.
References
Index
6.

eBook

EB
Milan Studený, Michael Jordan, Frank P. Kelly, Jon Kleinberg, Bernhard Schölkopf, Ian Witten
Publication information: SpringerLink Books - AutoHoldings, Springer London, 2005
Table of contents: (identical to the preceding record)
7.

eBook

EB
Carsten Ullrich, Jörg Siekmann
Publication information: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Preliminaries / Part I:
Introduction / 1:
Motivation / 1.1:
Contributions / 1.2:
Service-Oriented Course Generation / 1.2.1:
Modeling of Pedagogical Knowledge / 1.2.2:
Adaptivity in Generated Courses / 1.2.3:
Evaluation / 1.2.4:
Overview / 1.3:
Relevant Technologies / 2:
Basic Terminology / 2.1:
Semantic Web Technologies / 2.2:
Extensible Markup Language / 2.2.1:
Resource Description Framework / 2.2.2:
OWL Web Ontology Language / 2.2.3:
E-learning Standards / 2.3:
Learning Object Metadata / 2.3.1:
IMS Content Packaging / 2.3.2:
IMS Simple Sequencing / 2.3.3:
IMS Learning Design / 2.3.4:
Mathematics in the Web / 2.4:
OMDoc (Open Mathematical Documents) / 2.4.1:
The Learning Environment ActiveMath / 2.4.2:
Course Generation / 2.5:
Hierarchical Task Network Planning / 2.6:
Introduction to AI-Planning / 2.6.1:
Introduction to Hierarchical Task Network Planning / 2.6.2:
SHOP2 and JSHOP2 / 2.6.3:
JSHOP2 Formalism / 2.6.4:
Descriptive and Prescriptive Learning Theories / 3:
Behaviorism / 3.1:
Cognitivism / 3.2:
Constructivism / 3.3:
Instructional Design / 3.4:
Competency-Based Learning / 3.5:
Mathematical Competencies / 3.5.1:
Competency Levels / 3.5.2:
PAIGOS / Part II:
General Principles / 4:
An Ontology of Instructional Objects / 4.1:
Description of the Ontology / 4.1.1:
Why an Ontology? / 4.1.3:
Applications of the Ontology / 4.1.4:
A Mediator for Accessing Learning Object Repositories / 4.2:
Related Work / 4.2.1:
Overview of the Mediator Architecture / 4.2.2:
Querying the Mediator / 4.2.3:
Ontology Mapping and Query Rewriting / 4.2.4:
Repository Interface and Caching / 4.2.5:
Limitations of the Mediator as an Educational Service / 4.2.6:
Pedagogical Tasks, Methods and Strategies / 4.3:
Representing Course Generation Knowledge in an HTN Planner / 4.4:
Mapping Pedagogical Tasks onto HTN Tasks / 4.4.1:
Course Generation Planning Problems / 4.4.3:
Critical and Optional Tasks / 4.4.4:
Basic General Purpose Axioms and Operators / 4.5:
Testing for Equality / 4.5.1:
List Manipulation / 4.5.2:
Binding a Variable to All Terms of a Term List / 4.5.3:
Manipulating the World State / 4.5.4:
Basic Operators and Methods of the Course Generation Domain / 4.6:
Inserting References to Educational Resources / 4.6.1:
Starting and Ending Sections / 4.6.2:
Inserting References to Learning-Support Services / 4.6.3:
An Operator for Dynamic Text Generation / 4.6.4:
Dynamic Subtask Expansion / 4.6.5:
Accessing Information about Educational Resources / 4.6.6:
Axioms for Accessing the Learner Model / 4.6.7:
Processing Resources Depending on Learner Characteristics / 4.6.8:
Initializing and Manipulating Information about the Learning Goal / 4.6.9:
Converting a Plan into a Course / 4.7:
Generating Structure and Adaptivity: Dynamic Tasks / 4.8:
Generation of Narrative Bridges and Structure / 4.9:
Empirical Findings / 4.9.1:
Operator and Methods for Text Generation / 4.9.2:
Symbolic Representations of Dynamic Text Items / 4.9.3:
Generation of Structure Information / 4.9.4:
Summary / 4.10:
Course Generation in Practice: Formalized Scenarios / 5:
Moderate Constructivist Competency-Based Scenarios / 5.1:
Course Generation and Constructivism - a Contradiction? / 5.1.1:
Selecting Exercises / 5.1.2:
Selecting Examples / 5.1.3:
Scenario "Discover" / 5.1.4:
Scenario "Rehearse" / 5.1.5:
Scenario "Connect" / 5.1.6:
Scenario "Train Intensively" / 5.1.7:
Scenario "Train Competencies" / 5.1.8:
Scenario "Exam Simulation" / 5.1.9:
Course Generation Based on Instructional Design Principles / 5.2:
Merrill's "First Principles of Instruction" / 5.2.1:
Scenario "Guided Tour" / 5.2.2:
Implementation and Integration / 6:
Implementation / 6.1:
Integration of PAIGOS in ActiveMath / 6.2:
Course Generation in ActiveMath / 6.2.1:
Dynamically Generated Elements in a Table of Contents / 6.2.2:
Usage of Learning-Support Services in ActiveMath / 6.2.3:
Template-Based Generation of Narrative Bridges / 6.2.4:
PAIGOS as a Service in ActiveMath / 6.2.5:
Course Generation as a Web-Service / 6.3:
Interfaces / 6.3.1:
Technical Evaluations and Use Cases / 7:
Evaluation of the Ontology / 7.1.1:
Mediator Use Cases and Evaluations / 7.1.2:
Course Generation Use Cases and Evaluations / 7.1.3:
Performance of PAIGOS / 7.1.4:
Discussion / 7.1.5:
Formative and Summative Evaluation / 7.2:
Formative Evaluations / 7.2.1:
Summative Evaluation / 7.2.2:
Conclusions / 7.2.3:
Early Work / 8:
Generic Tutoring Environment / 8.2:
Dynamic Courseware Generator / 8.3:
ACE/WINDS / 8.4:
Former Course Generator of ActiveMath / 8.5:
APeLS/iClass / 8.6:
SeLeNe / 8.7:
Statistical Methods for Course Generation / 8.8:
Approaches Using Hierarchical Task Network Planning / 8.9:
Ontologies for Instructional Design / 8.10:
Future Work and Acknowledgments / 9:
Future Work / 9.1:
Complete List of User Comments
References
Index
8.

eBook

EB
Carsten Ullrich, Jörg Siekmann
Publication information: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents: (identical to the preceding record)
9.

eBook

EB
D. M. Gabbay, Matthieu Cord, J. Siekmann
Publication information: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Introduction to Learning Principles for Multimedia Data / Part I:
Introduction to Bayesian Methods and Decision Theory / Simon P. Wilson ; Rozenn Dahyot ; Padraig Cunningham / 1:
Introduction / 1.1:
Uncertainty and Probability / 1.2:
Quantifying Uncertainty / 1.2.1:
The Laws of Probability / 1.2.2:
Interpreting Probability / 1.2.3:
The Partition Law and Bayes' Law / 1.2.4:
Probability Models, Parameters and Likelihoods / 1.3:
Bayesian Statistical Learning / 1.4:
Implementing Bayesian Statistical Learning Methods / 1.5:
Direct Simulation Methods / 1.5.1:
Markov Chain Monte Carlo / 1.5.2:
Monte Carlo Integration / 1.5.3:
Optimization Methods / 1.5.4:
Decision Theory / 1.6:
Utility and Choosing the Optimal Decision / 1.6.1:
Where Is the Utility? / 1.6.2:
Naive Bayes / 1.7:
Further Reading / 1.8:
References
Supervised Learning / Matthieu Cord ; Sarah Jane Delany / 2:
Introduction to Statistical Learning / 2.1:
Risk Minimization / 2.2.1:
Empirical Risk Minimization / 2.2.2:
Risk Bounds / 2.2.3:
Support Vector Machines and Kernels / 2.3:
Linear Classification: SVM Principle / 2.3.1:
Soft Margin / 2.3.2:
Kernel-Based Classification / 2.3.3:
Nearest Neighbour Classification / 2.4:
Similarity and Distance Metrics / 2.4.1:
Other Distance Metrics for Multimedia Data / 2.4.2:
Computational Complexity / 2.4.3:
Instance Selection and Noise Reduction / 2.4.4:
k-NN: Advantages and Disadvantages / 2.4.5:
Ensemble Techniques / 2.5:
Bias-Variance Analysis of Error / 2.5.1:
Bagging / 2.5.3:
Random Forests / 2.5.4:
Boosting / 2.5.5:
Summary / 2.6:
Unsupervised Learning and Clustering / Derek Greene ; Pádraig Cunningham ; Rudolf Mayer / 3:
Basic Clustering Techniques / 3.1:
k-Means Clustering / 3.2.1:
Fuzzy Clustering / 3.2.2:
Hierarchical Clustering / 3.2.3:
Modern Clustering Techniques / 3.3:
Kernel Clustering / 3.3.1:
Spectral Clustering / 3.3.2:
Self-organizing Maps / 3.4:
SOM Architecture / 3.4.1:
SOM Algorithm / 3.4.2:
Self-organizing Map and Clustering / 3.4.3:
Variations of the Self-organizing Map / 3.4.4:
Cluster Validation / 3.5:
Internal Validation / 3.5.1:
External Validation / 3.5.2:
Stability-Based Techniques / 3.5.3:
Dimension Reduction / 3.6:
Feature Transformation / 4.1:
Principal Component Analysis / 4.2.1:
Linear Discriminant Analysis / 4.2.2:
Feature Selection / 4.3:
Feature Selection in Supervised Learning / 4.3.1:
Unsupervised Feature Selection / 4.3.2:
Conclusions / 4.4:
Multimedia Applications / Part II:
Online Content-Based Image Retrieval Using Active Learning / Philippe-Henri Gosselin / 5:
Database Representation: Features and Similarity / 5.1:
Visual Features / 5.2.1:
Signature Based on Visual Pattern Dictionary / 5.2.2:
Similarity / 5.2.3:
Kernel Framework / 5.2.4:
Experiments / 5.2.5:
Classification Framework for Image Collection / 5.3:
Classification Methods for CBIR / 5.3.1:
Query Updating Scheme / 5.3.2:
Active Learning for CBIR / 5.3.3:
Notations for Selective Sampling Optimization / 5.4.1:
Active Learning Methods / 5.4.2:
Further Insights on Active Learning for CBIR / 5.5:
Active Boundary Correction / 5.5.1:
MAP vs Classification Error / 5.5.2:
Batch Selection / 5.5.3:
CBIR Interface: Result Display and Interaction / 5.5.4:
Conservative Learning for Object Detectors / Peter M. Roth ; Horst Bischof / 6:
Online Conservative Learning / 6.1:
Motion Detection / 6.2.1:
Reconstructive Model / 6.2.2:
Online AdaBoost for Feature Selection / 6.2.3:
Conservative Update Rules / 6.2.4:
Experimental Results / 6.3:
Description of Experiments / 6.3.1:
CoffeeCam / 6.3.2:
Switch to Caviar / 6.3.3:
Further Detection Results / 6.3.4:
Summary and Conclusions / 6.4:
Machine Learning Techniques for Face Analysis / Roberto Valenti ; Nicu Sebe ; Theo Gevers ; Ira Cohen / 7:
Background / 7.1:
Face Detection / 7.2.1:
Facial Feature Detection / 7.2.2:
Emotion Recognition Research / 7.2.3:
Learning Classifiers for Human-Computer Interaction / 7.3:
Model Is Correct / 7.3.1:
Model Is Incorrect / 7.3.2:
Discussion / 7.3.3:
Learning the Structure of Bayesian Network Classifiers / 7.4:
Bayesian Networks / 7.4.1:
Switching Between Simple Models / 7.4.2:
Beyond Simple Models / 7.4.3:
Classification-Driven Stochastic Structure Search / 7.4.4:
Should Unlabeled Be Weighed Differently? / 7.4.5:
Active Learning / 7.4.6:
Face Detection Experiments / 7.4.7:
Facial Expression Recognition Experiments / 7.5.2:
Mental Search in Image Databases: Implicit Versus Explicit Content Query / Julien Fauqueur ; Nozha Boujemaa / 7.6:
"Mental Image Search" Versus Other Search Paradigms / 8.1:
Implicit Content Query: Mental Image Search Using Bayesian Inference / 8.3:
Bayesian Inference for CBIR / 8.3.1:
Mental Image Category Search / 8.3.2:
Evaluation / 8.3.3:
Remarks / 8.3.4:
Explicit Content Query: Mental Image Search by Visual Composition Formulation / 8.4:
System Summary / 8.4.1:
Visual Thesaurus Construction / 8.4.2:
Symbolic Indexing, Boolean Search and Range Query Mechanism / 8.4.3:
Results / 8.4.4:
Combining Textual and Visual Information for Semantic Labeling of Images and Videos / Pinar Duygulu ; Muhammet Baştan ; Derya Ozkan / 8.4.5:
Semantic Labeling of Images / 9.1:
Translation Approach / 9.3:
Learning Correspondences Between Words and Regions / 9.3.1:
Linking Visual Elements to Words in News Videos / 9.3.2:
Translation Approach to Solve Video Association Problem / 9.3.3:
Experiments on News Videos Data Set / 9.3.4:
Naming Faces in News / 9.4:
Integrating Names and Faces / 9.4.1:
Finding Similarity of Faces / 9.4.2:
Finding the Densest Component in the Similarity Graph / 9.4.3:
Conclusions and Discussion / 9.4.4:
Machine Learning for Semi-structured Multimedia Documents: Application to Pornographic Filtering and Thematic Categorization / Ludovic Denoyer ; Patrick Gallinari / 10:
Previous Work / 10.1:
Structured Document Classification / 10.2.1:
Multimedia Documents / 10.2.2:
Multimedia Generative Model / 10.3:
Classification of Documents / 10.3.1:
Generative Model / 10.3.2:
Description / 10.3.3:
Learning the Meta Model / 10.4:
Maximization of L_structure / 10.4.1:
Maximization of L_content / 10.4.2:
Local Generative Models for Text and Image / 10.5:
Modelling a Piece of Text with Naive Bayes / 10.5.1:
Image Model / 10.5.2:
Models and Evaluation / 10.6:
Corpora / 10.6.2:
Results over the Pornographic Corpus / 10.6.3:
Results over the Wikipedia Multimedia Categorization Corpus / 10.6.4:
Conclusion / 10.7:
Classification and Clustering of Music for Novel Music Access Applications / Thomas Lidy ; Andreas Rauber / 11:
Feature Extraction from Audio / 11.1:
Low-Level Audio Features / 11.2.1:
MPEG-7 Audio Descriptors / 11.2.2:
MFCCs / 11.2.3:
MARSYAS Features / 11.2.4:
Rhythm Patterns / 11.2.5:
Statistical Spectrum Descriptors / 11.2.6:
Rhythm Histograms / 11.2.7:
Automatic Classifications of Music into Genres / 11.3:
Evaluation Through Music Classification / 11.3.1:
Benchmark Data Sets for Music Classification / 11.3.2:
Creating and Visualizing Music Maps Based on Self-organizing Maps / 11.4:
Class Visualization / 11.4.1:
Hit Histograms / 11.4.2:
U-Matrix / 11.4.3:
P-Matrix / 11.4.4:
U*-matrix / 11.4.5:
Gradient Fields / 11.4.6:
Component Planes / 11.4.7:
Smoothed Data Histograms / 11.4.8:
PlaySOM - Interaction with Music Maps / 11.5:
Interface / 11.5.1:
Interaction / 11.5.2:
Playlist Creation / 11.5.3:
PocketSOMPlayer - Music Retrieval on Mobile Devices / 11.6:
Playing Scenarios / 11.6.1:
Index / 11.6.3:
10.

eBook

EB
D. M. Gabbay, Matthieu Cord, J. Siekmann, Pádraig Cunningham
Publication information: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents: (identical to the preceding record)
11.

eBook

EB
Clemens van Dinther, Stefan Brantschen, Clemens van Dinther, Marius Walliser
Publication information: Springer eBooks Computer Science, Birkhäuser Basel, 2007
Table of contents:
List of Figures
List of Tables
Motivation and Fundamentals / I:
Introduction / 1:
Problem Description and Research Questions / 1.1:
Organization of the Book / 1.2:
Economic Foundations / 2:
Electronic Markets and Strategic Bidding / 2.1:
Historical Background of Markets / 2.1.1:
Markets as an Economic System / 2.1.2:
Market Institution Types and Auctions / 2.1.3:
Bidding in Auctions under Uncertainty / 2.1.4:
Market Engineering / 2.2:
Structured Design Approach / 2.2.1:
Generic Design Approaches / 2.2.2:
Computer Aided Market Engineering / 2.3:
Conceptual Platform Design / 2.3.1:
The meet2trade Software Suite / 2.3.2:
Summary / 2.4:
Agent-based Computational Economics / 3:
Intelligent Software Agents / 3.1:
Characteristics of Software Agents / 3.1.1:
Agent Learning / 3.1.2:
Agent Architecture / 3.1.3:
Multi Agent Systems / 3.2:
Communication and Interaction in Multi-Agent Systems / 3.2.1:
Coordination in Multi Agent Systems / 3.2.2:
Building Multi Agent-based Simulation Models / 3.3:
Characteristics of Simulations / 3.3.1:
Developing and Applying Simulations / 3.3.2:
Agent-based Simulation Approaches and Tools / 3.4:
Methodological Approaches of MABS in Economics / 4:
Pure Agent-based Simulation: The Bottom-up Approach / 4.1:
Monte Carlo Simulation / 4.2:
Evolutionary Approach / 4.3:
Reinforcement Learning / 4.4:
The Learning Model / 4.4.1:
Markov Games / 4.4.2:
Agent-based Simulation Software / 4.5:
Design Objectives for Agent-based Simulation Software / 5.1:
Functional and Technical Requirements / 5.1.1:
Agent-based Simulation Software - An Overview / 5.1.2:
Requirements Analysis of Agent Platforms / 5.1.3:
The Java Agent Development Framework / 5.2:
Agent Platform Architecture / 5.2.1:
Agent Model / 5.2.2:
Development Tools / 5.2.3:
The Agent-based Market Simulation Environment / 5.3:
Architecture / 5.3.1:
Simulation Control Agent (SCA) / 5.3.2:
Simulation Agents and Behaviours / 5.3.3:
Examples for Simulations on AMASE / 5.3.4:
Examination of Bidding under Uncertainty / 5.4:
Simulation Design / 6:
The Simulation Model / 6.1:
Bidding Process and Action Space / 6.1.1:
Agents and Environment / 6.1.2:
The Reinforcement Learning Mechanism / 6.1.3:
Simulation Approach / 6.2:
Initial Parameter Values of the Simulation / 6.2.1:
Simulation Settings / 6.2.2:
Theoretical Benchmark / 6.2.3:
Assessment of the Simulation Results / 7:
Institutional Rules: Sealed Bid versus Ascending Second Price Auction / 7.1:
Two Agent Environment / 7.1.1:
Five Agent Environment / 7.1.2:
Impact of Information Acquisition Cost / 7.2:
Concluding Discussion and Future Research / 7.2.1:
Conclusion / 8:
Summary of the Main Contributions / 8.1:
Limitations of the Approach / 8.2:
Outlook / 8.3:
Appendices and Bibliography
Appendix: Mathematical Proofs / A:
Expected Social Welfare / A.1:
Auctioneer's Expected Revenue / A.2:
Bidders' Expected Payoff / A.3:
Uncertain Bidders' Expected Payoff / A.4:
Density Function for the Estimated Valuations / A.5:
Appendix: Simulation Data and Figures / B:
Data / B.1:
Institutional Rules: Two Bidder / B.1.1:
Institutional Rules: Five Bidder / B.1.2:
Information Acquisition Costs: Two Bidder / B.1.3:
Information Acquisition Cost: Five Bidder / B.1.4:
Figures / B.2:
Bibliography
Abbreviations
12.

eBook

EB
Clemens van Dinther, Stefan Brantschen, Clemens van Dinther, Marius Walliser, Monique Calisti
Publication information: SpringerLink Books - AutoHoldings, Birkhäuser Basel, 2007
Table of contents: (identical to the preceding record)
13.

eBook

EB
Pierre M. Nugues
Publication information: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2006
Table of contents:
An Overview of Language Processing / 1:
Linguistics and Language Processing / 1.1:
Applications of Language Processing / 1.2:
The Different Domains of Language Processing / 1.3:
Phonetics / 1.4:
Lexicon and Morphology / 1.5:
Syntax / 1.6:
Syntax as Defined by Noam Chomsky / 1.6.1:
Syntax as Relations and Dependencies / 1.6.2:
Semantics / 1.7:
Discourse and Dialogue / 1.8:
Why Speech and Language Processing Are Difficult / 1.9:
Ambiguity / 1.9.1:
Models and Their Implementation / 1.9.2:
An Example of Language Technology in Action: the Persona Project / 1.10:
Overview of Persona / 1.10.1:
The Persona's Modules / 1.10.2:
Further Reading / 1.11:
Corpus Processing Tools / 2:
Corpora / 2.1:
Types of Corpora / 2.1.1:
Corpora and Lexicon Building / 2.1.2:
Corpora as Knowledge Sources for the Linguist / 2.1.3:
Finite-State Automata / 2.2:
A Description / 2.2.1:
Mathematical Definition of Finite-State Automata / 2.2.2:
Finite-State Automata in Prolog / 2.2.3:
Deterministic and Nondeterministic Automata / 2.2.4:
Building a Deterministic Automaton from a Nondeterministic One / 2.2.5:
Searching a String with a Finite-State Automaton / 2.2.6:
Operations on Finite-State Automata / 2.2.7:
Regular Expressions / 2.3:
Repetition Metacharacters / 2.3.1:
The Longest Match / 2.3.2:
Character Classes / 2.3.3:
Nonprintable Symbols or Positions / 2.3.4:
Union and Boolean Operators / 2.3.5:
Operator Combination and Precedence / 2.3.6:
Programming with Regular Expressions / 2.4:
Perl / 2.4.1:
Matching / 2.4.2:
Substitutions / 2.4.3:
Translating Characters / 2.4.4:
String Operators / 2.4.5:
Back References / 2.4.6:
Finding Concordances / 2.5:
Concordances in Prolog / 2.5.1:
Concordances in Perl / 2.5.2:
Approximate String Matching / 2.6:
Edit Operations / 2.6.1:
Minimum Edit Distance / 2.6.2:
Searching Edits in Prolog / 2.6.3:
Encoding, Entropy, and Annotation Schemes / 2.7:
Encoding Texts / 3.1:
Character Sets / 3.2:
Representing Characters / 3.2.1:
Unicode / 3.2.2:
The Unicode Encoding Schemes / 3.2.3:
Locales and Word Order / 3.3:
Presenting Time, Numerical Information, and Ordered Words / 3.3.1:
The Unicode Collation Algorithm / 3.3.2:
Markup Languages / 3.4:
A Brief Background / 3.4.1:
An Outline of XML / 3.4.2:
Writing a DTD / 3.4.3:
Writing an XML Document / 3.4.4:
Namespaces / 3.4.5:
Codes and Information Theory / 3.5:
Entropy / 3.5.1:
Huffman Encoding / 3.5.2:
Cross Entropy / 3.5.3:
Perplexity and Cross Perplexity / 3.5.4:
Entropy and Decision Trees / 3.6:
Decision Trees / 3.6.1:
Inducing Decision Trees Automatically / 3.6.2:
Counting Words / 3.7:
Counting Words and Word Sequences / 4.1:
Words and Tokens / 4.2:
What Is a Word? / 4.2.1:
Breaking a Text into Words: Tokenization / 4.2.2:
Tokenizing Texts / 4.3:
Tokenizing Texts in Prolog / 4.3.1:
Tokenizing Texts in Perl / 4.3.2:
N-grams / 4.4:
Some Definitions / 4.4.1:
Counting Unigrams in Prolog / 4.4.2:
Counting Unigrams with Perl / 4.4.3:
Counting Bigrams with Perl / 4.4.4:
Probabilistic Models of a Word Sequence / 4.5:
The Maximum Likelihood Estimation / 4.5.1:
Using ML Estimates with Nineteen Eighty-Four / 4.5.2:
Smoothing N-gram Probabilities / 4.6:
Sparse Data / 4.6.1:
Laplace's Rule / 4.6.2:
Good-Turing Estimation / 4.6.3:
Using N-grams of Variable Length / 4.7:
Linear Interpolation / 4.7.1:
Back-off / 4.7.2:
Quality of a Language Model / 4.8:
Intuitive Presentation / 4.8.1:
Entropy Rate / 4.8.2:
Perplexity / 4.8.3:
Collocations / 4.9:
Word Preference Measurements / 4.9.1:
Extracting Collocations with Perl / 4.9.2:
Application: Retrieval and Ranking of Documents on the Web / 4.10:
Words, Parts of Speech, and Morphology / 4.11:
Words / 5.1:
Parts of Speech / 5.1.1:
Features / 5.1.2:
Two Significant Parts of Speech: The Noun and the Verb / 5.1.3:
Lexicons / 5.2:
Encoding a Dictionary / 5.2.1:
Building a Trie in Prolog / 5.2.2:
Finding a Word in a Trie / 5.2.3:
Morphology / 5.3:
Morphemes / 5.3.1:
Morphs / 5.3.2:
Inflection and Derivation / 5.3.3:
Language Differences / 5.3.4:
Morphological Parsing / 5.4:
Two-Level Model of Morphology / 5.4.1:
Interpreting the Morphs / 5.4.2:
Finite-State Transducers / 5.4.3:
Conjugating a French Verb / 5.4.4:
Prolog Implementation / 5.4.5:
Operations on Finite-State Transducers / 5.4.6:
Morphological Rules / 5.5:
Two-Level Rules / 5.5.1:
Rules and Finite-State Transducers / 5.5.2:
Rule Composition: An Example with French Irregular Verbs / 5.5.3:
Application Examples / 5.6:
Part-of-Speech Tagging Using Rules / 5.7:
Resolving Part-of-Speech Ambiguity / 6.1:
A Manual Method / 6.1.1:
Which Method to Use to Automatically Assign Parts of Speech / 6.1.2:
Tagging with Rules / 6.2:
Brill's Tagger / 6.2.1:
Implementation in Prolog / 6.2.2:
Deriving Rules Automatically / 6.2.3:
Confusion Matrices / 6.2.4:
Unknown Words / 6.3:
Standardized Part-of-Speech Tagsets / 6.4:
Multilingual Part-of-Speech Tags / 6.4.1:
Parts of Speech for English / 6.4.2:
An Annotation Scheme for Swedish / 6.4.3:
Part-of-Speech Tagging Using Stochastic Techniques / 6.5:
The Noisy Channel Model / 7.1:
Presentation / 7.1.1:
The N-gram Approximation / 7.1.2:
Tagging a Sentence / 7.1.3:
The Viterbi Algorithm: An Intuitive Presentation / 7.1.4:
Markov Models / 7.2:
Markov Chains / 7.2.1:
Hidden Markov Models / 7.2.2:
Three Fundamental Algorithms to Solve Problems with HMMs / 7.2.3:
The Forward Procedure / 7.2.4:
Viterbi Algorithm / 7.2.5:
The Backward Procedure / 7.2.6:
The Forward-Backward Algorithm / 7.2.7:
Tagging with Decision Trees / 7.3:
An Application of the Noisy Channel Model: Spell Checking / 7.4:
A Second Application: Language Models for Machine Translation / 7.6:
Parallel Corpora / 7.6.1:
Alignment / 7.6.2:
Translation / 7.6.3:
Phrase-Structure Grammars in Prolog / 7.7:
Using Prolog to Write Phrase-Structure Grammars / 8.1:
Representing Chomsky's Syntactic Formalism in Prolog / 8.2:
Constituents / 8.2.1:
Tree Structures / 8.2.2:
Phrase-Structure Rules / 8.2.3:
The Definite Clause Grammar (DCG) Notation / 8.2.4:
Parsing with DCGs / 8.3:
Translating DCGs into Prolog Clauses / 8.3.1:
Parsing and Generation / 8.3.2:
Left-Recursive Rules / 8.3.3:
Parsing Ambiguity / 8.4:
Using Variables / 8.5:
Gender and Number Agreement / 8.5.1:
Obtaining the Syntactic Structure / 8.5.2:
Application: Tokenizing Texts Using DCG Rules / 8.6:
Word Breaking / 8.6.1:
Recognition of Sentence Boundaries / 8.6.2:
Semantic Representation / 8.7:
λ-Calculus / 8.7.1:
Embedding λ-Expressions into DCG Rules / 8.7.2:
Semantic Composition of Verbs / 8.7.3:
An Application of Phrase-Structure Grammars and a Worked Example / 8.8:
Partial Parsing / 8.9:
Is Syntax Necessary? / 9.1:
Word Spotting and Template Matching / 9.2:
ELIZA / 9.2.1:
Word Spotting in Prolog / 9.2.2:
Multiword Detection / 9.3:
Multiwords / 9.3.1:
A Standard Multiword Annotation / 9.3.2:
Detecting Multiwords with Rules / 9.3.3:
Running the Program / 9.3.4:
Noun Groups and Verb Groups / 9.4:
Groups Versus Recursive Phrases / 9.4.1:
DCG Rules to Detect Noun Groups / 9.4.2:
DCG Rules to Detect Verb Groups / 9.4.3:
Running the Rules / 9.4.4:
Group Detection as a Tagging Problem / 9.5:
Tagging Gaps / 9.5.1:
Tagging Words / 9.5.2:
Using Symbolic Rules / 9.5.3:
Using Statistical Tagging / 9.5.4:
Cascading Partial Parsers / 9.6:
Elementary Analysis of Grammatical Functions / 9.7:
Main Functions / 9.7.1:
Extracting Other Groups / 9.7.2:
An Annotation Scheme for Groups in French / 9.8:
Application: The FASTUS System / 9.9:
The Message Understanding Conferences / 9.9.1:
The Syntactic Layers of the FASTUS System / 9.9.2:
Evaluation of Information Extraction Systems / 9.9.3:
Syntactic Formalisms / 9.10:
Introduction / 10.1:
Chomsky's Grammar in Syntactic Structures / 10.2:
Constituency: A Formal Definition / 10.2.1:
Transformations / 10.2.2:
Transformations and Movements / 10.2.3:
Gap Threading / 10.2.4:
Gap Threading to Parse Relative Clauses / 10.2.5:
Standardized Phrase Categories for English / 10.3:
Unification-Based Grammars / 10.4:
Representing Features in Prolog / 10.4.1:
A Formalism for Features and Rules / 10.4.3:
Features Organization / 10.4.4:
Features and Unification / 10.4.5:
A Unification Algorithm for Feature Structures / 10.4.6:
Dependency Grammars / 10.5:
Properties of a Dependency Graph / 10.5.1:
Valence / 10.5.3:
Dependencies and Functions / 10.5.4:
Parsing Techniques / 10.6:
Bottom-up Parsing / 11.1:
The Shift-Reduce Algorithm / 11.2.1:
Implementing Shift-Reduce Parsing in Prolog / 11.2.2:
Differences Between Bottom-up and Top-down Parsing / 11.2.3:
Chart Parsing / 11.3:
Backtracking and Efficiency / 11.3.1:
Structure of a Chart / 11.3.2:
The Active Chart / 11.3.3:
Modules of an Earley Parser / 11.3.4:
The Earley Algorithm in Prolog / 11.3.5:
The Earley Parser to Handle Left-Recursive Rules and Empty Symbols / 11.3.6:
Probabilistic Parsing of Context-Free Grammars / 11.4:
A Description of PCFGs / 11.5:
The Bottom-up Chart / 11.5.1:
The Cocke-Younger-Kasami Algorithm in Prolog / 11.5.2:
Adding Probabilities to the CYK Parser / 11.5.3:
Parser Evaluation / 11.6:
Constituency-Based Evaluation / 11.6.1:
Dependency-Based Evaluation / 11.6.2:
Performance of PCFG Parsing / 11.6.3:
Parsing Dependencies / 11.7:
Dependency Rules / 11.7.1:
Extending the Shift-Reduce Algorithm to Parse Dependencies / 11.7.2:
Nivre's Parser in Prolog / 11.7.3:
Finding Dependencies Using Constraints / 11.7.4:
Parsing Dependencies Using Statistical Techniques / 11.7.5:
Semantics and Predicate Logic / 11.8:
Language Meaning and Logic: An Illustrative Example / 12.1:
Formal Semantics / 12.3:
First-Order Predicate Calculus to Represent the State of Affairs / 12.4:
Variables and Constants / 12.4.1:
Predicates / 12.4.2:
Querying the Universe of Discourse / 12.5:
Mapping Phrases onto Logical Formulas / 12.6:
Representing Nouns and Adjectives / 12.6.1:
Representing Noun Groups / 12.6.2:
Representing Verbs and Prepositions / 12.6.3:
The Case of Determiners / 12.7:
Determiners and Logic Quantifiers / 12.7.1:
Translating Sentences Using Quantifiers / 12.7.2:
A General Representation of Sentences / 12.7.3:
Compositionality to Translate Phrases to Logical Forms / 12.8:
Translating the Noun Phrase / 12.8.1:
Translating the Verb Phrase / 12.8.2:
Augmenting the Database and Answering Questions / 12.9:
Declarations / 12.9.1:
Questions with Existential and Universal Quantifiers / 12.9.2:
Prolog and Unknown Predicates / 12.9.3:
Other Determiners and Questions / 12.9.4:
Application: The Spoken Language Translator / 12.10:
Translating Spoken Sentences / 12.10.1:
Compositional Semantics / 12.10.2:
Semantic Representation Transfer / 12.10.3:
Lexical Semantics / 12.11:
Beyond Formal Semantics / 13.1:
La langue et la parole / 13.1.1:
Language and the Structure of the World / 13.1.2:
Lexical Structures / 13.2:
Some Basic Terms and Concepts / 13.2.1:
Ontological Organization / 13.2.2:
Lexical Classes and Relations / 13.2.3:
Semantic Networks / 13.2.4:
Building a Lexicon / 13.3:
The Lexicon and Word Senses / 13.3.1:
Verb Models / 13.3.2:
Definitions / 13.3.3:
An Example of Exhaustive Lexical Organization: WordNet / 13.4:
Nouns / 13.4.1:
Adjectives / 13.4.2:
Verbs / 13.4.3:
Automatic Word Sense Disambiguation / 13.5:
Senses as Tags / 13.5.1:
Associating a Word with a Context / 13.5.2:
Guessing the Topic / 13.5.3:
Naive Bayes / 13.5.4:
Using Constraints on Verbs / 13.5.5:
Using Dictionary Definitions / 13.5.6:
An Unsupervised Algorithm to Tag Senses / 13.5.7:
Senses and Languages / 13.5.8:
Case Grammars / 13.6:
Cases in Latin / 13.6.1:
Cases and Thematic Roles / 13.6.2:
Parsing with Cases / 13.6.3:
Semantic Grammars / 13.6.4:
Extending Case Grammars / 13.7:
FrameNet / 13.7.1:
A Statistical Method to Identify Semantic Roles / 13.7.2:
An Example of Case Grammar Application: EVAR / 13.8:
EVAR's Ontology and Syntactic Classes / 13.8.1:
Cases in EVAR / 13.8.2:
Discourse / 13.9:
Discourse: A Minimalist Definition / 14.1:
A Description of Discourse / 14.2.1:
Discourse Entities / 14.2.2:
References: An Application-Oriented View / 14.3:
References and Noun Phrases / 14.3.1:
Finding Names - Proper Nouns / 14.3.2:
Coreference / 14.4:
Anaphora / 14.4.1:
Solving Coreferences in an Example / 14.4.2:
A Standard Coreference Annotation / 14.4.3:
References: A More Formal View / 14.5:
Generating Discourse Entities: The Existential Quantifier / 14.5.1:
Retrieving Discourse Entities: Definite Descriptions / 14.5.2:
Generating Discourse Entities: The Universal Quantifier / 14.5.3:
Centering: A Theory on Discourse Structure / 14.6:
Solving Coreferences / 14.7:
A Simplistic Method: Using Syntactic and Semantic Compatibility / 14.7.1:
Solving Coreferences with Shallow Grammatical Information / 14.7.2:
Salience in a Multimodal Context / 14.7.3:
Using a Machine-Learning Technique to Resolve Coreferences / 14.7.4:
More Complex Phenomena: Ellipses / 14.7.5:
Discourse and Rhetoric / 14.8:
Ancient Rhetoric: An Outline / 14.8.1:
Rhetorical Structure Theory / 14.8.2:
Types of Relations / 14.8.3:
Implementing Rhetorical Structure Theory / 14.8.4:
Events and Time / 14.9:
Events / 14.9.1:
Event Types / 14.9.2:
Temporal Representation of Events / 14.9.3:
Events and Tenses / 14.9.4:
TimeML, an Annotation Scheme for Time and Events / 14.10:
Dialogue / 14.11:
Why a Dialogue? / 15.1:
Simple Dialogue Systems / 15.3:
Dialogue Systems Based on Automata / 15.3.1:
Dialogue Modeling / 15.3.2:
Speech Acts: A Theory of Language Interaction / 15.4:
Speech Acts and Human-Machine Dialogue / 15.5:
Speech Acts as a Tagging Model / 15.5.1:
Speech Acts Tags Used in the SUNDIAL Project / 15.5.2:
Dialogue Parsing / 15.5.3:
Interpreting Speech Acts / 15.5.4:
EVAR: A Dialogue Application Using Speech Acts / 15.5.5:
Taking Beliefs and Intentions into Account / 15.6:
Representing Mental States / 15.6.1:
The STRIPS Planning Algorithm / 15.6.2:
Causality / 15.6.3:
An Introduction to Prolog / 15.7:
A Short Background / A.1:
Basic Features of Prolog / A.2:
Facts / A.2.1:
Terms / A.2.2:
Queries / A.2.3:
Logical Variables / A.2.4:
Shared Variables / A.2.5:
Data Types in Prolog / A.2.6:
Rules / A.2.7:
Running a Program / A.3:
Unification / A.4:
Substitution and Instances / A.4.1:
Terms and Unification / A.4.2:
The Herbrand Unification Algorithm / A.4.3:
Example / A.4.4:
The Occurs-Check / A.4.5:
Resolution / A.5:
Modus Ponens / A.5.1:
A Resolution Algorithm / A.5.2:
Derivation Trees and Backtracking / A.5.3:
Tracing and Debugging / A.6:
Cuts, Negation, and Related Predicates / A.7:
Cuts / A.7.1:
Negation / A.7.2:
The once/1 Predicate / A.7.3:
Lists / A.8:
Some List-Handling Predicates / A.9:
The member/2 Predicate / A.9.1:
The append/3 Predicate / A.9.2:
The delete/3 Predicate / A.9.3:
The intersection/3 Predicate / A.9.4:
The reverse/2 Predicate / A.9.5:
The Mode of an Argument / A.9.6:
Operators and Arithmetic / A.10:
Operators / A.10.1:
Arithmetic Operations / A.10.2:
Comparison Operators / A.10.3:
Lists and Arithmetic: The length/2 Predicate / A.10.4:
Lists and Comparison: The quicksort/2 Predicate / A.10.5:
Some Other Built-in Predicates / A.11:
Type Predicates / A.11.1:
Term Manipulation Predicates / A.11.2:
Handling Run-Time Errors and Exceptions / A.12:
Dynamically Accessing and Updating the Database / A.13:
Accessing a Clause: The clause/2 Predicate / A.13.1:
Dynamic and Static Predicates / A.13.2:
Adding a Clause: The asserta/1 and assertz/1 Predicates / A.13.3:
Removing Clauses: The retract/1 and abolish/2 Predicates / A.13.4:
Handling Unknown Predicates / A.13.5:
All-Solutions Predicates / A.14:
Fundamental Search Algorithms / A.15:
Representing the Graph / A.15.1:
Depth-First Search / A.15.2:
Breadth-First Search / A.15.3:
A* Search / A.15.4:
Input/Output / A.16:
Reading and Writing Characters with Edinburgh Prolog / A.16.1:
Reading and Writing Terms with Edinburgh Prolog / A.16.2:
Opening and Closing Files with Edinburgh Prolog / A.16.3:
Reading and Writing Characters with Standard Prolog / A.16.4:
Reading and Writing Terms with Standard Prolog / A.16.5:
Opening and Closing Files with Standard Prolog / A.16.6:
Writing Loops / A.16.7:
Developing Prolog Programs / A.17:
Presentation Style / A.17.1:
Improving Programs / A.17.2:
Index
References
14.

電子ブック

EB
Pierre M. Nugues, A. Bundy, Jörg Siekmann
出版情報: SpringerLink Books - AutoHoldings , Springer Berlin Heidelberg, 2006
所蔵情報: loading…
目次情報: 続きを見る
An Overview of Language Processing / 1:
Linguistics and Language Processing / 1.1:
Applications of Language Processing / 1.2:
The Different Domains of Language Processing / 1.3:
Phonetics / 1.4:
Lexicon and Morphology / 1.5:
Syntax / 1.6:
Syntax as Defined by Noam Chomsky / 1.6.1:
Syntax as Relations and Dependencies / 1.6.2:
Semantics / 1.7:
Discourse and Dialogue / 1.8:
Why Speech and Language Processing Are Difficult / 1.9:
Ambiguity / 1.9.1:
Models and Their Implementation / 1.9.2:
An Example of Language Technology in Action: the Persona Project / 1.10:
Overview of Persona / 1.10.1:
The Persona's Modules / 1.10.2:
Further Reading / 1.11:
Corpus Processing Tools / 2:
Corpora / 2.1:
Types of Corpora / 2.1.1:
Corpora and Lexicon Building / 2.1.2:
Corpora as Knowledge Sources for the Linguist / 2.1.3:
Finite-State Automata / 2.2:
A Description / 2.2.1:
Mathematical Definition of Finite-State Automata / 2.2.2:
Finite-State Automata in Prolog / 2.2.3:
Deterministic and Nondeterministic Automata / 2.2.4:
Building a Deterministic Automaton from a Nondeterministic One / 2.2.5:
Searching a String with a Finite-State Automaton / 2.2.6:
Operations on Finite-State Automata / 2.2.7:
Regular Expressions / 2.3:
Repetition Metacharacters / 2.3.1:
The Longest Match / 2.3.2:
Character Classes / 2.3.3:
Nonprintable Symbols or Positions / 2.3.4:
Union and Boolean Operators / 2.3.5:
Operator Combination and Precedence / 2.3.6:
Programming with Regular Expressions / 2.4:
Perl / 2.4.1:
Matching / 2.4.2:
Substitutions / 2.4.3:
Translating Characters / 2.4.4:
String Operators / 2.4.5:
Back References / 2.4.6:
Finding Concordances / 2.5:
Concordances in Prolog / 2.5.1:
Concordances in Perl / 2.5.2:
Approximate String Matching / 2.6:
Edit Operations / 2.6.1:
Minimum Edit Distance / 2.6.2:
Searching Edits in Prolog / 2.6.3:
Encoding, Entropy, and Annotation Schemes / 2.7:
Encoding Texts / 3.1:
Character Sets / 3.2:
Representing Characters / 3.2.1:
Unicode / 3.2.2:
The Unicode Encoding Schemes / 3.2.3:
Locales and Word Order / 3.3:
Presenting Time, Numerical Information, and Ordered Words / 3.3.1:
The Unicode Collation Algorithm / 3.3.2:
Markup Languages / 3.4:
A Brief Background / 3.4.1:
An Outline of XML / 3.4.2:
Writing a DTD / 3.4.3:
Writing an XML Document / 3.4.4:
Namespaces / 3.4.5:
Codes and Information Theory / 3.5:
Entropy / 3.5.1:
Huffman Encoding / 3.5.2:
Cross Entropy / 3.5.3:
Perplexity and Cross Perplexity / 3.5.4:
Entropy and Decision Trees / 3.6:
Decision Trees / 3.6.1:
Inducing Decision Trees Automatically / 3.6.2:
Counting Words / 3.7:
Counting Words and Word Sequences / 4.1:
Words and Tokens / 4.2:
What Is a Word? / 4.2.1:
Breaking a Text into Words: Tokenization / 4.2.2:
Tokenizing Texts / 4.3:
Tokenizing Texts in Prolog / 4.3.1:
Tokenizing Texts in Perl / 4.3.2:
N-grams / 4.4:
Some Definitions / 4.4.1:
Counting Unigrams in Prolog / 4.4.2:
Counting Unigrams with Perl / 4.4.3:
Counting Bigrams with Perl / 4.4.4:
Probabilistic Models of a Word Sequence / 4.5:
The Maximum Likelihood Estimation / 4.5.1:
Using ML Estimates with Nineteen Eighty-Four / 4.5.2:
Smoothing N-gram Probabilities / 4.6:
Sparse Data / 4.6.1:
Laplace's Rule / 4.6.2:
Good-Turing Estimation / 4.6.3:
Using N-grams of Variable Length / 4.7:
Linear Interpolation / 4.7.1:
Back-off / 4.7.2:
Quality of a Language Model / 4.8:
Intuitive Presentation / 4.8.1:
Entropy Rate / 4.8.2:
Perplexity / 4.8.3:
Collocations / 4.9:
Word Preference Measurements / 4.9.1:
Extracting Collocations with Perl / 4.9.2:
Application: Retrieval and Ranking of Documents on the Web / 4.10:
Words, Parts of Speech, and Morphology / 4.11:
Words / 5.1:
Parts of Speech / 5.1.1:
Features / 5.1.2:
Two Significant Parts of Speech: The Noun and the Verb / 5.1.3:
Lexicons / 5.2:
Encoding a Dictionary / 5.2.1:
Building a Trie in Prolog / 5.2.2:
Finding a Word in a Trie / 5.2.3:
Morphology / 5.3:
Morphemes / 5.3.1:
Morphs / 5.3.2:
Inflection and Derivation / 5.3.3:
Language Differences / 5.3.4:
Morphological Parsing / 5.4:
Two-Level Model of Morphology / 5.4.1:
Interpreting the Morphs / 5.4.2:
Finite-State Transducers / 5.4.3:
Conjugating a French Verb / 5.4.4:
Prolog Implementation / 5.4.5:
Operations on Finite-State Transducers / 5.4.6:
Morphological Rules / 5.5:
Two-Level Rules / 5.5.1:
Rules and Finite-State Transducers / 5.5.2:
Rule Composition: An Example with French Irregular Verbs / 5.5.3:
Application Examples / 5.6:
Part-of-Speech Tagging Using Rules / 5.7:
Resolving Part-of-Speech Ambiguity / 6.1:
A Manual Method / 6.1.1:
Which Method to Use to Automatically Assign Parts of Speech / 6.1.2:
Tagging with Rules / 6.2:
Brill's Tagger / 6.2.1:
Implementation in Prolog / 6.2.2:
Deriving Rules Automatically / 6.2.3:
Confusion Matrices / 6.2.4:
Unknown Words / 6.3:
Standardized Part-of-Speech Tagsets / 6.4:
Multilingual Part-of-Speech Tags / 6.4.1:
Parts of Speech for English / 6.4.2:
An Annotation Scheme for Swedish / 6.4.3:
Part-of-Speech Tagging Using Stochastic Techniques / 6.5:
The Noisy Channel Model / 7.1:
Presentation / 7.1.1:
The N-gram Approximation / 7.1.2:
Tagging a Sentence / 7.1.3:
The Viterbi Algorithm: An Intuitive Presentation / 7.1.4:
Markov Models / 7.2:
Markov Chains / 7.2.1:
Hidden Markov Models / 7.2.2:
Three Fundamental Algorithms to Solve Problems with HMMs / 7.2.3:
The Forward Procedure / 7.2.4:
Viterbi Algorithm / 7.2.5:
The Backward Procedure / 7.2.6:
The Forward-Backward Algorithm / 7.2.7:
Tagging with Decision Trees / 7.3:
An Application of the Noisy Channel Model: Spell Checking / 7.4:
A Second Application: Language Models for Machine Translation / 7.6:
Parallel Corpora / 7.6.1:
Alignment / 7.6.2:
Translation / 7.6.3:
Phrase-Structure Grammars in Prolog / 7.7:
Using Prolog to Write Phrase-Structure Grammars / 8.1:
Representing Chomsky's Syntactic Formalism in Prolog / 8.2:
Constituents / 8.2.1:
Tree Structures / 8.2.2:
Phrase-Structure Rules / 8.2.3:
The Definite Clause Grammar (DCG) Notation / 8.2.4:
Parsing with DCGs / 8.3:
Translating DCGs into Prolog Clauses / 8.3.1:
Parsing and Generation / 8.3.2:
Left-Recursive Rules / 8.3.3:
Parsing Ambiguity / 8.4:
Using Variables / 8.5:
Gender and Number Agreement / 8.5.1:
Obtaining the Syntactic Structure / 8.5.2:
Application: Tokenizing Texts Using DCG Rules / 8.6:
Word Breaking / 8.6.1:
Recognition of Sentence Boundaries / 8.6.2:
Semantic Representation / 8.7:
λ-Calculus / 8.7.1:
Embedding λ-Expressions into DCG Rules / 8.7.2:
Semantic Composition of Verbs / 8.7.3:
An Application of Phrase-Structure Grammars and a Worked Example / 8.8:
Partial Parsing / 8.9:
Is Syntax Necessary? / 9.1:
Word Spotting and Template Matching / 9.2:
ELIZA / 9.2.1:
Word Spotting in Prolog / 9.2.2:
Multiword Detection / 9.3:
Multiwords / 9.3.1:
A Standard Multiword Annotation / 9.3.2:
Detecting Multiwords with Rules / 9.3.3:
Running the Program / 9.3.4:
Noun Groups and Verb Groups / 9.4:
Groups Versus Recursive Phrases / 9.4.1:
DCG Rules to Detect Noun Groups / 9.4.2:
DCG Rules to Detect Verb Groups / 9.4.3:
Running the Rules / 9.4.4:
Group Detection as a Tagging Problem / 9.5:
Tagging Gaps / 9.5.1:
Tagging Words / 9.5.2:
Using Symbolic Rules / 9.5.3:
Using Statistical Tagging / 9.5.4:
Cascading Partial Parsers / 9.6:
Elementary Analysis of Grammatical Functions / 9.7:
Main Functions / 9.7.1:
Extracting Other Groups / 9.7.2:
An Annotation Scheme for Groups in French / 9.8:
Application: The FASTUS System / 9.9:
The Message Understanding Conferences / 9.9.1:
The Syntactic Layers of the FASTUS System / 9.9.2:
Evaluation of Information Extraction Systems / 9.9.3:
Syntactic Formalisms / 9.10:
Introduction / 10.1:
Chomsky's Grammar in Syntactic Structures / 10.2:
Constituency: A Formal Definition / 10.2.1:
Transformations / 10.2.2:
Transformations and Movements / 10.2.3:
Gap Threading / 10.2.4:
Gap Threading to Parse Relative Clauses / 10.2.5:
Standardized Phrase Categories for English / 10.3:
Unification-Based Grammars / 10.4:
Representing Features in Prolog / 10.4.1:
A Formalism for Features and Rules / 10.4.3:
Features Organization / 10.4.4:
Features and Unification / 10.4.5:
A Unification Algorithm for Feature Structures / 10.4.6:
Dependency Grammars / 10.5:
Properties of a Dependency Graph / 10.5.1:
Valence / 10.5.3:
Dependencies and Functions / 10.5.4:
Parsing Techniques / 10.6:
Bottom-up Parsing / 11.1:
The Shift-Reduce Algorithm / 11.2.1:
Implementing Shift-Reduce Parsing in Prolog / 11.2.2:
Differences Between Bottom-up and Top-down Parsing / 11.2.3:
Chart Parsing / 11.3:
Backtracking and Efficiency / 11.3.1:
Structure of a Chart / 11.3.2:
The Active Chart / 11.3.3:
Modules of an Earley Parser / 11.3.4:
The Earley Algorithm in Prolog / 11.3.5:
The Earley Parser to Handle Left-Recursive Rules and Empty Symbols / 11.3.6:
Probabilistic Parsing of Context-Free Grammars / 11.4:
A Description of PCFGs / 11.5:
The Bottom-up Chart / 11.5.1:
The Cocke-Younger-Kasami Algorithm in Prolog / 11.5.2:
Adding Probabilities to the CYK Parser / 11.5.3:
Parser Evaluation / 11.6:
Constituency-Based Evaluation / 11.6.1:
Dependency-Based Evaluation / 11.6.2:
Performance of PCFG Parsing / 11.6.3:
Parsing Dependencies / 11.7:
Dependency Rules / 11.7.1:
Extending the Shift-Reduce Algorithm to Parse Dependencies / 11.7.2:
Nivre's Parser in Prolog / 11.7.3:
Finding Dependencies Using Constraints / 11.7.4:
Parsing Dependencies Using Statistical Techniques / 11.7.5:
Semantics and Predicate Logic / 11.8:
Language Meaning and Logic: An Illustrative Example / 12.1:
Formal Semantics / 12.3:
First-Order Predicate Calculus to Represent the State of Affairs / 12.4:
Variables and Constants / 12.4.1:
Predicates / 12.4.2:
Querying the Universe of Discourse / 12.5:
Mapping Phrases onto Logical Formulas / 12.6:
Representing Nouns and Adjectives / 12.6.1:
Representing Noun Groups / 12.6.2:
Representing Verbs and Prepositions / 12.6.3:
The Case of Determiners / 12.7:
Determiners and Logic Quantifiers / 12.7.1:
Translating Sentences Using Quantifiers / 12.7.2:
A General Representation of Sentences / 12.7.3:
Compositionality to Translate Phrases to Logical Forms / 12.8:
Translating the Noun Phrase / 12.8.1:
Translating the Verb Phrase / 12.8.2:
Augmenting the Database and Answering Questions / 12.9:
Declarations / 12.9.1:
Questions with Existential and Universal Quantifiers / 12.9.2:
Prolog and Unknown Predicates / 12.9.3:
Other Determiners and Questions / 12.9.4:
Application: The Spoken Language Translator / 12.10:
Translating Spoken Sentences / 12.10.1:
Compositional Semantics / 12.10.2:
Semantic Representation Transfer / 12.10.3:
Lexical Semantics / 12.11:
Beyond Formal Semantics / 13.1:
La langue et la parole / 13.1.1:
Language and the Structure of the World / 13.1.2:
Lexical Structures / 13.2:
Some Basic Terms and Concepts / 13.2.1:
Ontological Organization / 13.2.2:
Lexical Classes and Relations / 13.2.3:
Semantic Networks / 13.2.4:
Building a Lexicon / 13.3:
The Lexicon and Word Senses / 13.3.1:
Verb Models / 13.3.2:
Definitions / 13.3.3:
An Example of Exhaustive Lexical Organization: WordNet / 13.4:
Nouns / 13.4.1:
Adjectives / 13.4.2:
Verbs / 13.4.3:
Automatic Word Sense Disambiguation / 13.5:
Senses as Tags / 13.5.1:
Associating a Word with a Context / 13.5.2:
Guessing the Topic / 13.5.3:
Naive Bayes / 13.5.4:
Using Constraints on Verbs / 13.5.5:
Using Dictionary Definitions / 13.5.6:
An Unsupervised Algorithm to Tag Senses / 13.5.7:
Senses and Languages / 13.5.8:
Case Grammars / 13.6:
Cases in Latin / 13.6.1:
Cases and Thematic Roles / 13.6.2:
Parsing with Cases / 13.6.3:
Semantic Grammars / 13.6.4:
Extending Case Grammars / 13.7:
FrameNet / 13.7.1:
A Statistical Method to Identify Semantic Roles / 13.7.2:
An Example of Case Grammar Application: EVAR / 13.8:
EVAR's Ontology and Syntactic Classes / 13.8.1:
Cases in EVAR / 13.8.2:
Discourse / 13.9:
Discourse: A Minimalist Definition / 14.1:
A Description of Discourse / 14.2.1:
Discourse Entities / 14.2.2:
References: An Application-Oriented View / 14.3:
References and Noun Phrases / 14.3.1:
Finding Names - Proper Nouns / 14.3.2:
Coreference / 14.4:
Anaphora / 14.4.1:
Solving Coreferences in an Example / 14.4.2:
A Standard Coreference Annotation / 14.4.3:
References: A More Formal View / 14.5:
Generating Discourse Entities: The Existential Quantifier / 14.5.1:
Retrieving Discourse Entities: Definite Descriptions / 14.5.2:
Generating Discourse Entities: The Universal Quantifier / 14.5.3:
Centering: A Theory on Discourse Structure / 14.6:
Solving Coreferences / 14.7:
A Simplistic Method: Using Syntactic and Semantic Compatibility / 14.7.1:
Solving Coreferences with Shallow Grammatical Information / 14.7.2:
Salience in a Multimodal Context / 14.7.3:
Using a Machine-Learning Technique to Resolve Coreferences / 14.7.4:
More Complex Phenomena: Ellipses / 14.7.5:
Discourse and Rhetoric / 14.8:
Ancient Rhetoric: An Outline / 14.8.1:
Rhetorical Structure Theory / 14.8.2:
Types of Relations / 14.8.3:
Implementing Rhetorical Structure Theory / 14.8.4:
Events and Time / 14.9:
Events / 14.9.1:
Event Types / 14.9.2:
Temporal Representation of Events / 14.9.3:
Events and Tenses / 14.9.4:
TimeML, an Annotation Scheme for Time and Events / 14.10:
Dialogue / 14.11:
Why a Dialogue? / 15.1:
Simple Dialogue Systems / 15.3:
Dialogue Systems Based on Automata / 15.3.1:
Dialogue Modeling / 15.3.2:
Speech Acts: A Theory of Language Interaction / 15.4:
Speech Acts and Human-Machine Dialogue / 15.5:
Speech Acts as a Tagging Model / 15.5.1:
Speech Acts Tags Used in the SUNDIAL Project / 15.5.2:
Dialogue Parsing / 15.5.3:
Interpreting Speech Acts / 15.5.4:
EVAR: A Dialogue Application Using Speech Acts / 15.5.5:
Taking Beliefs and Intentions into Account / 15.6:
Representing Mental States / 15.6.1:
The STRIPS Planning Algorithm / 15.6.2:
Causality / 15.6.3:
An Introduction to Prolog / 15.7:
A Short Background / A.1:
Basic Features of Prolog / A.2:
Facts / A.2.1:
Terms / A.2.2:
Queries / A.2.3:
Logical Variables / A.2.4:
Shared Variables / A.2.5:
Data Types in Prolog / A.2.6:
Rules / A.2.7:
Running a Program / A.3:
Unification / A.4:
Substitution and Instances / A.4.1:
Terms and Unification / A.4.2:
The Herbrand Unification Algorithm / A.4.3:
Example / A.4.4:
The Occurs-Check / A.4.5:
Resolution / A.5:
Modus Ponens / A.5.1:
A Resolution Algorithm / A.5.2:
Derivation Trees and Backtracking / A.5.3:
Tracing and Debugging / A.6:
Cuts, Negation, and Related Predicates / A.7:
Cuts / A.7.1:
Negation / A.7.2:
The once/1 Predicate / A.7.3:
Lists / A.8:
Some List-Handling Predicates / A.9:
The member/2 Predicate / A.9.1:
The append/3 Predicate / A.9.2:
The delete/3 Predicate / A.9.3:
The intersection/3 Predicate / A.9.4:
The reverse/2 Predicate / A.9.5:
The Mode of an Argument / A.9.6:
Operators and Arithmetic / A.10:
Operators / A.10.1:
Arithmetic Operations / A.10.2:
Comparison Operators / A.10.3:
Lists and Arithmetic: The length/2 Predicate / A.10.4:
Lists and Comparison: The quicksort/2 Predicate / A.10.5:
Some Other Built-in Predicates / A.11:
Type Predicates / A.11.1:
Term Manipulation Predicates / A.11.2:
Handling Run-Time Errors and Exceptions / A.12:
Dynamically Accessing and Updating the Database / A.13:
Accessing a Clause: The clause/2 Predicate / A.13.1:
Dynamic and Static Predicates / A.13.2:
Adding a Clause: The asserta/1 and assertz/1 Predicates / A.13.3:
Removing Clauses: The retract/1 and abolish/2 Predicates / A.13.4:
Handling Unknown Predicates / A.13.5:
All-Solutions Predicates / A.14:
Fundamental Search Algorithms / A.15:
Representing the Graph / A.15.1:
Depth-First Search / A.15.2:
Breadth-First Search / A.15.3:
A* Search / A.15.4:
Input/Output / A.16:
Reading and Writing Characters with Edinburgh Prolog / A.16.1:
Reading and Writing Terms with Edinburgh Prolog / A.16.2:
Opening and Closing Files with Edinburgh Prolog / A.16.3:
Reading and Writing Characters with Standard Prolog / A.16.4:
Reading and Writing Terms with Standard Prolog / A.16.5:
Opening and Closing Files with Standard Prolog / A.16.6:
Writing Loops / A.16.7:
Developing Prolog Programs / A.17:
Presentation Style / A.17.1:
Improving Programs / A.17.2:
Index
References
15.

電子ブック

EB
Giovanni Pezzulo, Martin V. Butz, Cristiano Castelfranchi, Rino Falcone, Jörg Siekmann
出版情報: Springer eBooks Computer Science , Springer Berlin Heidelberg, 2008
所蔵情報: loading…
目次情報: 続きを見る
Theory / Part I:
Introduction: Anticipation in Natural and Artificial Cognition / Giovanni Pezzulo ; Martin V. Butz ; Cristiano Castelfranchi ; Rino Falcone / 1:
Introduction / 1.1:
The Path to Anticipatory Cognitive Systems / 1.2:
Symbolic Behavior, Representation-Less Behavior, and Their Merge to Anticipatory Behavior / 1.2.1:
The Power of Anticipation: From Reactivity to Proactivity / 1.2.2:
The Anticipatory Approach to Cognitive Systems / 1.2.3:
The Unitary Nature of Anticipation / 1.2.4:
Anticipation in Living Organisms / 1.3:
Anticipatory Natural Cognition / 1.3.1:
Anticipatory Codes in the Brain / 1.3.2:
Simulative Theories of Cognition, and Their Unifying Nature / 1.3.3:
Conclusions / 1.4:
The Anticipatory Approach: Definitions and Taxonomies / 2:
Anticipatory Systems, Anticipation, and Anticipatory Behavior / 2.1:
Prediction vs. Anticipation / 2.2:
Predictive Capabilities / 2.2.1:
Anticipatory Capabilities / 2.2.2:
Anticipation and Goal-Oriented Behavior / 2.3:
The Anticipatory Structure of Goal-Oriented Behavior / 2.3.1:
Not All Anticipatory Behavior Is Goal-Oriented / 2.3.2:
Which Anticipations Permit Goal-Oriented Action? / 2.3.3:
The Hierarchical Organization of Anticipatory Goal-Oriented Action / 2.3.4:
Additional Elements of True Goal-Oriented Behavior / 2.3.5:
Anticipation and Learning / 2.4:
Learning to Predict / 2.4.1:
Bootstrapping Autonomous Cognitive Development: Surprise and Curiosity / 2.4.2:
From Willed to Automatic Control of Action and Vice Versa on the Basis of Surprise / 2.4.3:
Benefits of Anticipations in Cognitive Agents / 2.5:
Potentials for Anticipatory Systems / 3.1:
Potential Benefits of Anticipatory Mechanisms on Cognitive Functions / 3.2:
Effective, Context-Based Action Initiation / 3.2.1:
Faster and Smoother Behavior Execution / 3.2.2:
Improving Top-Down Attention / 3.2.3:
Improving Information Seeking / 3.2.4:
Improving Decision Making / 3.2.5:
Object Grounding, Categorization, and Ontologies / 3.2.6:
Social Abilities / 3.2.7:
Learning / 3.2.8:
Arising Challenges Due to Anticipations and Avoiding Them / 3.3:
Conclusion / 3.4:
Models, Architectures, and Applications / Part II:
Anticipation in Attention / Christian Balkenius ; Alexander Forster ; Birger Johansson ; Vin Thorsteinsdottir / 4:
Learning What to Look at / 4.1:
A Learning Saliency Map / 4.2.1:
Cue-Target Learning / 4.3:
Cueing by a Single Stimulus / 4.3.1:
Contextual Cueing / 4.3.2:
Fovea Based Solution / 4.3.3:
Attending to Moving Targets / 4.4:
Models of Smooth Pursuit / 4.4.1:
Engineering Approaches / 4.4.2:
The State Based Approach / 4.4.3:
The Prediction Approach / 4.4.4:
The Fovea Based Approach / 4.4.5:
Combining Bottom-Up and Top-Down Processes / 4.5:
Anticipatory, Goal-Directed Behavior / Oliver Herbort / 5:
A Brief History of Schemas / 5.1:
Schema Approaches / 5.2:
Symbolic Schemas for Policy Learning / 5.2.1:
Symbolic Schemas and Prediction for Selection / 5.2.2:
Neural-Based Planning / 5.2.3:
Neural Network-Based Dynamic Programming / 5.2.4:
Inverse Model Approaches / 5.3:
Inverse Models in Motor Learning and Control / 5.3.1:
Inverse Models and Schema Approaches / 5.3.2:
Advanced Structures / 5.4:
Prediction and Action / 5.4.1:
Coupled Forward-Inverse Models / 5.4.2:
Hierarchical Anticipatory Systems / 5.4.3:
Evaluation of Predictive and Anticipatory Capabilities / 5.5:
Schema-Based Systems / 5.5.1:
Discussion / 5.5.2:
Contrasting Predictive System Capabilities / 5.6.1:
Contrasting Anticipatory System Capabilities / 5.6.2:
Integration / 5.6.3:
Anticipation and Believability / Carlos Martinho ; Ana Paiva / 5.7:
Animation and Believability / 6.1:
Emotion and Exaggeration / 6.1.2:
Anticipation / 6.1.3:
Anticipation, Emotion, and Believability / 6.1.4:
Related Work / 6.2:
Oz Project / 6.2.1:
EMA / 6.2.2:
Duncan the Highland Terrier / 6.2.3:
Emotivector / 6.3:
Architecture / 6.3.1:
Anticipation Model / 6.3.2:
Salience Model / 6.3.3:
Sensation Model / 6.3.4:
Selection Model / 6.3.5:
Uncertainty / 6.3.6:
Aini, the Synthetic Flower / 6.4:
Emotivectors in Action / 6.4.1:
Evaluation / 6.4.2:
iCat, the Affective Game Buddy / 6.5:
Emotivector Integration in Agent Architectures / 6.5.1:
Anticipation and Emotions for Goal Directed Agents / Emiliano Lorini ; Michele Piunti ; Maria Miceli / 6.7:
Related Works in Affective Computing / 7.1:
Expectations and Surprise / 7.3:
A Typology of Expectations and Predictions / 7.3.1:
From the Typology of Expectations to the Typology of Surprise / 7.3.2:
Roles of Surprise in Cognitive Processing / 7.3.3:
Expectations and Emotions for Goal-Directed Agents / 7.4:
Expectations and Decision Making / 7.4.1:
Situated Agents and Affective States / 7.4.2:
Confidence of Predictions and Modulation of the Probability Function / 7.4.3:
A Reinforcement-Learning Model of Top-Down Attention Based on a Potential-Action Map / Dimitri Ognibene ; Gianluca Baldassarre / 7.4.4:
Methods / 8.1:
RGB Camera Input / 8.2.1:
Saliency Map and Action Selection / 8.2.2:
Fovea / 8.2.3:
Periphery Map / 8.2.4:
Inhibition-of-Return Map / 8.2.5:
Potential Action Map / 8.2.6:
Actor-Critic Model / 8.2.7:
Parameter Settings / 8.2.8:
The Tasks / 8.2.9:
Results / 8.3:
Learning and Performance of the Models / 8.3.1:
Bottom-Up Attention: Periphery Map and Inhibition-of-Return Map / 8.3.2:
Analysis of the Vote Maps / 8.3.3:
Capability of Learning to Stay, and of Staying, on the Target / 8.3.4:
Potential Action Map: An Action-Oriented Memory of Cue Information / 8.3.5:
Potential Action Map: Capacity to Integrate Multiple Sources of Information / 8.3.6:
Anticipation by Analogy / Boicho Kokinov ; Maurice Grinberg ; Georgi Petkov ; Kiril Kiryazov / 8.4:
The Anticipation by Analogy Scenario / 9.1:
Models of Analogy-Making / 9.3:
AMBR Model of Analogy-Making / 9.4:
Integrating Visual Perception and Motor Control in AMBR / 9.5:
Top-Down Perception / 9.5.1:
Attention / 9.5.2:
Transfer of the Solution / 9.5.3:
Action Execution / 9.5.4:
Running the Simulated Model and Comparing It with Human Data / 9.6:
Comparing with Human Data / 9.6.1:
Running the Real Robot Model in the Real World / 9.7:
Ikaros / 9.7.1:
AMBR2Robot / 9.7.2:
Tests / 9.7.3:
Mechanisms for Active Vision / 9.8:
Discussion and Conclusion / 9.9:
Anticipation in Coordination / Emilian Lalev / 10:
The Prisoner's Dilemma Game / 10.1:
Related Research / 10.2:
Fictitious Play / 10.2.1:
Strategic Teaching and Reputation Formation / 10.2.2:
Social Order and Coordination / 10.2.3:
Anticipation and Information Processing in Societies / 10.2.4:
Agent Architecture and Decision Making Model / 10.3:
The Model / 10.3.1:
Judgment and Decision Making / 10.3.2:
Game Simulations with Individual Agents: Comparison with Experimental Results / 10.4:
Comparison of the Model with Experimental Results / 10.4.1:
Multi-Agent Simulations / 10.5:
Agent Societies / 10.5.1:
Simulation Results and Discussions / 10.5.2:
Endowing Artificial Systems with Anticipatory Capabilities: Success Cases / 10.6:
Flexible Goal-Directed Arm Control: The SURE_REACH Architecture / 11.1:
Learning Cognitive Maps for Anticipatory Control: Time Growing Neural Gas / 11.3:
Learning Effective Directional Arm Control: The Evolutionary System XCSF / 11.4:
Anticipatory Target Motion Prediction / 11.5:
Anticipatory Spatial Attention with Saliency Maps / 11.6:
Behavior Prediction in a Group of Robots / 11.7:
Enhanced Adaptivity in a Predator-Prey Scenario / 11.8:
Adaptive Navigation and Control with Anticipation / 11.9:
Mental Experiments for Selecting Actions / 11.10:
Anticipations for Believable Behavior / 11.11:
Anticipatory Behavior in a Searching-for-an-Object Task / 11.12:
The Role of Anticipation in Cooperation and Coordination / 11.13:
Anticipatory Effects of Expectations and Emotions / 11.14:
On-Line and Off-Line Anticipation for Action Control / 11.15:
References / 11.16:
16.

電子ブック

EB
Giovanni Pezzulo, Martin V. Butz, Cristiano Castelfranchi, Rino Falcone, Jörg Siekmann
出版情報: SpringerLink Books - AutoHoldings , Springer Berlin Heidelberg, 2008
所蔵情報: loading…
目次情報: 続きを見る
Theory / Part I:
Introduction: Anticipation in Natural and Artificial Cognition / Giovanni Pezzulo ; Martin V. Butz ; Cristiano Castelfranchi ; Rino Falcone / 1:
Introduction / 1.1:
The Path to Anticipatory Cognitive Systems / 1.2:
Symbolic Behavior, Representation-Less Behavior, and Their Merge to Anticipatory Behavior / 1.2.1:
The Power of Anticipation: From Reactivity to Proactivity / 1.2.2:
The Anticipatory Approach to Cognitive Systems / 1.2.3:
The Unitary Nature of Anticipation / 1.2.4:
Anticipation in Living Organisms / 1.3:
Anticipatory Natural Cognition / 1.3.1:
Anticipatory Codes in the Brain / 1.3.2:
Simulative Theories of Cognition, and Their Unifying Nature / 1.3.3:
Conclusions / 1.4:
The Anticipatory Approach: Definitions and Taxonomies / 2:
Anticipatory Systems, Anticipation, and Anticipatory Behavior / 2.1:
Prediction vs. Anticipation / 2.2:
Predictive Capabilities / 2.2.1:
Anticipatory Capabilities / 2.2.2:
Anticipation and Goal-Oriented Behavior / 2.3:
The Anticipatory Structure of Goal-Oriented Behavior / 2.3.1:
Not All Anticipatory Behavior Is Goal-Oriented / 2.3.2:
Which Anticipations Permit Goal-Oriented Action? / 2.3.3:
The Hierarchical Organization of Anticipatory Goal-Oriented Action / 2.3.4:
Additional Elements of True Goal-Oriented Behavior / 2.3.5:
Anticipation and Learning / 2.4:
Learning to Predict / 2.4.1:
Bootstrapping Autonomous Cognitive Development: Surprise and Curiosity / 2.4.2:
From Willed to Automatic Control of Action and Vice Versa on the Basis of Surprise / 2.4.3:
Benefits of Anticipations in Cognitive Agents / 2.5:
Potentials for Anticipatory Systems / 3.1:
Potential Benefits of Anticipatory Mechanisms on Cognitive Functions / 3.2:
Effective, Context-Based Action Initiation / 3.2.1:
Faster and Smoother Behavior Execution / 3.2.2:
Improving Top-Down Attention / 3.2.3:
Improving Information Seeking / 3.2.4:
Improving Decision Making / 3.2.5:
Object Grounding, Categorization, and Ontologies / 3.2.6:
Social Abilities / 3.2.7:
Learning / 3.2.8:
Arising Challenges Due to Anticipations and Avoiding Them / 3.3:
Conclusion / 3.4:
Models, Architectures, and Applications / Part II:
Anticipation in Attention / Christian Balkenius ; Alexander Forster ; Birger Johansson ; Vin Thorsteinsdottir / 4:
Learning What to Look at / 4.1:
A Learning Saliency Map / 4.2.1:
Cue-Target Learning / 4.3:
Cueing by a Single Stimulus / 4.3.1:
Contextual Cueing / 4.3.2:
Fovea Based Solution / 4.3.3:
Attending to Moving Targets / 4.4:
Models of Smooth Pursuit / 4.4.1:
Engineering Approaches / 4.4.2:
The State Based Approach / 4.4.3:
The Prediction Approach / 4.4.4:
The Fovea Based Approach / 4.4.5:
Combining Bottom-Up and Top-Down Processes / 4.5:
Anticipatory, Goal-Directed Behavior / Oliver Herbort / 5:
A Brief History of Schemas / 5.1:
Schema Approaches / 5.2:
Symbolic Schemas for Policy Learning / 5.2.1:
Symbolic Schemas and Prediction for Selection / 5.2.2:
Neural-Based Planning / 5.2.3:
Neural Network-Based Dynamic Programming / 5.2.4:
Inverse Model Approaches / 5.3:
Inverse Models in Motor Learning and Control / 5.3.1:
Inverse Models and Schema Approaches / 5.3.2:
Advanced Structures / 5.4:
Prediction and Action / 5.4.1:
Coupled Forward-Inverse Models / 5.4.2:
Hierarchical Anticipatory Systems / 5.4.3:
Evaluation of Predictive and Anticipatory Capabilities / 5.5:
Schema-Based Systems / 5.5.1:
Discussion / 5.5.2:
Contrasting Predictive System Capabilities / 5.6.1:
Contrasting Anticipatory System Capabilities / 5.6.2:
Integration / 5.6.3:
Anticipation and Believability / Carlos Martinho ; Ana Paiva / 5.7:
Animation and Believability / 6.1:
Emotion and Exaggeration / 6.1.2:
Anticipation / 6.1.3:
Anticipation, Emotion, and Believability / 6.1.4:
Related Work / 6.2:
Oz Project / 6.2.1:
EMA / 6.2.2:
Duncan the Highland Terrier / 6.2.3:
Emotivector / 6.3:
Architecture / 6.3.1:
Anticipation Model / 6.3.2:
Salience Model / 6.3.3:
Sensation Model / 6.3.4:
Selection Model / 6.3.5:
Uncertainty / 6.3.6:
Aini, the Synthetic Flower / 6.4:
Emotivectors in Action / 6.4.1:
Evaluation / 6.4.2:
iCat, the Affective Game Buddy / 6.5:
Emotivector Integration in Agent Architectures / 6.5.1:
Anticipation and Emotions for Goal Directed Agents / Emiliano Lorini ; Michele Piunti ; Maria Miceli / 6.7:
Related Works in Affective Computing / 7.1:
Expectations and Surprise / 7.3:
A Typology of Expectations and Predictions / 7.3.1:
From the Typology of Expectations to the Typology of Surprise / 7.3.2:
Roles of Surprise in Cognitive Processing / 7.3.3:
Expectations and Emotions for Goal-Directed Agents / 7.4:
Expectations and Decision Making / 7.4.1:
Situated Agents and Affective States / 7.4.2:
Confidence of Predictions and Modulation of the Probability Function / 7.4.3:
A Reinforcement-Learning Model of Top-Down Attention Based on a Potential-Action Map / Dimitri Ognibene ; Gianluca Baldassarre / 7.4.4:
Methods / 8.1:
RGB Camera Input / 8.2.1:
Saliency Map and Action Selection / 8.2.2:
Fovea / 8.2.3:
Periphery Map / 8.2.4:
Inhibition-of-Return Map / 8.2.5:
Potential Action Map / 8.2.6:
Actor-Critic Model / 8.2.7:
Parameter Settings / 8.2.8:
The Tasks / 8.2.9:
Results / 8.3:
Learning and Performance of the Models / 8.3.1:
Bottom-Up Attention: Periphery Map and Inhibition-of-Return Map / 8.3.2:
Analysis of the Vote Maps / 8.3.3:
Capability of Learning to Stay, and of Staying, on the Target / 8.3.4:
Potential Action Map: An Action-Oriented Memory of Cue Information / 8.3.5:
Potential Action Map: Capacity to Integrate Multiple Sources of Information / 8.3.6:
Anticipation by Analogy / Boicho Kokinov ; Maurice Grinberg ; Georgi Petkov ; Kiril Kiryazov / 8.4:
The Anticipation by Analogy Scenario / 9.1:
Models of Analogy-Making / 9.3:
AMBR Model of Analogy-Making / 9.4:
Integrating Visual Perception and Motor Control in AMBR / 9.5:
Top-Down Perception / 9.5.1:
Attention / 9.5.2:
Transfer of the Solution / 9.5.3:
Action Execution / 9.5.4:
Running the Simulated Model and Comparing It with Human Data / 9.6:
Comparing with Human Data / 9.6.1:
Running the Real Robot Model in the Real World / 9.7:
Ikaros / 9.7.1:
AMBR2Robot / 9.7.2:
Tests / 9.7.3:
Mechanisms for Active Vision / 9.8:
Discussion and Conclusion / 9.9:
Anticipation in Coordination / Emilian Lalev / 10:
The Prisoner's Dilemma Game / 10.1:
Related Research / 10.2:
Fictitious Play / 10.2.1:
Strategic Teaching and Reputation Formation / 10.2.2:
Social Order and Coordination / 10.2.3:
Anticipation and Information Processing in Societies / 10.2.4:
Agent Architecture and Decision Making Model / 10.3:
The Model / 10.3.1:
Judgment and Decision Making / 10.3.2:
Game Simulations with Individual Agents: Comparison with Experimental Results / 10.4:
Comparison of the Model with Experimental Results / 10.4.1:
Multi-Agent Simulations / 10.5:
Agent Societies / 10.5.1:
Simulation Results and Discussions / 10.5.2:
Endowing Artificial Systems with Anticipatory Capabilities: Success Cases / 10.6:
Flexible Goal-Directed Arm Control: The SURE_REACH Architecture / 11.1:
Learning Cognitive Maps for Anticipatory Control: Time Growing Neural Gas / 11.3:
Learning Effective Directional Arm Control: The Evolutionary System XCSF / 11.4:
Anticipatory Target Motion Prediction / 11.5:
Anticipatory Spatial Attention with Saliency Maps / 11.6:
Behavior Prediction in a Group of Robots / 11.7:
Enhanced Adaptivity in a Predator-Prey Scenario / 11.8:
Adaptive Navigation and Control with Anticipation / 11.9:
Mental Experiments for Selecting Actions / 11.10:
Anticipations for Believable Behavior / 11.11:
Anticipatory Behavior in a Searching-for-an-Object Task / 11.12:
The Role of Anticipation in Cooperation and Coordination / 11.13:
Anticipatory Effects of Expectations and Emotions / 11.14:
On-Line and Off-Line Anticipation for Action Control / 11.15:
References / 11.16:
17.

電子ブック

EB
Gaurav Sukhatme
出版情報: Springer eBooks Computer Science , Springer US, 2009
所蔵情報: loading…
目次情報: 続きを見る
Recent Research in Autonomous Robots / Part I:
Mobile Robots for Polar Remote Sensing / Christopher M. Gifford ; Eric L. Akers ; Richard S. Stansbury ; Arvin Agah / 1:
Introduction / 1.1:
Polar Mobile Robots / 1.2:
Challenges and Survivability Issues for Polar Robotics / 1.2.1:
MARVIN I / 1.2.2:
MARVIN II / 1.2.3:
Software Architecture / 1.2.4:
North Greenland Ice Core Project (GRIP) Camp Operations / 1.2.5:
Summit Camp Operations / 1.2.6:
West Antarctic Ice Sheet (WAIS) Divide Camp Operations / 1.2.7:
Robotics-Based Approaches to Seismic Surveying / 1.3:
Related Work / 1.3.1:
Robotics-Based Approaches / 1.3.2:
Conclusion / 1.4:
Guidance and Control of Formation Flying Spacecraft / F. Y. Hadaegh ; G. Singh ; B. Acikmese ; D. P. Scharf ; M. Mandic / 2:
Modeling and Simulation / 2.1:
Guidance and Control Architectures / 2.3:
Formation State Estimation / 2.4:
Guidance and Control / 2.5:
Formulation of Optimal Path Planning Problem / 2.5.1:
Conclusions / 2.6:
Acknowledgement / 2.7:
Adaptive Sampling for Field Reconstruction With Multiple Mobile Robots / Bin Zhang ; Gaurav S. Sukhatme / 3:
Adaptive Sampling / 3.1:
Divide and Conquer / 3.4:
Discretization / 3.4.1:
Graph Partition / 3.4.2:
Path Planning for a Single Robot / 3.4.3:
Simulations / 3.5:
Conclusion and Future Work / 3.6:
Grasping Affordances: Learning to Connect Vision to Hand Action / Charles de Granville ; Di Wang ; Joshua Southerland ; Robert Platt, Jr. ; Andrew H. Fagg / 4:
Learning Models of 3D Object Appearance / 4.1:
Edgel Constellations for Describing 2D Object Appearance / 4.2.1:
Capturing Object Appearance in 3D / 4.2.2:
Learning Complete 3D Appearance Models / 4.2.3:
Data Collection and Preprocessing / 4.2.4:
Experimental Results / 4.2.5:
Learning Canonical Grasps for Objects / 4.3:
Modeling Hand Orientation / 4.3.1:
Modeling Hand Position / 4.3.2:
Modeling Finger Posture / 4.3.3:
Modeling Mixtures of Hand Postures / 4.3.4:
Data Collection / 4.3.5:
Discussion / 4.3.6:
Intelligent Robotics for Assistive Healthcare and Therapy / Ayanna M. Howard ; Sekou Remy ; Chung Hyuk Park ; Hae Won Park ; Douglas Brooks / 5:
Activities of Daily Living: Robot Learning from Human Teleoperation / 5.1:
Divided Force Guidance for Haptic Feedback / 5.2.1:
Learning through Haptically Guided Manipulation / 5.2.2:
Experiments / 5.2.3:
Child Therapy and Education: Robots in Interactive Play Scenarios / 5.3:
Defining Play Primitives / 5.3.1:
Physical Therapy: Robot Assistance via Patient Observation / 5.3.2:
Learning of Exercise Primitives / 5.4.1:
Learning of Exercise Behaviors / 5.4.2:
A New Direction in Human-Robot Interaction: A Lesson from Star Wars? / Gerard Jounghyun Kim / 5.4.3:
Indirect Human-Robot Interaction / 6.1:
Robot location/pose tracking / 6.2.1:
User/environment sensing / 6.2.2:
Flexible projection / 6.2.3:
Large display surface centered interaction design / 6.2.4:
Summary and Postscript / 6.3:
Neurorobotics Primer / M. Anthony Lewis ; Theresa J. Klein / 7:
Neurorobots and the Scientific Method / 7.1:
21st Century Robotics: Productizing Mythology / 7.1.2:
Computational Substrate / 7.1.3:
Neuromorphic Chips / 7.1.4:
Graphics Processing Units / 7.1.5:
Purpose of this Chapter / 7.1.6:
Classical Robotics / 7.2:
Configuration Space / 7.2.1:
Kinematics / 7.2.2:
Differential Motion / 7.2.3:
Statics / 7.2.4:
Redundancy / 7.2.5:
Dynamics / 7.2.6:
Trajectory Generation / 7.2.7:
A Pause to Reflect / 7.2.8:
Basic Neurocomputation / 7.3:
Information Flows into Dendrites and Out of Axons / 7.3.1:
The Neuron Cell is a Capacitor with a Decision Making Capability / 7.3.2:
Neural Models Capture the Basic Dynamics of the Cell Body and Throw Away Some Details / 7.3.3:
Numerical Integration / 7.3.4:
Reflexes and High Level Control / 7.3.5:
Building Neural Oscillators: Nature's Coordination and Trajectory Generation Mechanism / 7.3.6:
Notable Systems / 7.4:
GPUs / 7.5:
Learning Inverse Dynamics by Gaussian Process Regression under the Multi-Task Learning Framework / Dit-Yan Yeung ; Yu Zhang / 7.6:
Appreciation and Dedication / 8.1:
Robotic Control / 8.2:
Kinematics and Dynamics / 8.2.1:
Reasons Against Analytic Solutions / 8.2.2:
Insights from Human Arm Control / 8.2.3:
Learning and Control / 8.2.4:
Learning Inverse Dynamics / 8.3:
Recent Work / 8.3.1:
Learning Inverse Dynamics as a Regression Problem / 8.3.2:
Gaussian Process Regression / 8.4:
Brief Review / 8.4.1:
Gaussian Process Regression for Learning Inverse Dynamics / 8.4.2:
Multi-Task Gaussian Process Regression / 8.5:
Brief Review of Bonilla et al.'s Method (33) / 8.5.1:
Multi-Task Gaussian Process Regression for Learning Inverse Dynamics / 8.5.2:
Tributes and Recollections from Former Students / 8.6:
Professor George Albert Bekey / 9:
Personal Life / 9.1:
Research / 9.2:
Teaching and Students / 9.3:
Service to the University and the Profession / 9.4:
Recognition, Honors, and Awards / 9.5:
A Personal Tribute / 9.6:
Current History of the Bekey Tribe / H. Pete Schmid ; Monte Ung / 10:
Recollections and Tributes / Dan Antonelli ; Arun Bhadoria ; Willis G. Downing, Jr. ; Huan Liu ; Michael Merritt ; L. Warren Morrison / 11:
From Aerospace Engineering to Biomedical Engineering / 11.1:
The Final Oral Examination / 11.2:
Recent Work on Preventing Fractures caused by a Fall / 11.3:
Teacher, Mentor, and Friend / 11.4:
A Testimonial / 11.5:
Making it Look Easy / 11.6:
Solving Complex Problems Efficiently / 11.7:
References
Index
18.

電子ブック

EB
Gaurav Sukhatme, Gaurav S. Sukhatme
出版情報: SpringerLink Books - AutoHoldings , Springer US, 2009
所蔵情報: loading…
目次情報: 続きを見る
Recent Research in Autonomous Robots / Part I:
Mobile Robots for Polar Remote Sensing / Christopher M. Gifford ; Eric L. Akers ; Richard S. Stansbury ; Arvin Agah / 1:
Introduction / 1.1:
Polar Mobile Robots / 1.2:
Challenges and Survivability Issues for Polar Robotics / 1.2.1:
MARVIN I / 1.2.2:
MARVIN II / 1.2.3:
Software Architecture / 1.2.4:
North Greenland Ice Core Project (GRIP) Camp Operations / 1.2.5:
Summit Camp Operations / 1.2.6:
West Antarctic Ice Sheet (WAIS) Divide Camp Operations / 1.2.7:
Robotics-Based Approaches to Seismic Surveying / 1.3:
Related Work / 1.3.1:
Robotics-Based Approaches / 1.3.2:
Conclusion / 1.4:
Guidance and Control of Formation Flying Spacecraft / F. Y. Hadaegh ; G. Singh ; B. Acikmese ; D. P. Scharf ; M. Mandic / 2:
Modeling and Simulation / 2.1:
Guidance and Control Architectures / 2.3:
Formation State Estimation / 2.4:
Guidance and Control / 2.5:
Formulation of Optimal Path Planning Problem / 2.5.1:
Conclusions / 2.6:
Acknowledgement / 2.7:
Adaptive Sampling for Field Reconstruction With Multiple Mobile Robots / Bin Zhang ; Gaurav S. Sukhatme / 3:
Adaptive Sampling / 3.1:
Divide and Conquer / 3.4:
Discretization / 3.4.1:
Graph Partition / 3.4.2:
Path Planning for a Single Robot / 3.4.3:
Simulations / 3.5:
Conclusion and Future Work / 3.6:
Grasping Affordances: Learning to Connect Vision to Hand Action / Charles de Granville ; Di Wang ; Joshua Southerland ; Robert Platt, Jr. ; Andrew H. Fagg / 4:
Learning Models of 3D Object Appearance / 4.1:
Edgel Constellations for Describing 2D Object Appearance / 4.2.1:
Capturing Object Appearance in 3D / 4.2.2:
Learning Complete 3D Appearance Models / 4.2.3:
Data Collection and Preprocessing / 4.2.4:
Experimental Results / 4.2.5:
Learning Canonical Grasps for Objects / 4.3:
Modeling Hand Orientation / 4.3.1:
Modeling Hand Position / 4.3.2:
Modeling Finger Posture / 4.3.3:
Modeling Mixtures of Hand Postures / 4.3.4:
Data Collection / 4.3.5:
Discussion / 4.3.6:
Intelligent Robotics for Assistive Healthcare and Therapy / Ayanna M. Howard ; Sekou Remy ; Chung Hyuk Park ; Hae Won Park ; Douglas Brooks / 5:
Activities of Daily Living: Robot Learning from Human Teleoperation / 5.1:
Divided Force Guidance for Haptic Feedback / 5.2.1:
Learning through Haptically Guided Manipulation / 5.2.2:
Experiments / 5.2.3:
Child Therapy and Education: Robots in Interactive Play Scenarios / 5.3:
Defining Play Primitives / 5.3.1:
Physical Therapy: Robot Assistance via Patient Observation / 5.3.2:
Learning of Exercise Primitives / 5.4.1:
Learning of Exercise Behaviors / 5.4.2:
A New Direction in Human-Robot Interaction: A Lesson from Star Wars? / Gerard Jounghyun Kim / 5.4.3:
Indirect Human-Robot Interaction / 6.1:
Robot location/pose tracking / 6.2.1:
User/environment sensing / 6.2.2:
Flexible projection / 6.2.3:
Large display surface centered interaction design / 6.2.4:
Summary and Postscript / 6.3:
Neurorobotics Primer / M. Anthony Lewis ; Theresa J. Klein / 7:
Neurorobots and the Scientific Method / 7.1:
21st Century Robotics: Productizing Mythology / 7.1.2:
Computational Substrate / 7.1.3:
Neuromorphic Chips / 7.1.4:
Graphics Processing Units / 7.1.5:
Purpose of this Chapter / 7.1.6:
Classical Robotics / 7.2:
Configuration Space / 7.2.1:
Kinematics / 7.2.2:
Differential Motion / 7.2.3:
Statics / 7.2.4:
Redundancy / 7.2.5:
Dynamics / 7.2.6:
Trajectory Generation / 7.2.7:
A Pause to Reflect / 7.2.8:
Basic Neurocomputation / 7.3:
Information Flows into Dendrites and Out of Axons / 7.3.1:
The Neuron Cell is a Capacitor with a Decision Making Capability / 7.3.2:
Neural Models Capture the Basic Dynamics of the Cell Body and Throw Away Some Details / 7.3.3:
Numerical Integration / 7.3.4:
Reflexes and High Level Control / 7.3.5:
Building Neural Oscillators: Nature's Coordination and Trajectory Generation Mechanism:
Notable Systems / 7.4:
GPUs / 7.5:
Learning Inverse Dynamics by Gaussian Process Regression under the Multi-Task Learning Framework / Dit-Yan Yeung ; Yu Zhang / 7.6:
Appreciation and Dedication / 8.1:
Kinematics and Dynamics / 8.2:
Robotic Control:
Reasons Against Analytic Solutions / 8.2.2:
Insights from Human Arm Control / 8.2.3:
Learning and Control / 8.2.4:
Learning Inverse Dynamics / 8.3:
Recent Work / 8.3.1:
Learning Inverse Dynamics as a Regression Problem / 8.3.2:
Gaussian Process Regression / 8.4:
Brief Review / 8.4.1:
Gaussian Process Regression for Learning Inverse Dynamics / 8.4.2:
Multi-Task Gaussian Process Regression / 8.5:
Brief Review of Bonilla et al.'s Method (33) / 8.5.1:
Multi-Task Gaussian Process Regression for Learning Inverse Dynamics / 8.5.2:
Tributes and Recollections from Former Students / 8.6:
Professor George Albert Bekey / 9:
Personal Life / 9.1:
Research / 9.2:
Teaching and Students / 9.3:
Service to the University and the Profession / 9.4:
Recognition, Honors, and Awards / 9.5:
A Personal Tribute / 9.6:
Current History of the Bekey Tribe / H. Pete Schmid ; Monte Ung / 10:
Recollections and Tributes / Dan Antonelli ; Arun Bhadoria ; Willis G. Downing, Jr. ; Huan Liu ; Michael Merritt ; L. Warren Morrison / 11:
From Aerospace Engineering to Biomedical Engineering / 11.1:
The Final Oral Examination / 11.2:
Recent Work on Preventing Fractures caused by a Fall / 11.3:
Teacher, Mentor, and Friend / 11.4:
A Testimonial / 11.5:
Making it Look Easy / 11.6:
Solving Complex Problems Efficiently / 11.7:
References
Index
19.

eBook

EB
Muddassar Farooq, Th Bäck, A. E. Eiben, G. Rozenberg
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2009
Holdings info: loading…
Table of contents:
Introduction / 1:
Motivation of the Work / 1.1:
Problem Statement / 1.2:
Hypotheses / 1.2.1:
An Engineering Approach to Nature-Inspired Routing Protocols / 1.3:
The Scientific Contributions of the Work / 1.4:
A Simple, Distributed, Decentralized Multi-Agent System / 1.4.1:
A Comprehensive Routing System / 1.4.2:
An Empirical Comprehensive Performance Evaluation Framework / 1.4.3:
A Scalability Framework for (Nature-Inspired) Agent-Based Routing Protocols / 1.4.4:
Protocol Engineering of Nature-Inspired Routing Protocols / 1.4.5:
A Nature-Inspired Linux Router / 1.4.6:
The Protocol Validation Framework / 1.4.7:
The Formal Framework for Nature-Inspired Protocols / 1.4.8:
A Simple, Efficient, and Scalable Nature-Inspired Security Framework / 1.4.9:
Emerging Mobile and Wireless Sensors Ad Hoc Networks / 1.4.10:
Organization of the Book / 1.5:
A Comprehensive Survey of Nature-Inspired Routing Protocols / 2:
Organization of the Chapter / 2.1:
Network Routing Algorithms / 2.2:
Features Landscape of a Modern Routing Algorithm / 2.2.1:
Taxonomy of Routing Algorithms / 2.2.2:
Ant Colony Optimization (ACO) Routing Algorithms for Fixed Networks / 2.3:
Important Elements of ACO in Routing / 2.3.1:
Ant-Based Control (ABC) for Circuit-Switched Networks / 2.3.2:
Ant-Based Control (ABC) for Packet-Switched Networks / 2.3.3:
AntNet / 2.3.4:
Ant Colony Routing (ACR) and AntNet+SELA QoS-Aware Routing / 2.3.5:
A Brief History of Research in AntNet / 2.3.6:
Evolutionary Routing Algorithms for Fixed Networks / 2.4:
Important Elements of EA in Routing / 2.4.1:
GARA / 2.4.2:
ASGA and SynthECA / 2.4.3:
DGA / 2.4.4:
Related Work on Routing Algorithms for Fixed Networks / 2.5:
Artificial Intelligence Community / 2.5.1:
Networking Community / 2.5.2:
Summary / 2.6:
From The Wisdom of the Hive to Routing in Telecommunication Networks / 3:
An Agent-Based Investigation of a Honeybee Colony / 3.1:
Labor Management / 3.2.1:
The Communication Network of a Honeybee Colony / 3.2.2:
Reinforcement Learning / 3.2.3:
Distributed Coordination and Planning / 3.2.4:
Energy-Efficient Foraging / 3.2.5:
Stochastic Selection of Flower Sites / 3.2.6:
Group Organization / 3.2.7:
BeeHive: The Mapping of Concepts from Nature to Networks / 3.3:
The Bee Agent Model / 3.4:
Estimation Model of Agents / 3.4.1:
Goodness of a Neighbor / 3.4.2:
Communication Paradigm of Agents / 3.4.3:
Packet-Switching Algorithm / 3.4.4:
BeeHive Algorithm / 3.5:
The Performance Evaluation Framework for Nature-Inspired Routing Algorithms / 3.6:
Routing Algorithms Used for Comparison / 3.7:
OSPF / 3.7.1:
Daemon / 3.7.4:
Simulation Environment for BeeHive / 3.8:
simpleNet / 3.8.1:
NTTNet / 3.8.2:
Node150 / 3.8.3:
Discussion of the Results from the Experiments / 3.9:
Congestion Avoidance Behavior / 3.9.1:
Queue Management Behavior / 3.9.2:
Hot Spots / 3.9.3:
Router Crash Experiments / 3.9.4:
Bursty Traffic Generator / 3.9.5:
Sessionless Network Traffic / 3.9.6:
Size of Routing Table / 3.9.7:
A Scalability Framework for Nature-Inspired Routing Algorithms / 3.10:
Existing Work on Scalability Analysis / 4.1:
The Scalability Model for a Routing Algorithm / 4.1.2:
Cost Model / 4.2.1:
Power Model of an Algorithm / 4.2.2:
Scalability Metric for a Routing Algorithm / 4.2.3:
Simulation Environment for Scalability Analysis / 4.3:
Node350 / 4.3.1:
Node650 / 4.3.5:
Node1050 / 4.3.6:
Throughput and Packet Delivery Ratio / 4.4:
Packet Delay / 4.4.2:
Control Overhead and Suboptimal Overhead / 4.4.3:
Agent and Packet Processing Complexity / 4.4.4:
Routing Table Size / 4.4.5:
Investigation of the Behavior of AntNet / 4.4.6:
Towards an Empirically Founded Scalability Model for Routing Protocols / 4.5:
Scalability Matrix and Scalability Analysis / 4.5.1:
Scalability Analysis of BeeHive / 4.5.2:
Scalability Analysis of AntNet / 4.5.3:
Scalability Analysis of OSPF / 4.5.4:
BeeHive in Real Networks of Linux Routers / 4.6:
Engineering of Nature-Inspired Routing Protocols / 5.1:
Structural Design of a Routing Framework / 5.2.1:
Structural Semantics of the Network Stack / 5.2.2:
System Design Issues / 5.2.3:
Natural Routing Framework: Design and Implementation / 5.3:
Algorithm-Independent Framework / 5.3.1:
Algorithm-Dependent BeeHive Module / 5.3.2:
Protocol Verification Framework / 5.4:
The Motivation Behind the Design and Structure of Experiments / 5.5:
Quantum Traffic Engineering / 5.6:
Real-World Applications Traffic Engineering / 5.6.2:
Hybrid Traffic Engineering / 5.6.3:
A Formal Framework for Analyzing the Behavior of BeeHive / 5.7:
Goodness / 6.1:
Analytical Model / 6.3:
Node Traffic / 6.3.1:
Link Flows / 6.3.2:
Calculation of Delays / 6.3.3:
Throughput / 6.3.4:
Empirical Verification of the Formal Model / 6.4:
Example 1 / 6.4.1:
Example 2 / 6.4.2:
An Efficient Nature-Inspired Security Framework for BeeHive / 6.5:
Robustness and Security Analysis of a Routing Protocol / 7.1:
Security Threats to Nature-Inspired Routing Protocols / 7.2.1:
Existing Works on Security of Routing Protocols / 7.2.2:
BeeHiveGuard: A Digital Signature-Based Security Framework / 7.3:
Agent Integrity / 7.3.1:
Routing Information Integrity / 7.3.2:
Architecture of BeeHiveGuard / 7.3.3:
BeeHiveAIS: an Immune-Inspired Security Framework for BeeHive / 7.4:
Artificial Immune Systems (AISs) / 7.4.1:
Behavioral Analysis of BeeHive for Designing an AIS / 7.4.2:
The AIS Model of BeeHiveAIS / 7.4.3:
Top-Level BeeHiveAIS / 7.4.4:
Simulation Models of Our Security Frameworks / 7.5:
Attack Scenarios on Simple Topologies / 7.5.1:
Analysis of Attacks and Effectiveness of Security Frameworks / 7.5.2:
Bee-Inspired Routing Protocols for Mobile Ad Hoc and Sensor Networks / 7.5.3:
Existing Works on Nature-Inspired MANET Routing Protocols / 8.1:
Bee Agent Model / 8.1.2:
Packers / 8.2.1:
Scouts / 8.2.2:
Foragers / 8.2.3:
Beeswarm / 8.2.4:
Architecture of BeeAdHoc / 8.3:
Packing Floor / 8.3.1:
Entrance / 8.3.2:
Dance Floor / 8.3.3:
Simulation Framework / 8.4:
Metrics / 8.4.1:
Node Mobility Behavior / 8.4.2:
BeeAdHoc in Real-World MANETs / 8.5:
A Performance Evaluation Framework for Real MANETs in Linux / 8.5.1:
Results of Experiments / 8.6:
Security Threats in BeeAdHoc / 8.7:
Challenges for Routing Protocols in Ad Hoc Sensor Networks / 8.8:
Existing Works on Routing Protocols for Wireless Sensor Networks / 8.8.1:
BeeSensor: Architecture and Working / 8.9:
BeeSensor Agent's Model / 8.9.1:
Protocol Description / 8.9.2:
A Performance Evaluation Framework for Nature-Inspired Routing Protocols for WSNs / 8.10:
Results / 8.10.1:
Conclusion and Future Work / 8.12:
Conclusion / 9.1:
Future Research / 9.2:
Quality of Service (QoS) Routing / 9.2.1:
Cyclic Paths / 9.2.2:
Intelligent and Knowledgeable Network Engineering / 9.2.3:
Bee Colony Metaheuristic / 9.2.4:
Natural Engineering: The Need for a Distinct Discipline / 9.3:
References
Index
20.

eBook

EB
Muddassar Farooq, Th Bäck, A. E. Eiben, G. Rozenberg
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2009
Holdings info: loading…
Table of contents:
Introduction / 1:
Motivation of the Work / 1.1:
Problem Statement / 1.2:
Hypotheses / 1.2.1:
An Engineering Approach to Nature-Inspired Routing Protocols / 1.3:
The Scientific Contributions of the Work / 1.4:
A Simple, Distributed, Decentralized Multi-Agent System / 1.4.1:
A Comprehensive Routing System / 1.4.2:
An Empirical Comprehensive Performance Evaluation Framework / 1.4.3:
A Scalability Framework for (Nature-Inspired) Agent-Based Routing Protocols / 1.4.4:
Protocol Engineering of Nature-Inspired Routing Protocols / 1.4.5:
A Nature-Inspired Linux Router / 1.4.6:
The Protocol Validation Framework / 1.4.7:
The Formal Framework for Nature-Inspired Protocols / 1.4.8:
A Simple, Efficient, and Scalable Nature-Inspired Security Framework / 1.4.9:
Emerging Mobile and Wireless Sensors Ad Hoc Networks / 1.4.10:
Organization of the Book / 1.5:
A Comprehensive Survey of Nature-Inspired Routing Protocols / 2:
Organization of the Chapter / 2.1:
Network Routing Algorithms / 2.2:
Features Landscape of a Modern Routing Algorithm / 2.2.1:
Taxonomy of Routing Algorithms / 2.2.2:
Ant Colony Optimization (ACO) Routing Algorithms for Fixed Networks / 2.3:
Important Elements of ACO in Routing / 2.3.1:
Ant-Based Control (ABC) for Circuit-Switched Networks / 2.3.2:
Ant-Based Control (ABC) for Packet-Switched Networks / 2.3.3:
AntNet / 2.3.4:
Ant Colony Routing (ACR) and AntNet+SELA QoS-Aware Routing / 2.3.5:
A Brief History of Research in AntNet / 2.3.6:
Evolutionary Routing Algorithms for Fixed Networks / 2.4:
Important Elements of EA in Routing / 2.4.1:
GARA / 2.4.2:
ASGA and SynthECA / 2.4.3:
DGA / 2.4.4:
Related Work on Routing Algorithms for Fixed Networks / 2.5:
Artificial Intelligence Community / 2.5.1:
Networking Community / 2.5.2:
Summary / 2.6:
From The Wisdom of the Hive to Routing in Telecommunication Networks / 3:
An Agent-Based Investigation of a Honeybee Colony / 3.1:
Labor Management / 3.2.1:
The Communication Network of a Honeybee Colony / 3.2.2:
Reinforcement Learning / 3.2.3:
Distributed Coordination and Planning / 3.2.4:
Energy-Efficient Foraging / 3.2.5:
Stochastic Selection of Flower Sites / 3.2.6:
Group Organization / 3.2.7:
BeeHive: The Mapping of Concepts from Nature to Networks / 3.3:
The Bee Agent Model / 3.4:
Estimation Model of Agents / 3.4.1:
Goodness of a Neighbor / 3.4.2:
Communication Paradigm of Agents / 3.4.3:
Packet-Switching Algorithm / 3.4.4:
BeeHive Algorithm / 3.5:
The Performance Evaluation Framework for Nature-Inspired Routing Algorithms / 3.6:
Routing Algorithms Used for Comparison / 3.7:
OSPF / 3.7.1:
Daemon / 3.7.4:
Simulation Environment for BeeHive / 3.8:
simpleNet / 3.8.1:
NTTNet / 3.8.2:
Node150 / 3.8.3:
Discussion of the Results from the Experiments / 3.9:
Congestion Avoidance Behavior / 3.9.1:
Queue Management Behavior / 3.9.2:
Hot Spots / 3.9.3:
Router Crash Experiments / 3.9.4:
Bursty Traffic Generator / 3.9.5:
Sessionless Network Traffic / 3.9.6:
Size of Routing Table / 3.9.7:
A Scalability Framework for Nature-Inspired Routing Algorithms / 3.10:
Existing Work on Scalability Analysis / 4.1:
The Scalability Model for a Routing Algorithm / 4.1.2:
Cost Model / 4.2.1:
Power Model of an Algorithm / 4.2.2:
Scalability Metric for a Routing Algorithm / 4.2.3:
Simulation Environment for Scalability Analysis / 4.3:
Node350 / 4.3.1:
Node650 / 4.3.5:
Node1050 / 4.3.6:
Throughput and Packet Delivery Ratio / 4.4:
Packet Delay / 4.4.2:
Control Overhead and Suboptimal Overhead / 4.4.3:
Agent and Packet Processing Complexity / 4.4.4:
Routing Table Size / 4.4.5:
Investigation of the Behavior of AntNet / 4.4.6:
Towards an Empirically Founded Scalability Model for Routing Protocols / 4.5:
Scalability Matrix and Scalability Analysis / 4.5.1:
Scalability Analysis of BeeHive / 4.5.2:
Scalability Analysis of AntNet / 4.5.3:
Scalability Analysis of OSPF / 4.5.4:
BeeHive in Real Networks of Linux Routers / 4.6:
Engineering of Nature-Inspired Routing Protocols / 5.1:
Structural Design of a Routing Framework / 5.2.1:
Structural Semantics of the Network Stack / 5.2.2:
System Design Issues / 5.2.3:
Natural Routing Framework: Design and Implementation / 5.3:
Algorithm-Independent Framework / 5.3.1:
Algorithm-Dependent BeeHive Module / 5.3.2:
Protocol Verification Framework / 5.4:
The Motivation Behind the Design and Structure of Experiments / 5.5:
Quantum Traffic Engineering / 5.6:
Real-World Applications Traffic Engineering / 5.6.2:
Hybrid Traffic Engineering / 5.6.3:
A Formal Framework for Analyzing the Behavior of BeeHive / 5.7:
Goodness / 6.1:
Analytical Model / 6.3:
Node Traffic / 6.3.1:
Link Flows / 6.3.2:
Calculation of Delays / 6.3.3:
Throughput / 6.3.4:
Empirical Verification of the Formal Model / 6.4:
Example 1 / 6.4.1:
Example 2 / 6.4.2:
An Efficient Nature-Inspired Security Framework for BeeHive / 6.5:
Robustness and Security Analysis of a Routing Protocol / 7.1:
Security Threats to Nature-Inspired Routing Protocols / 7.2.1:
Existing Works on Security of Routing Protocols / 7.2.2:
BeeHiveGuard: A Digital Signature-Based Security Framework / 7.3:
Agent Integrity / 7.3.1:
Routing Information Integrity / 7.3.2:
Architecture of BeeHiveGuard / 7.3.3:
BeeHiveAIS: an Immune-Inspired Security Framework for BeeHive / 7.4:
Artificial Immune Systems (AISs) / 7.4.1:
Behavioral Analysis of BeeHive for Designing an AIS / 7.4.2:
The AIS Model of BeeHiveAIS / 7.4.3:
Top-Level BeeHiveAIS / 7.4.4:
Simulation Models of Our Security Frameworks / 7.5:
Attack Scenarios on Simple Topologies / 7.5.1:
Analysis of Attacks and Effectiveness of Security Frameworks / 7.5.2:
Bee-Inspired Routing Protocols for Mobile Ad Hoc and Sensor Networks / 7.5.3:
Existing Works on Nature-Inspired MANET Routing Protocols / 8.1:
Bee Agent Model / 8.1.2:
Packers / 8.2.1:
Scouts / 8.2.2:
Foragers / 8.2.3:
Beeswarm / 8.2.4:
Architecture of BeeAdHoc / 8.3:
Packing Floor / 8.3.1:
Entrance / 8.3.2:
Dance Floor / 8.3.3:
Simulation Framework / 8.4:
Metrics / 8.4.1:
Node Mobility Behavior / 8.4.2:
BeeAdHoc in Real-World MANETs / 8.5:
A Performance Evaluation Framework for Real MANETs in Linux / 8.5.1:
Results of Experiments / 8.6:
Security Threats in BeeAdHoc / 8.7:
Challenges for Routing Protocols in Ad Hoc Sensor Networks / 8.8:
Existing Works on Routing Protocols for Wireless Sensor Networks / 8.8.1:
BeeSensor: Architecture and Working / 8.9:
BeeSensor Agent's Model / 8.9.1:
Protocol Description / 8.9.2:
A Performance Evaluation Framework for Nature-Inspired Routing Protocols for WSNs / 8.10:
Results / 8.10.1:
Conclusion and Future Work / 8.12:
Conclusion / 9.1:
Future Research / 9.2:
Quality of Service (QoS) Routing / 9.2.1:
Cyclic Paths / 9.2.2:
Intelligent and Knowledgeable Network Engineering / 9.2.3:
Bee Colony Metaheuristic / 9.2.4:
Natural Engineering: The Need for a Distinct Discipline / 9.3:
References
Index
21.

eBook

EB
James P. Gunderson, Louise F. Gunderson, Unspecified
Publication info: Springer eBooks Computer Science, Springer US, 2009
Holdings info: loading…
Table of contents:
Introduction / 1:
Bridging the Gap / 1.1:
Bidirectional Mapping / 1.1.1:
Reification and Preafference in Biological Entities / 1.2:
More Advanced Brains / 1.3:
What This Book Is and What It Is Not / 1.4:
Structure of the Book / 1.5:
A Note on Typefaces and Terminology / 1.6:
Anthropomorphization / 1.6.1:
Some background material on probability and biology / 2:
Layout / 2.1:
Probability in the Real World / 2.2:
Why a Biologically Principled Argument? / 2.3:
Biological Principles / 2.3.1:
What Is a Biologically Principled Argument? / 2.4:
Biology Is an Observational Science / 2.4.1:
Life Has Structure / 2.4.2:
The Theory of Evolution Explains the Observed Diversity of Life / 2.4.3:
So Why Is Our Model Biologically Principled? / 2.5:
Why Not Just Use Expected Value? / 2.5.1:
Using Cognition and Physiology to Build a Cognitive Model / 3:
Reification in Biological Entities / 3.1:
Recognition / 3.1.1:
Preafference / 3.1.2:
Biological Storage / 3.2:
Explicit Memory / 3.2.1:
Emotion / 3.3:
Emotion as mediator / 3.3.1:
Representation / 4:
Representing Features of the World / 4.1:
Representing Goals / 4.2:
Representing Actions in the World / 4.3:
Enabling Conditions / 4.3.1:
Outcomes / 4.3.2:
Representing Likelihoods / 4.3.3:
Exogenous Events / 4.4:
Perception/Action System / 5:
Robot as Perception/Action System / 5.1:
Robot as Body / 5.1.1:
Robot as Sensor / 5.1.2:
Robot as Agent of Change / 5.1.3:
Low Level Control Loop - Procedural Memory / 5.1.4:
System Safety and Routine Actions / 5.1.5:
Examples of Perception/Action Systems / 5.2:
Fred - a simple test robot / 5.2.1:
Basil / 5.2.2:
Summary of Perception/Action Systems / 5.3:
Design of a Reification Engine / 6:
Model Selection Criteria / 6.1:
Judgment Analysis / 6.2:
Designing the Reification Engine / 6.3:
Bridging the Sensor to Symbol Gap / 7:
Supporting Bidirectional Mapping / 7.1:
A Third Approach / 7.1.1:
Reification Architecture / 7.2:
PerCepts and Reification / 7.3:
PerCept Data / 7.3.1:
PerCept Function / 7.3.2:
Mental Model / 7.4:
Current World State / 7.5:
Reification functionality / 7.6:
Initialization / 7.6.1:
Mapping the World onto its Model - Recognition / 7.6.2:
Projecting the Model onto the World - Preafference / 7.6.3:
Updating the Current World State / 7.6.4:
Wrapping Up Reification / 7.7:
Working Memory and the Construction of Personal Experiences / 8:
Transient Memory / 8.1:
Working Memory and the Current World State / 8.1.1:
Internal State / 8.1.2:
Episodic Memory / 8.2:
Emotive Tags / 8.2.1:
Memory Services / 8.3:
Providing Memory Services to the Reification Process / 8.4:
Memory, What Was That Again? / 8.5:
Semantic Memory and the Personal Rough Ontology / 9:
Semantic Memory / 9.1:
What is a Personal Rough Ontology? / 9.1.1:
Building Semantic Memory / 9.2:
Structure of the Ontology / 9.2.1:
The nodes in the multi-graph / 9.2.2:
Relationships, the Edges of the Graph / 9.2.3:
A Note on Representing Probabilities / 9.2.4:
Persistent Storage in the Personal Rough Ontology / 9.3:
Transient versus Persistent Knowledge / 9.4:
Extracting Problems for the Deliberative System / 9.5:
Focusing Attention by Finding Sub-Ontologies / 9.6:
Weighted Transitivity / 9.6.1:
Deliberative System / 10:
Deliberation / 10.1:
Reasoning About the Present / 10.2:
Sense-Symbols from the Reification Engine / 10.2.1:
Symbols from the Ontology / 10.2.2:
Reasoning with WorldSets / 10.2.3:
Choosing the Future / 10.3:
Planning as Search / 10.3.1:
Adapting to Failure / 10.3.2:
Plan Evaluation and Selection / 10.4:
Acquiring Distributions / 10.4.1:
Simulator Fidelity / 10.4.2:
Summary / 10.5:
Putting it All Together / 11:
How it Fits Together / 11.1:
Goals and Environment / 11.2:
Knowledge Sources / 11.3:
Ontological Knowledge / 11.3.1:
Reification Knowledge / 11.3.2:
Perception/Action Knowledge / 11.3.3:
The process / 11.4:
Perception/Action / 11.4.1:
Reification / 11.4.2:
Execution / 11.4.3:
Execution, Reification and Action / 11.4.4:
Perception/Action - Reflex / 11.4.6:
Execution Failure / 11.4.7:
Back Up to Deliberation / 11.4.8:
Procedural Memory and Localization / 11.4.9:
A Few Notes About the General Flow / 11.5:
Testing / 12:
Testing the Robot, or How Does One Test an Embedded System? / 12.1:
eXtreme Programming / 12.2:
Methodology for Testing Embodied Systems / 12.3:
Benefits of Partitioning the Tests / 12.3.1:
General Testing Guidelines / 12.4:
General Partitioning Guidelines / 12.4.1:
Testing in the lab / 12.5:
Hardware / 12.5.1:
Static Tests / 12.5.2:
Dynamic tests / 12.5.3:
Formal System Tests - Testing In The Real World / 12.6:
Testing Recognition / 12.6.1:
Testing Preafference / 12.6.2:
Testing Self-Localization / 12.6.3:
Where do we go from here / 12.7:
A Stopping Point / 13.1:
Next Steps / 13.2:
Adding Learning to the Model / 13.2.1:
Adding Additional Data Sources / 13.2.2:
Porting the Brain into New Bodies / 13.2.3:
Glossary
References
Index
22.

eBook

EB
Malte Helmert, Jörg Siekmann
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Holdings info: loading…
Table of contents:
Planning Benchmarks / Part I:
The Role of Benchmarks / 1:
Evaluating Planner Performance / 1.1:
Worst-Case Evaluation / 1.1.1:
Average-Case Evaluation / 1.1.2:
Planning Benchmarks Are Important / 1.2:
Theoretical Analyses of Planning Benchmarks / 1.3:
Why Theoretical Analyses Are Useful / 1.3.1:
Published Results on Benchmark Complexity / 1.3.2:
Standard Benchmarks / 1.4:
Summary and Overview / 1.5:
Defining Planning Domains / 2:
Optimization Problems / 2.1:
Minimization Problems / 2.1.1:
Approximation Algorithms / 2.1.2:
Approximation Classes / 2.1.3:
Reductions / 2.1.4:
Formalizing Planning Domains / 2.2:
General Results and Reductions / 2.3:
Upper Bounds / 2.3.1:
Shortest Plan Length / 2.3.2:
Approximation Classes of Limited Interest / 2.3.3:
Relating Planning and (Bounded) Plan Existence / 2.3.4:
Generalization and Specialization / 2.3.5:
The Benchmark Suite / 3:
Defining the Competition Domains / 3.1:
IPC1 Domains / 3.2:
IPC2 Domains / 3.2.2:
IPC3 Domains / 3.2.3:
IPC4 Domains / 3.2.4:
Domains and Domain Families / 3.3:
Transportation and Route Planning / 4:
Transport and Route / 4.1:
The Transport Domain / 4.1.1:
The Route Domain / 4.1.2:
Special Cases and Hierarchy / 4.1.3:
General Results / 4.2:
Plan Existence / 4.3:
Hardness of Optimization / 4.4:
Constant Factor Approximation / 4.5:
Hardness of Constant Factor Approximation / 4.6:
Summary / 4.7:
Beyond Transport and Route / 4.8:
IPC Domains: Transportation and Route Planning / 5:
Gripper / 5.1:
Mystery and Mystery Prime / 5.2:
Logistics / 5.3:
Zenotravel / 5.4:
Depots / 5.5:
Miconic-10 / 5.6:
Rovers / 5.7:
Grid / 5.8:
Driverlog / 5.9:
Airport / 5.10:
IPC Domains: Others / 5.11:
Assembly / 6.1:
Blocksworld / 6.2:
Freecell / 6.3:
Movie / 6.4:
Pipesworld / 6.5:
Promela / 6.6:
PSR / 6.7:
Satellite / 6.8:
Schedule / 6.9:
Conclusions / 6.10:
Ten Conclusions / 7.1:
Going Further / 7.2:
Fast Downward / Part II:
Solving Planning Tasks Hierarchically / 8:
Introduction / 8.1:
Related Work / 8.2:
Causal Graphs and Abstraction / 8.2.1:
Causal Graphs and Unary STRIPS Operators / 8.2.2:
Multi-Valued Planning Tasks / 8.2.3:
Architecture and Overview / 8.3:
Translation / 9:
PDDL and Multi-valued Planning Tasks / 9.1:
Translation Overview / 9.2:
Normalization / 9.3:
Compiling Away Types / 9.3.1:
Simplifying Conditions / 9.3.2:
Simplifying Effects / 9.3.3:
Normalization Result / 9.3.4:
Invariant Synthesis / 9.4:
Initial Candidates / 9.4.1:
Proving Invariance / 9.4.2:
Refining Failed Candidates / 9.4.3:
Examples / 9.4.4:
Grounding / 9.4.5:
Overview of Horn Exploration / 9.5.1:
Generating the Logic Program / 9.5.2:
Translating the Logic Program to Normal Form / 9.5.3:
Computing the Canonical Model / 9.5.4:
Axiom and Operator Instantiation / 9.5.5:
Multi-valued Planning Task Generation / 9.6:
Variable Selection / 9.6.1:
Converting the Initial State / 9.6.2:
Converting Operator Effects / 9.6.3:
Converting Conditions / 9.6.4:
Computing Axiom Layers / 9.6.5:
Generating the Output / 9.6.6:
Performance Notes / 9.7:
Relative Performance Compared to MIPS Translator / 9.7.1:
Absolute Performance / 9.7.2:
Knowledge Compilation / 10:
Overview / 10.1:
Domain Transition Graphs / 10.2:
Causal Graphs / 10.3:
Acyclic Causal Graphs / 10.3.1:
Generating and Pruning Causal Graphs / 10.3.2:
Causal Graph Examples / 10.3.3:
Successor Generators and Axiom Evaluators / 10.4:
Successor Generators / 10.4.1:
Axiom Evaluators / 10.4.2:
Search / 11:
The Causal Graph Heuristic / 11.1:
Conceptual View of the Causal Graph Heuristic / 11.2.1:
Computation of the Causal Graph Heuristic / 11.2.2:
States with Infinite Heuristic Value / 11.2.3:
Helpful Transitions / 11.2.4:
The FF Heuristic / 11.3:
Greedy Best-First Search in Fast Downward / 11.4:
Preferred Operators / 11.4.1:
Deferred Heuristic Evaluation / 11.4.2:
Multi-heuristic Best-First Search / 11.5:
Focused Iterative-Broadening Search / 11.6:
Experiments / 12:
Experiment Design / 12.1:
Benchmark Set / 12.1.1:
Experiment Setup / 12.1.2:
Translation and Knowledge Compilation vs. Search / 12.1.3:
Strips Domains from IPC1-3 / 12.2:
ADL Domains from IPC1-3 / 12.3:
Domains from IPC4 / 12.4:
Conclusions from the Experiment / 12.5:
Discussion / 13:
Major Contributors / 13.1:
Multi-valued Representations / 13.2.1:
Task Decomposition Heuristics / 13.2.2:
Minor Contributions / 13.3:
References / 13.4:
Index
23.

eBook

EB
James P. Gunderson, Louise F. Gunderson, Unspecified
Publication info: SpringerLink Books - AutoHoldings, Springer US, 2009
Holdings info: loading…
Table of contents:
Introduction / 1:
Bridging the Gap / 1.1:
Bidirectional Mapping / 1.1.1:
Reification and Preafference in Biological Entities / 1.2:
More Advanced Brains / 1.3:
What This Book Is and What It Is Not / 1.4:
Structure of the Book / 1.5:
A Note on Typefaces and Terminology / 1.6:
Anthropomorphization / 1.6.1:
Some background material on probability and biology / 2:
Layout / 2.1:
Probability in the Real World / 2.2:
Why a Biologically Principled Argument? / 2.3:
Biological Principles / 2.3.1:
What Is a Biologically Principled Argument? / 2.4:
Biology Is an Observational Science / 2.4.1:
Life Has Structure / 2.4.2:
The Theory of Evolution Explains the Observed Diversity of Life / 2.4.3:
So Why Is Our Model Biologically Principled? / 2.5:
Why Not Just Use Expected Value? / 2.5.1:
Using Cognition and Physiology to Build a Cognitive Model / 3:
Reification in Biological Entities / 3.1:
Recognition / 3.1.1:
Preafference / 3.1.2:
Biological Storage / 3.2:
Explicit Memory / 3.2.1:
Emotion / 3.3:
Emotion as mediator / 3.3.1:
Representation / 4:
Representing Features of the World / 4.1:
Representing Goals / 4.2:
Representing Actions in the World / 4.3:
Enabling Conditions / 4.3.1:
Outcomes / 4.3.2:
Representing Likelihoods / 4.3.3:
Exogenous Events / 4.4:
Perception/Action System / 5:
Robot as Perception/Action System / 5.1:
Robot as Body / 5.1.1:
Robot as Sensor / 5.1.2:
Robot as Agent of Change / 5.1.3:
Low Level Control Loop - Procedural Memory / 5.1.4:
System Safety and Routine Actions / 5.1.5:
Examples of Perception/Action Systems / 5.2:
Fred - a simple test robot / 5.2.1:
Basil / 5.2.2:
Summary of Perception/Action Systems / 5.3:
Design of a Reification Engine / 6:
Model Selection Criteria / 6.1:
Judgment Analysis / 6.2:
Designing the Reification Engine / 6.3:
Bridging the Sensor to Symbol Gap / 7:
Supporting Bidirectional Mapping / 7.1:
A Third Approach / 7.1.1:
Reification Architecture / 7.2:
PerCepts and Reification / 7.3:
PerCept Data / 7.3.1:
PerCept Function / 7.3.2:
Mental Model / 7.4:
Current World State / 7.5:
Reification functionality / 7.6:
Initialization / 7.6.1:
Mapping the World onto its Model - Recognition / 7.6.2:
Projecting the Model onto the World - Preafference / 7.6.3:
Updating the Current World State / 7.6.4:
Wrapping Up Reification / 7.7:
Working Memory and the Construction of Personal Experiences / 8:
Transient Memory / 8.1:
Working Memory and the Current World State / 8.1.1:
Internal State / 8.1.2:
Episodic Memory / 8.2:
Emotive Tags / 8.2.1:
Memory Services / 8.3:
Providing Memory Services to the Reification Process / 8.4:
Memory, What Was That Again? / 8.5:
Semantic Memory and the Personal Rough Ontology / 9:
Semantic Memory / 9.1:
What is a Personal Rough Ontology? / 9.1.1:
Building Semantic Memory / 9.2:
Structure of the Ontology / 9.2.1:
The nodes in the multi-graph / 9.2.2:
Relationships, the Edges of the Graph / 9.2.3:
A Note on Representing Probabilities / 9.2.4:
Persistent Storage in the Personal Rough Ontology / 9.3:
Transient versus Persistent Knowledge / 9.4:
Extracting Problems for the Deliberative System / 9.5:
Focusing Attention by Finding Sub-Ontologies / 9.6:
Weighted Transitivity / 9.6.1:
Deliberative System / 10:
Deliberation / 10.1:
Reasoning About the Present / 10.2:
Sense-Symbols from the Reification Engine / 10.2.1:
Symbols from the Ontology / 10.2.2:
Reasoning with WorldSets / 10.2.3:
Choosing the Future / 10.3:
Planning as Search / 10.3.1:
Adapting to Failure / 10.3.2:
Plan Evaluation and Selection / 10.4:
Acquiring Distributions / 10.4.1:
Simulator Fidelity / 10.4.2:
Summary / 10.5:
Putting it All Together / 11:
How it Fits Together / 11.1:
Goals and Environment / 11.2:
Knowledge Sources / 11.3:
Ontological Knowledge / 11.3.1:
Reification Knowledge / 11.3.2:
Perception/Action Knowledge / 11.3.3:
The process / 11.4:
Perception/Action / 11.4.1:
Reification / 11.4.2:
Execution / 11.4.3:
Execution, Reification and Action / 11.4.4:
Perception/Action - Reflex / 11.4.6:
Execution Failure / 11.4.7:
Back Up to Deliberation / 11.4.8:
Procedural Memory and Localization / 11.4.9:
A Few Notes About the General Flow / 11.5:
Testing / 12:
Testing the Robot, or How Does One Test an Embedded System? / 12.1:
eXtreme Programming / 12.2:
Methodology for Testing Embodied Systems / 12.3:
Benefits of Partitioning the Tests / 12.3.1:
General Testing Guidelines / 12.4:
General Partitioning Guidelines / 12.4.1:
Testing in the lab / 12.5:
Hardware / 12.5.1:
Static Tests / 12.5.2:
Dynamic tests / 12.5.3:
Formal System Tests - Testing In The Real World / 12.6:
Testing Recognition / 12.6.1:
Testing Preafference / 12.6.2:
Testing Self-Localization / 12.6.3:
Where do we go from here / 12.7:
A Stopping Point / 13.1:
Next Steps / 13.2:
Adding Learning to the Model / 13.2.1:
Adding Additional Data Sources / 13.2.2:
Porting the Brain into New Bodies / 13.2.3:
Glossary
References
Index
24.

eBook

EB
Malte Helmert, Jörg Siekmann
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Holdings info: loading…
Table of contents:
Planning Benchmarks / Part I:
The Role of Benchmarks / 1:
Evaluating Planner Performance / 1.1:
Worst-Case Evaluation / 1.1.1:
Average-Case Evaluation / 1.1.2:
Planning Benchmarks Are Important / 1.2:
Theoretical Analyses of Planning Benchmarks / 1.3:
Why Theoretical Analyses Are Useful / 1.3.1:
Published Results on Benchmark Complexity / 1.3.2:
Standard Benchmarks / 1.4:
Summary and Overview / 1.5:
Defining Planning Domains / 2:
Optimization Problems / 2.1:
Minimization Problems / 2.1.1:
Approximation Algorithms / 2.1.2:
Approximation Classes / 2.1.3:
Reductions / 2.1.4:
Formalizing Planning Domains / 2.2:
General Results and Reductions / 2.3:
Upper Bounds / 2.3.1:
Shortest Plan Length / 2.3.2:
Approximation Classes of Limited Interest / 2.3.3:
Relating Planning and (Bounded) Plan Existence / 2.3.4:
Generalization and Specialization / 2.3.5:
The Benchmark Suite / 3:
Defining the Competition Domains / 3.1:
IPC1 Domains / 3.2:
IPC2 Domains / 3.2.2:
IPC3 Domains / 3.2.3:
IPC4 Domains / 3.2.4:
Domains and Domain Families / 3.3:
Transportation and Route Planning / 4:
Transport and Route / 4.1:
The Transport Domain / 4.1.1:
The Route Domain / 4.1.2:
Special Cases and Hierarchy / 4.1.3:
General Results / 4.2:
Plan Existence / 4.3:
Hardness of Optimization / 4.4:
Constant Factor Approximation / 4.5:
Hardness of Constant Factor Approximation / 4.6:
Summary / 4.7:
Beyond Transport and Route / 4.8:
IPC Domains: Transportation and Route Planning / 5:
Gripper / 5.1:
Mystery and Mystery Prime / 5.2:
Logistics / 5.3:
Zenotravel / 5.4:
Depots / 5.5:
Miconic-10 / 5.6:
Rovers / 5.7:
Grid / 5.8:
Driverlog / 5.9:
Airport / 5.10:
IPC Domains: Others / 5.11:
Assembly / 6.1:
Blocksworld / 6.2:
Freecell / 6.3:
Movie / 6.4:
Pipesworld / 6.5:
Promela / 6.6:
PSR / 6.7:
Satellite / 6.8:
Schedule / 6.9:
Conclusions / 6.10:
Ten Conclusions / 7.1:
Going Further / 7.2:
Fast Downward / Part II:
Solving Planning Tasks Hierarchically / 8:
Introduction / 8.1:
Related Work / 8.2:
Causal Graphs and Abstraction / 8.2.1:
Causal Graphs and Unary STRIPS Operators / 8.2.2:
Multi-Valued Planning Tasks / 8.2.3:
Architecture and Overview / 8.3:
Translation / 9:
PDDL and Multi-valued Planning Tasks / 9.1:
Translation Overview / 9.2:
Normalization / 9.3:
Compiling Away Types / 9.3.1:
Simplifying Conditions / 9.3.2:
Simplifying Effects / 9.3.3:
Normalization Result / 9.3.4:
Invariant Synthesis / 9.4:
Initial Candidates / 9.4.1:
Proving Invariance / 9.4.2:
Refining Failed Candidates / 9.4.3:
Examples / 9.4.4:
Grounding / 9.4.5:
Overview of Horn Exploration / 9.5.1:
Generating the Logic Program / 9.5.2:
Translating the Logic Program to Normal Form / 9.5.3:
Computing the Canonical Model / 9.5.4:
Axiom and Operator Instantiation / 9.5.5:
Multi-valued Planning Task Generation / 9.6:
Variable Selection / 9.6.1:
Converting the Initial State / 9.6.2:
Converting Operator Effects / 9.6.3:
Converting Conditions / 9.6.4:
Computing Axiom Layers / 9.6.5:
Generating the Output / 9.6.6:
Performance Notes / 9.7:
Relative Performance Compared to MIPS Translator / 9.7.1:
Absolute Performance / 9.7.2:
Knowledge Compilation / 10:
Overview / 10.1:
Domain Transition Graphs / 10.2:
Causal Graphs / 10.3:
Acyclic Causal Graphs / 10.3.1:
Generating and Pruning Causal Graphs / 10.3.2:
Causal Graph Examples / 10.3.3:
Successor Generators and Axiom Evaluators / 10.4:
Successor Generators / 10.4.1:
Axiom Evaluators / 10.4.2:
Search / 11:
The Causal Graph Heuristic / 11.1:
Conceptual View of the Causal Graph Heuristic / 11.2.1:
Computation of the Causal Graph Heuristic / 11.2.2:
States with Infinite Heuristic Value / 11.2.3:
Helpful Transitions / 11.2.4:
The FF Heuristic / 11.3:
Greedy Best-First Search in Fast Downward / 11.4:
Preferred Operators / 11.4.1:
Deferred Heuristic Evaluation / 11.4.2:
Multi-heuristic Best-First Search / 11.5:
Focused Iterative-Broadening Search / 11.6:
Experiments / 12:
Experiment Design / 12.1:
Benchmark Set / 12.1.1:
Experiment Setup / 12.1.2:
Translation and Knowledge Compilation vs. Search / 12.1.3:
Strips Domains from IPC1-3 / 12.2:
ADL Domains from IPC1-3 / 12.3:
Domains from IPC4 / 12.4:
Conclusions from the Experiment / 12.5:
Discussion / 13:
Major Contributors / 13.1:
Multi-valued Representations / 13.2.1:
Task Decomposition Heuristics / 13.2.2:
Minor Contributions / 13.3:
References / 13.4:
Index
25.

eBook

EB
Eyke Hüllermeier, Eyke Hüllermeier
Publication info: Springer eBooks Computer Science, Springer Netherlands, 2007
Holdings info: loading…
Table of contents:
Dedication
Foreword
Preface
Notation
Introduction / 1:
Similarity and case-based reasoning / 1.1:
Objective of this book / 1.2:
Making case-based inference more reliable / 1.2.1:
The important role of models / 1.2.2:
Formal models of case-based inference / 1.2.3:
Overview / 1.3:
Similarity and Case-Based Inference / 2:
Model-based and instance-based approaches / 2.1:
Model-based approaches / 2.1.1:
Instance-based approaches / 2.1.2:
Knowledge representation / 2.1.3:
Performance in generalization / 2.1.4:
Computational complexity / 2.1.5:
Similarity-based methods / 2.2:
Nearest neighbor (NN) estimation / 2.2.1:
Instance-based learning / 2.2.2:
Case-based reasoning / 2.2.3:
The concept of similarity / 2.3:
Similarity in case-based reasoning / 2.3.1:
Similarity and fuzzy sets / 2.3.2:
Aggregation of local similarity measures / 2.3.3:
Case-based inference / 2.4:
Deterministic inference problems / 2.4.1:
Non-deterministic inference problems / 2.4.2:
Summary and remarks / 2.4.3:
Constraint-Based Modeling of Case-Based Inference / 3:
Basic concepts / 3.1:
Similarity profiles and hypotheses / 3.1.1:
Generalized similarity profiles / 3.1.2:
Constraint-based inference / 3.2:
A constraint-based inference scheme / 3.2.1:
Non-deterministic problems / 3.2.2:
Case-based approximation / 3.3:
Properties of case-based approximation / 3.3.1:
Local similarity profiles / 3.3.2:
Learning similarity hypotheses / 3.4:
The learning task / 3.4.1:
A learning algorithm / 3.4.2:
Properties of case-based learning / 3.4.3:
Experimental results / 3.4.4:
Application to statistical inference / 3.5:
Case-based parameter estimation / 3.5.1:
Case-based prior elicitation / 3.5.2:
Probabilistic Modeling of Case-Based Inference / 3.6:
Basic probabilistic concepts / 4.1:
Probabilistic similarity profiles and hypotheses / 4.1.1:
Generalized probabilistic profiles / 4.1.2:
Case-based inference, probabilistic reasoning, and statistical inference / 4.2:
Learning probabilistic similarity hypotheses / 4.3:
Simple hypotheses and credible case-based inference / 4.3.1:
Extended case-based learning / 4.3.2:
Experiments with regression and label ranking / 4.4:
Regression: artificial data / 4.4.1:
Regression: real-world data / 4.4.2:
Label ranking / 4.4.3:
Case-based inference as evidential reasoning / 4.5:
Transformation of probabilistic evidence / 4.5.1:
Inference from individual cases / 4.5.2:
Combining evidence from several cases / 4.5.3:
Assessment of cases / 4.6:
Similarity-weighted approximation / 4.6.1:
More general criteria / 4.6.2:
Assessment of individual cases / 4.6.3:
Complex similarity hypotheses / 4.7:
Inference schemes of higher order / 4.7.1:
Partially admissible profiles / 4.7.2:
Approximate probabilistic inference / 4.8:
Generalized uncertainty measures and profiles / 4.8.1:
An approximate inference scheme / 4.8.2:
Fuzzy Set-Based Modeling of Case-Based Inference I / 4.9:
Background on possibility theory / 5.1:
Possibility distributions as generalized constraints / 5.1.1:
Possibility as evidential support / 5.1.2:
Fuzzy rule-based modeling of the CBI hypothesis / 5.2:
Possibility rules / 5.2.1:
Modeling the CBI hypothesis / 5.2.2:
Generalized possibilistic prediction / 5.3:
Control of compensation and accumulation of support / 5.3.1:
Possibilistic support and weighted NN estimation / 5.3.2:
Upper and lower possibility bounds / 5.3.3:
Fuzzy logical evaluation / 5.3.4:
Comparison of extrapolation principles / 5.3.5:
From predictions to decisions / 5.3.6:
An illustrative example / 5.3.7:
Complexity issues / 5.3.8:
Extensions of the basic model / 5.4:
Dealing with incomplete information / 5.4.1:
Discounting noisy and atypical instances / 5.4.2:
From instances to rules / 5.4.3:
Modified possibility rules / 5.4.4:
Combination of several rules / 5.4.5:
Locally restricted extrapolation / 5.4.6:
Incorporation of background knowledge / 5.4.7:
Experimental studies / 5.5:
Preliminaries / 5.5.1:
Classification accuracy / 5.5.2:
Statistical assumptions and robustness / 5.5.3:
Variation of the aggregation operator / 5.5.4:
Representation of uncertainty / 5.5.5:
Calibration of CBI models / 5.6:
Relations to other fields / 5.7:
Fuzzy and possibilistic data analysis / 5.7.1:
Fuzzy set-based approximate reasoning / 5.7.2:
Fuzzy Set-Based Modeling of Case-Based Inference II / 5.8:
Gradual inference rules / 6.1:
The basic model / 6.1.1:
Modification of gradual rules / 6.1.2:
Certainty rules / 6.2:
Cases as information sources / 6.3:
A probabilistic model / 6.3.1:
Combination of information sources / 6.3.2:
Exceptionality and assessment of cases / 6.4:
Local rules / 6.5:
Case-Based Decision Making / 6.6:
Case-based decision theory / 7.1:
Nearest Neighbor decisions / 7.2:
Nearest Neighbor classification and decision making / 7.2.1:
Nearest Neighbor decision rules / 7.2.2:
An axiomatic characterization / 7.2.3:
Fuzzy modeling of case-based decisions / 7.3:
Basic measures for act evaluation / 7.3.1:
Modification of the basic measures / 7.3.2:
Interpretation of the decision criteria / 7.3.3:
Fuzzy quantification in act evaluation / 7.4:
A CBI framework of CBDM / 7.5:
Generalized decision-theoretic setups / 7.5.1:
Decision making using belief functions / 7.5.2:
Possibilistic decision making / 7.5.3:
CBDM models: A discussion of selected issues / 7.6:
The relation between similarity, preference, and belief / 7.6.1:
The effect of observed cases / 7.6.2:
Dynamic aspects of decision making / 7.6.3:
Experience-based decision making / 7.7:
Compiled decision models / 7.7.1:
Satisficing decision trees / 7.7.2:
Experimental evaluation / 7.7.3:
Conclusions and Outlook / 7.8:
Possibilistic Dominance in Qualitative Decisions / A:
Implication-Based Fuzzy Rules as Randomized Gradual Rules / B:
Implication-based fuzzy rules / B.1:
Gradual rules / B.1.1:
Other implication-based rules / B.1.2:
Randomized gradual rules / B.2:
A probabilistic representation of implication-based fuzzy rules / B.3:
Similarity-Based Reasoning as Logical Inference / C:
Simulation Results of Section 3.4.4 / D:
Experimental Results of Section 5.5.4 / E:
Simulation Results of Section 7.4 / F:
Computation of an Extended Splitting Measure / G:
Experimental Results of Section 7.7.2 / H:
References
Index
26.

eBook

EB
Eyke Hüllermeier, Eyke Hüllermeier
Publication info: SpringerLink Books - AutoHoldings, Springer Netherlands, 2007
Holdings info: loading…
Table of contents:
Dedication
Foreword
Preface
Notation
Introduction / 1:
Similarity and case-based reasoning / 1.1:
Objective of this book / 1.2:
Making case-based inference more reliable / 1.2.1:
The important role of models / 1.2.2:
Formal models of case-based inference / 1.2.3:
Overview / 1.3:
Similarity and Case-Based Inference / 2:
Model-based and instance-based approaches / 2.1:
Model-based approaches / 2.1.1:
Instance-based approaches / 2.1.2:
Knowledge representation / 2.1.3:
Performance in generalization / 2.1.4:
Computational complexity / 2.1.5:
Similarity-based methods / 2.2:
Nearest neighbor (NN) estimation / 2.2.1:
Instance-based learning / 2.2.2:
Case-based reasoning / 2.2.3:
The concept of similarity / 2.3:
Similarity in case-based reasoning / 2.3.1:
Similarity and fuzzy sets / 2.3.2:
Aggregation of local similarity measures / 2.3.3:
Case-based inference / 2.4:
Deterministic inference problems / 2.4.1:
Non-deterministic inference problems / 2.4.2:
Summary and remarks / 2.4.3:
Constraint-Based Modeling of Case-Based Inference / 3:
Basic concepts / 3.1:
Similarity profiles and hypotheses / 3.1.1:
Generalized similarity profiles / 3.1.2:
Constraint-based inference / 3.2:
A constraint-based inference scheme / 3.2.1:
Non-deterministic problems / 3.2.2:
Case-based approximation / 3.3:
Properties of case-based approximation / 3.3.1:
Local similarity profiles / 3.3.2:
Learning similarity hypotheses / 3.4:
The learning task / 3.4.1:
A learning algorithm / 3.4.2:
Properties of case-based learning / 3.4.3:
Experimental results / 3.4.4:
Application to statistical inference / 3.5:
Case-based parameter estimation / 3.5.1:
Case-based prior elicitation / 3.5.2:
Probabilistic Modeling of Case-Based Inference / 3.6:
Basic probabilistic concepts / 4.1:
Probabilistic similarity profiles and hypotheses / 4.1.1:
Generalized probabilistic profiles / 4.1.2:
Case-based inference, probabilistic reasoning, and statistical inference / 4.2:
Learning probabilistic similarity hypotheses / 4.3:
Simple hypotheses and credible case-based inference / 4.3.1:
Extended case-based learning / 4.3.2:
Experiments with regression and label ranking / 4.4:
Regression: artificial data / 4.4.1:
Regression: real-world data / 4.4.2:
Label ranking / 4.4.3:
Case-based inference as evidential reasoning / 4.5:
Transformation of probabilistic evidence / 4.5.1:
Inference from individual cases / 4.5.2:
Combining evidence from several cases / 4.5.3:
Assessment of cases / 4.6:
Similarity-weighted approximation / 4.6.1:
More general criteria / 4.6.2:
Assessment of individual cases / 4.6.3:
Complex similarity hypotheses / 4.7:
Inference schemes of higher order / 4.7.1:
Partially admissible profiles / 4.7.2:
Approximate probabilistic inference / 4.8:
Generalized uncertainty measures and profiles / 4.8.1:
An approximate inference scheme / 4.8.2:
Fuzzy Set-Based Modeling of Case-Based Inference I / 4.9:
Background on possibility theory / 5.1:
Possibility distributions as generalized constraints / 5.1.1:
Possibility as evidential support / 5.1.2:
Fuzzy rule-based modeling of the CBI hypothesis / 5.2:
Possibility rules / 5.2.1:
Modeling the CBI hypothesis / 5.2.2:
Generalized possibilistic prediction / 5.3:
Control of compensation and accumulation of support / 5.3.1:
Possibilistic support and weighted NN estimation / 5.3.2:
Upper and lower possibility bounds / 5.3.3:
Fuzzy logical evaluation / 5.3.4:
Comparison of extrapolation principles / 5.3.5:
From predictions to decisions / 5.3.6:
An illustrative example / 5.3.7:
Complexity issues / 5.3.8:
Extensions of the basic model / 5.4:
Dealing with incomplete information / 5.4.1:
Discounting noisy and atypical instances / 5.4.2:
From instances to rules / 5.4.3:
Modified possibility rules / 5.4.4:
Combination of several rules / 5.4.5:
Locally restricted extrapolation / 5.4.6:
Incorporation of background knowledge / 5.4.7:
Experimental studies / 5.5:
Preliminaries / 5.5.1:
Classification accuracy / 5.5.2:
Statistical assumptions and robustness / 5.5.3:
Variation of the aggregation operator / 5.5.4:
Representation of uncertainty / 5.5.5:
Calibration of CBI models / 5.6:
Relations to other fields / 5.7:
Fuzzy and possibilistic data analysis / 5.7.1:
Fuzzy set-based approximate reasoning / 5.7.2:
Fuzzy Set-Based Modeling of Case-Based Inference II / 5.8:
Gradual inference rules / 6.1:
The basic model / 6.1.1:
Modification of gradual rules / 6.1.2:
Certainty rules / 6.2:
Cases as information sources / 6.3:
A probabilistic model / 6.3.1:
Combination of information sources / 6.3.2:
Exceptionality and assessment of cases / 6.4:
Local rules / 6.5:
Case-Based Decision Making / 6.6:
Case-based decision theory / 7.1:
Nearest Neighbor decisions / 7.2:
Nearest Neighbor classification and decision making / 7.2.1:
Nearest Neighbor decision rules / 7.2.2:
An axiomatic characterization / 7.2.3:
Fuzzy modeling of case-based decisions / 7.3:
Basic measures for act evaluation / 7.3.1:
Modification of the basic measures / 7.3.2:
Interpretation of the decision criteria / 7.3.3:
Fuzzy quantification in act evaluation / 7.4:
A CBI framework of CBDM / 7.5:
Generalized decision-theoretic setups / 7.5.1:
Decision making using belief functions / 7.5.2:
Possibilistic decision making / 7.5.3:
CBDM models: A discussion of selected issues / 7.6:
The relation between similarity, preference, and belief / 7.6.1:
The effect of observed cases / 7.6.2:
Dynamic aspects of decision making / 7.6.3:
Experience-based decision making / 7.7:
Compiled decision models / 7.7.1:
Satisficing decision trees / 7.7.2:
Experimental evaluation / 7.7.3:
Conclusions and Outlook / 7.8:
Possibilistic Dominance in Qualitative Decisions / A:
Implication-Based Fuzzy Rules as Randomized Gradual Rules / B:
Implication-based fuzzy rules / B.1:
Gradual rules / B.1.1:
Other implication-based rules / B.1.2:
Randomized gradual rules / B.2:
A probabilistic representation of implication-based fuzzy rules / B.3:
Similarity-Based Reasoning as Logical Inference / C:
Simulation Results of Section 3.4.4 / D:
Experimental Results of Section 5.5.4 / E:
Simulation Results of Section 7.4 / F:
Computation of an Extended Splitting Measure / G:
Experimental Results of Section 7.7.2 / H:
References
Index
27.

eBook

EB
Raymond S. T. Lee, Toru Ishida, Nicholas R. Jennings
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2006
Holdings info: loading…
Table of contents:
Introduction / 1:
The Coming of the Age of Intelligent Agents / 1.1:
The Structure of This Book / 1.2:
Outline of Each Chapter / 1.3:
Readers of This Book / 1.4:
Concluding Remarks / 1.5:
Concepts and Theories / Part I:
The Search for Human Intelligence / 2:
What Is Intelligence? / 2.1:
The Philosophical View on Intelligence / 2.2:
Introduction - The Search for Intelligence and Ultimate Knowledge / 2.2.1:
The Traditional Philosophical View of Knowledge - Belief, Truth and Justification / 2.2.2:
Rationalistic Versus Empiristic View of Knowledge / 2.2.3:
Kant's Critique of Pure Reason and the Theory of Knowledge / 2.2.4:
Russell's View of Knowledge / 2.2.5:
Krishnamurti's The Awakening of Intelligence - Thought Versus Intelligence / 2.2.6:
Lee's Theory on Knowledge and Intelligence - The Unification Theory of Senses and Experiences / 2.2.7:
The Cognitive-Scientific View on Intelligence / 2.3:
The Cognitive-Scientific Definition of Intelligence / 2.3.1:
Spearman's Model of the Nature of Intelligence / 2.3.2:
Piaget's Psychology of Intelligence / 2.3.3:
Major Approaches of Intelligence - From Psychometric Approach to Latest Studies / 2.3.4:
Gardner's Theory on Multiple Intelligence / 2.3.5:
Lee's Unification Theory of Senses and Experiences - The Psychological Interpretation / 2.3.6:
The Neuroscience and Neurophysiology View on Intelligence / 2.4:
The Major Challenges of Mind Science (The Exploration of the Mind from the Neuroscience Perspective) / 2.4.1:
A Brief History - The Search for Intelligence in Neuroscience / 2.4.2:
Contemporary Research in Mind Science - From Neural Oscillators to the "Chaos in the Brain" / 2.4.3:
The Neuroscientific and Neurophysiological Implications of the Unification Theory of Senses and Experiences / 2.4.4:
Summary / 2.4.5:
From AI to IA - The Emergence of Agent Technology / 2.5:
What is AI? / 3.1:
A Brief History of AI / 3.2:
The Dartmouth Meeting (1956) - The Birth of AI / 3.2.1:
The Turing Test - A Prelude of AI / 3.2.2:
Strong Versus Weak AI / 3.2.3:
Searle's Chinese Room Thought Experiment / 3.2.4:
Development of AI in the Late 1970s / 3.2.5:
The "Reincarnation" of Neural Networks in the Late 1980s / 3.2.6:
The Birth of IAs in the Late 1990s / 3.2.7:
An Overview of the Classification of AI Technologies / 3.3:
AI - Where to Go? / 3.4:
The Coming of the Age of IAs / 3.5:
What Is an IA? - A "Right" Place to Start / 3.5.1:
The Emergence of Agent Technology - The Idea of Portable Intelligence / 3.5.2:
The Ten Basic Requirements of IAs / 3.6:
The Contemporary Variety of IAs / 3.7:
The Conceptual Model of IAs / 3.8:
The BFI Agent Intellectual Conceptual Model / 3.8.1:
The Agent Development Conceptual Model (GIA vs. TIA) / 3.8.2:
Major Challenges and Threats of Agent Technology / 3.9:
AI Techniques for Agent Construction / 3.10:
The World of Fuzziness, Chaos, and Uncertainty / 4.1:
Fuzzy Logic / 4.2:
What is Fuzzy Logic? / 4.2.1:
Fuzzy Theory and the Uncertainty Principle / 4.2.2:
Fuzzy Logic - A Structural Overview / 4.2.3:
Fuzzy Reasoning - A Case Study on Fuzzy Air-conditioning Control System / 4.2.4:
Applications of Fuzzy Logic / 4.2.5:
Neural Networks - the "Brain" of IAs / 4.3:
Neural Networks - Background / 4.3.1:
ANN Architecture / 4.3.2:
Classification of Neural Networks / 4.3.3:
Associative Memory Neural Networks: Auto-associative Networks / 4.3.4:
Hopfield Networks / 4.3.5:
Multilayer Feedforward Backpropagation Networks (FFBPNs) / 4.3.6:
Neural Networks - Where to Go? / 4.3.7:
Genetic Algorithms - the Nature of Evolution / 4.4:
Genetic Algorithms - Basic Principle / 4.4.1:
Population Initialization / 4.4.2:
Fitness Evaluation / 4.4.3:
Parent Selection Scheme / 4.4.4:
Crossover and Mutation / 4.4.5:
Implementation of GAs / 4.4.6:
Hybridization of GA with Neural Networks / 4.4.7:
Chaos Theory - The World of Nonlinear Dynamics / 4.5:
Chaos Theory - The Study of Nonlinear Dynamics / 4.5.1:
Battle Between two Worlds: Deterministic Versus Probabilistic / 4.5.2:
A Snapshot of Chaos Theory / 4.5.3:
Characteristics of Chaos Systems / 4.5.4:
Chaos Theory Versus Uncertainty Principle / 4.5.5:
Current Work on Chaos Theory / 4.5.6:
Chaotic Neural Networks and the Lee-Oscillator / 4.6:
Chaotic Neural Oscillators - An Overview / 4.6.1:
The Lee-Oscillator / 4.6.2:
The Lee-Associator / 4.6.3:
System Implementation and Experimental Results / 4.6.4:
Progressive Memory Recalling Scheme of the Lee-Associator and Its Biological and Psychological Implications / 4.6.5:
Related Work / 4.6.6:
Conclusion / 4.6.7:
Further Reading / 4.7:
Applications of Intelligent Agents Using iJADK / Part II:
The Design and Implementation of an Intelligent Agent-Based System Using iJADK / 5:
iJADE - System Framework / 5.1:
iJADE Architecture / 5.2.1:
Application Layer / 5.2.2:
Conscious (Intelligent) Layer / 5.2.3:
Technology Layer / 5.2.4:
Supporting Layer / 5.2.5:
iJADK Architecture / 5.3:
Introduction to iJADK / 5.3.1:
Basic Components of iJADK / 5.3.2:
Internal Operations of iJADK / 5.3.3:
Agent Programming Over the iJADK Platform / 5.4:
User Interface / 5.4.1:
Agent Class / 5.4.2:
LifeCycleManager / 5.4.3:
RuntimeAgent / 5.4.4:
Sample iJADE Agents / 5.5:
HelloWorldAgent / 5.5.1:
HelloWorldAgent2 / 5.5.2:
TalkAgent / 5.5.3:
Latest Works of iJADE / 5.6:
iJADE WShopper - Intelligent Mobile Shopping Based on Fuzzy-Neuro Shopping Agents / 5.7:
WAP Technology / 6.1:
WAP Technology - From Web to MEB / 6.2.1:
Constraints for Contemporary WAP Technology on MEB / 6.2.2:
iJADE WShopper - System Framework / 6.3:
iJADE WShopper - System Overview / 6.3.1:
iJADE WShopper for M-shopping - System Components / 6.3.2:
Experimental Results / 6.4:
The RTT Test / 6.4.1:
The PS Test / 6.4.3:
The iWSAS Test / 6.4.4:
Migration to the J2ME Platform / 6.5:
Incorporate Other AI Capabilities in the Shopper Agents - iJADE Negotiator / 6.6.2:
iJADE WeatherMAN - A Weather Forecasting Agent Using the Fuzzy Neural Network Model / 7:
Weather Prediction Using a Fuzzy-Neuro Model / 7.1:
iJADE WeatherMAN - System Overview / 7.3:
User Requirement Definition Scheme (URDS) and Weather Reporting Scheme (WRS) / 7.3.1:
Data Collection Scheme (DCS) / 7.3.2:
Variable Selection and Transformation Scheme (VSTS) / 7.3.3:
Fuzzy-Neuro Training and Prediction Scheme (FNTPS) / 7.3.4:
iJADE WeatherMAN - System Implementation / 7.4:
iJADE WeatherMAN Weather Site / 7.4.1:
Central Agent/Test Agent / 7.4.2:
iJADE WeatherMan Place / 7.4.3:
iJADE WeatherMan Agent / 7.4.4:
iJADE Weather Forecaster Place / 7.4.5:
iJADE Forecaster Agent / 7.4.6:
Evaluation Considerations / 7.5:
Average Classification Rate / 7.5.2:
Model Performance / 7.5.3:
The HKO Forecast / 7.5.4:
Future Work / 7.6:
iJADE Stock Advisor - An Intelligent Agent-Based Stock Prediction System Using the Hybrid RBF Recurrent Network / 8:
Stock Advisory and Prediction System - A General Overview / 8.1:
Stochastic Indicator / 8.2.1:
Relative Strength Index (RSI) / 8.2.2:
Money Flow / 8.2.3:
Moving Average / 8.2.4:
Support and Resistance Lines (Trendlines) / 8.2.5:
Trend Generalization / 8.2.6:
iJADE Stock Advisor - System Framework / 8.3:
iJADE Stock Advisor - System Overview / 8.3.1:
Stock Prediction Using the HRBF model / 8.3.2:
Parameter Selection Scheme in HRBFN / 8.4:
Round-Trip-Time (RTT) Test / 8.4.2:
Long- and Short-Term Prediction, Window Size Evaluation Test / 8.4.3:
Stock Prediction Performance Test / 8.4.4:
iJADE Surveillant - A Multi-resolution Neuro-oscillatory Agent-Based Surveillance System / 8.5:
Surveillance System - An Overview / 9.1:
Background / 9.2.1:
Scene Analysis / 9.2.2:
Human Face Recognition / 9.2.3:
Supporting Technologies / 9.3:
MPEG-7 - System Overview / 9.3.1:
MPEG-7 Model / 9.3.2:
The Latest MPEG-7 Development Work on Visual Object Modeling / 9.3.3:
iJADE Surveillant - System Overview / 9.4:
iJADE Surveillant - System Architecture / 9.4.1:
Automatic Multi-resolution Scene Segmentation Scheme Using the CNOW Model / 9.4.2:
Automatic Human Face Detection and Contour Features Extraction Using the ACM / 9.4.3:
Invariant Human Face Recognition Using the EGDLM / 9.4.4:
System Implementation / 9.5:
Automatic Color Scene Segmentation Scheme / 9.5.1:
Invariant Human Face Recognition Scheme / 9.5.2:
Facial Pattern Occlusion and Distortion Test / 9.5.3:
Performance Analysis / 9.5.4:
iJADE Negotiator - An Intelligent Fuzzy Agent-Based Negotiation System for Internet Shopping / 9.6:
Negotiation Systems - An Overview / 10.1:
iJADE Negotiator - System Architecture / 10.3:
iJADE Negotiator - System Overview / 10.3.1:
iJADE Negotiator - Main Functional Modules / 10.3.2:
iJADE Negotiator - Intelligent Negotiation Strategy and Negotiation Protocol / 10.3.3:
iJADE Negotiator - System Implementation / 10.4:
Future Agent Technology - Modern Ontology and Ontological Agent Technologies (OAT) / 10.4.1:
What Is Ontology? / 11.1:
Ontology - Theories of Existence / 11.1.1:
Universals Versus Particulars / 11.1.2:
Ontology - The World of Universals / 11.1.3:
Ontological View of the Nature of Existence / 11.1.4:
Impact of Ontology on Modern AI / 11.1.5:
Modern Ontology and Ontological Agents / 11.2:
The Theoretical Foundation of OAT - Conceptualization Theory / 11.2.1:
Characteristics of Ontological Agents / 11.2.2:
Potential Applications of OAT / 11.2.3:
Cogito iJADE Project / 11.2.4:
Cogito iJADE - A New Era of Self-aware IAs / 11.3.1:
Cogito iJADE - A System Overview / 11.3.2:
Latest Works of Cogito Agents / 11.3.3:
Agent Technology - The Future / 11.4:
iJADK 2.0 API / Appendix:
References
Index
About the Author
Introduction / 1:
The Coming of the Age of Intelligent Agents / 1.1:
The Structure of This Book / 1.2:
28.

電子ブック

EB
Raymond S. T. Lee, Toru Ishida, Nicholas R. Jennings, Katia Sycara
出版情報: SpringerLink Books - AutoHoldings , Springer Berlin Heidelberg, 2006
所蔵情報: loading…
目次情報: 続きを見る
Introduction / 1:
The Coming of the Age of Intelligent Agents / 1.1:
The Structure of This Book / 1.2:
Outline of Each Chapter / 1.3:
Readers of This Book / 1.4:
Concluding Remarks / 1.5:
Concepts and Theories / Part I:
The Search for Human Intelligence / 2:
What Is Intelligence? / 2.1:
The Philosophical View on Intelligence / 2.2:
Introduction - The Search for Intelligence and Ultimate Knowledge / 2.2.1:
The Traditional Philosophical View of Knowledge - Belief, Truth and Justification / 2.2.2:
Rationalistic Versus Empiristic View of Knowledge / 2.2.3:
Kant's Critique of Pure Reason and the Theory of Knowledge / 2.2.4:
Russell's View of Knowledge / 2.2.5:
Krishnamurti's The Awakening of Intelligence - Thought Versus Intelligence / 2.2.6:
Lee's Theory on Knowledge and Intelligence - The Unification Theory of Senses and Experiences / 2.2.7:
The Cognitive-Scientific View on Intelligence / 2.3:
The Cognitive-Scientific Definition of Intelligence / 2.3.1:
Spearman's Model of the Nature of Intelligence / 2.3.2:
Piaget's Psychology of Intelligence / 2.3.3:
Major Approaches of Intelligence - From Psychometric Approach to Latest Studies / 2.3.4:
Gardner's Theory on Multiple Intelligence / 2.3.5:
Lee's Unification Theory of Senses and Experiences - The Psychological Interpretation / 2.3.6:
The Neuroscience and Neurophysiology View on Intelligence / 2.4:
The Major Challenges of Mind Science (The Exploration of the Mind from the Neuroscience Perspective) / 2.4.1:
A Brief History - The Search for Intelligence in Neuroscience / 2.4.2:
Contemporary Research in Mind Science - From Neural Oscillators to the "Chaos in the Brain" / 2.4.3:
The Neuroscientific and Neurophysiological Implications of the Unification Theory of Senses and Experiences / 2.4.4:
Summary / 2.4.5:
From AI to IA - The Emergence of Agent Technology / 2.5:
What is AI? / 3.1:
A Brief History of AI / 3.2:
The Dartmouth Meeting (1956) - The Birth of AI / 3.2.1:
The Turing Test - A Prelude of AI / 3.2.2:
Strong Versus Weak AI / 3.2.3:
Searle's Chinese Room Thought Experiment / 3.2.4:
Development of AI in the Late 1970s / 3.2.5:
The "Reincarnation" of Neural Networks in the Late 1980s / 3.2.6:
The Birth of IAs in the Late 1990s / 3.2.7:
An Overview of the Classification of AI Technologies / 3.3:
AI - Where to Go? / 3.4:
The Coming of the Age of IAs / 3.5:
What Is an IA? - A "Right" Place to Start / 3.5.1:
The Emergence of Agent Technology - The Idea of Portable Intelligence / 3.5.2:
The Ten Basic Requirements of IAs / 3.6:
The Contemporary Variety of IAs / 3.7:
The Conceptual Model of IAs / 3.8:
The BFI Agent Intellectual Conceptual Model / 3.8.1:
The Agent Development Conceptual Model (GIA vs. TIA) / 3.8.2:
Major Challenges and Threats of Agent Technology / 3.9:
AI Techniques for Agent Construction / 3.10:
The World of Fuzziness, Chaos, and Uncertainty / 4.1:
Fuzzy Logic / 4.2:
What is Fuzzy Logic? / 4.2.1:
Fuzzy Theory and the Uncertainty Principle / 4.2.2:
Fuzzy Logic - A Structural Overview / 4.2.3:
Fuzzy Reasoning - A Case Study on Fuzzy Air-conditioning Control System / 4.2.4:
Applications of Fuzzy Logic / 4.2.5:
Neural Networks - the "Brain" of IAs / 4.3:
Neural Networks - Background / 4.3.1:
ANN Architecture / 4.3.2:
Classification of Neural Networks / 4.3.3:
Associative Memory Neural Networks: Auto-associative Networks / 4.3.4:
Hopfield Networks / 4.3.5:
Multilayer Feedforward Backpropagation Networks (FFBPNs) / 4.3.6:
Neural Networks - Where to Go? / 4.3.7:
Genetic Algorithms - the Nature of Evolution / 4.4:
Genetic Algorithms - Basic Principle / 4.4.1:
Population Initialization / 4.4.2:
Fitness Evaluation / 4.4.3:
Parent Selection Scheme / 4.4.4:
Crossover and Mutation / 4.4.5:
Implementation of GAs / 4.4.6:
Hybridization of GA with Neural Networks / 4.4.7:
Chaos Theory - The World of Nonlinear Dynamics / 4.5:
Chaos Theory - The Study of Nonlinear Dynamics / 4.5.1:
Battle Between Two Worlds: Deterministic Versus Probabilistic / 4.5.2:
A Snapshot of Chaos Theory / 4.5.3:
Characteristics of Chaos Systems / 4.5.4:
Chaos Theory Versus Uncertainty Principle / 4.5.5:
Current Work on Chaos Theory / 4.5.6:
Chaotic Neural Networks and the Lee-Oscillator / 4.6:
Chaotic Neural Oscillators - An Overview / 4.6.1:
The Lee-Oscillator / 4.6.2:
The Lee-Associator / 4.6.3:
System Implementation and Experimental Results / 4.6.4:
Progressive Memory Recalling Scheme of the Lee-Associator and Its Biological and Psychological Implications / 4.6.5:
Related Work / 4.6.6:
Conclusion / 4.6.7:
Further Reading / 4.7:
Applications of Intelligent Agents Using iJADK / Part II:
The Design and Implementation of an Intelligent Agent-Based System Using iJADK / 5:
iJADE - System Framework / 5.1:
iJADE Architecture / 5.2.1:
Application Layer / 5.2.2:
Conscious (Intelligent) Layer / 5.2.3:
Technology Layer / 5.2.4:
Supporting Layer / 5.2.5:
iJADK Architecture / 5.3:
Introduction to iJADK / 5.3.1:
Basic Components of iJADK / 5.3.2:
Internal Operations of iJADK / 5.3.3:
Agent Programming Over the iJADK Platform / 5.4:
User Interface / 5.4.1:
Agent Class / 5.4.2:
LifeCycleManager / 5.4.3:
RuntimeAgent / 5.4.4:
Sample iJADE Agents / 5.5:
HelloWorldAgent / 5.5.1:
HelloWorldAgent2 / 5.5.2:
TalkAgent / 5.5.3:
Latest Works of iJADE / 5.6:
iJADE WShopper - Intelligent Mobile Shopping Based on Fuzzy-Neuro Shopping Agents / 5.7:
WAP Technology / 6.1:
WAP Technology - From Web to MEB / 6.2.1:
Constraints for Contemporary WAP Technology on MEB / 6.2.2:
iJADE WShopper - System Framework / 6.3:
iJADE WShopper - System Overview / 6.3.1:
iJADE WShopper for M-shopping - System Components / 6.3.2:
Experimental Results / 6.4:
The RTT Test / 6.4.1:
The PS Test / 6.4.3:
The iWSAS Test / 6.4.4:
Migration to the J2ME Platform / 6.5:
Incorporate Other AI Capabilities in the Shopper Agents - iJADE Negotiator / 6.6.2:
iJADE WeatherMAN - A Weather Forecasting Agent Using the Fuzzy Neural Network Model / 7:
Weather Prediction Using a Fuzzy-Neuro Model / 7.1:
iJADE WeatherMAN - System Overview / 7.3:
User Requirement Definition Scheme (URDS) and Weather Reporting Scheme (WRS) / 7.3.1:
Data Collection Scheme (DCS) / 7.3.2:
Variable Selection and Transformation Scheme (VSTS) / 7.3.3:
Fuzzy-Neuro Training and Prediction Scheme (FNTPS) / 7.3.4:
iJADE WeatherMAN - System Implementation / 7.4:
iJADE WeatherMAN Weather Site / 7.4.1:
Central Agent/Test Agent / 7.4.2:
iJADE WeatherMan Place / 7.4.3:
iJADE WeatherMan Agent / 7.4.4:
iJADE Weather Forecaster Place / 7.4.5:
iJADE Forecaster Agent / 7.4.6:
Evaluation Considerations / 7.5:
Average Classification Rate / 7.5.2:
Model Performance / 7.5.3:
The HKO Forecast / 7.5.4:
Future Work / 7.6:
iJADE Stock Advisor - An Intelligent Agent-Based Stock Prediction System Using the Hybrid RBF Recurrent Network / 8:
Stock Advisory and Prediction System - A General Overview / 8.1:
Stochastic Indicator / 8.2.1:
Relative Strength Index (RSI) / 8.2.2:
Money Flow / 8.2.3:
Moving Average / 8.2.4:
Support and Resistance Lines (Trendlines) / 8.2.5:
Trend Generalization / 8.2.6:
iJADE Stock Advisor - System Framework / 8.3:
iJADE Stock Advisor - System Overview / 8.3.1:
Stock Prediction Using the HRBF model / 8.3.2:
Parameter Selection Scheme in HRBFN / 8.4:
Round-Trip-Time (RTT) Test / 8.4.2:
Long- and Short-Term Prediction, Window Size Evaluation Test / 8.4.3:
Stock Prediction Performance Test / 8.4.4:
iJADE Surveillant - A Multi-resolution Neuro-oscillatory Agent-Based Surveillance System / 8.5:
Surveillance System - An Overview / 9.1:
Background / 9.2.1:
Scene Analysis / 9.2.2:
Human Face Recognition / 9.2.3:
Supporting Technologies / 9.3:
MPEG-7 - System Overview / 9.3.1:
MPEG-7 Model / 9.3.2:
The Latest MPEG-7 Development Work on Visual Object Modeling / 9.3.3:
iJADE Surveillant - System Overview / 9.4:
iJADE Surveillant - System Architecture / 9.4.1:
Automatic Multi-resolution Scene Segmentation Scheme Using the CNOW Model / 9.4.2:
Automatic Human Face Detection and Contour Features Extraction Using the ACM / 9.4.3:
Invariant Human Face Recognition Using the EGDLM / 9.4.4:
System Implementation / 9.5:
Automatic Color Scene Segmentation Scheme / 9.5.1:
Invariant Human Face Recognition Scheme / 9.5.2:
Facial Pattern Occlusion and Distortion Test / 9.5.3:
Performance Analysis / 9.5.4:
iJADE Negotiator - An Intelligent Fuzzy Agent-Based Negotiation System for Internet Shopping / 9.6:
Negotiation Systems - An Overview / 10.1:
iJADE Negotiator - System Architecture / 10.3:
iJADE Negotiator - System Overview / 10.3.1:
iJADE Negotiator - Main Functional Modules / 10.3.2:
iJADE Negotiator - Intelligent Negotiation Strategy and Negotiation Protocol / 10.3.3:
iJADE Negotiator - System Implementation / 10.4:
Future Agent Technology - Modern Ontology and Ontological Agent Technologies (OAT) / 10.4.1:
What Is Ontology? / 11.1:
Ontology - Theories of Existence / 11.1.1:
Universals Versus Particulars / 11.1.2:
Ontology - The World of Universals / 11.1.3:
Ontological View of the Nature of Existence / 11.1.4:
Impact of Ontology on Modern AI / 11.1.5:
Modern Ontology and Ontological Agents / 11.2:
The Theoretical Foundation of OAT - Conceptualization Theory / 11.2.1:
Characteristics of Ontological Agents / 11.2.2:
Potential Applications of OAT / 11.2.3:
Cogito iJADE Project / 11.2.4:
Cogito iJADE - A New Era of Self-aware IAs / 11.3.1:
Cogito iJADE - A System Overview / 11.3.2:
Latest Works of Cogito Agents / 11.3.3:
Agent Technology - The Future / 11.4:
iJADK 2.0 API / Appendix:
References
Index
About the Author
Introduction / 1:
The Coming of the Age of Intelligent Agents / 1.1:
The Structure of This Book / 1.2:
29.

電子ブック

EB
Zbigniew Michalewicz, Martin Schmidt
出版情報: Springer eBooks Computer Science , Springer Berlin Heidelberg, 2006
所蔵情報: loading…
目次情報: 続きを見る
Complex Business Problems / Part I:
Introduction / 1:
Characteristics of Complex Business Problems / 2:
Number of Possible Solutions / 2.1:
Time-Changing Environment / 2.2:
Problem-Specific Constraints / 2.3:
Multi-objective Problems / 2.4:
Modeling the Problem / 2.5:
A Real-World Example / 2.6:
An Extended Example: Car Distribution / 3:
Basic Terminology / 3.1:
Off-lease Cars / 3.2:
The Problem / 3.3:
Transportation / 3.4:
Volume Effect / 3.5:
Price Depreciation and Inventory / 3.6:
Dynamic Market Changes / 3.7:
The Solution / 3.8:
Adaptive Business Intelligence / 4:
Data Mining / 4.1:
Prediction / 4.2:
Optimization / 4.3:
Adaptability / 4.4:
The Structure of an Adaptive Business Intelligence System / 4.5:
Prediction and Optimization / Part II:
Prediction Methods and Models / 5:
Data Preparation / 5.1:
Different Prediction Methods / 5.2:
Mathematical Methods / 5.2.1:
Distance Methods / 5.2.2:
Logic Methods / 5.2.3:
Modern Heuristic Methods / 5.2.4:
Additional Considerations / 5.2.5:
Evaluation of Models / 5.3:
Recommended Reading / 5.4:
Modern Optimization Techniques / 6:
Overview / 6.1:
Local Optimization Techniques / 6.2:
Stochastic Hill Climber / 6.3:
Simulated Annealing / 6.4:
Tabu Search / 6.5:
Evolutionary Algorithms / 6.6:
Constraint Handling / 6.7:
Additional Issues / 6.8:
Fuzzy Logic / 6.9:
Fuzzifier / 7.1:
Inference System / 7.3:
Defuzzifier / 7.4:
Tuning the Membership Functions and Rule Base / 7.5:
Artificial Neural Networks / 7.6:
Node Input and Output / 8.1:
Different Types of Networks / 8.3:
Feed-Forward Neural Networks / 8.3.1:
Recurrent Neural Networks / 8.3.2:
Learning Methods / 8.4:
Supervised Learning / 8.4.1:
Unsupervised Learning / 8.4.2:
Data Representation / 8.5:
Other Methods and Techniques / 8.6:
Genetic Programming / 9.1:
Ant Systems and Swarm Intelligence / 9.2:
Agent-Based Modeling / 9.3:
Co-evolution / 9.4:
Hybrid Systems and Adaptability / 9.5:
Hybrid Systems for Prediction / 10.1:
Hybrid Systems for Optimization / 10.2:
Car Distribution System / 10.3:
Graphical User Interface / 11.1:
Reporting / 11.2.1:
Prediction Module / 11.3:
Optimization Module / 11.4:
Adaptability Module / 11.5:
Validation / 11.6:
Applying Adaptive Business Intelligence / 12:
Marketing Campaigns / 12.1:
Manufacturing / 12.2:
Investment Strategies / 12.3:
Emergency Response Services / 12.4:
Credit Card Fraud / 12.5:
Conclusion / 13:
Index
Complex Business Problems / Part I:
Introduction / 1:
Characteristics of Complex Business Problems / 2:
30.

電子ブック

EB
Zbigniew Michalewicz, Martin Schmidt, Constantin Chiriac, Matthew Michalewicz
出版情報: SpringerLink Books - AutoHoldings , Springer Berlin Heidelberg, 2006
所蔵情報: loading…
目次情報: 続きを見る
Complex Business Problems / Part I:
Introduction / 1:
Characteristics of Complex Business Problems / 2:
Number of Possible Solutions / 2.1:
Time-Changing Environment / 2.2:
Problem-Specific Constraints / 2.3:
Multi-objective Problems / 2.4:
Modeling the Problem / 2.5:
A Real-World Example / 2.6:
An Extended Example: Car Distribution / 3:
Basic Terminology / 3.1:
Off-lease Cars / 3.2:
The Problem / 3.3:
Transportation / 3.4:
Volume Effect / 3.5:
Price Depreciation and Inventory / 3.6:
Dynamic Market Changes / 3.7:
The Solution / 3.8:
Adaptive Business Intelligence / 4:
Data Mining / 4.1:
Prediction / 4.2:
Optimization / 4.3:
Adaptability / 4.4:
The Structure of an Adaptive Business Intelligence System / 4.5:
Prediction and Optimization / Part II:
Prediction Methods and Models / 5:
Data Preparation / 5.1:
Different Prediction Methods / 5.2:
Mathematical Methods / 5.2.1:
Distance Methods / 5.2.2:
Logic Methods / 5.2.3:
Modern Heuristic Methods / 5.2.4:
Additional Considerations / 5.2.5:
Evaluation of Models / 5.3:
Recommended Reading / 5.4:
Modern Optimization Techniques / 6:
Overview / 6.1:
Local Optimization Techniques / 6.2:
Stochastic Hill Climber / 6.3:
Simulated Annealing / 6.4:
Tabu Search / 6.5:
Evolutionary Algorithms / 6.6:
Constraint Handling / 6.7:
Additional Issues / 6.8:
Fuzzy Logic / 6.9:
Fuzzifier / 7.1:
Inference System / 7.3:
Defuzzifier / 7.4:
Tuning the Membership Functions and Rule Base / 7.5:
Artificial Neural Networks / 7.6:
Node Input and Output / 8.1:
Different Types of Networks / 8.3:
Feed-Forward Neural Networks / 8.3.1:
Recurrent Neural Networks / 8.3.2:
Learning Methods / 8.4:
Supervised Learning / 8.4.1:
Unsupervised Learning / 8.4.2:
Data Representation / 8.5:
Other Methods and Techniques / 8.6:
Genetic Programming / 9.1:
Ant Systems and Swarm Intelligence / 9.2:
Agent-Based Modeling / 9.3:
Co-evolution / 9.4:
Hybrid Systems and Adaptability / 9.5:
Hybrid Systems for Prediction / 10.1:
Hybrid Systems for Optimization / 10.2:
Car Distribution System / 10.3:
Graphical User Interface / 11.1:
Reporting / 11.2.1:
Prediction Module / 11.3:
Optimization Module / 11.4:
Adaptability Module / 11.5:
Validation / 11.6:
Applying Adaptive Business Intelligence / 12:
Marketing Campaigns / 12.1:
Manufacturing / 12.2:
Investment Strategies / 12.3:
Emergency Response Services / 12.4:
Credit Card Fraud / 12.5:
Conclusion / 13:
Index
Complex Business Problems / Part I:
Introduction / 1:
Characteristics of Complex Business Problems / 2:
31.

電子ブック

EB
Luc De Raedt, Jörg Siekmann
出版情報: Springer eBooks Computer Science , Springer Berlin Heidelberg, 2008
所蔵情報: loading…
目次情報: 続きを見る
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
Structure Activity Relationship Prediction / 1.2.1:
A Web Mining Example / 1.2.2:
A Language Learning Example / 1.2.3:
How Does Relational and Logical Learning Work? / 1.3:
A Brief History / 1.4:
An Introduction to Logic / 2:
A Relational Database Example / 2.1:
The Syntax of Clausal Logic / 2.2:
The Semantics of Clausal Logic - Model Theory / 2.3:
Inference with Clausal Logic - Proof Theory / 2.4:
Prolog and SLD-resolution / 2.5:
Historical and Bibliographic Remarks / 2.6:
An Introduction to Learning and Search / 3:
Representing Hypotheses and Instances / 3.1:
Boolean Data / 3.2:
Machine Learning / 3.3:
Data Mining / 3.4:
A Generate-and-Test Algorithm / 3.5:
Structuring the Search Space / 3.6:
Monotonicity / 3.7:
Borders / 3.8:
Refinement Operators / 3.9:
A Generic Algorithm for Mining and Learning / 3.10:
A Complete General-to-Specific Algorithm / 3.11:
A Heuristic General-to-Specific Algorithm / 3.12:
A Branch-and-Bound Algorithm / 3.13:
A Specific-to-General Algorithm / 3.14:
Working with Borders* / 3.15:
Computing a Single Border / 3.15.1:
Computing Two Borders / 3.15.2:
Computing Two Borders Incrementally / 3.15.3:
Operations on Borders / 3.15.4:
Conclusions / 3.16:
Bibliographical Notes / 3.17:
Representations for Mining and Learning / 4:
Representing Data and Hypotheses / 4.1:
Attribute-Value Learning / 4.2:
Multiple-Instance Learning: Dealing With Sets / 4.3:
Relational Learning / 4.4:
Logic Programs / 4.5:
Sequences, Lists, and Grammars / 4.6:
Trees and Terms / 4.7:
Graphs / 4.8:
Background Knowledge / 4.9:
Designing It Yourself / 4.10:
A Hierarchy of Representations* / 4.11:
From AV to BL / 4.11.1:
From MI to AV / 4.11.2:
From RL to MI / 4.11.3:
From LP to RL / 4.11.4:
Propositionalization / 4.12:
A Table-Based Approach / 4.12.1:
A Query-Based Approach / 4.12.2:
Aggregation / 4.13:
Historical and Bibliographical Remarks / 4.14:
Generality and Logical Entailment / 5:
Generality and Logical Entailment Coincide / 5.1:
Propositional Subsumption / 5.2:
Subsumption in Logical Atoms / 5.3:
Specialization Operators / 5.3.1:
Generalization Operators* / 5.3.2:
Computing the lgg and the glb / 5.3.3:
Θ-Subsumption / 5.4:
Soundness and Completeness / 5.4.1:
Deciding Θ-Subsumption / 5.4.2:
Equivalence Classes / 5.4.3:
Variants of Θ-Subsumption* / 5.5:
Object Identity* / 5.5.1:
Inverse Implication* / 5.5.2:
Using Background Knowledge / 5.6:
Saturation and Bottom Clauses / 5.6.1:
Relative Least General Generalization* / 5.6.2:
Semantic Refinement* / 5.6.3:
Aggregation* / 5.7:
Inverse Resolution / 5.8:
A Note on Graphs, Trees, and Sequences / 5.9:
Bibliographic Notes / 5.10:
The Upgrading Story / 6:
Motivation for a Methodology / 6.1:
Methodological Issues / 6.2:
Representing the Examples / 6.2.1:
Representing the Hypotheses / 6.2.2:
Adapting the Algorithm / 6.2.3:
Adding Features / 6.2.4:
Case Study 1: Rule Learning and Foil / 6.3:
Foil's Problem Setting / 6.3.1:
Foil's Algorithm / 6.3.2:
Case Study 2: Decision Tree Learning and Tilde / 6.4:
The Problem Setting / 6.4.1:
Inducing Logical Decision Trees / 6.4.2:
Case Study 3: Frequent Item-Set Mining and Warmr / 6.5:
Relational Association Rules and Local Patterns / 6.5.1:
Computing Frequent Queries / 6.5.2:
Language Bias / 6.6:
Syntactic Bias / 6.6.1:
Semantic Bias / 6.6.2:
Inducing Theories / 6.7:
Introduction to Theory Revision / 7.1:
Theories and Model Inference / 7.1.1:
Theory Revision / 7.1.2:
Overview of the Rest of This Chapter / 7.1.3:
Towards Abductive Logic Programming / 7.2:
Abduction / 7.2.1:
Integrity Constraints / 7.2.2:
Abductive Logic Programming / 7.2.3:
Shapiro's Theory Revision System / 7.3:
Interaction / 7.3.1:
The Model Inference System / 7.3.2:
Two Propositional Theory Revision Systems* / 7.4:
Learning a Propositional Horn Theory Efficiently / 7.4.1:
Heuristic Search in Theory Revision / 7.4.2:
Inducing Constraints / 7.5:
Problem Specification / 7.5.1:
An Algorithm for Inducing Integrity Constraints / 7.5.2:
Probabilistic Logic Learning / 7.6:
Probability Theory Review / 8.1:
Probabilistic Logics / 8.2:
Probabilities on Interpretations / 8.2.1:
Probabilities on Proofs / 8.2.2:
Probabilistic Learning / 8.3:
Parameter Estimation / 8.3.1:
Structure Learning / 8.3.2:
First-Order Probabilistic Logics / 8.4:
Probabilistic Interpretations / 8.4.1:
Probabilistic Proofs / 8.4.2:
Learning from Interpretations / 8.5:
Learning from Entailment / 8.5.2:
Learning from Proof Trees and Traces / 8.5.3:
Relational Reinforcement Learning* / 8.6:
Markov Decision Processes / 8.6.1:
Solving Markov Decision Processes / 8.6.2:
Relational Markov Decision Processes / 8.6.3:
Solving Relational Markov Decision Processes / 8.6.4:
Kernels and Distances for Structured Data / 8.7:
A Simple Kernel and Distance / 9.1:
Kernel Methods / 9.2:
The Max Margin Approach / 9.2.1:
Support Vector Machines / 9.2.2:
The Kernel Trick / 9.2.3:
Distance-Based Learning / 9.3:
Distance Functions / 9.3.1:
The k-Nearest Neighbor Algorithm / 9.3.2:
The k-Means Algorithm / 9.3.3:
Kernels for Structured Data / 9.4:
Convolution and Decomposition / 9.4.1:
Vectors and Tuples / 9.4.2:
Sets and Multi-sets / 9.4.3:
Strings / 9.4.4:
Trees and Atoms / 9.4.5:
Graph Kernels* / 9.4.6:
Distances and Metrics / 9.5:
Generalization and Metrics / 9.5.1:
Sets / 9.5.2:
Atoms and Trees / 9.5.4:
Relational Kernels and Distances / 9.5.6:
Bibliographical and Historical Notes / 9.7:
Computational Aspects of Logical and Relational Learning / 10:
Efficiency of Relational Learning / 10.1:
Coverage as θ-Subsumption / 10.1.1:
θ-Subsumption Empirically / 10.1.2:
Optimizing the Learner for θ-subsumption / 10.1.3:
Computational Learning Theory* / 10.2:
Notions of Learnability / 10.2.1:
Positive Results / 10.2.2:
Negative Results / 10.2.3:
Historical and Bibliographic Notes / 10.3:
Lessons Learned / 11:
A Hierarchy of Representations / 11.1:
From Upgrading to Downgrading / 11.2:
Propositionalization and Aggregation / 11.3:
Learning Tasks / 11.4:
Operators and Generality / 11.5:
Unification and Variables / 11.6:
Three Learning Settings / 11.7:
Knowledge and Background Knowledge / 11.8:
Applications / 11.9:
References
Author Index
Index
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
32.

電子ブック

EB
Yoshinori Kuno, Dorothy Monekosso, Paolo Remagnino
出版情報: Springer eBooks Computer Science , Springer London, 2009
所蔵情報: loading…
目次情報: 続きを見る
Preface
List of Contributors
Intelligent Environments: Methods, Algorithms and Applications / Dorothy N. Monekosso ; Paolo Remagnino ; Yoshinori Kuno / 1:
Intelligent Environments / 1.1:
What Is An Intelligent Environment? / 1.1.1:
How Is An Intelligent Environment Built? / 1.1.2:
Technology for Intelligent Environments / 1.2:
Research Projects / 1.3:
Private Spaces / 1.3.1:
Public Spaces / 1.3.2:
Middleware / 1.3.3:
Chapter Themes in This Collection / 1.4:
Conclusion / 1.5:
References
A Pervasive Sensor System for Evidence-Based Nursing Care Support / Toshio Hori ; Yoshifumi Nishida ; Shin'ichi Murakami / 2:
Introduction / 2.1:
Evidence-Based Nursing Care Support / 2.2:
Background of the Project / 2.2.1:
Concept of Evidence-Based Nursing Care Support / 2.2.2:
Initial Goal of the Project: Falls Prevention / 2.2.3:
Second Goal of the Project: Obtaining ADL of Inhabitants / 2.2.4:
Related Work / 2.3:
Overview and Implementations of the System / 2.4:
Overview of the Evidence-Based Nursing Care Support System / 2.4.1:
System Implementations / 2.4.2:
Experiments and Analyses / 2.5:
Tracking a Wheelchair for Falls Prevention / 2.5.1:
Activity Transition Diagram: Transition of Activities in One Day / 2.5.2:
Quantitative Evaluation of Daily Activities / 2.5.3:
Probability of "Toilet" Activity / 2.5.4:
Discussion of the Experimental Results / 2.5.5:
Prospect of the Evidence-Based Nursing Care Support System / 2.6:
Conclusions / 2.7:
Anomalous Behavior Detection: Supporting Independent Living / 3:
Related Work / 3.1:
Methodology / 3.3:
Unsupervised Classification Techniques / 3.3.1:
Using HMM to Model Behavior / 3.3.2:
Experimental Setup and Data Collection / 3.4:
Noisy Data: Sources of Error / 3.4.1:
Learning Activities / 3.4.2:
Experimental Results / 3.5:
Instance Class Annotation / 3.5.1:
Data Preprocessing / 3.5.2:
Models: Unsupervised Classification: Clustering and Allocation of Activities to Clusters / 3.5.3:
Behaviors: Discovering Patterns in Activities / 3.5.4:
Behaviors: Discovering Anomalous Patterns of Activity / 3.5.5:
Discussion / 3.6:
Sequential Pattern Mining for Cooking-Support Robot / Yasushi Nakauchi / 3.7:
System Design / 4.1:
Inference from Series of Human Actions / 4.2.1:
Time Sequence Data Mining / 4.2.2:
Human Behavior Inference Algorithm / 4.2.3:
Activity Support of Human / 4.2.4:
Implementation / 4.3:
IC Tag System / 4.3.1:
Inference of Human's Next Action / 4.3.2:
Cooking Support Interface / 4.3.3:
Robotic, Sensory and Problem-Solving Ingredients for the Future Home / Amedeo Cesta ; Luca Iocchi ; G. Riccardo Leone ; Daniele Nardi ; Federico Pecora ; Riccardo Rasconi / 4.4:
Components of the Multiagent System / 5.1:
The Robotic Platform Mobility Subsystem / 5.2:
The Interaction Manager / 5.3:
Environmental Sensors for People Tracking and Posture Recognition / 5.4:
Monitoring Activities of Daily Living / 5.5:
Schedule Representation and Execution Monitoring / 5.5.1:
Constraint Management in the RoboCare Context / 5.5.2:
From Constraint Violations to Verbal Interaction / 5.5.3:
Multiagent Coordination Infrastructure / 5.6:
Casting the MAC Problem to DCOP / 5.6.1:
Cooperatively Solving the MAC Problem / 5.6.2:
Ubiquitous Stereo Vision for Human Sensing / Ikushi Yoda ; Katsuhiko Sakaue / 5.7:
Ubiquitous Stereo Vision / 6.1:
Concept of Ubiquitous Stereo Vision / 6.2.1:
Server-Client Model for USV / 6.2.2:
Real Utilization Cases / 6.2.3:
Hierarchical Utilization of 3D Data and Personal Recognition / 6.3:
Acquisition of 3D Range Information / 6.3.1:
Projection to Floor Plane / 6.3.2:
Recognition of Multiple Persons and Interface / 6.4:
Pose Recognition for Multiple People / 6.4.1:
Personal Identification / 6.4.2:
Interface for Space Control / 6.4.3:
Human Monitoring in Open Space (Safety Management Application) / 6.5:
Monitoring Railroad Crossing / 6.5.1:
Station Platform Edge Safety Management / 6.5.2:
Monitoring Huge Space / 6.5.3:
Conclusion and Future Work / 6.6:
Augmenting Professional Training, an Ambient Intelligence Approach / B. Zhan ; D.N. Monekosso ; S. Rush ; P. Remagnino ; S.A. Velastin / 7:
Color Tracking of People / 7.1:
Counting People by Spatial Relationship Analysis / 7.3:
Simple People Counting Algorithm / 7.3.1:
Graphs of Blobs / 7.3.2:
Estimation of Distance Between Blobs / 7.3.3:
Temporal Pyramid for Distance Estimation / 7.3.4:
Probabilistic Estimation of Groupings / 7.3.5:
Grouping Blobs / 7.3.6:
Stereo Omnidirectional System (SOS) and Its Applications / Yutaka Satoh ; Katsuhiko Sakaue / 7.4:
System Configuration / 8.1:
Image Integration / 8.3:
Generation of Stable Images at Arbitrary Rotation / 8.4:
An Example Application: Intelligent Electric Wheelchair / 8.5:
Overview / 8.5.1:
Obstacle Detection / 8.5.2:
Gesture/Posture Detection / 8.5.4:
Video Analysis for Ambient Intelligence in Urban Environments / Andrea Prati ; Rita Cucchiara / 8.6:
Visual Data for Urban AmI / 9.1:
Video Surveillance in Urban Environment / 9.2.1:
The LAICA Project / 9.2.2:
Automatic Video Processing for People Tracking / 9.3:
People Detection and Tracking from Single Static Camera / 9.3.1:
People Detection and Tracking from Distributed Cameras / 9.3.2:
People Detection and Tracking from Moving Cameras / 9.3.3:
Privacy and Ethical Issues / 9.4:
From Monomodal to Multimodal: Affect Recognition Using Visual Modalities / Hatice Gunes ; Massimo Piccardi / 10:
Organization of the Chapter / 10.1:
From Monomodal to Multimodal: Changes and Challenges / 10.3:
Background Research / 10.3.1:
Data Collection / 10.3.2:
Data Annotation / 10.3.3:
Synchrony/Asynchrony Between Modalities / 10.3.4:
Data Integration/Fusion / 10.3.5:
Information Complementarity/Redundancy / 10.3.6:
Information Content of Modalities / 10.3.7:
Monomodal Systems Recognizing Affective Face or Body Movement / 10.4:
Multimodal Systems Recognizing Affect from Face and Body Movement / 10.5:
Project 1: Multimodal Affect Analysis for Future Cars / 10.5.1:
Project 2: Emotion Analysis in Man-Machine Interaction Systems / 10.5.2:
Project 3: Multimodal Affect Recognition in Learning Environments / 10.5.3:
Project 4: FABO-Fusing Face and Body Gestures for Bimodal Emotion Recognition / 10.5.4:
Multimodal Affect Systems: The Future / 10.6:
Importance of Vision in Human-Robot Communication: Understanding Speech Using Robot Vision and Demonstrating Proper Actions to Human Vision / Michie Kawashima ; Keiichi Yamazaki ; Akiko Yamazaki / 11:
Understanding Simplified Utterances Using Robot Vision / 11.1:
Inexplicit Utterances / 11.2.1:
Information Obtained by Vision / 11.2.2:
Language Processing / 11.2.3:
Vision Processing / 11.2.4:
Synchronization Between Speech and Vision / 11.2.5:
Experiments / 11.2.6:
Communicative Head Gestures for Museum Guide Robots / 11.3:
Observations from Guide-Visitor Interaction / 11.3.1:
Prototype Museum Guide Robot / 11.3.2:
Experiments at a Museum / 11.3.3:
Index / 11.4:
Preface
List of Contributors
Intelligent Environments: Methods, Algorithms and Applications / Dorothy N. Monekosso ; Paolo Remagnino ; Yoshinori Kuno / 1:
33.

電子ブック

EB
Yoshinori Kuno, Dorothy Monekosso, Paolo Remagnino
出版情報: SpringerLink Books - AutoHoldings , Springer London, 2009
所蔵情報: loading…
目次情報: 続きを見る
Preface
List of Contributors
Intelligent Environments: Methods, Algorithms and Applications / Dorothy N. Monekosso ; Paolo Remagnino ; Yoshinori Kuno / 1:
Intelligent Environments / 1.1:
What Is An Intelligent Environment? / 1.1.1:
How Is An Intelligent Environment Built? / 1.1.2:
Technology for Intelligent Environments / 1.2:
Research Projects / 1.3:
Private Spaces / 1.3.1:
Public Spaces / 1.3.2:
Middleware / 1.3.3:
Chapter Themes in This Collection / 1.4:
Conclusion / 1.5:
References
A Pervasive Sensor System for Evidence-Based Nursing Care Support / Toshio Hori ; Yoshifumi Nishida ; Shin'ichi Murakami / 2:
Introduction / 2.1:
Evidence-Based Nursing Care Support / 2.2:
Background of the Project / 2.2.1:
Concept of Evidence-Based Nursing Care Support / 2.2.2:
Initial Goal of the Project: Falls Prevention / 2.2.3:
Second Goal of the Project: Obtaining ADL of Inhabitants / 2.2.4:
Related Work / 2.3:
Overview and Implementations of the System / 2.4:
Overview of the Evidence-Based Nursing Care Support System / 2.4.1:
System Implementations / 2.4.2:
Experiments and Analyses / 2.5:
Tracking a Wheelchair for Falls Prevention / 2.5.1:
Activity Transition Diagram: Transition of Activities in One Day / 2.5.2:
Quantitative Evaluation of Daily Activities / 2.5.3:
Probability of "Toilet" Activity / 2.5.4:
Discussion of the Experimental Results / 2.5.5:
Prospect of the Evidence-Based Nursing Care Support System / 2.6:
Conclusions / 2.7:
Anomalous Behavior Detection: Supporting Independent Living / 3:
Related Work / 3.1:
Methodology / 3.3:
Unsupervised Classification Techniques / 3.3.1:
Using HMM to Model Behavior / 3.3.2:
Experimental Setup and Data Collection / 3.4:
Noisy Data: Sources of Error / 3.4.1:
Learning Activities / 3.4.2:
Experimental Results / 3.5:
Instance Class Annotation / 3.5.1:
Data Preprocessing / 3.5.2:
Models: Unsupervised Classification: Clustering and Allocation of Activities to Clusters / 3.5.3:
Behaviors: Discovering Patterns in Activities / 3.5.4:
Behaviors: Discovering Anomalous Patterns of Activity / 3.5.5:
Discussion / 3.6:
Sequential Pattern Mining for Cooking-Support Robot / Yasushi Nakauchi / 3.7:
System Design / 4.1:
Inference from Series of Human Actions / 4.2.1:
Time Sequence Data Mining / 4.2.2:
Human Behavior Inference Algorithm / 4.2.3:
Activity Support of Human / 4.2.4:
Implementation / 4.3:
IC Tag System / 4.3.1:
Inference of Human's Next Action / 4.3.2:
Cooking Support Interface / 4.3.3:
Robotic, Sensory and Problem-Solving Ingredients for the Future Home / Amedeo Cesta ; Luca Iocchi ; G. Riccardo Leone ; Daniele Nardi ; Federico Pecora ; Riccardo Rasconi / 4.4:
Components of the Multiagent System / 5.1:
The Robotic Platform Mobility Subsystem / 5.2:
The Interaction Manager / 5.3:
Environmental Sensors for People Tracking and Posture Recognition / 5.4:
Monitoring Activities of Daily Living / 5.5:
Schedule Representation and Execution Monitoring / 5.5.1:
Constraint Management in the RoboCare Context / 5.5.2:
From Constraint Violations to Verbal Interaction / 5.5.3:
Multiagent Coordination Infrastructure / 5.6:
Casting the MAC Problem to DCOP / 5.6.1:
Cooperatively Solving the MAC Problem / 5.6.2:
Ubiquitous Stereo Vision for Human Sensing / Ikushi Yoda ; Katsuhiko Sakaue / 5.7:
Ubiquitous Stereo Vision / 6.1:
Concept of Ubiquitous Stereo Vision / 6.2.1:
Server-Client Model for USV / 6.2.2:
Real Utilization Cases / 6.2.3:
Hierarchical Utilization of 3D Data and Personal Recognition / 6.3:
Acquisition of 3D Range Information / 6.3.1:
Projection to Floor Plane / 6.3.2:
Recognition of Multiple Persons and Interface / 6.4:
Pose Recognition for Multiple People / 6.4.1:
Personal Identification / 6.4.2:
Interface for Space Control / 6.4.3:
Human Monitoring in Open Space (Safety Management Application) / 6.5:
Monitoring Railroad Crossing / 6.5.1:
Station Platform Edge Safety Management / 6.5.2:
Monitoring Huge Space / 6.5.3:
Conclusion and Future Work / 6.6:
Augmenting Professional Training, an Ambient Intelligence Approach / B. Zhan ; D.N. Monekosso ; S. Rush ; P. Remagnino ; S.A. Velastin / 7:
Color Tracking of People / 7.1:
Counting People by Spatial Relationship Analysis / 7.3:
Simple People Counting Algorithm / 7.3.1:
Graphs of Blobs / 7.3.2:
Estimation of Distance Between Blobs / 7.3.3:
Temporal Pyramid for Distance Estimation / 7.3.4:
Probabilistic Estimation of Groupings / 7.3.5:
Grouping Blobs / 7.3.6:
Stereo Omnidirectional System (SOS) and Its Applications / Yutaka Satoh ; Katsuhiko Sakaue / 7.4:
System Configuration / 8.1:
Image Integration / 8.3:
Generation of Stable Images at Arbitrary Rotation / 8.4:
An Example Application: Intelligent Electric Wheelchair / 8.5:
Overview / 8.5.1:
Obstacle Detection / 8.5.2:
Gesture/Posture Detection / 8.5.4:
Video Analysis for Ambient Intelligence in Urban Environments / Andrea Prati ; Rita Cucchiara / 8.6:
Visual Data for Urban AmI / 9.1:
Video Surveillance in Urban Environment / 9.2.1:
The LAICA Project / 9.2.2:
Automatic Video Processing for People Tracking / 9.3:
People Detection and Tracking from Single Static Camera / 9.3.1:
People Detection and Tracking from Distributed Cameras / 9.3.2:
People Detection and Tracking from Moving Cameras / 9.3.3:
Privacy and Ethical Issues / 9.4:
From Monomodal to Multimodal: Affect Recognition Using Visual Modalities / Hatice Gunes ; Massimo Piccardi / 10:
Organization of the Chapter / 10.1:
From Monomodal to Multimodal: Changes and Challenges / 10.3:
Background Research / 10.3.1:
Data Collection / 10.3.2:
Data Annotation / 10.3.3:
Synchrony/Asynchrony Between Modalities / 10.3.4:
Data Integration/Fusion / 10.3.5:
Information Complementarity/Redundancy / 10.3.6:
Information Content of Modalities / 10.3.7:
Monomodal Systems Recognizing Affective Face or Body Movement / 10.4:
Multimodal Systems Recognizing Affect from Face and Body Movement / 10.5:
Project 1: Multimodal Affect Analysis for Future Cars / 10.5.1:
Project 2: Emotion Analysis in Man-Machine Interaction Systems / 10.5.2:
Project 3: Multimodal Affect Recognition in Learning Environments / 10.5.3:
Project 4: FABO-Fusing Face and Body Gestures for Bimodal Emotion Recognition / 10.5.4:
Multimodal Affect Systems: The Future / 10.6:
Importance of Vision in Human-Robot Communication: Understanding Speech Using Robot Vision and Demonstrating Proper Actions to Human Vision / Michie Kawashima ; Keiichi Yamazaki ; Akiko Yamazaki / 11:
Understanding Simplified Utterances Using Robot Vision / 11.1:
Inexplicit Utterances / 11.2.1:
Information Obtained by Vision / 11.2.2:
Language Processing / 11.2.3:
Vision Processing / 11.2.4:
Synchronization Between Speech and Vision / 11.2.5:
Experiments / 11.2.6:
Communicative Head Gestures for Museum Guide Robots / 11.3:
Observations from Guide-Visitor Interaction / 11.3.1:
Prototype Museum Guide Robot / 11.3.2:
Experiments at a Museum / 11.3.3:
Index / 11.4:
Preface
List of Contributors
Intelligent Environments: Methods, Algorithms and Applications / Dorothy N. Monekosso ; Paolo Remagnino ; Yoshinori Kuno / 1:
34.

電子ブック

EB
Luc De Raedt, Jörg Siekmann, A. Bundy
出版情報: SpringerLink Books - AutoHoldings , Springer Berlin Heidelberg, 2008
所蔵情報: loading…
目次情報: 続きを見る
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
Structure Activity Relationship Prediction / 1.2.1:
A Web Mining Example / 1.2.2:
A Language Learning Example / 1.2.3:
How Does Relational and Logical Learning Work? / 1.3:
A Brief History / 1.4:
An Introduction to Logic / 2:
A Relational Database Example / 2.1:
The Syntax of Clausal Logic / 2.2:
The Semantics of Clausal Logic - Model Theory / 2.3:
Inference with Clausal Logic - Proof Theory / 2.4:
Prolog and SLD-resolution / 2.5:
Historical and Bibliographic Remarks / 2.6:
An Introduction to Learning and Search / 3:
Representing Hypotheses and Instances / 3.1:
Boolean Data / 3.2:
Machine Learning / 3.3:
Data Mining / 3.4:
A Generate-and-Test Algorithm / 3.5:
Structuring the Search Space / 3.6:
Monotonicity / 3.7:
Borders / 3.8:
Refinement Operators / 3.9:
A Generic Algorithm for Mining and Learning / 3.10:
A Complete General-to-Specific Algorithm / 3.11:
A Heuristic General-to-Specific Algorithm / 3.12:
A Branch-and-Bound Algorithm / 3.13:
A Specific-to-General Algorithm / 3.14:
Working with Borders* / 3.15:
Computing a Single Border / 3.15.1:
Computing Two Borders / 3.15.2:
Computing Two Borders Incrementally / 3.15.3:
Operations on Borders / 3.15.4:
Conclusions / 3.16:
Bibliographical Notes / 3.17:
Representations for Mining and Learning / 4:
Representing Data and Hypotheses / 4.1:
Attribute-Value Learning / 4.2:
Multiple-Instance Learning: Dealing With Sets / 4.3:
Relational Learning / 4.4:
Logic Programs / 4.5:
Sequences, Lists, and Grammars / 4.6:
Trees and Terms / 4.7:
Graphs / 4.8:
Background Knowledge / 4.9:
Designing It Yourself / 4.10:
A Hierarchy of Representations* / 4.11:
From AV to BL / 4.11.1:
From MI to AV / 4.11.2:
From RL to MI / 4.11.3:
From LP to RL / 4.11.4:
Propositionalization / 4.12:
A Table-Based Approach / 4.12.1:
A Query-Based Approach / 4.12.2:
Aggregation / 4.13:
Historical and Bibliographical Remarks / 4.14:
Generality and Logical Entailment / 5:
Generality and Logical Entailment Coincide / 5.1:
Propositional Subsumption / 5.2:
Subsumption in Logical Atoms / 5.3:
Specialization Operators / 5.3.1:
Generalization Operators* / 5.3.2:
Computing the lgg and the glb / 5.3.3:
Θ-Subsumption / 5.4:
Soundness and Completeness / 5.4.1:
Deciding Θ-Subsumption / 5.4.2:
Equivalence Classes / 5.4.3:
Variants of Θ-Subsumption* / 5.5:
Object Identity* / 5.5.1:
Inverse Implication* / 5.5.2:
Using Background Knowledge / 5.6:
Saturation and Bottom Clauses / 5.6.1:
Relative Least General Generalization* / 5.6.2:
Semantic Refinement* / 5.6.3:
Aggregation* / 5.7:
Inverse Resolution / 5.8:
A Note on Graphs, Trees, and Sequences / 5.9:
Bibliographic Notes / 5.10:
The Upgrading Story / 6:
Motivation for a Methodology / 6.1:
Methodological Issues / 6.2:
Representing the Examples / 6.2.1:
Representing the Hypotheses / 6.2.2:
Adapting the Algorithm / 6.2.3:
Adding Features / 6.2.4:
Case Study 1: Rule Learning and Foil / 6.3:
Foil's Problem Setting / 6.3.1:
Foil's Algorithm / 6.3.2:
Case Study 2: Decision Tree Learning and Tilde / 6.4:
The Problem Setting / 6.4.1:
Inducing Logical Decision Trees / 6.4.2:
Case Study 3: Frequent Item-Set Mining and Warmr / 6.5:
Relational Association Rules and Local Patterns / 6.5.1:
Computing Frequent Queries / 6.5.2:
Language Bias / 6.6:
Syntactic Bias / 6.6.1:
Semantic Bias / 6.6.2:
Inducing Theories / 6.7:
Introduction to Theory Revision / 7.1:
Theories and Model Inference / 7.1.1:
Theory Revision / 7.1.2:
Overview of the Rest of This Chapter / 7.1.3:
Towards Abductive Logic Programming / 7.2:
Abduction / 7.2.1:
Integrity Constraints / 7.2.2:
Abductive Logic Programming / 7.2.3:
Shapiro's Theory Revision System / 7.3:
Interaction / 7.3.1:
The Model Inference System / 7.3.2:
Two Propositional Theory Revision Systems* / 7.4:
Learning a Propositional Horn Theory Efficiently / 7.4.1:
Heuristic Search in Theory Revision / 7.4.2:
Inducing Constraints / 7.5:
Problem Specification / 7.5.1:
An Algorithm for Inducing Integrity Constraints / 7.5.2:
Probabilistic Logic Learning / 7.6:
Probability Theory Review / 8.1:
Probabilistic Logics / 8.2:
Probabilities on Interpretations / 8.2.1:
Probabilities on Proofs / 8.2.2:
Probabilistic Learning / 8.3:
Parameter Estimation / 8.3.1:
Structure Learning / 8.3.2:
First-Order Probabilistic Logics / 8.4:
Probabilistic Interpretations / 8.4.1:
Probabilistic Proofs / 8.4.2:
Learning from Interpretations / 8.5:
Learning from Entailment / 8.5.2:
Learning from Proof Trees and Traces / 8.5.3:
Relational Reinforcement Learning* / 8.6:
Markov Decision Processes / 8.6.1:
Solving Markov Decision Processes / 8.6.2:
Relational Markov Decision Processes / 8.6.3:
Solving Relational Markov Decision Processes / 8.6.4:
Kernels and Distances for Structured Data / 8.7:
A Simple Kernel and Distance / 9.1:
Kernel Methods / 9.2:
The Max Margin Approach / 9.2.1:
Support Vector Machines / 9.2.2:
The Kernel Trick / 9.2.3:
Distance-Based Learning / 9.3:
Distance Functions / 9.3.1:
The k-Nearest Neighbor Algorithm / 9.3.2:
The k-Means Algorithm / 9.3.3:
Kernels for Structured Data / 9.4:
Convolution and Decomposition / 9.4.1:
Vectors and Tuples / 9.4.2:
Sets and Multi-sets / 9.4.3:
Strings / 9.4.4:
Trees and Atoms / 9.4.5:
Graph Kernels* / 9.4.6:
Distances and Metrics / 9.5:
Generalization and Metrics / 9.5.1:
Sets / 9.5.2:
Atoms and Trees / 9.5.4:
Relational Kernels and Distances / 9.5.6:
Bibliographical and Historical Notes / 9.7:
Computational Aspects of Logical and Relational Learning / 10:
Efficiency of Relational Learning / 10.1:
Coverage as θ-Subsumption / 10.1.1:
θ-Subsumption Empirically / 10.1.2:
Optimizing the Learner for θ-subsumption / 10.1.3:
Computational Learning Theory* / 10.2:
Notions of Learnability / 10.2.1:
Positive Results / 10.2.2:
Negative Results / 10.2.3:
Historical and Bibliographic Notes / 10.3:
Lessons Learned / 11:
A Hierarchy of Representations / 11.1:
From Upgrading to Downgrading / 11.2:
Propositionalization and Aggregation / 11.3:
Learning Tasks / 11.4:
Operators and Generality / 11.5:
Unification and Variables / 11.6:
Three Learning Settings / 11.7:
Knowledge and Background Knowledge / 11.8:
Applications / 11.9:
References
Author Index
Index
Introduction / 1:
What Is Logical and Relational Learning? / 1.1:
Why Is Logical and Relational Learning Important? / 1.2:
35.

電子ブック

EB
Toshinori Munakata, David Gries, Fred B. Schneider
出版情報: Springer eBooks Computer Science , Springer London, 2008
所蔵情報: loading…
目次情報: 続きを見る
Preface
Introduction / 1:
An Overview of the Field of Artificial Intelligence / 1.1:
An Overview of the Areas Covered in this Book / 1.2:
Neural Networks: Fundamentals and the Backpropagation Model / 2:
What is a Neural Network? / 2.1:
A Neuron / 2.2:
Basic Idea of the Backpropagation Model / 2.3:
Details of the Backpropagation Model / 2.4:
A Cookbook Recipe to Implement the Backpropagation Model / 2.5:
Additional Technical Remarks on the Backpropagation Model / 2.6:
Simple Perceptrons / 2.7:
Applications of the Backpropagation Model / 2.8:
General Remarks on Neural Networks / 2.9:
Neural Networks: Other Models / 3:
Prelude / 3.1:
Associative Memory / 3.2:
Hopfield Networks / 3.3:
The Hopfield-Tank Model for Optimization Problems: The Basics / 3.4:
One-Dimensional Layout / 3.4.1:
Two-Dimensional Layout / 3.4.2:
The Hopfield-Tank Model for Optimization Problems: Applications / 3.5:
The N-Queen Problem / 3.5.1:
A General Guideline to Apply the Hopfield-Tank Model to Optimization Problems / 3.5.2:
Traveling Salesman Problem (TSP) / 3.5.3:
The Kohonen Model / 3.6:
Simulated Annealing / 3.7:
Boltzmann Machines / 3.8:
An Overview / 3.8.1:
Unsupervised Learning by the Boltzmann Machine: The Basic Architecture / 3.8.2:
Unsupervised Learning by the Boltzmann Machine: Algorithms / 3.8.3:
Appendix. Derivation of Delta-Weights / 3.8.4:
Genetic Algorithms and Evolutionary Computing / 4:
What are Genetic Algorithms and Evolutionary Computing? / 4.1:
Fundamentals of Genetic Algorithms / 4.2:
A Simple Illustration of Genetic Algorithms / 4.3:
A Machine Learning Example: Input-to-Output Mapping / 4.4:
A Hard Optimization Example: the Traveling Salesman Problem (TSP) / 4.5:
Schemata / 4.6:
Changes of Schemata Over Generations / 4.6.1:
Example of Schema Processing / 4.6.2:
Genetic Programming / 4.7:
Additional Remarks / 4.8:
Fuzzy Systems / 5:
Fundamentals of Fuzzy Sets / 5.1:
What is a Fuzzy Set? / 5.2.1:
Basic Fuzzy Set Relations / 5.2.2:
Basic Fuzzy Set Operations and Their Properties / 5.2.3:
Operations Unique to Fuzzy Sets / 5.2.4:
Fuzzy Relations / 5.3:
Ordinary (Nonfuzzy) Relations / 5.3.1:
Fuzzy Relations Defined on Ordinary Sets / 5.3.2:
Fuzzy Relations Derived from Fuzzy Sets / 5.3.3:
Fuzzy Logic / 5.4:
Ordinary Set Theory and Ordinary Logic / 5.4.1:
Fuzzy Logic Fundamentals / 5.4.2:
Fuzzy Control / 5.5:
Fuzzy Control Basics / 5.5.1:
Case Study: Controlling Temperature with a Variable Heat Source / 5.5.2:
Extended Fuzzy if-then Rules Tables / 5.5.3:
A Note on Fuzzy Control Expert Systems / 5.5.4:
Hybrid Systems / 5.6:
Fundamental Issues / 5.7:
Rough Sets / 5.8:
Review of Ordinary Sets and Relations / 6.1:
Information Tables and Attributes / 6.3:
Approximation Spaces / 6.4:
Knowledge Representation Systems / 6.5:
More on the Basics of Rough Sets / 6.6:
Case Study and Comparisons with Other Techniques / 6.7:
Rough Sets Applied to the Case Study / 6.8.1:
ID3 Approach and the Case Study / 6.8.2:
Comparisons with Other Techniques / 6.8.3:
Chaos / 7:
What is Chaos? / 7.1:
Representing Dynamical Systems / 7.2:
Discrete dynamical systems / 7.2.1:
Continuous dynamical systems / 7.2.2:
State and Phase Spaces / 7.3:
Trajectory, Orbit and Flow / 7.3.1:
Cobwebs / 7.3.2:
Equilibrium Solutions and Stability / 7.4:
Attractors / 7.5:
Fixed-point attractors / 7.5.1:
Periodic attractors / 7.5.2:
Quasi-periodic attractors / 7.5.3:
Chaotic attractors / 7.5.4:
Bifurcations / 7.6:
Fractals / 7.7:
Applications of Chaos / 7.8:
Index
Preface
Introduction / 1:
An Overview of the Field of Artificial Intelligence / 1.1:
36.

電子ブック

EB
Toshinori Munakata, David Gries, Fred B. Schneider
出版情報: SpringerLink Books - AutoHoldings , Springer London, 2008
所蔵情報: loading…
目次情報: 続きを見る
Preface
Introduction / 1:
An Overview of the Field of Artificial Intelligence / 1.1:
An Overview of the Areas Covered in this Book / 1.2:
Neural Networks: Fundamentals and the Backpropagation Model / 2:
What is a Neural Network? / 2.1:
A Neuron / 2.2:
Basic Idea of the Backpropagation Model / 2.3:
Details of the Backpropagation Model / 2.4:
A Cookbook Recipe to Implement the Backpropagation Model / 2.5:
Additional Technical Remarks on the Backpropagation Model / 2.6:
Simple Perceptrons / 2.7:
Applications of the Backpropagation Model / 2.8:
General Remarks on Neural Networks / 2.9:
Neural Networks: Other Models / 3:
Prelude / 3.1:
Associative Memory / 3.2:
Hopfield Networks / 3.3:
The Hopfield-Tank Model for Optimization Problems: The Basics / 3.4:
One-Dimensional Layout / 3.4.1:
Two-Dimensional Layout / 3.4.2:
The Hopfield-Tank Model for Optimization Problems: Applications / 3.5:
The N-Queen Problem / 3.5.1:
A General Guideline to Apply the Hopfield-Tank Model to Optimization Problems / 3.5.2:
Traveling Salesman Problem (TSP) / 3.5.3:
The Kohonen Model / 3.6:
Simulated Annealing / 3.7:
Boltzmann Machines / 3.8:
An Overview / 3.8.1:
Unsupervised Learning by the Boltzmann Machine: The Basic Architecture / 3.8.2:
Unsupervised Learning by the Boltzmann Machine: Algorithms / 3.8.3:
Appendix. Derivation of Delta-Weights / 3.8.4:
Genetic Algorithms and Evolutionary Computing / 4:
What are Genetic Algorithms and Evolutionary Computing? / 4.1:
Fundamentals of Genetic Algorithms / 4.2:
A Simple Illustration of Genetic Algorithms / 4.3:
A Machine Learning Example: Input-to-Output Mapping / 4.4:
A Hard Optimization Example: the Traveling Salesman Problem (TSP) / 4.5:
Schemata / 4.6:
Changes of Schemata Over Generations / 4.6.1:
Example of Schema Processing / 4.6.2:
Genetic Programming / 4.7:
Additional Remarks / 4.8:
Fuzzy Systems / 5:
Fundamentals of Fuzzy Sets / 5.1:
What is a Fuzzy Set? / 5.2.1:
Basic Fuzzy Set Relations / 5.2.2:
Basic Fuzzy Set Operations and Their Properties / 5.2.3:
Operations Unique to Fuzzy Sets / 5.2.4:
Fuzzy Relations / 5.3:
Ordinary (Nonfuzzy) Relations / 5.3.1:
Fuzzy Relations Defined on Ordinary Sets / 5.3.2:
Fuzzy Relations Derived from Fuzzy Sets / 5.3.3:
Fuzzy Logic / 5.4:
Ordinary Set Theory and Ordinary Logic / 5.4.1:
Fuzzy Logic Fundamentals / 5.4.2:
Fuzzy Control / 5.5:
Fuzzy Control Basics / 5.5.1:
Case Study: Controlling Temperature with a Variable Heat Source / 5.5.2:
Extended Fuzzy if-then Rules Tables / 5.5.3:
A Note on Fuzzy Control Expert Systems / 5.5.4:
Hybrid Systems / 5.6:
Fundamental Issues / 5.7:
Rough Sets / 5.8:
Review of Ordinary Sets and Relations / 6.1:
Information Tables and Attributes / 6.3:
Approximation Spaces / 6.4:
Knowledge Representation Systems / 6.5:
More on the Basics of Rough Sets / 6.6:
Case Study and Comparisons with Other Techniques / 6.7:
Rough Sets Applied to the Case Study / 6.8.1:
ID3 Approach and the Case Study / 6.8.2:
Comparisons with Other Techniques / 6.8.3:
Chaos / 7:
What is Chaos? / 7.1:
Representing Dynamical Systems / 7.2:
Discrete dynamical systems / 7.2.1:
Continuous dynamical systems / 7.2.2:
State and Phase Spaces / 7.3:
Trajectory, Orbit and Flow / 7.3.1:
Cobwebs / 7.3.2:
Equilibrium Solutions and Stability / 7.4:
Attractors / 7.5:
Fixed-point attractors / 7.5.1:
Periodic attractors / 7.5.2:
Quasi-periodic attractors / 7.5.3:
Chaotic attractors / 7.5.4:
Bifurcations / 7.6:
Fractals / 7.7:
Applications of Chaos / 7.8:
Index
Preface
Introduction / 1:
An Overview of the Field of Artificial Intelligence / 1.1:
37.

電子ブック

EB
Srikanta Patnaik, D.M. Gabbay, J. Siekmann
出版情報: Springer eBooks Computer Science , Springer Berlin Heidelberg, 2007
所蔵情報: loading…
目次情報: 続きを見る
Cybernetic View of Robot Cognition and Perception / 1:
Introduction to the Model of Cognition / 1.1:
Various States of Cognition / 1.1.1:
Cycles of Cognition / 1.1.2:
Visual Perception / 1.2:
Human Visual System / 1.2.1:
Vision for Mobile Robots / 1.2.2:
Visual Recognition / 1.3:
Template Matching / 1.3.1:
Feature-Based Model / 1.3.2:
Fourier Model / 1.3.3:
Structural Model / 1.3.4:
The Computational Theory of Marr / 1.3.5:
Machine Learning / 1.4:
Properties and Issues in Machine Learning / 1.4.1:
Classification of Machine Learning / 1.4.2:
Soft Computing Tools and Robot Cognition / 1.5:
Modeling Cognition Using ANN / 1.5.1:
Fuzzy Logic in Robot Cognition / 1.5.2:
Genetic Algorithms in Robot Cognition / 1.5.3:
Summary / 1.6:
Map Building / 2:
Introduction / 2.1:
Constructing a 2D World Map / 2.2:
Data Structure for Map Building / 2.2.1:
Explanation of the Algorithm / 2.2.2:
An Illustration of Procedure: Traverse Boundary / 2.2.3:
An Illustration of Procedure: Map Building / 2.2.4:
Robot Simulation / 2.2.5:
Execution of the Map Building Program / 2.3:
Path Planning / 2.4:
Representation of the Robot's Environment / 3.1:
GVD Using Cellular Automata / 3.2.1:
Path Optimization by the Quadtree Approach / 3.3:
Introduction to the Quadtree / 3.3.1:
Definition / 3.3.2:
Generation of the Quadtree / 3.3.3:
Neighbor-Finding Algorithms for the Quadtree / 3.4:
The A* Algorithm for Selecting the Best Neighbor / 3.5:
Execution of the Quadtree-Based Path Planner Program / 3.6:
Navigation Using a Genetic Algorithm / 3.7:
Genetic Algorithms / 4.1:
Encoding of a Chromosome / 4.2.1:
Crossover / 4.2.2:
Mutation / 4.2.3:
Parameters of a GA / 4.2.4:
Selection / 4.2.5:
Navigation by a Genetic Algorithm / 4.3:
Formulation of Navigation / 4.3.1:
Execution of the GA-Based Navigation Program / 4.4:
Replanning by Temporal Associative Memory / 4.5:
Introduction to TAM / 4.5.1:
Encoding and Decoding Process in a Temporal Memory / 4.5.2:
An Example in a Semi-dynamic Environment / 4.5.3:
Implications of Results / 4.5.4:
Robot Programming Packages / 4.6:
Robot Hardware and Software Resources / 5.1:
Components / 5.2.1:
ARIA / 5.3:
ARIA Client-Server / 5.3.1:
Robot Communication / 5.3.2:
Opening the Connection / 5.3.3:
ArRobot / 5.3.4:
Range Devices / 5.3.5:
Commands and Actions / 5.3.6:
Socket Programming / 5.4:
Socket Programming in ARIA / 5.4.1:
BotSpeak Speech System / 5.5:
Functions / 5.5.1:
Small Vision System (SVS) / 5.6:
SVS C++ Classes / 5.6.1:
Parameter Classes / 5.6.2:
Stereo Image Class / 5.6.3:
Acquisition Classes / 5.6.4:
Multithreading / 5.7:
Client Front-End Design Using Java / 5.8:
Robot Parameter Display / 5.9:
Flow Chart and Source Code for Robot Parameter Display / 6.1:
Program for BotSpeak / 6.3:
Flow Chart and Source Code for BotSpeak Program / 7.1:
Gripper Control Program / 7.3:
Flow Chart and Source Code for Gripper Control Program / 8.1:
Program for Sonar Reading Display / 8.3:
Flow Chart and Source Code for Sonar Reading Display on Client / 9.1:
Program for Wandering Within the Workspace / 9.3:
Algorithm and Source Code for Wandering Within the Workspace / 10.1:
Program for Tele-operation / 10.3:
Algorithm and Source Code for Tele-operation / 11.1:
A Complete Program for Autonomous Navigation / 11.3:
The ImageServer Program / 12.1:
The MotionServer Program / 12.3:
The Navigator Client Program / 12.4:
Imaging Geometry / 12.5:
Necessity for 3D Reconstruction / 13.1:
Building Perception / 13.3:
Problems of Understanding 3D Objects from 2D Imagery / 13.3.1:
Process of 3D Reconstruction / 13.3.2:
Image Formation / 13.4:
Perspective Projection in One Dimension / 13.4.2:
Perspective Projection in 3D / 13.4.3:
Global Representation / 13.5:
Transformation to Global Coordinate System / 13.6:
Image Capture Program / 13.7:
Algorithm for Image Capture / 14.1:
Building 3D Perception Using a Kalman Filter / 14.3:
Minimal Representation / 15.1:
Recursive Kalman Filter / 15.3:
Experiments and Estimation / 15.4:
Reconstruction of 3D Points / 15.4.1:
Reconstruction of a 3D Line / 15.4.2:
Reconstruction of a 3D Plane / 15.4.3:
Correspondence Problem in 3D Recovery / 15.5:
Program for 3D Perception / 15.6:
Flow Chart and Source Code for 3D Perception / 16.1:
Perceptions of Non-planar Surfaces / 16.3:
Methods of Edge Detection / 17.1:
Curve Tracking and Curve Fitting / 17.3:
Program for Curve Detector / 17.4:
Intelligent Garbage Collection / 17.5:
Algorithms and Source Code for Garbage Collection / 18.1:
References / 18.3:
Index
Cybernetic View of Robot Cognition and Perception / 1:
Introduction to the Model of Cognition / 1.1:
Various States of Cognition / 1.1.1:
38.

eBook

EB
Srikanta Patnaik, D. M. Gabbay, J. Siekmann, Luigia Carlucci Aiello
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2007
Table of contents:
Cybernetic View of Robot Cognition and Perception / 1:
Introduction to the Model of Cognition / 1.1:
Various States of Cognition / 1.1.1:
Cycles of Cognition / 1.1.2:
Visual Perception / 1.2:
Human Visual System / 1.2.1:
Vision for Mobile Robots / 1.2.2:
Visual Recognition / 1.3:
Template Matching / 1.3.1:
Feature-Based Model / 1.3.2:
Fourier Model / 1.3.3:
Structural Model / 1.3.4:
The Computational Theory of Marr / 1.3.5:
Machine Learning / 1.4:
Properties and Issues in Machine Learning / 1.4.1:
Classification of Machine Learning / 1.4.2:
Soft Computing Tools and Robot Cognition / 1.5:
Modeling Cognition Using ANN / 1.5.1:
Fuzzy Logic in Robot Cognition / 1.5.2:
Genetic Algorithms in Robot Cognition / 1.5.3:
Summary / 1.6:
Map Building / 2:
Introduction / 2.1:
Constructing a 2D World Map / 2.2:
Data Structure for Map Building / 2.2.1:
Explanation of the Algorithm / 2.2.2:
An Illustration of Procedure: Traverse Boundary / 2.2.3:
An Illustration of Procedure: Map Building / 2.2.4:
Robot Simulation / 2.2.5:
Execution of the Map Building Program / 2.3:
Path Planning / 2.4:
Representation of the Robot's Environment / 3.1:
GVD Using Cellular Automata / 3.2.1:
Path Optimization by the Quadtree Approach / 3.3:
Introduction to the Quadtree / 3.3.1:
Definition / 3.3.2:
Generation of the Quadtree / 3.3.3:
Neighbor-Finding Algorithms for the Quadtree / 3.4:
The A* Algorithm for Selecting the Best Neighbor / 3.5:
Execution of the Quadtree-Based Path Planner Program / 3.6:
Navigation Using a Genetic Algorithm / 3.7:
Genetic Algorithms / 4.1:
Encoding of a Chromosome / 4.2.1:
Crossover / 4.2.2:
Mutation / 4.2.3:
Parameters of a GA / 4.2.4:
Selection / 4.2.5:
Navigation by a Genetic Algorithm / 4.3:
Formulation of Navigation / 4.3.1:
Execution of the GA-Based Navigation Program / 4.4:
Replanning by Temporal Associative Memory / 4.5:
Introduction to TAM / 4.5.1:
Encoding and Decoding Process in a Temporal Memory / 4.5.2:
An Example in a Semi-dynamic Environment / 4.5.3:
Implications of Results / 4.5.4:
Robot Programming Packages / 4.6:
Robot Hardware and Software Resources / 5.1:
Components / 5.2.1:
ARIA / 5.3:
ARIA Client-Server / 5.3.1:
Robot Communication / 5.3.2:
Opening the Connection / 5.3.3:
ArRobot / 5.3.4:
Range Devices / 5.3.5:
Commands and Actions / 5.3.6:
Socket Programming / 5.4:
Socket Programming in ARIA / 5.4.1:
BotSpeak Speech System / 5.5:
Functions / 5.5.1:
Small Vision System (SVS) / 5.6:
SVS C++ Classes / 5.6.1:
Parameter Classes / 5.6.2:
Stereo Image Class / 5.6.3:
Acquisition Classes / 5.6.4:
Multithreading / 5.7:
Client Front-End Design Using Java / 5.8:
Robot Parameter Display / 5.9:
Flow Chart and Source Code for Robot Parameter Display / 6.1:
Program for BotSpeak / 6.3:
Flow Chart and Source Code for BotSpeak Program / 7.1:
Gripper Control Program / 7.3:
Flow Chart and Source Code for Gripper Control Program / 8.1:
Program for Sonar Reading Display / 8.3:
Flow Chart and Source Code for Sonar Reading Display on Client / 9.1:
Program for Wandering Within the Workspace / 9.3:
Algorithm and Source Code for Wandering Within the Workspace / 10.1:
Program for Tele-operation / 10.3:
Algorithm and Source Code for Tele-operation / 11.1:
A Complete Program for Autonomous Navigation / 11.3:
The ImageServer Program / 12.1:
The MotionServer Program / 12.3:
The Navigator Client Program / 12.4:
Imaging Geometry / 12.5:
Necessity for 3D Reconstruction / 13.1:
Building Perception / 13.3:
Problems of Understanding 3D Objects from 2D Imagery / 13.3.1:
Process of 3D Reconstruction / 13.3.2:
Image Formation / 13.4:
Perspective Projection in One Dimension / 13.4.2:
Perspective Projection in 3D / 13.4.3:
Global Representation / 13.5:
Transformation to Global Coordinate System / 13.6:
Image Capture Program / 13.7:
Algorithm for Image Capture / 14.1:
Building 3D Perception Using a Kalman Filter / 14.3:
Minimal Representation / 15.1:
Recursive Kalman Filter / 15.3:
Experiments and Estimation / 15.4:
Reconstruction of 3D Points / 15.4.1:
Reconstruction of a 3D Line / 15.4.2:
Reconstruction of a 3D Plane / 15.4.3:
Correspondence Problem in 3D Recovery / 15.5:
Program for 3D Perception / 15.6:
Flow Chart and Source Code for 3D Perception / 16.1:
Perceptions of Non-planar Surfaces / 16.3:
Methods of Edge Detection / 17.1:
Curve Tracking and Curve Fitting / 17.3:
Program for Curve Detector / 17.4:
Intelligent Garbage Collection / 17.5:
Algorithms and Source Code for Garbage Collection / 18.1:
References / 18.3:
Index
39.

eBook

EB
Maria Chli, Philippe De Wilde, Lakhmi Jain
Publication info: Springer eBooks Computer Science, Springer London, 2009
Table of contents:
Introduction / 1:
Background to the Research / 1.1:
Approach / 1.2:
Contributions / 1.3:
Reader's Guide to the Book / 1.4:
Research Issues / 2:
Multi-agent Systems / 2.1:
Agent-based Modelling / 2.2:
An Ecosystem Perspective of Multi-agent Systems / 2.3:
Convergence Issues / 2.4:
Interaction and Knowledge Exchange / 2.5:
Stability of Multi-agent Systems / 3:
Background / 3.1:
Stability in Games / 3.3:
Stochastic Systems Primer / 3.3.1:
Definition of Stability / 3.3.2:
Example Games / 3.3.3:
Experiments / 3.4:
Trading Simulation Model / 3.4.1:
Load Transportation Model / 3.4.2:
Virus Spreading Model / 3.4.3:
The Market Demonstrator / 3.4.4:
Conclusion / 3.5:
Limitations and Future Work / 3.5.1:
Achievements / 3.5.2:
The Emergence of Knowledge Exchange: An Agent-based Model of a Software Market / 4:
Digital Business Ecosystem / 4.1:
A DBE Economy / 4.2.1:
Market Efficiency / 4.2.2:
Exchange in Economic Markets / 4.3:
The Software Industry / 4.3.2:
An Agent-based Model of the DBE / 4.4:
The Setting / 4.4.1:
Exchange of Services / 4.4.3:
Discussion / 4.4.4:
Analysis of the Model / 4.5:
Service Exchange / 4.5.1:
Concluding Remarks / 4.5.2:
Collaborative Query Expansion / 5:
Query Expansion / 5.1:
Discriminative Document Terms / 5.1.2:
Term Value / 5.2:
Implementation / 5.3:
Initial Phase / 5.3.1:
Stemming / 5.3.2:
Common Word Filtering / 5.3.3:
Term Selection / 5.3.4:
Evaluation / 5.4:
Evaluation Results / 5.4.1:
Introducing User Collaboration for Query Expansion / 5.5:
Collaboration Procedure / 5.5.1:
Comparing Sets of Terms / 5.5.2:
Example of Collaboration / 5.5.3:
Conclusions / 5.6:
Micro-economic Control of Distributed Intelligent Personal Assistants / 6:
Stable Strategies / 6.1:
Network of Intelligent Personal Assistants / 6.2:
Definition of the Automatic PA / 6.2.1:
Further Specifications / 6.2.2:
The Intelligent Automatic PA / 6.2.3:
Negotiating and Optimizing Agents / 6.2.4:
An Example / 6.2.5:
Finding a Stable Strategy / 6.3:
The Discrete Event Simulator / 6.3.1:
A Stable Strategy / 6.3.2:
Conclusions and Future Work / 6.4:
Future Directions / 7.1:
Ecosystems of Networked Businesses / 7.2.1:
Exchange in Natural Ecosystems / 7.2.2:
Appendices / 7.3:
The EEII Project / A:
Statistical Analysis / B:
Statistical Hypothesis Testing / B.1:
Tests for Showing That Two Samples Come from the Same Distribution / B.2:
Methodology: Evolutionary Algorithms / C:
References
Index
40.

eBook

EB
Maria Chli, Philippe De Wilde, Lakhmi Jain
Publication info: SpringerLink Books - AutoHoldings, Springer London, 2009
Table of contents:
Introduction / 1:
Background to the Research / 1.1:
Approach / 1.2:
Contributions / 1.3:
Reader's Guide to the Book / 1.4:
Research Issues / 2:
Multi-agent Systems / 2.1:
Agent-based Modelling / 2.2:
An Ecosystem Perspective of Multi-agent Systems / 2.3:
Convergence Issues / 2.4:
Interaction and Knowledge Exchange / 2.5:
Stability of Multi-agent Systems / 3:
Background / 3.1:
Stability in Games / 3.3:
Stochastic Systems Primer / 3.3.1:
Definition of Stability / 3.3.2:
Example Games / 3.3.3:
Experiments / 3.4:
Trading Simulation Model / 3.4.1:
Load Transportation Model / 3.4.2:
Virus Spreading Model / 3.4.3:
The Market Demonstrator / 3.4.4:
Conclusion / 3.5:
Limitations and Future Work / 3.5.1:
Achievements / 3.5.2:
The Emergence of Knowledge Exchange: An Agent-based Model of a Software Market / 4:
Digital Business Ecosystem / 4.1:
A DBE Economy / 4.2.1:
Market Efficiency / 4.2.2:
Exchange in Economic Markets / 4.3:
The Software Industry / 4.3.2:
An Agent-based Model of the DBE / 4.4:
The Setting / 4.4.1:
Exchange of Services / 4.4.3:
Discussion / 4.4.4:
Analysis of the Model / 4.5:
Service Exchange / 4.5.1:
Concluding Remarks / 4.5.2:
Collaborative Query Expansion / 5:
Query Expansion / 5.1:
Discriminative Document Terms / 5.1.2:
Term Value / 5.2:
Implementation / 5.3:
Initial Phase / 5.3.1:
Stemming / 5.3.2:
Common Word Filtering / 5.3.3:
Term Selection / 5.3.4:
Evaluation / 5.4:
Evaluation Results / 5.4.1:
Introducing User Collaboration for Query Expansion / 5.5:
Collaboration Procedure / 5.5.1:
Comparing Sets of Terms / 5.5.2:
Example of Collaboration / 5.5.3:
Conclusions / 5.6:
Micro-economic Control of Distributed Intelligent Personal Assistants / 6:
Stable Strategies / 6.1:
Network of Intelligent Personal Assistants / 6.2:
Definition of the Automatic PA / 6.2.1:
Further Specifications / 6.2.2:
The Intelligent Automatic PA / 6.2.3:
Negotiating and Optimizing Agents / 6.2.4:
An Example / 6.2.5:
Finding a Stable Strategy / 6.3:
The Discrete Event Simulator / 6.3.1:
A Stable Strategy / 6.3.2:
Conclusions and Future Work / 6.4:
Future Directions / 7.1:
Ecosystems of Networked Businesses / 7.2.1:
Exchange in Natural Ecosystems / 7.2.2:
Appendices / 7.3:
The EEII Project / A:
Statistical Analysis / B:
Statistical Hypothesis Testing / B.1:
Tests for Showing That Two Samples Come from the Same Distribution / B.2:
Methodology: Evolutionary Algorithms / C:
References
Index
41.

eBook

EB
Amit Konar, L. C. Jain, Lakhmi C. Jain
Publication info: Springer eBooks Computer Science, Springer London, 2005
Table of contents:
Foreword
Preface
Acknowledgments
The Psychological Basis of Cognitive Modeling / Chapter 1:
Introduction / 1.1:
Cognitive Models of Pattern Recognition / 1.2:
Template-Matching Theory / 1.2.1:
Prototype-Matching Theory / 1.2.2:
Feature-Based Approach for Pattern Recognition / 1.2.3:
The Computational Approach / 1.2.4:
Cognitive Models of Memory / 1.3:
Atkinson-Shiffrin's Model / 1.3.1:
Debates on Atkinson-Shiffrin's Model / 1.3.2:
Tulving's Model / 1.3.3:
The Parallel Distributed Processing Approach / 1.3.4:
Mental Imagery / 1.4:
Mental Representation of Imagery / 1.4.1:
Rotation of Mental Imagery / 1.4.2:
Imagery and Size / 1.4.3:
Imagery and Shape / 1.4.4:
Part-Whole Relationship in Mental Imagery / 1.4.5:
Ambiguity in Mental Imagery / 1.4.6:
Neurophysiological Similarity between Imagery and Perception / 1.4.7:
Cognitive Maps of Mental Imagery / 1.4.8:
Understanding a Problem / 1.5:
Steps in Understanding a Problem / 1.5.1:
A Cybernetic View of Cognition / 1.6:
The States of Cognition / 1.6.1:
Computational Modeling of Cognitive Systems / 1.7:
Petri Nets: A Brief Review / 1.8:
Extension of Petri Net Models for Distributed Modeling of Cognition / 1.9:
Scope of the Book / 1.10:
Summary / 1.11:
Exercises
References
Parallel and Distributed Logic Programming / Chapter 2:
Formal Definitions / 2.1:
Preliminary Definitions / 2.2.1:
Properties of the Substitution Set / 2.2.2:
SLD Resolution / 2.2.3:
Concurrency in Resolution / 2.3:
Types of Concurrent Resolution / 2.3.1:
Petri Net Model for Concurrent Resolution / 2.4:
Extended Petri Net / 2.4.1:
Algorithm for Concurrent Resolution / 2.4.2:
Performance Analysis of Petri Net-Based Models / 2.5:
The Speed-up / 2.5.1:
The Resource Utilization Rate / 2.5.2:
Resource Unlimited Speed-up and Utilization Rate / 2.5.3:
Conclusions / 2.6:
Distributed Reasoning by Fuzzy Petri Nets: A Review / Chapter 3:
Fuzzy Logic and Approximate Reasoning / 3.1:
Structured Models of Approximate Reasoning / 3.2:
Looney's Model / 3.3:
The Model Proposed by Chen et al / 3.4:
Konar and Mandal's Model / 3.5:
Yu's Model / 3.6:
Chen's Model for Backward Reasoning / 3.7:
Bugarin and Barro's Model / 3.8:
Pedrycz and Gomide's Learning Model / 3.9:
Construction of Reduction Rules Using FPN / 3.10:
Scope of Extension of Fuzzy Reasoning on Petri Nets / 3.11:
Belief Propagation and Belief Revision Models in Fuzzy Petri Nets / 3.12:
Imprecision Management in an Acyclic FPN / 4.1:
Formal Definitions and the Proposed Model / 4.2.1:
Proposed Model for Belief Propagation / 4.2.2:
Proposed Algorithm for Belief Propagation / 4.2.3:
Properties of FPN and Belief Propagation Scheme / 4.2.4:
Imprecision and Inconsistency Management in a Cyclic FPN / 4.3:
Proposed Model for Belief Revision / 4.3.1:
Stability Analysis of the Belief Revision Model / 4.3.2:
Detection and Elimination of Limit Cycles / 4.3.3:
Nonmonotonic Reasoning in an FPN / 4.3.4:
Building Expert Systems Using Fuzzy Petri Nets / 4.4:
The Database / 5.1:
The Data-tree / 5.2.1:
The Knowledge Base / 5.3:
The Inference Engine / 5.4:
Searching Antecedent Parts of PR in the Data-tree / 5.4.1:
Formation of the FPN / 5.4.2:
Decision Making and Explanation Tracing / 5.4.3:
A Case Study / 5.5:
Performance Evaluation / 5.6:
Time-Complexity for the Default-Data-Tree-Formation Procedure / 5.6.1:
Time-Complexity for the Procedure Suspect-Identification / 5.6.2:
Time-Complexity for the Procedure Variable-Instantiation-of-PRs / 5.6.3:
Time-Complexity for the Procedure Create-tree / 5.6.4:
Time-Complexity for the Procedure Search-on-Data-Tree / 5.6.5:
Time-Complexity for the Procedure FPN-Formation / 5.6.6:
Time-Complexity for the Belief-Revision and Limit-Cycle-Detection Procedure / 5.6.7:
Time-Complexity Analysis for the Procedure Limit-Cycle-Elimination / 5.6.8:
Time-Complexity for the Procedure Nonmonotonic Reasoning / 5.6.9:
Time-Complexity for the Procedure Decision-Making and Explanation Tracing / 5.6.10:
Time-Complexity of the Overall Expert System / 5.6.11:
Distributed Learning Using Fuzzy Cognitive Maps / 5.7:
Axelrod's Cognitive Maps / 6.1:
Kosko's Model / 6.3:
Kosko's Extended Model / 6.4:
Adaptive FCMs / 6.5:
Zhang, Chen, and Bezdek's Model / 6.6:
Pal and Konar's FCM Model / 6.7:
Unsupervised Learning by Fuzzy Petri Nets / 6.8:
The Proposed Model for Cognitive Learning / 7.1:
Encoding of Weights / 7.2.1:
The Recall Model / 7.2.2:
State-Space Formulation / 7.3:
State-Space Model for Belief Updating / 7.3.1:
State-Space Model for FTT Updating of Transitions / 7.3.2:
State-Space Model for Weights / 7.3.3:
Stability Analysis of the Cognitive Model / 7.4:
Computer Simulation / 7.5:
Implication of the Results / 7.6:
Knowledge Refinement by Hebbian Learning / 7.7:
The Encoding Model / 7.7.1:
The Recall/Reasoning Model / 7.7.2:
Case Study by Computer Simulation / 7.7.3:
Supervised Learning by a Fuzzy Petri Net / 7.7.4:
Proposed Model of Fuzzy Petri Nets / 8.1:
Algorithm for Training / 8.2.1:
Analysis of Convergence / 8.4:
Application in Fuzzy Pattern Recognition / 8.5:
Distributed Modeling of Abduction, Reciprocity, and Duality by Fuzzy Petri Nets / 8.6:
State-Space Formulation of the Proposed FPN Model / 9.1:
The Behavioral Model of FPN / 9.3.1:
State-Space Formulation of the Model / 9.3.2:
Special Cases of the Model / 9.3.3:
Stability Analysis / 9.4:
Forward Reasoning in FPNs / 9.5:
Abductive Reasoning in FPN / 9.6:
Bi-directional Reasoning in an FPN / 9.7:
Fuzzy Modus Tollens and Duality / 9.8:
Human Mood Detection and Control: A Cybernetic Approach / 9.9:
Filtering, Segmentation and Localization of Facial Components / 10.1:
Segmentation of the Mouth Region / 10.2.1:
Segmentation of the Eye Region / 10.2.2:
Segmentation of Eyebrow Constriction / 10.2.3:
Determination of Facial Attributes / 10.3:
Determination of the Mouth-Opening / 10.3.1:
Determination of the Eye-Opening / 10.3.2:
Determination of the Length of Eyebrow-Constriction / 10.3.3:
Fuzzy Relational Model for Mood Detection / 10.4:
Fuzzification of Facial Attributes / 10.4.1:
The Fuzzy Relational Model for Mood Detection / 10.4.2:
Validation of System Performance / 10.5:
A Basic Scheme of Human Mood Control / 10.6:
A Simple Model of Human Mood Transition Dynamics / 10.7:
The Model / 10.7.1:
Properties of the Model / 10.7.2:
The Proportional Model of Human Mood Control / 10.8:
Mamdani's Model for Mood Control / 10.9:
Ranking the Music, Audio, and Video Clips / 10.10:
Experimental Results / 10.11:
Distributed Planning and Multi-agent Coordination of Robots / 10.12:
Single-Agent Planning / 11.1:
Multi-Agent Planning / 11.3:
Task Sharing and Distribution in Multi-agent Planning / 11.3.1:
Cooperation with/without Communication / 11.3.2:
Homogeneous and Heterogeneous Distributed Planning / 11.3.3:
Vision-based Transportation of Blocks by Two Robots / 11.4:
Timing Analysis of the Transportation Problem / 11.5:
Analysis with Two Agents / 11.6.1:
Analysis with n Agents / 11.6.2:
Index / 11.7:
42.

eBook

EB
Amit Konar, L. C. Jain, Lakhmi C. Jain
Publication info: SpringerLink Books - AutoHoldings, Springer London, 2005
Table of contents:
Foreword
Preface
Acknowledgments
The Psychological Basis of Cognitive Modeling / Chapter 1:
Introduction / 1.1:
Cognitive Models of Pattern Recognition / 1.2:
Template-Matching Theory / 1.2.1:
Prototype-Matching Theory / 1.2.2:
Feature-Based Approach for Pattern Recognition / 1.2.3:
The Computational Approach / 1.2.4:
Cognitive Models of Memory / 1.3:
Atkinson-Shiffrin's Model / 1.3.1:
Debates on Atkinson-Shiffrin's Model / 1.3.2:
Tulving's Model / 1.3.3:
The Parallel Distributed Processing Approach / 1.3.4:
Mental Imagery / 1.4:
Mental Representation of Imagery / 1.4.1:
Rotation of Mental Imagery / 1.4.2:
Imagery and Size / 1.4.3:
Imagery and Shape / 1.4.4:
Part-Whole Relationship in Mental Imagery / 1.4.5:
Ambiguity in Mental Imagery / 1.4.6:
Neurophysiological Similarity between Imagery and Perception / 1.4.7:
Cognitive Maps of Mental Imagery / 1.4.8:
Understanding a Problem / 1.5:
Steps in Understanding a Problem / 1.5.1:
A Cybernetic View of Cognition / 1.6:
The States of Cognition / 1.6.1:
Computational Modeling of Cognitive Systems / 1.7:
Petri Nets: A Brief Review / 1.8:
Extension of Petri Net Models for Distributed Modeling of Cognition / 1.9:
Scope of the Book / 1.10:
Summary / 1.11:
Exercises
References
Parallel and Distributed Logic Programming / Chapter 2:
Formal Definitions / 2.1:
Preliminary Definitions / 2.2.1:
Properties of the Substitution Set / 2.2.2:
SLD Resolution / 2.2.3:
Concurrency in Resolution / 2.3:
Types of Concurrent Resolution / 2.3.1:
Petri Net Model for Concurrent Resolution / 2.4:
Extended Petri Net / 2.4.1:
Algorithm for Concurrent Resolution / 2.4.2:
Performance Analysis of Petri Net-Based Models / 2.5:
The Speed-up / 2.5.1:
The Resource Utilization Rate / 2.5.2:
Resource Unlimited Speed-up and Utilization Rate / 2.5.3:
Conclusions / 2.6:
Distributed Reasoning by Fuzzy Petri Nets: A Review / Chapter 3:
Fuzzy Logic and Approximate Reasoning / 3.1:
Structured Models of Approximate Reasoning / 3.2:
Looney's Model / 3.3:
The Model Proposed by Chen et al / 3.4:
Konar and Mandal's Model / 3.5:
Yu's Model / 3.6:
Chen's Model for Backward Reasoning / 3.7:
Bugarin and Barro's Model / 3.8:
Pedrycz and Gomide's Learning Model / 3.9:
Construction of Reduction Rules Using FPN / 3.10:
Scope of Extension of Fuzzy Reasoning on Petri Nets / 3.11:
Belief Propagation and Belief Revision Models in Fuzzy Petri Nets / 3.12:
Imprecision Management in an Acyclic FPN / 4.1:
Formal Definitions and the Proposed Model / 4.2.1:
Proposed Model for Belief Propagation / 4.2.2:
Proposed Algorithm for Belief Propagation / 4.2.3:
Properties of FPN and Belief Propagation Scheme / 4.2.4:
Imprecision and Inconsistency Management in a Cyclic FPN / 4.3:
Proposed Model for Belief Revision / 4.3.1:
Stability Analysis of the Belief Revision Model / 4.3.2:
Detection and Elimination of Limit Cycles / 4.3.3:
Nonmonotonic Reasoning in an FPN / 4.3.4:
Building Expert Systems Using Fuzzy Petri Nets / 4.4:
The Database / 5.1:
The Data-tree / 5.2.1:
The Knowledge Base / 5.3:
The Inference Engine / 5.4:
Searching Antecedent Parts of PR in the Data-tree / 5.4.1:
Formation of the FPN / 5.4.2:
Decision Making and Explanation Tracing / 5.4.3:
A Case Study / 5.5:
Performance Evaluation / 5.6:
Time-Complexity for the Default-Data-Tree-Formation Procedure / 5.6.1:
Time-Complexity for the Procedure Suspect-Identification / 5.6.2:
Time-Complexity for the Procedure Variable-Instantiation-of-PRs / 5.6.3:
Time-Complexity for the Procedure Create-tree / 5.6.4:
Time-Complexity for the Procedure Search-on-Data-Tree / 5.6.5:
Time-Complexity for the Procedure FPN-Formation / 5.6.6:
Time-Complexity for the Belief-Revision and Limit-Cycle-Detection Procedure / 5.6.7:
Time-Complexity Analysis for the Procedure Limit-Cycle-Elimination / 5.6.8:
Time-Complexity for the Procedure Nonmonotonic Reasoning / 5.6.9:
Time-Complexity for the Procedure Decision-Making and Explanation Tracing / 5.6.10:
Time-Complexity of the Overall Expert System / 5.6.11:
Distributed Learning Using Fuzzy Cognitive Maps / 5.7:
Axelrod's Cognitive Maps / 6.1:
Kosko's Model / 6.3:
Kosko's Extended Model / 6.4:
Adaptive FCMs / 6.5:
Zhang, Chen, and Bezdek's Model / 6.6:
Pal and Konar's FCM Model / 6.7:
Unsupervised Learning by Fuzzy Petri Nets / 6.8:
The Proposed Model for Cognitive Learning / 7.1:
Encoding of Weights / 7.2.1:
The Recall Model / 7.2.2:
State-Space Formulation / 7.3:
State-Space Model for Belief Updating / 7.3.1:
State-Space Model for FTT Updating of Transitions / 7.3.2:
State-Space Model for Weights / 7.3.3:
Stability Analysis of the Cognitive Model / 7.4:
Computer Simulation / 7.5:
Implication of the Results / 7.6:
Knowledge Refinement by Hebbian Learning / 7.7:
The Encoding Model / 7.7.1:
The Recall/Reasoning Model / 7.7.2:
Case Study by Computer Simulation / 7.7.3:
Supervised Learning by a Fuzzy Petri Net / 7.7.4:
Proposed Model of Fuzzy Petri Nets / 8.1:
Algorithm for Training / 8.2.1:
Analysis of Convergence / 8.4:
Application in Fuzzy Pattern Recognition / 8.5:
Distributed Modeling of Abduction, Reciprocity, and Duality by Fuzzy Petri Nets / 8.6:
State-Space Formulation of the Proposed FPN Model / 9.1:
The Behavioral Model of FPN / 9.3.1:
State-Space Formulation of the Model / 9.3.2:
Special Cases of the Model / 9.3.3:
Stability Analysis / 9.4:
Forward Reasoning in FPNs / 9.5:
Abductive Reasoning in FPN / 9.6:
Bi-directional Reasoning in an FPN / 9.7:
Fuzzy Modus Tollens and Duality / 9.8:
Human Mood Detection and Control: A Cybernetic Approach / 9.9:
Filtering, Segmentation and Localization of Facial Components / 10.1:
Segmentation of the Mouth Region / 10.2.1:
Segmentation of the Eye Region / 10.2.2:
Segmentation of Eyebrow Constriction / 10.2.3:
Determination of Facial Attributes / 10.3:
Determination of the Mouth-Opening / 10.3.1:
Determination of the Eye-Opening / 10.3.2:
Determination of the Length of Eyebrow-Constriction / 10.3.3:
Fuzzy Relational Model for Mood Detection / 10.4:
Fuzzification of Facial Attributes / 10.4.1:
The Fuzzy Relational Model for Mood Detection / 10.4.2:
Validation of System Performance / 10.5:
A Basic Scheme of Human Mood Control / 10.6:
A Simple Model of Human Mood Transition Dynamics / 10.7:
The Model / 10.7.1:
Properties of the Model / 10.7.2:
The Proportional Model of Human Mood Control / 10.8:
Mamdani's Model for Mood Control / 10.9:
Ranking the Music, Audio, and Video Clips / 10.10:
Experimental Results / 10.11:
Distributed Planning and Multi-agent Coordination of Robots / 10.12:
Single-Agent Planning / 11.1:
Multi-Agent Planning / 11.3:
Task Sharing and Distribution in Multi-agent Planning / 11.3.1:
Cooperation with/without Communication / 11.3.2:
Homogeneous and Heterogeneous Distributed Planning / 11.3.3:
Vision-based Transportation of Blocks by Two Robots / 11.4:
Timing Analysis of the Transportation Problem / 11.5:
Analysis with Two Agents / 11.6.1:
Analysis with n Agents / 11.6.2:
Index / 11.7:
43.

eBook

EB
Dietmar Dietrich, Georg Fodor, Gerhard Zucker
Publication info: Springer eBooks Computer Science, Springer Vienna, 2009
Table of contents:
Theory / Part I:
The Vision / 1:
Basics / 2:
Introduction to Automation / 2.1:
Introduction to Psychoanalysis / 2.2:
Psychoanalysis, a Natural Science? / 2.3:
Neuropsychoanalysis / 2.4:
Realizing Psychic Functions in a Machine / 2.5:
Automation as the Challenge for Psychoanalysis / 2.6:
Two Different Sciences - Two Different Languages / 2.7:
Model / 3:
Modeling a Decision Unit for Autonomous Agents / 3.1:
Perception in Automation / 3.2:
Towards the new ARS-PA Model / 3.3:
The New Model and its Description: Top-Down-Design / 3.4:
Implementation and Application / 4:
Differentiation between Modeling and Implementation / 4.1:
The Bubble-World / 4.2:
Applying the Model / 4.3:
Possible Future Benefits for the Humanities / 4.4:
References
Proceedings of Emulating the Mind (ENF 2007) / Part II:
Session 1
A Brief Overview of Artificial Intelligence Focusing on Computational Models of Emotions / 1.1:
Considering a Technical Realization of a Neuropsychoanalytical Model of the Mind - A Theoretical Framework / 1.2:
What is the "Mind"? A Neuro-Psychoanalytical Approach / 1.3:
Discussion Chaired by Authors / 1.4:
Session 2
Machines in the Ghost
Simulating the Primal Affective Mentalities of the Mammalian Brain: A Fugue on the Emotional Feelings of Mental Life and Implications for AI-Robotics
Session 3
Cognitive and Affective Automation: Machines Using the Psychoanalytic Model of the Human Mind
Issues at the Interface of Artificial Intelligence and Psychoanalysis: Emotion, Consciousness, Transference
Session 4
The Prometheus Phantasy - Functions of the Human Psyche for Technical Systems
Return of the Zombie - Neuropsychoanalysis, Consciousness, and the Engineering of Psychic Functions
Discussion Sessions / 5:
Psychoanalysis and Computer Engineering / 5.1:
The Mammal in the Machine / 5.2:
The Remembering Body / 5.3:
Emotions, Drives and Desire (Silicone in Love) / 5.4:
Getting A Grasp / 5.5:
Free Will / 5.6:
Responses to the ENF 2007 / Part III:
Introductory Words
Collected Papers
A Computational Model of Affects
The Physics of Thoughts
A Functional View on "Cognitive" Perceptual Systems Based on Functions and Principles of the Human Mind
Four Laws of Machine Psychodynamics
Artificial Group Mind, a Psychoanalytically Founded Thought Experiment
Artificial Group Psychodynamics: Emergence of the Collective
A Primer of Psychoanalysis for Alan Turing
Alexander R. Luria and the Theory of Functional Systems / 2.8:
A Mind for Resolving the Interior-Exterior Distinctions / 2.9:
The Vision, Revisited / 2.10:
Explanations for Engineers and Psychoanalysts / Part IV:
Abbreviations
Index
44.

eBook

EB
Dietmar Dietrich, Georg Fodor, Dietmar Bruckner, Gerhard Zucker
Publication info: SpringerLink Books - AutoHoldings, Springer Vienna, 2009
Table of contents:
Theory / Part I:
The Vision / 1:
Basics / 2:
Introduction to Automation / 2.1:
Introduction to Psychoanalysis / 2.2:
Psychoanalysis, a Natural Science? / 2.3:
Neuropsychoanalysis / 2.4:
Realizing Psychic Functions in a Machine / 2.5:
Automation as the Challenge for Psychoanalysis / 2.6:
Two Different Sciences - Two Different Languages / 2.7:
Model / 3:
Modeling a Decision Unit for Autonomous Agents / 3.1:
Perception in Automation / 3.2:
Towards the new ARS-PA Model / 3.3:
The New Model and its Description: Top-Down-Design / 3.4:
Implementation and Application / 4:
Differentiation between Modeling and Implementation / 4.1:
The Bubble-World / 4.2:
Applying the Model / 4.3:
Possible Future Benefits for the Humanities / 4.4:
References
Proceedings of Emulating the Mind (ENF 2007) / Part II:
Session 1
A Brief Overview of Artificial Intelligence Focusing on Computational Models of Emotions / 1.1:
Considering a Technical Realization of a Neuropsychoanalytical Model of the Mind - A Theoretical Framework / 1.2:
What is the "Mind"? A Neuro-Psychoanalytical Approach / 1.3:
Discussion Chaired by Authors / 1.4:
Session 2
Machines in the Ghost
Simulating the Primal Affective Mentalities of the Mammalian Brain: A Fugue on the Emotional Feelings of Mental Life and Implications for AI-Robotics
Session 3
Cognitive and Affective Automation: Machines Using the Psychoanalytic Model of the Human Mind
Issues at the Interface of Artificial Intelligence and Psychoanalysis: Emotion, Consciousness, Transference
Session 4
The Prometheus Phantasy - Functions of the Human Psyche for Technical Systems
Return of the Zombie - Neuropsychoanalysis, Consciousness, and the Engineering of Psychic Functions
Discussion Sessions / 5:
Psychoanalysis and Computer Engineering / 5.1:
The Mammal in the Machine / 5.2:
The Remembering Body / 5.3:
Emotions, Drives and Desire (Silicone in Love) / 5.4:
Getting A Grasp / 5.5:
Free Will / 5.6:
Responses to the ENF 2007 / Part III:
Introductory Words
Collected Papers
A Computational Model of Affects
The Physics of Thoughts
A Functional View on "Cognitive" Perceptual Systems Based on Functions and Principles of the Human Mind
Four Laws of Machine Psychodynamics
Artificial Group Mind, a Psychoanalytically Founded Thought Experiment
Artificial Group Psychodynamics: Emergence of the Collective
A Primer of Psychoanalysis for Alan Turing
Alexander R. Luria and the Theory of Functional Systems / 2.8:
A Mind for Resolving the Interior-Exterior Distinctions / 2.9:
The Vision, Revisited / 2.10:
Explanations for Engineers and Psychoanalysts / Part IV:
Abbreviations
Index
45.

eBook

EB
Michael Kohlhase, Takeo Kanade
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2006
Table of contents:
Setting the Stage for Open Mathematical Documents / Part I:
Document Markup for the Web / 1:
Structure vs. Appearance in Markup / 1.1:
Markup for the World Wide Web / 1.2:
XML, the eXtensible Markup Language / 1.3:
Markup for Mathematical Knowledge / 2:
Mathematical Objects and Formulae / 2.1:
Mathematical Texts and Statements / 2.2:
Large-Scale Structure and Context in Mathematics / 2.3:
Open Mathematical Documents / 3:
A Brief History of the OMDoc Format / 3.1:
Three Levels of Markup / 3.2:
Situating the OMDoc Format / 3.3:
The Future: An Active Web of (Mathematical) Knowledge / 3.4:
An OMDoc Primer / Part II:
Textbooks and Articles / 4:
Minimal OMDoc Markup / 4.1:
Structure and Statements / 4.2:
Marking up the Formulae / 4.3:
Full Formalization / 4.4:
OpenMath Content Dictionaries / 5:
Structured and Parametrized Theories / 6:
A Development Graph for Elementary Algebra / 7:
Courseware and the Narrative/Content Distinction / 8:
A Knowledge-Centered View / 8.1:
A Narrative-Structured View / 8.2:
Choreographing Narrative and Content OMDoc / 8.3:
Summary / 8.4:
Communication Between Systems / 9:
The OMDoc Document Format / Part III:
OMDoc as a Modular Format / 10:
The OMDoc Namespaces / 10.1:
Common Attributes in OMDoc / 10.2:
Document Infrastructure / 11:
The Document Root / 11.1:
Metadata / 11.2:
Document Comments / 11.3:
Document Structure / 11.4:
Sharing Document Parts / 11.5:
The Dublin Core Elements (Module DC) / 12:
Roles in Dublin Core Elements / 12.2:
Managing Rights / 12.3:
Inheritance of Metadata / 12.4:
Mathematical Objects / 13:
OpenMath / 13.1:
Content MathML / 13.2:
Representing Types in Content-MathML and OpenMath / 13.3:
Semantics of Variables / 13.4:
Legacy Representation for Migration / 13.5:
Mathematical Text / 14:
Multilingual Mathematical Vernacular / 14.1:
Formal Mathematical Properties / 14.2:
Text Fragments and Their Rhetoric/Mathematical Roles / 14.3:
Phrase-Level Markup of Mathematical Vernacular / 14.4:
Technical Terms / 14.5:
Rich Text Structure / 14.6:
Mathematical Statements / 15:
Types of Statements in Mathematics / 15.1:
Theory-Constitutive Statements in OMDoc / 15.2:
The Unassuming Rest / 15.3:
Mathematical Examples in OMDoc / 15.4:
Inline Statements / 15.5:
Theories as Structured Contexts / 15.6:
Abstract Data Types / 16:
Representing Proofs / 17:
Proof Structure / 17.1:
Proof Step Justifications / 17.2:
Scoping and Context in a Proof / 17.3:
Formal Proofs as Mathematical Objects / 17.4:
Complex Theories / 18:
Inheritance via Translations / 18.1:
Postulated Theory Inclusions / 18.2:
Local/Required Theory Inclusions / 18.3:
Induced Assertions / 18.4:
Development Graphs / 18.5:
Notation and Presentation / 19:
Styling OMDoc Elements / 19.1:
A Restricted Style Language / 19.2:
Notation of Symbols / 19.3:
Presenting Bound Variables / 19.4:
Auxiliary Elements / 20:
Non-XML Data and Program Code in OMDoc / 20.1:
Applets and External Objects in OMDoc / 20.2:
Exercises / 21:
Document Models for OMDoc / 22:
XML Document Models / 22.1:
The OMDoc Document Model / 22.2:
OMDoc Sub-Languages / 22.3:
OMDoc Applications, Tools, and Projects / Part IV:
OMDoc Resources / 23:
The OMDoc Web Site, Wiki, and Mailing List / 23.1:
The OMDoc Distribution / 23.2:
The OMDoc Bug Tracker / 23.3:
An XML Catalog for OMDoc / 23.4:
External Resources / 23.5:
Validating OMDoc Documents / 24:
Validation with Document Type Definitions / 24.1:
Validation with RelaxNG Schemata / 24.2:
Validation with XML Schema / 24.3:
Transforming OMDoc / 25:
Extracting and Linking XSLT Templates / 25.1:
Interfaces for Systems / 25.2:
Presenting OMDoc to Humans / 25.3:
Applications and Projects / 26:
Introduction / 26.1:
QMath Parser / 26.2:
Sentido Integrated Environment / 26.3:
MBase / 26.4:
A Search Engine for Mathematical Formulae / 26.5:
Semantic Interrelation and Change Management / 26.6:
MathDox / 26.7:
ActiveMath / 26.8:
Authoring Tools for ActiveMath / 26.9:
SWiM - An OMDoc-Based Semantic Wiki / 26.10:
Induction Challenge Problems / 26.11:
Maya / 26.12:
Hets / 26.13:
CPoint / 26.14:
Stex: A Latex-Based Workflow for OMDoc / 26.15:
An Emacs Mode for Editing OMDoc Documents / 26.16:
Converting Mathematica Notebooks to OMDoc / 26.17:
Standardizing Context in System Interoperability / 26.18:
Proof Assistants in Scientific Editors / 26.19:
VeriFun / 26.20:
Appendix / Part V:
Changes to the Specification / A:
Changes from 1.1 to 1.2 / A.1:
Changes from 1.0 to 1.1 / A.2:
Quick-Reference / B:
Table of Attributes / C:
The RelaxNG Schema for OMDoc / D:
The Sub-language Drivers / D.1:
Common Attributes / D.2:
Module MOBJ: Mathematical Objects and Text / D.3:
Module MTXT: Mathematical Text / D.4:
Module DOC: Document Infrastructure / D.5:
Module DC: Dublin Core Metadata / D.6:
Module ST: Mathematical Statements / D.7:
Module ADT: Abstract Data Types / D.8:
Module PF: Proofs and Proof objects / D.9:
Module CTH: Complex Theories / D.10:
Module RT: Rich Text Structure / D.11:
Module EXT: Applets and Non-XML Data / D.12:
Module PRES: Adding Presentation Information / D.13:
Module QUIZ: Infrastructure for Assessments / D.14:
The RelaxNG Schemata for Mathematical Objects / E:
The RelaxNG Schema for OpenMath / E.1:
The RelaxNG Schema for MathML / E.2:
Bibliography
Index
46.

eBook

EB
Thomas Bräunl
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2008
Table of contents:
Embedded Systems / Part I:
Robots and Controllers / 1:
Mobile Robots / 1.1:
Embedded Controllers / 1.2:
Interfaces / 1.3:
Operating System / 1.4:
References / 1.5:
Central Processing Unit / 2:
Logic Gates / 2.1:
Function Units / 2.2:
Registers and Memory / 2.3:
Retro / 2.4:
Arithmetic Logic Unit / 2.5:
Control Unit / 2.6:
Sensors / 2.7:
Sensor Categories / 3.1:
Binary Sensor / 3.2:
Analog versus Digital Sensors / 3.3:
Shaft Encoder / 3.4:
A/D Converter / 3.5:
Position Sensitive Device / 3.6:
Compass / 3.7:
Gyroscope, Accelerometer, Inclinometer / 3.8:
Digital Camera / 3.9:
Actuators / 3.10:
DC Motors / 4.1:
H-Bridge / 4.2:
Pulse Width Modulation / 4.3:
Stepper Motors / 4.4:
Servos / 4.5:
Control / 4.6:
On-Off Control / 5.1:
PID Control / 5.2:
Velocity Control and Position Control / 5.3:
Multiple Motors - Driving Straight / 5.4:
V-Omega Interface / 5.5:
Multitasking / 5.6:
Cooperative Multitasking / 6.1:
Preemptive Multitasking / 6.2:
Synchronization / 6.3:
Scheduling / 6.4:
Interrupts and Timer-Activated Tasks / 6.5:
Wireless Communication / 6.6:
Communication Model / 7.1:
Messages / 7.2:
Fault-Tolerant Self-Configuration / 7.3:
User Interface and Remote Control / 7.4:
Sample Application Program / 7.5:
Mobile Robot Design / 7.6:
Driving Robots / 8:
Single Wheel Drive / 8.1:
Differential Drive / 8.2:
Tracked Robots / 8.3:
Synchro-Drive / 8.4:
Ackermann Steering / 8.5:
Drive Kinematics / 8.6:
Omni-Directional Robots / 8.7:
Mecanum Wheels / 9.1:
Omni-Directional Drive / 9.2:
Kinematics / 9.3:
Omni-Directional Robot Design / 9.4:
Driving Program / 9.5:
Balancing Robots / 9.6:
Simulation / 10.1:
Inverted Pendulum Robot / 10.2:
Double Inverted Pendulum / 10.3:
Walking Robots / 10.4:
Six-Legged Robot Design / 11.1:
Biped Robot Design / 11.2:
Sensors for Walking Robots / 11.3:
Static Balance / 11.4:
Dynamic Balance / 11.5:
Autonomous Planes / 11.6:
Application / 12.1:
Control System and Sensors / 12.2:
Flight Program / 12.3:
Autonomous Vessels and Underwater Vehicles / 12.4:
Dynamic Model / 13.1:
AUV Design Mako / 13.3:
AUV Design USAL / 13.4:
Robot Manipulators / 13.5:
Homogeneous Coordinates / 14.1:
Simulation and Programming / 14.2:
Simulation Systems / 14.4:
Mobile Robot Simulation / 15.1:
EyeSim Simulation System / 15.2:
Multiple Robot Simulation / 15.3:
EyeSim Application / 15.4:
EyeSim Environment and Parameter Files / 15.5:
SubSim Simulation System / 15.6:
Actuator and Sensor Models / 15.7:
SubSim Application / 15.8:
SubSim Environment and Parameter Files / 15.9:
Mobile Robot Applications / 15.10:
Localization and Navigation / 16:
Localization / 16.1:
Probabilistic Localization / 16.2:
Coordinate Systems / 16.3:
Environment Representation / 16.4:
Visibility Graph / 16.5:
Voronoi Diagram / 16.6:
Potential Field Method / 16.7:
Wandering Standpoint Algorithm / 16.8:
Bug Algorithm Family / 16.9:
Dijkstra's Algorithm / 16.10:
A* Algorithm / 16.11:
Maze Exploration / 16.12:
Micro Mouse Contest / 17.1:
Maze Exploration Algorithms / 17.2:
Simulated versus Real Maze Program / 17.3:
Map Generation / 17.4:
Mapping Algorithm / 18.1:
Data Representation / 18.2:
Boundary-Following Algorithm / 18.3:
Algorithm Execution / 18.4:
Simulation Experiments / 18.5:
Robot Experiments / 18.6:
Results / 18.7:
Real-Time Image Processing / 18.8:
Camera Interface / 19.1:
Auto-Brightness / 19.2:
Edge Detection / 19.3:
Motion Detection / 19.4:
Color Space / 19.5:
Color Object Detection / 19.6:
Image Segmentation / 19.7:
Image Coordinates versus World Coordinates / 19.8:
Robot Soccer / 19.9:
RoboCup and FIRA Competitions / 20.1:
Team Structure / 20.2:
Mechanics and Actuators / 20.3:
Sensing / 20.4:
Image Processing / 20.5:
Trajectory Planning / 20.6:
Neural Networks / 20.7:
Neural Network Principles / 21.1:
Feed-Forward Networks / 21.2:
Backpropagation / 21.3:
Neural Network Examples / 21.4:
Neural Controller / 21.5:
Genetic Algorithms / 21.6:
Genetic Algorithm Principles / 22.1:
Genetic Operators / 22.2:
Applications to Robot Control / 22.3:
Example Evolution / 22.4:
Implementation of Genetic Algorithms / 22.5:
Starman / 22.6:
Genetic Programming / 22.7:
Concepts and Applications / 23.1:
Lisp / 23.2:
Evolution / 23.3:
Tracking Problem / 23.5:
Evolution of Tracking Behavior / 23.6:
Behavior-Based Systems / 23.7:
Software Architecture / 24.1:
Behavior-Based Robotics / 24.2:
Behavior-Based Applications / 24.3:
Behavior Framework / 24.4:
Adaptive Controller / 24.5:
Neural Network Controller / 24.6:
Experiments / 24.8:
Evolution of Walking Gaits / 24.9:
Splines / 25.1:
Control Algorithm / 25.2:
Incorporating Feedback / 25.3:
Controller Evolution / 25.4:
Controller Assessment / 25.5:
Evolved Gaits / 25.6:
Automotive Systems / 25.7:
Autonomous Automobiles / 26.1:
Automobile Conversion for Autonomous Driving / 26.2:
Computer Vision for Driver-Assistance Systems / 26.3:
Image Processing Framework / 26.4:
Lane Detection / 26.5:
Vehicle Recognition and Tracking / 26.6:
Automatic Parking / 26.7:
Outlook / 26.8:
Appendices
Programming Tools / A:
RoBIOS Operating System / B:
Hardware Description Table / C:
Hardware Specification / D:
Laboratories / E:
Solutions / F:
Index
47.

eBook

EB
Thomas Bräunl
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2008
Table of contents:
Embedded Systems / Part I:
Robots and Controllers / 1:
Mobile Robots / 1.1:
Embedded Controllers / 1.2:
Interfaces / 1.3:
Operating System / 1.4:
References / 1.5:
Central Processing Unit / 2:
Logic Gates / 2.1:
Function Units / 2.2:
Registers and Memory / 2.3:
Retro / 2.4:
Arithmetic Logic Unit / 2.5:
Control Unit / 2.6:
Sensors / 2.7:
Sensor Categories / 3.1:
Binary Sensor / 3.2:
Analog versus Digital Sensors / 3.3:
Shaft Encoder / 3.4:
A/D Converter / 3.5:
Position Sensitive Device / 3.6:
Compass / 3.7:
Gyroscope, Accelerometer, Inclinometer / 3.8:
Digital Camera / 3.9:
Actuators / 3.10:
DC Motors / 4.1:
H-Bridge / 4.2:
Pulse Width Modulation / 4.3:
Stepper Motors / 4.4:
Servos / 4.5:
Control / 4.6:
On-Off Control / 5.1:
PID Control / 5.2:
Velocity Control and Position Control / 5.3:
Multiple Motors - Driving Straight / 5.4:
V-Omega Interface / 5.5:
Multitasking / 5.6:
Cooperative Multitasking / 6.1:
Preemptive Multitasking / 6.2:
Synchronization / 6.3:
Scheduling / 6.4:
Interrupts and Timer-Activated Tasks / 6.5:
Wireless Communication / 6.6:
Communication Model / 7.1:
Messages / 7.2:
Fault-Tolerant Self-Configuration / 7.3:
User Interface and Remote Control / 7.4:
Sample Application Program / 7.5:
Mobile Robot Design / 7.6:
Driving Robots / 8:
Single Wheel Drive / 8.1:
Differential Drive / 8.2:
Tracked Robots / 8.3:
Synchro-Drive / 8.4:
Ackermann Steering / 8.5:
Drive Kinematics / 8.6:
Omni-Directional Robots / 8.7:
Mecanum Wheels / 9.1:
Omni-Directional Drive / 9.2:
Kinematics / 9.3:
Omni-Directional Robot Design / 9.4:
Driving Program / 9.5:
Balancing Robots / 9.6:
Simulation / 10.1:
Inverted Pendulum Robot / 10.2:
Double Inverted Pendulum / 10.3:
Walking Robots / 10.4:
Six-Legged Robot Design / 11.1:
Biped Robot Design / 11.2:
Sensors for Walking Robots / 11.3:
Static Balance / 11.4:
Dynamic Balance / 11.5:
Autonomous Planes / 11.6:
Application / 12.1:
Control System and Sensors / 12.2:
Flight Program / 12.3:
Autonomous Vessels and Underwater Vehicles / 12.4:
Dynamic Model / 13.1:
AUV Design Mako / 13.3:
AUV Design USAL / 13.4:
Robot Manipulators / 13.5:
Homogeneous Coordinates / 14.1:
Simulation and Programming / 14.2:
Simulation Systems / 14.4:
Mobile Robot Simulation / 15.1:
EyeSim Simulation System / 15.2:
Multiple Robot Simulation / 15.3:
EyeSim Application / 15.4:
EyeSim Environment and Parameter Files / 15.5:
SubSim Simulation System / 15.6:
Actuator and Sensor Models / 15.7:
SubSim Application / 15.8:
SubSim Environment and Parameter Files / 15.9:
Mobile Robot Applications / 15.10:
Localization and Navigation / 16:
Localization / 16.1:
Probabilistic Localization / 16.2:
Coordinate Systems / 16.3:
Environment Representation / 16.4:
Visibility Graph / 16.5:
Voronoi Diagram / 16.6:
Potential Field Method / 16.7:
Wandering Standpoint Algorithm / 16.8:
Bug Algorithm Family / 16.9:
Dijkstra's Algorithm / 16.10:
A* Algorithm / 16.11:
Maze Exploration / 16.12:
Micro Mouse Contest / 17.1:
Maze Exploration Algorithms / 17.2:
Simulated versus Real Maze Program / 17.3:
Map Generation / 17.4:
Mapping Algorithm / 18.1:
Data Representation / 18.2:
Boundary-Following Algorithm / 18.3:
Algorithm Execution / 18.4:
Simulation Experiments / 18.5:
Robot Experiments / 18.6:
Results / 18.7:
Real-Time Image Processing / 18.8:
Camera Interface / 19.1:
Auto-Brightness / 19.2:
Edge Detection / 19.3:
Motion Detection / 19.4:
Color Space / 19.5:
Color Object Detection / 19.6:
Image Segmentation / 19.7:
Image Coordinates versus World Coordinates / 19.8:
Robot Soccer / 19.9:
RoboCup and FIRA Competitions / 20.1:
Team Structure / 20.2:
Mechanics and Actuators / 20.3:
Sensing / 20.4:
Image Processing / 20.5:
Trajectory Planning / 20.6:
Neural Networks / 20.7:
Neural Network Principles / 21.1:
Feed-Forward Networks / 21.2:
Backpropagation / 21.3:
Neural Network Examples / 21.4:
Neural Controller / 21.5:
Genetic Algorithms / 21.6:
Genetic Algorithm Principles / 22.1:
Genetic Operators / 22.2:
Applications to Robot Control / 22.3:
Example Evolution / 22.4:
Implementation of Genetic Algorithms / 22.5:
Starman / 22.6:
Genetic Programming / 22.7:
Concepts and Applications / 23.1:
Lisp / 23.2:
Evolution / 23.3:
Tracking Problem / 23.5:
Evolution of Tracking Behavior / 23.6:
Behavior-Based Systems / 23.7:
Software Architecture / 24.1:
Behavior-Based Robotics / 24.2:
Behavior-Based Applications / 24.3:
Behavior Framework / 24.4:
Adaptive Controller / 24.5:
Neural Network Controller / 24.6:
Experiments / 24.8:
Evolution of Walking Gaits / 24.9:
Splines / 25.1:
Control Algorithm / 25.2:
Incorporating Feedback / 25.3:
Controller Evolution / 25.4:
Controller Assessment / 25.5:
Evolved Gaits / 25.6:
Automotive Systems / 25.7:
Autonomous Automobiles / 26.1:
Automobile Conversion for Autonomous Driving / 26.2:
Computer Vision for Driver-Assistance Systems / 26.3:
Image Processing Framework / 26.4:
Lane Detection / 26.5:
Vehicle Recognition and Tracking / 26.6:
Automatic Parking / 26.7:
Outlook / 26.8:
Appendices
Programming Tools / A:
RoBIOS Operating System / B:
Hardware Description Table / C:
Hardware Specification / D:
Laboratories / E:
Solutions / F:
Index
48.

eBook

EB
Michael Kohlhase, Takeo Kanade, Josef Kittler
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2006
Table of contents:
Setting the Stage for Open Mathematical Documents / Part I:
Document Markup for the Web / 1:
Structure vs. Appearance in Markup / 1.1:
Markup for the World Wide Web / 1.2:
XML, the eXtensible Markup Language / 1.3:
Markup for Mathematical Knowledge / 2:
Mathematical Objects and Formulae / 2.1:
Mathematical Texts and Statements / 2.2:
Large-Scale Structure and Context in Mathematics / 2.3:
Open Mathematical Documents / 3:
A Brief History of the OMDoc Format / 3.1:
Three Levels of Markup / 3.2:
Situating the OMDoc Format / 3.3:
The Future: An Active Web of (Mathematical) Knowledge / 3.4:
An OMDoc Primer / Part II:
Textbooks and Articles / 4:
Minimal OMDoc Markup / 4.1:
Structure and Statements / 4.2:
Marking up the Formulae / 4.3:
Full Formalization / 4.4:
OpenMath Content Dictionaries / 5:
Structured and Parametrized Theories / 6:
A Development Graph for Elementary Algebra / 7:
Courseware and the Narrative/Content Distinction / 8:
A Knowledge-Centered View / 8.1:
A Narrative-Structured View / 8.2:
Choreographing Narrative and Content OMDoc / 8.3:
Summary / 8.4:
Communication Between Systems / 9:
The OMDoc Document Format / Part III:
OMDoc as a Modular Format / 10:
The OMDoc Namespaces / 10.1:
Common Attributes in OMDoc / 10.2:
Document Infrastructure / 11:
The Document Root / 11.1:
Metadata / 11.2:
Document Comments / 11.3:
Document Structure / 11.4:
Sharing Document Parts / 11.5:
The Dublin Core Elements (Module DC) / 12:
Roles in Dublin Core Elements / 12.2:
Managing Rights / 12.3:
Inheritance of Metadata / 12.4:
Mathematical Objects / 13:
OpenMath / 13.1:
Content MathML / 13.2:
Representing Types in Content-MathML and OpenMath / 13.3:
Semantics of Variables / 13.4:
Legacy Representation for Migration / 13.5:
Mathematical Text / 14:
Multilingual Mathematical Vernacular / 14.1:
Formal Mathematical Properties / 14.2:
Text Fragments and Their Rhetoric/Mathematical Roles / 14.3:
Phrase-Level Markup of Mathematical Vernacular / 14.4:
Technical Terms / 14.5:
Rich Text Structure / 14.6:
Mathematical Statements / 15:
Types of Statements in Mathematics / 15.1:
Theory-Constitutive Statements in OMDoc / 15.2:
The Unassuming Rest / 15.3:
Mathematical Examples in OMDoc / 15.4:
Inline Statements / 15.5:
Theories as Structured Contexts / 15.6:
Abstract Data Types / 16:
Representing Proofs / 17:
Proof Structure / 17.1:
Proof Step Justifications / 17.2:
Scoping and Context in a Proof / 17.3:
Formal Proofs as Mathematical Objects / 17.4:
Complex Theories / 18:
Inheritance via Translations / 18.1:
Postulated Theory Inclusions / 18.2:
Local/Required Theory Inclusions / 18.3:
Induced Assertions / 18.4:
Development Graphs / 18.5:
Notation and Presentation / 19:
Styling OMDoc Elements / 19.1:
A Restricted Style Language / 19.2:
Notation of Symbols / 19.3:
Presenting Bound Variables / 19.4:
Auxiliary Elements / 20:
Non-XML Data and Program Code in OMDoc / 20.1:
Applets and External Objects in OMDoc / 20.2:
Exercises / 21:
Document Models for OMDoc / 22:
XML Document Models / 22.1:
The OMDoc Document Model / 22.2:
OMDoc Sub-Languages / 22.3:
OMDoc Applications, Tools, and Projects / Part IV:
OMDoc Resources / 23:
The OMDoc Web Site, Wiki, and Mailing List / 23.1:
The OMDoc Distribution / 23.2:
The OMDoc Bug Tracker / 23.3:
An XML Catalog for OMDoc / 23.4:
External Resources / 23.5:
Validating OMDoc Documents / 24:
Validation with Document Type Definitions / 24.1:
Validation with RelaxNG Schemata / 24.2:
Validation with XML Schema / 24.3:
Transforming OMDoc / 25:
Extracting and Linking XSLT Templates / 25.1:
Interfaces for Systems / 25.2:
Presenting OMDoc to Humans / 25.3:
Applications and Projects / 26:
Introduction / 26.1:
QMath Parser / 26.2:
Sentido Integrated Environment / 26.3:
MBase / 26.4:
A Search Engine for Mathematical Formulae / 26.5:
Semantic Interrelation and Change Management / 26.6:
MathDox / 26.7:
ActiveMath / 26.8:
Authoring Tools for ActiveMath / 26.9:
SWiM - An OMDoc-Based Semantic Wiki / 26.10:
Induction Challenge Problems / 26.11:
Maya / 26.12:
Hets / 26.13:
CPoint / 26.14:
Stex: A Latex-Based Workflow for OMDoc / 26.15:
An Emacs Mode for Editing OMDoc Documents / 26.16:
Converting Mathematica Notebooks to OMDoc / 26.17:
Standardizing Context in System Interoperability / 26.18:
Proof Assistants in Scientific Editors / 26.19:
VeriFun / 26.20:
Appendix / Part V:
Changes to the Specification / A:
Changes from 1.1 to 1.2 / A.1:
Changes from 1.0 to 1.1 / A.2:
Quick-Reference / B:
Table of Attributes / C:
The RelaxNG Schema for OMDoc / D:
The Sub-language Drivers / D.1:
Common Attributes / D.2:
Module MOBJ: Mathematical Objects and Text / D.3:
Module MTXT: Mathematical Text / D.4:
Module DOC: Document Infrastructure / D.5:
Module DC: Dublin Core Metadata / D.6:
Module ST: Mathematical Statements / D.7:
Module ADT: Abstract Data Types / D.8:
Module PF: Proofs and Proof objects / D.9:
Module CTH: Complex Theories / D.10:
Module RT: Rich Text Structure / D.11:
Module EXT: Applets and Non-XML Data / D.12:
Module PRES: Adding Presentation Information / D.13:
Module QUIZ: Infrastructure for Assessments / D.14:
The RelaxNG Schemata for Mathematical Objects / E:
The RelaxNG Schema for OpenMath / E.1:
The RelaxNG Schema for MathML / E.2:
Bibliography
Index
49.

eBook

EB
Radovan Cervenka, Stefan Brantschen, Ivan Trencansky, Marius Walliser
Publication info: Springer eBooks Computer Science, Birkhäuser Basel, 2007
Table of contents:
Introduction / 1:
Overview / 1.1:
Goals of this Work / 1.2:
Outline of the Book / 1.3:
Background Information / Part I:
Survey on Agent-Oriented Modeling Languages / 2:
Gaia / 2.1:
AUML / 2.2:
MESSAGE / 2.3:
Tropos / 2.4:
MAS-ML / 2.5:
AOR / 2.6:
Summary of Today's MAS Modeling Languages / 2.7:
Requirements on a MAS Modeling Language / 3:
Solution Summary / Part II:
The AML Approach / 4:
The Purpose of AML / 4.1:
The Scope of AML / 4.2:
The Development of AML / 4.3:
AML Sources / 4.4:
The Language Architecture / 4.5:
Concepts of AML / 5:
Multi-Agent System / 5.1:
MAS Semi-entities and Entities / 5.2:
Structural Aspects / 5.3:
Social Aspects / 5.4:
MAS Deployment and Mobility / 5.5:
Behaviors / 5.6:
Mental Aspects / 5.7:
Ontologies / 5.8:
AML Modeling Mechanisms / 6:
Generic Modeling Mechanisms / 6.1:
Modeling Entity Types / 6.2:
Modeling Social Aspects / 6.3:
Modeling MAS Deployment and Mobility / 6.4:
Modeling Capabilities and Behavior Decomposition / 6.5:
Modeling Interactions / 6.6:
Modeling Mental Aspects / 6.7:
Modeling Ontologies / 6.8:
Modeling Contexts / 6.9:
Related Work / 7:
CASE Tool Support / 7.1:
Methodological Support / 7.2:
Practical Application of AML / 7.3:
Standardization Activities / 7.4:
AML Specification / Part III:
Extensions to Standard UML Notation / 8:
Stereotyped Classifier / 8.1:
ConnectableElement with a Stereotyped Type / 8.2:
Connector with a Stereotyped Type / 8.3:
Lifeline with a Stereotyped Type / 8.4:
Composed Lifelines in Communication Diagrams / 8.5:
ObjectNode with a Stereotyped Type / 8.6:
Bi-directional Dependencies / 8.7:
Internal Structure of ConnectableElements / 8.8:
Organization of the AML Specification / 9:
Overall AML Package Structure / 9.1:
Specification Structure / 9.2:
Architecture / 10:
Entities / 10.1:
Agents / 10.2:
Resources / 10.3:
Environments / 10.4:
MAS Deployment / 10.5:
Basic Behaviors / 11:
Behavior Decomposition / 11.2:
Communicative Interactions / 11.3:
Services / 11.4:
Observations and Effecting Interactions / 11.5:
Mobility / 11.6:
Mental / 12:
Mental States / 12.1:
Beliefs / 12.2:
Goals / 12.3:
Plans / 12.4:
Mental Relationships / 12.5:
Basic Ontologies / 13:
Model Management / 14:
Contexts / 14.1:
UML Extension for AML / 15:
Extended Actor / 15.1:
Extended BehavioralFeature / 15.2:
Extended Behavior / 15.3:
Diagrams / 16:
Diagram Frames / 16.1:
Diagram Types / 16.2:
Extension of OCL / 17:
New Operators / 17.1:
Final Remarks / Part IV:
Conclusions / 18:
Context of the Work / 18.1:
Solution / 18.2:
Challenges / 18.3:
Results / 18.4:
Summary of Original Contribution / 18.5:
Further Work / 19:
Improvements of AML / 19.1:
Broader Application of AML / 19.2:
Assurance of Future Work / 19.3:
Bibliography
List of Acronyms
Index
50.

eBook

EB
Radovan Cervenka, Stefan Brantschen, Ivan Trencansky, Marius Walliser, Monique Calisti
Publication info: SpringerLink Books - AutoHoldings, Birkhäuser Basel, 2007
Table of contents:
Introduction / 1:
Overview / 1.1:
Goals of this Work / 1.2:
Outline of the Book / 1.3:
Background Information / Part I:
Survey on Agent-Oriented Modeling Languages / 2:
Gaia / 2.1:
AUML / 2.2:
MESSAGE / 2.3:
Tropos / 2.4:
MAS-ML / 2.5:
AOR / 2.6:
Summary of Today's MAS Modeling Languages / 2.7:
Requirements on a MAS Modeling Language / 3:
Solution Summary / Part II:
The AML Approach / 4:
The Purpose of AML / 4.1:
The Scope of AML / 4.2:
The Development of AML / 4.3:
AML Sources / 4.4:
The Language Architecture / 4.5:
Concepts of AML / 5:
Multi-Agent System / 5.1:
MAS Semi-entities and Entities / 5.2:
Structural Aspects / 5.3:
Social Aspects / 5.4:
MAS Deployment and Mobility / 5.5:
Behaviors / 5.6:
Mental Aspects / 5.7:
Ontologies / 5.8:
AML Modeling Mechanisms / 6:
Generic Modeling Mechanisms / 6.1:
Modeling Entity Types / 6.2:
Modeling Social Aspects / 6.3:
Modeling MAS Deployment and Mobility / 6.4:
Modeling Capabilities and Behavior Decomposition / 6.5:
Modeling Interactions / 6.6:
Modeling Mental Aspects / 6.7:
Modeling Ontologies / 6.8:
Modeling Contexts / 6.9:
Related Work / 7:
CASE Tool Support / 7.1:
Methodological Support / 7.2:
Practical Application of AML / 7.3:
Standardization Activities / 7.4:
AML Specification / Part III:
Extensions to Standard UML Notation / 8:
Stereotyped Classifier / 8.1:
ConnectableElement with a Stereotyped Type / 8.2:
Connector with a Stereotyped Type / 8.3:
Lifeline with a Stereotyped Type / 8.4:
Composed Lifelines in Communication Diagrams / 8.5:
ObjectNode with a Stereotyped Type / 8.6:
Bi-directional Dependencies / 8.7:
Internal Structure of ConnectableElements / 8.8:
Organization of the AML Specification / 9:
Overall AML Package Structure / 9.1:
Specification Structure / 9.2:
Architecture / 10:
Entities / 10.1:
Agents / 10.2:
Resources / 10.3:
Environments / 10.4:
MAS Deployment / 10.5:
Basic Behaviors / 11:
Behavior Decomposition / 11.2:
Communicative Interactions / 11.3:
Services / 11.4:
Observations and Effecting Interactions / 11.5:
Mobility / 11.6:
Mental / 12:
Mental States / 12.1:
Beliefs / 12.2:
Goals / 12.3:
Plans / 12.4:
Mental Relationships / 12.5:
Basic Ontologies / 13:
Model Management / 14:
Contexts / 14.1:
UML Extension for AML / 15:
Extended Actor / 15.1:
Extended BehavioralFeature / 15.2:
Extended Behavior / 15.3:
Diagrams / 16:
Diagram Frames / 16.1:
Diagram Types / 16.2:
Extension of OCL / 17:
New Operators / 17.1:
Final Remarks / Part IV:
Conclusions / 18:
Context of the Work / 18.1:
Solution / 18.2:
Challenges / 18.3:
Results / 18.4:
Summary of Original Contribution / 18.5:
Further Work / 19:
Improvements of AML / 19.1:
Broader Application of AML / 19.2:
Assurance of Future Work / 19.3:
Bibliography
List of Acronyms
Index