1.
eBook (EB)
Representation and Management of Narrative Information: Theoretical Principles and Implementation
Gian Piero Zarri
Publication info: Springer eBooks Computer Science, Springer London, 2009
Table of contents:
Basic Principles / 1:
Narrative Information in an NKRL Context / 1.1:
Narratology and NKRL / 1.1.1:
The Notion of "Event" in an NKRL Context / 1.1.2:
Knowledge Representation and NKRL / 1.2:
"Standard" Ontologies and the "n-ary" Problem / 1.2.1:
A Plain "n-ary" Solution and Some Related Problems / 1.2.2:
In the Guise of Winding Up / 1.3:
The Knowledge Representation Strategy / 2:
Architecture of NKRL: the Four "Components" / 2.1:
The Data Structures of the Four Components / 2.2:
Definitional/Enumerative Data Structures / 2.2.1:
Descriptive/Factual Data Structures / 2.2.2:
Second-order Structures / 2.3:
The Completive Construction / 2.3.1:
Binding Occurrences / 2.3.2:
The Semantic and Ontological Contents / 2.4:
The Organization of the HClass Hierarchy / 3.1:
General Notions about Ontologies / 3.1.1:
HClass Architecture / 3.1.2:
The Organization of the HTemp Hierarchy / 3.2:
Recent Examples of "Structured" Ontological Systems / 3.2.1:
Main Features of Some Specific HTemp Structures / 3.2.2:
The Query and Inference Procedures / 3.3:
"Search Patterns" and Low-level Inferences / 4.1:
The Algorithmic Structure of Fum / 4.1.1:
Temporal Information and Indexing / 4.1.2:
High-level Inference Procedures / 4.2:
General Remarks about Some Reasoning Paradigms / 4.2.1:
Hypothesis Rules / 4.2.2:
Transformation Rules / 4.2.3:
Integrating the Two Main Inferencing Modes of NKRL / 4.2.4:
Inference Rules and Internet Filtering / 4.2.5:
Conclusion / 4.3:
Technological Enhancements / 5.1:
Theoretical Enhancements / 5.2:
Appendix A
Appendix B
References
Index
2.
eBook (EB)
Representation and Management of Narrative Information: Theoretical Principles and Implementation
Gian Piero Zarri
Publication info: SpringerLink Books - AutoHoldings, Springer London, 2009
Table of contents: identical to entry 1.
3.
eBook (EB)
Motivated Reinforcement Learning: Curious Characters for Multiuser Games
Kathryn E. Merrick, Mary Lou Maher
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2009
Table of contents:
Non-Player Characters and Reinforcement Learning / Part I:
Non-Player Characters in Multiuser Games / 1:
Types of Multiuser Games / 1.1:
Massively Multiplayer Online Role-Playing Games / 1.1.1:
Multiuser Simulation Games / 1.1.2:
Open-Ended Virtual Worlds / 1.1.3:
Character Roles in Multiuser Games / 1.2:
Existing Artificial Intelligence Techniques for Non-Player Characters in Multiuser Games / 1.3:
Reflexive Agents / 1.3.1:
Learning Agents / 1.3.2:
Evolutionary Agents / 1.3.3:
Smart Terrain / 1.3.4:
Summary / 1.4:
References / 1.5:
Motivation in Natural and Artificial Agents / 2:
Defining Motivation / 2.1:
Biological Theories of Motivation / 2.2:
Drive Theory / 2.2.1:
Motivational State Theory / 2.2.2:
Arousal / 2.2.3:
Cognitive Theories of Motivation / 2.3:
Curiosity / 2.3.1:
Operant Theory / 2.3.2:
Incentive / 2.3.3:
Achievement Motivation / 2.3.4:
Attribution Theory / 2.3.5:
Intrinsic Motivation / 2.3.6:
Social Theories of Motivation / 2.4:
Conformity / 2.4.1:
Cultural Effect / 2.4.2:
Evolution / 2.4.3:
Combined Motivation Theories / 2.5:
Maslow's Hierarchy of Needs / 2.5.1:
Existence Relatedness Growth Theory / 2.5.2:
Towards Motivated Reinforcement Learning / 2.6:
Defining Reinforcement Learning / 3.1:
Dynamic Programming / 3.1.1:
Monte Carlo Methods / 3.1.2:
Temporal Difference Learning / 3.1.3:
Reinforcement Learning in Complex Environments / 3.2:
Partially Observable Environments / 3.2.1:
Function Approximation / 3.2.2:
Hierarchical Reinforcement Learning / 3.2.3:
Motivated Reinforcement Learning / 3.3:
Using a Motivation Signal in Addition to a Reward Signal / 3.3.1:
Using a Motivation Signal Instead of a Reward Signal / 3.3.2:
Comparing the Behaviour of Learning Agents / 3.4:
Player Satisfaction / 4.1:
Psychological Flow / 4.1.1:
Structural Flow / 4.1.2:
Formalising Non-Player Character Behaviour / 4.2:
Models of Optimality for Reinforcement Learning / 4.2.1:
Characteristics of Motivated Reinforcement Learning / 4.2.2:
Comparing Motivated Reinforcement Learning Agents / 4.3:
Statistical Model for Identifying Learned Tasks / 4.3.1:
Behavioural Variety / 4.3.2:
Behavioural Complexity / 4.3.3:
Developing Curious Characters Using Motivated Reinforcement Learning / 4.4:
Curiosity, Motivation and Attention Focus / 5:
Agents in Complex, Dynamic Environments / 5.1:
States / 5.1.1:
Actions / 5.1.2:
Reward and Motivation / 5.1.3:
Motivation and Attention Focus / 5.2:
Observations / 5.2.1:
Events / 5.2.2:
Tasks and Task Selection / 5.2.3:
Experience-Based Reward as Cognitive Motivation / 5.2.4:
Arbitration Functions / 5.2.5:
A General Experience-Based Motivation Function / 5.2.6:
Curiosity as Motivation for Support Characters / 5.3:
Curiosity as Interesting Events / 5.3.1:
Curiosity as Interesting and Competence / 5.3.2:
Motivated Reinforcement Learning Agents / 5.4:
A General Motivated Reinforcement Learning Model / 6.1:
Algorithms for Motivated Reinforcement Learning / 6.2:
Motivated Flat Reinforcement Learning / 6.2.1:
Motivated Multioption Reinforcement Learning / 6.2.2:
Motivated Hierarchical Reinforcement Learning / 6.2.3:
Curious Characters in Games / 6.3:
Curious Characters for Multiuser Games / 7:
Motivated Reinforcement Learning for Support Characters in Massively Multiplayer Online Role-Playing Games / 7.1:
Character Behaviour in Small-Scale, Isolated Game Locations / 7.2:
Case Studies of Individual Characters / 7.2.1:
General Trends in Character Behaviour / 7.2.2:
Curious Characters for Games in Complex, Dynamic Environments / 7.3:
Designing Characters That Can Multitask / 8.1:
Designing Characters for Complex Tasks / 8.1.1:
Games That Change While Characters Are Learning / 8.2.1:
Curious Characters for Games in Second Life / 8.3.1:
Motivated Reinforcement Learning in Open-Ended Simulation Games / 9.1:
Game Design / 9.1.1:
Character Design / 9.1.2:
Evaluating Character Behaviour in Response to Game Play Sequences / 9.2:
Discussion / 9.2.1:
Future / 9.3:
Towards the Future / 10:
Using Motivated Reinforcement Learning in Non-Player Characters / 10.1:
Other Gaming Applications for Motivated Reinforcement Learning / 10.2:
Dynamic Difficulty Adjustment / 10.2.1:
Procedural Content Generation / 10.2.2:
Beyond Curiosity / 10.3:
Biological Models of Motivation / 10.3.1:
Cognitive Models of Motivation / 10.3.2:
Social Models of Motivation / 10.3.3:
Combined Models of Motivation / 10.3.4:
New Models of Motivated Learning / 10.4:
Motivated Supervised Learning / 10.4.1:
Motivated Unsupervised Learning / 10.4.2:
Evaluating the Behaviour of Motivated Learning Agents / 10.5:
Concluding Remarks / 10.6:
Index / 10.7:
4.
eBook (EB)
Motivated Reinforcement Learning: Curious Characters for Multiuser Games
Kathryn E. Merrick, Mary Lou Maher
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2009
Table of contents: identical to entry 3.
5.
eBook (EB)
Automata for Branching and Layered Temporal Structures: An Investigation into Regularities of Infinite Transition Systems
Gabriele Puppis, Takeo Kanade
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction / 1:
Word Automata and Time Granularities / 2:
Background Knowledge / 2.1:
Words and Languages / 2.1.1:
Periodicity of Words / 2.1.2:
Word Automata / 2.1.3:
Time Granularities / 2.1.4:
The String-Based and Automaton-Based Approaches / 2.2:
The Granspec Formalism / 2.2.1:
From Granspecs to Single-String Automata / 2.2.2:
Counters and Multiple Transitions / 2.2.3:
The Logical Counterpart of RCSSA / 2.2.4:
Compact and Tractable Representations / 2.3:
Nested Repetitions of Words / 2.3.1:
Algorithms on NCSSA / 2.3.2:
Optimizing Representations / 2.3.3:
Reasoning on Sets of Granularities / 2.4:
Languages of Ultimately Periodic Words / 2.4.1:
Ultimately Periodic Automata / 2.4.2:
Algorithms on UPA / 2.4.3:
Applications to Time Granularity / 2.4.4:
Discussion / 2.5:
Tree Automata and Logics / 3:
Graphs and Trees / 3.1:
Tree Automata / 3.1.2:
Monadic Second-Order Logic / 3.1.3:
The Model Checking Problem / 3.1.4:
The Contraction Method for Tree Automata / 3.2:
Features and Types / 3.2.1:
Types and the Acceptance Problem / 3.2.2:
From Trees to Their Retractions / 3.2.3:
An Example / 3.2.4:
Tree Transformations / 3.3:
Tree Recolorings / 3.3.1:
Tree Substitutions / 3.3.2:
Tree Transducers / 3.3.3:
Inverse Substitutions / 3.3.4:
A Summary / 3.3.5:
The Class of Reducible Trees / 3.4:
Compositional Properties of Types / 3.4.1:
Closure Properties / 3.4.2:
Effectiveness of the Contraction Method / 3.5:
Reducible Trees and the Caucal Hierarchy / 3.5.1:
Two-Way Alternating Tree Automata / 3.5.2:
Morphic Trees / 3.5.3:
Layered Temporal Structures / 3.5.4:
Summary / 3.6:
Technical Proofs / A:
Proofs of Theorem 5 and Theorem 6 / A.1:
Proof of Theorem 8 / A.2:
Proof of Proposition 34 / A.3:
References
Notation
Index
6.
eBook (EB)
Automata for Branching and Layered Temporal Structures: An Investigation into Regularities of Infinite Transition Systems
Gabriele Puppis, Takeo Kanade
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2010
Table of contents: identical to entry 5.
7.
eBook (EB)
Probabilistic Conditional Independence Structures
Milan Studený, Michael Jordan, Frank P. Kelly, Jon Kleinberg, Bernhard Schölkopf, Ian Witten
Publication info: Springer eBooks Computer Science, Springer London, 2005
Table of contents:
Introduction / 1:
Motivational thoughts / 1.1:
Goals of the monograph / 1.2:
Structure of the book / 1.3:
Basic Concepts / 2:
Conditional independence / 2.1:
Semi-graphoid properties / 2.2:
Formal independence models / 2.2.1:
Semi-graphoids / 2.2.2:
Elementary independence statements / 2.2.3:
Problem of axiomatic characterization / 2.2.4:
Classes of probability measures / 2.3:
Marginally continuous measures / 2.3.1:
Factorizable measures / 2.3.2:
Multiinformation and conditional product / 2.3.3:
Properties of multiinformation function / 2.3.4:
Positive measures / 2.3.5:
Gaussian measures / 2.3.6:
Basic construction / 2.3.7:
Imsets / 2.4:
Graphical Methods / 3:
Undirected graphs / 3.1:
Acyclic directed graphs / 3.2:
Classic chain graphs / 3.3:
Within classic graphical models / 3.4:
Decomposable models / 3.4.1:
Recursive causal graphs / 3.4.2:
Lattice conditional independence models / 3.4.3:
Bubble graphs / 3.4.4:
Advanced graphical models / 3.5:
General directed graphs / 3.5.1:
Reciprocal graphs / 3.5.2:
Joint-response chain graphs / 3.5.3:
Covariance graphs / 3.5.4:
Alternative chain graphs / 3.5.5:
Annotated graphs / 3.5.6:
Hidden variables / 3.5.7:
Ancestral graphs / 3.5.8:
MC graphs / 3.5.9:
Incompleteness of graphical approaches / 3.6:
Structural Imsets: Fundamentals / 4:
Basic class of distributions / 4.1:
Discrete measures / 4.1.1:
Regular Gaussian measures / 4.1.2:
Conditional Gaussian measures / 4.1.3:
Classes of structural imsets / 4.2:
Elementary imsets / 4.2.1:
Semi-elementary and combinatorial imsets / 4.2.2:
Structural imsets / 4.2.3:
Product formula induced by a structural imset / 4.3:
Examples of reference systems of measures / 4.3.1:
Topological assumptions / 4.3.2:
Markov condition / 4.4:
Semi-graphoid induced by a structural imset / 4.4.1:
Markovian measures / 4.4.2:
Equivalence result / 4.5:
Description of Probabilistic Models / 5:
Supermodular set functions / 5.1:
Semi-graphoid produced by a supermodular function / 5.1.1:
Quantitative equivalence of supermodular functions / 5.1.2:
Skeletal supermodular functions / 5.2:
Skeleton / 5.2.1:
Significance of skeletal imsets / 5.2.2:
Description of models by structural imsets / 5.3:
Galois connection / 5.4:
Formal concept analysis / 5.4.1:
Lattice of structural models / 5.4.2:
Equivalence and Implication / 6:
Two concepts of equivalence / 6.1:
Independence and Markov equivalence / 6.1.1:
Independence implication / 6.2:
Direct characterization of independence implication / 6.2.1:
Skeletal characterization of independence implication / 6.2.2:
Testing independence implication / 6.3:
Testing structural imsets / 6.3.1:
Grade / 6.3.2:
Invariants of independence equivalence / 6.4:
Adaptation to a distribution framework / 6.5:
The Problem of Representative Choice / 7:
Baricentral imsets / 7.1:
Standard imsets / 7.2:
Translation of DAG models / 7.2.1:
Translation of decomposable models / 7.2.2:
Imsets of the smallest degree / 7.3:
Decomposition implication / 7.3.1:
Minimal generators / 7.3.2:
Span / 7.4:
Determining and unimarginal classes / 7.4.1:
Imsets with the least lower class / 7.4.2:
Exclusivity of standard imsets / 7.4.3:
Dual description / 7.5:
Coportraits / 7.5.1:
Dual baricentral imsets and global view / 7.5.2:
Learning / 8:
Two approaches to learning / 8.1:
Quality criteria / 8.2:
Criteria for learning DAG models / 8.2.1:
Score equivalent criteria / 8.2.2:
Decomposable criteria / 8.2.3:
Regular criteria / 8.2.4:
Inclusion neighborhood / 8.3:
Standard imsets and learning / 8.4:
Inclusion neighborhood characterization / 8.4.1:
Regular criteria and standard imsets / 8.4.2:
Open Problems / 9:
Theoretical problems / 9.1:
Miscellaneous topics / 9.1.1:
Classification of skeletal imsets / 9.1.2:
Operations with structural models / 9.2:
Reductive operations / 9.2.1:
Expansive operations / 9.2.2:
Cumulative operations / 9.2.3:
Decomposition of structural models / 9.2.4:
Implementation tasks / 9.3:
Interpretation and learning tasks / 9.4:
Meaningful description of structural models / 9.4.1:
Tasks concerning distribution frameworks / 9.4.2:
Learning tasks / 9.4.3:
Appendix / A:
Classes of sets / A.1:
Posets and lattices / A.2:
Graphs / A.3:
Topological concepts / A.4:
Finite-dimensional subspaces and convex cones / A.5:
Linear subspaces / A.5.1:
Convex sets and cones / A.5.2:
Measure-theoretical concepts / A.6:
Measure and integral / A.6.1:
Basic measure-theoretical results / A.6.2:
Information-theoretical concepts / A.6.3:
Conditional probability / A.6.4:
Conditional independence in terms of σ-algebras / A.7:
Concepts from multivariate analysis / A.8:
Matrices / A.8.1:
Statistical characteristics of probability measures / A.8.2:
Multivariate Gaussian distributions / A.8.3:
Elementary statistical concepts / A.9:
Empirical concepts / A.9.1:
Statistical conception / A.9.2:
Likelihood function / A.9.3:
Testing statistical hypotheses / A.9.4:
Distribution framework / A.9.5:
List of Notation
List of Lemmas, Propositions, etc.
References
Index
8.
eBook (EB)
Probabilistic Conditional Independence Structures
Milan Studený, Michael Jordan, Frank P. Kelly, Jon Kleinberg, Bernhard Schölkopf, Ian Witten
Publication info: SpringerLink Books - AutoHoldings, Springer London, 2005
Table of contents: identical to entry 7.
9.
eBook (EB)
Sensitivity Analysis for Neural Networks
Daniel S. Yeung, Ian Cloete, Wing W. Y. Ng, Daming Shi
Publication info: Springer eBooks Computer Science, Springer Berlin Heidelberg, 2010
Table of contents:
Introduction to Neural Networks / 1:
Properties of Neural Networks / 1.1:
Neural Network Learning / 1.2:
Supervised Learning / 1.2.1:
Unsupervised Learning / 1.2.2:
Perceptron / 1.3:
Adaline and Least Mean Square Algorithm / 1.4:
Multilayer Perceptron and Backpropagation Algorithm / 1.5:
Output Layer Learning / 1.5.1:
Hidden Layer Learning / 1.5.2:
Radial Basis Function Networks / 1.6:
Support Vector Machines / 1.7:
Principles of Sensitivity Analysis / 2:
Perturbations in Neural Networks / 2.1:
Neural Network Sensitivity Analysis / 2.2:
Fundamental Methods of Sensitivity Analysis / 2.3:
Geometrical Approach / 2.3.1:
Statistical Approach / 2.3.2:
Summary / 2.4:
Hyper-Rectangle Model / 3:
Hyper-Rectangle Model for Input Space of MLP / 3.1:
Sensitivity Measure of MLP / 3.2:
Discussion / 3.3:
Sensitivity Analysis with Parameterized Activation Function / 4:
Parameterized Antisymmetric Squashing Function / 4.1:
Sensitivity Measure / 4.2:
Localized Generalization Error Model / 4.3:
Introduction / 5.1:
The Localized Generalization Error Model / 5.2:
The Q-Neighborhood and Q-Union / 5.2.1:
The Localized Generalization Error Bound / 5.2.2:
Stochastic Sensitivity Measure for RBFNN / 5.2.3:
Characteristics of the Error Bound / 5.2.4:
Comparing Two Classifiers Using the Error Bound / 5.2.5:
Architecture Selection Using the Error Bound / 5.3:
Critical Vector Learning for RBF Networks / 5.3.1:
Related Work / 6.1:
Construction of RBF Networks with Sensitivity Analysis / 6.2:
RBF Classifiers' Sensitivity to the Kernel Function Centers / 6.2.1:
Orthogonal Least Square Transform / 6.2.2:
Critical Vector Selection / 6.2.3:
Sensitivity Analysis of Prior Knowledge / 6.3:
KBANNs / 7.1:
Inductive Bias / 7.2:
Sensitivity Analysis and Measures / 7.3:
Output-Pattern Sensitivity / 7.3.1:
Output-Weight Sensitivity / 7.3.2:
Output-H Sensitivity / 7.3.3:
Euclidean Distance / 7.3.4:
Promoter Recognition / 7.4:
Data and Initial Domain Theory / 7.4.1:
Experimental Methodology / 7.4.2:
Discussion and Conclusion / 7.5:
Applications / 8:
Input Dimension Reduction / 8.1:
Sensitivity Matrix / 8.1.1:
Criteria for Pruning Inputs / 8.1.2:
Network Optimization / 8.2:
Selective Learning / 8.3:
Hardware Robustness / 8.4:
Measure of Nonlinearity / 8.5:
Parameter Tuning for Neocognitron / 8.6:
Receptive Field / 8.6.1:
Selectivity / 8.6.2:
Sensitivity Analysis of the Neocognitron / 8.6.3:
Bibliography
10.
eBook (EB)
Sensitivity Analysis for Neural Networks
Daniel S. Yeung, Ian Cloete, Wing W. Y. Ng, Daming Shi
Publication info: SpringerLink Books - AutoHoldings, Springer Berlin Heidelberg, 2010
Table of contents: identical to entry 9.