1. Book
Emergent Neural Computational Architectures Based on Neuroscience / Stefan Wermter, Jim Austin, David Willshaw (eds.)
Publication info: Berlin ; Tokyo : Springer, c2001. x, 576 p. ; 24 cm
Series: Lecture notes in computer science ; 2036. Lecture notes in artificial intelligence
2. Book
Principles of Neurocomputing for Science and Engineering / Frederic M. Ham, Ivica Kostanic
Publication info: New York, NY : McGraw-Hill, c2001. xxx, 642 p. ; 24 cm
3. Book
Artificial Neural Networks : Ninth International Conference on Artificial Neural Networks (ICANN 99) / International Conference on Artificial Neural Networks ; Institution of Electrical Engineers
Publication info: London : Institution of Electrical Engineers, 1999. 2 v. (xxix, 1028 p.) ; 30 cm
Series: IEE conference publication
4. Book
Algorithms and Architectures / edited by Cornelius T. Leondes
Publication info: San Diego : Academic Press, c1998. xxix, 460 p. ; 24 cm
Series: Neural network systems techniques and applications ; vol. 1
5. Book
Neural Networks : Tricks of the Trade / Genevieve B. Orr, Klaus-Robert Müller (eds.)
Publication info: Berlin : Springer, c1998. vi, 432 p. ; 24 cm
Series: Lecture notes in computer science ; 1524
Contents:
Introduction
Speeding Learning
Preface
Efficient BackProp / Yann LeCun ; Léon Bottou ; Genevieve B. Orr ; Klaus-Robert Müller / 1:
Regularization Techniques to Improve Generalization
Early Stopping - But When? / Lutz Prechelt / 2:
A Simple Trick for Estimating the Weight Decay Parameter / Thorsteinn S. Rögnvaldsson / 3:
Controlling the Hyperparameter Search in MacKay's Bayesian Neural Network Framework / Tony Plate / 4:
Adaptive Regularization in Neural Network Modeling / Jan Larsen ; Claus Svarer ; Lars Nonboe Andersen ; Lars Kai Hansen / 5:
Large Ensemble Averaging / David Horn ; Ury Naftaly ; Nathan Intrator / 6:
Improving Network Models and Algorithmic Tricks
Square Unit Augmented, Radially Extended, Multilayer Perceptrons / Gary William Flake / 7:
A Dozen Tricks with Multitask Learning / Rich Caruana / 8:
Solving the Ill-Conditioning in Neural Network Learning / Patrick van der Smagt ; Gerd Hirzinger / 9:
Centering Neural Network Gradient Factors / Nicol N. Schraudolph / 10:
Avoiding Roundoff Error in Backpropagating Derivatives / 11:
Representing and Incorporating Prior Knowledge in Neural Network Training
Transformation Invariance in Pattern Recognition - Tangent Distance and Tangent Propagation / Patrice Y. Simard ; Yann A. LeCun ; John S. Denker ; Bernard Victorri / 12:
Combining Neural Networks and Context-Driven Search for On-Line, Printed Handwriting Recognition in the Newton / Larry S. Yaeger ; Brandyn J. Webb ; Richard F. Lyon / 13:
Neural Network Classification and Prior Class Probabilities / Steve Lawrence ; Ian Burns ; Andrew Back ; Ah Chung Tsoi ; C. Lee Giles / 14:
Applying Divide and Conquer to Large Scale Pattern Recognition Tasks / Jürgen Fritsch ; Michael Finke / 15:
Tricks for Time Series
Forecasting the Economy with Neural Nets: A Survey of Challenges and Solutions / John Moody / 16:
How to Train Neural Networks / Ralph Neuneier ; Hans Georg Zimmermann / 17:
Author Index
Subject Index
6. Book
Hybrid Neural Systems / Stefan Wermter, Ron Sun (eds.)
Publication info: Berlin : Springer, c2000. ix, 401 p. ; 24 cm
Series: Lecture notes in computer science ; 1778. Lecture notes in artificial intelligence
Contents:
An Overview of Hybrid Neural Systems / Stefan Wermter ; Ron Sun
Structured Connectionism and Rule Representation
Layered Hybrid Connectionist Models for Cognitive Science / Jerome Feldman ; David Bailey
Types and Quantifiers in SHRUTI: A Connectionist Model of Rapid Reasoning and Relational Processing / Lokendra Shastri
A Recursive Neural Network for Reflexive Reasoning / Steffen Hölldobler ; Yvonne Kalinke ; Jörg Wunderlich
A Novel Modular Neural Architecture for Rule-Based and Similarity-Based Reasoning / Rafal Bogacz ; Christophe Giraud-Carrier
Addressing Knowledge-Representation Issues in Connectionist Symbolic Rule Encoding for General Inference / Nam Seog Park
Towards a Hybrid Model of First-Order Theory Refinement / Nelson A. Hallack ; Gerson Zaverucha ; Valmir C. Barbosa
Distributed Neural Architectures and Language Processing
Dynamical Recurrent Networks for Sequential Data Processing / Stefan C. Kremer ; John F. Kolen
Fuzzy Knowledge and Recurrent Neural Networks: A Dynamical Systems Perspective / Christian W. Omlin ; Lee Giles ; Karvel K. Thornber
Combining Maps and Distributed Representations for Shift-Reduce Parsing / Marshall R. Mayberry ; Risto Miikkulainen
Towards Hybrid Neural Learning Internet Agents / Garen Arevian ; Christo Panchev
A Connectionist Simulation of the Empirical Acquisition of Grammatical Relations / William C. Morris ; Garrison W. Cottrell ; Jeffrey Elman
Large Patterns Make Great Symbols: An Example of Learning from Example / Pentti Kanerva
Context Vectors: A Step Toward a "Grand Unified Representation" / Stephen I. Gallant
Integration of Graphical Rules with Adaptive Learning of Structured Information / Paolo Frasconi ; Marco Gori ; Alessandro Sperduti
Transformation and Explanation
Lessons from Past, Current Issues, and Future Research Directions in Extracting the Knowledge Embedded in Artificial Neural Networks / Alan B. Tickle ; Frederic Maire ; Guido Bologna ; Robert Andrews ; Joachim Diederich
Symbolic Rule Extraction from the DIMLP Neural Network
Understanding State Space Organization in Recurrent Neural Networks with Iterative Function Systems Dynamics / Peter Tino ; Georg Dorffner ; Christian Schittenkopf
Direct Explanations and Knowledge Extraction from a Multilayer Perceptron Network that Performs Low Back Pain Classification / Marilyn L. Vaughn ; Steven J. Cavill ; Stewart J. Taylor ; Michael A. Foy ; Anthony J.B. Fogg
High Order Eigentensors as Symbolic Rules in Competitive Learning / Hod Lipson ; Hava T. Siegelmann
Holistic Symbol Processing and the Sequential RAAM: An Evaluation / James A. Hammerton ; Barry L. Kalman
Robotics, Vision and Cognitive Approaches
Life, Mind, and Robots: The Ins and Outs of Embodied Cognition / Noel Sharkey ; Tom Ziemke
Supplementing Neural Reinforcement Learning with Symbolic Methods
Self-Organizing Maps in Symbol Processing / Timo Honkela
Evolution of Symbolization: Signposts to a Bridge Between Connectionist and Symbolic Systems / Ronan G. Reilly
A Cellular Neural Associative Array for Symbolic Vision / Christos Orovas ; James Austin
Application of Neurosymbolic Integration for Environment Modelling in Mobile Robots / Gerhard Kraetzschmar ; Stefan Sablatnög ; Stefan Enderle ; Günther Palm
Author Index
7. Book
Witold Pedrycz
Publication info: Boca Raton, Fla. : CRC Press, c1998. 284 p. ; 26 cm
8. Book
Knowledge-Based Neurocomputing : A Fuzzy Logic Approach / Eyal Kolman and Michael Margaliot
Publication info: Berlin : Springer, c2009. xv, 100 p. ; 24 cm
Series: Studies in fuzziness and soft computing ; 234
Contents:
Preface
List of Abbreviations
List of Symbols
Introduction / 1:
Artificial Neural Networks (ANNs) / 1.1:
Fuzzy Rule-Bases (FRBs) / 1.2:
The ANN-FRB Synergy / 1.3:
Knowledge-Based Neurocomputing / 1.4:
Knowledge Extraction from ANNs / 1.4.1:
Knowledge-Based Design of ANNs / 1.4.2:
The FARB: A Neuro-fuzzy Equivalence / 1.5:
The FARB / 2:
Definition / 2.1:
Input-Output Mapping / 2.2:
The FARB-ANN Equivalence / 3:
The FARB and Feedforward ANNs / 3.1:
Example 1: Knowledge Extraction from a Feedforward ANN / 3.1.1:
Example 2: Knowledge-Based Design of a Feedforward ANN / 3.1.2:
The FARB and First-Order RNNs / 3.2:
First Approach / 3.2.1:
Example 3: Knowledge Extraction from a Simple RNN / 3.2.2:
Second Approach / 3.2.3:
Third Approach / 3.2.4:
Example 4: Knowledge Extraction from an RNN / 3.2.5:
Example 5: Knowledge-Based Design of an RNN / 3.2.6:
The FARB and Second-Order RNNs / 3.3:
Summary / 3.4:
Rule Simplification / 4:
Sensitivity Analysis / 4.1:
A Procedure for Simplifying a FARB / 4.2:
Knowledge Extraction Using the FARB / 5:
The Iris Classification Problem / 5.1:
The LED Display Recognition Problem / 5.2:
FARB Simplification / 5.2.1:
Analysis of the FRB / 5.2.3:
Formal Languages / 5.3:
Formal Languages and RNNs / 5.3.2:
The Trained RNN / 5.3.3:
The Direct Approach / 5.3.4:
The Modular Approach / 6.1.1:
The Counter Module / 6.2.1:
The Sequence-Counter Module / 6.2.2:
The String-Comparator Module / 6.2.3:
The String-to-Num Converter Module / 6.2.4:
The Num-to-String Converter Module / 6.2.5:
The Soft Threshold Module / 6.2.6:
KBD of an RNN for Recognizing the AB Language / 6.2.7:
KBD of an RNN for Recognizing the Balanced Parentheses Language / 6.2.9:
Conclusions and Future Research / 6.2.10:
Future Research / 7.1:
Regularization of Network Training / 7.1.1:
Extracting Knowledge during the Learning Process / 7.1.2:
Knowledge Extraction from Support Vector Machines / 7.1.3:
Knowledge Extraction from Trained Networks / 7.1.4:
Proofs / A:
Details of the LED Recognition Network / B:
References
Index
9. Book
Neural Networks and Learning Machines / Simon Haykin
Publication info: Upper Saddle River, N.J. : Pearson Education, c2009. 934 p. ; 23 cm
Contents:
Preface
Introduction
What is a Neural Network? / 1:
The Human Brain / 2:
Models of a Neuron / 3:
Neural Networks Viewed As Directed Graphs / 4:
Feedback / 5:
Network Architectures / 6:
Knowledge Representation / 7:
Learning Processes / 8:
Learning Tasks / 9:
Concluding Remarks / 10:
Notes and References
Rosenblatt's Perceptron / Chapter 1:
Perceptron / 1.1:
The Perceptron Convergence Theorem / 1.3:
Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment / 1.4:
Computer Experiment: Pattern Classification / 1.5:
The Batch Perceptron Algorithm / 1.6:
Summary and Discussion / 1.7:
Problems
Model Building through Regression / Chapter 2:
Linear Regression Model: Preliminary Considerations / 2.1:
Maximum a Posteriori Estimation of the Parameter Vector / 2.3:
Relationship Between Regularized Least-Squares Estimation and MAP Estimation / 2.4:
The Minimum-Description-Length Principle / 2.5:
Finite Sample-Size Considerations / 2.7:
The Instrumental-Variables Method / 2.8:
The Least-Mean-Square Algorithm / 2.9:
Filtering Structure of the LMS Algorithm / 3.1:
Unconstrained Optimization: a Review / 3.3:
The Wiener Filter / 3.4:
Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter / 3.5:
The Langevin Equation: Characterization of Brownian Motion / 3.7:
Kushner's Direct-Averaging Method / 3.8:
Statistical LMS Learning Theory for Small Learning-Rate Parameter / 3.9:
Computer Experiment I: Linear Prediction / 3.10:
Computer Experiment II: Pattern Classification / 3.11:
Virtues and Limitations of the LMS Algorithm / 3.12:
Learning-Rate Annealing Schedules / 3.13:
Multilayer Perceptrons / 3.14:
Some Preliminaries / 4.1:
Batch Learning and On-Line Learning / 4.3:
The Back-Propagation Algorithm / 4.4:
XOR Problem / 4.5:
Heuristics for Making the Back-Propagation Algorithm Perform Better / 4.6:
Back Propagation and Differentiation / 4.7:
The Hessian and Its Role in On-Line Learning / 4.9:
Optimal Annealing and Adaptive Control of the Learning Rate / 4.10:
Generalization / 4.11:
Approximations of Functions / 4.12:
Cross-Validation / 4.13:
Complexity Regularization and Network Pruning / 4.14:
Virtues and Limitations of Back-Propagation Learning / 4.15:
Supervised Learning Viewed as an Optimization Problem / 4.16:
Convolutional Networks / 4.17:
Nonlinear Filtering / 4.18:
Small-Scale Versus Large-Scale Learning Problems / 4.19:
Kernel Methods and Radial-Basis Function Networks / 4.20:
Cover's Theorem on the Separability of Patterns / 5.1:
The Interpolation Problem / 5.3:
Radial-Basis-Function Networks / 5.4:
K-Means Clustering / 5.5:
Recursive Least-Squares Estimation of the Weight Vector / 5.6:
Hybrid Learning Procedure for RBF Networks / 5.7:
Interpretations of the Gaussian Hidden Units / 5.8:
Kernel Regression and Its Relation to RBF Networks / 5.10:
Support Vector Machines / 5.11:
Optimal Hyperplane for Linearly Separable Patterns / 6.1:
Optimal Hyperplane for Nonseparable Patterns / 6.3:
The Support Vector Machine Viewed as a Kernel Machine / 6.4:
Design of Support Vector Machines / 6.5:
Regression: Robustness Considerations / 6.6:
Optimal Solution of the Linear Regression Problem / 6.9:
The Representer Theorem and Related Issues / 6.10:
Regularization Theory / 6.11:
Hadamard's Conditions for Well-Posedness / 7.1:
Tikhonov's Regularization Theory / 7.3:
Regularization Networks / 7.4:
Generalized Radial-Basis-Function Networks / 7.5:
The Regularized Least-Squares Estimator: Revisited / 7.6:
Additional Notes of Interest on Regularization / 7.7:
Estimation of the Regularization Parameter / 7.8:
Semisupervised Learning / 7.9:
Manifold Regularization: Preliminary Considerations / 7.10:
Differentiable Manifolds / 7.11:
Generalized Regularization Theory / 7.12:
Spectral Graph Theory / 7.13:
Generalized Representer Theorem / 7.14:
Laplacian Regularized Least-Squares Algorithm / 7.15:
Experiments on Pattern Classification Using Semisupervised Learning / 7.16:
Principal-Components Analysis / 7.17:
Principles of Self-Organization / 8.1:
Self-Organized Feature Analysis / 8.3:
Principal-Components Analysis: Perturbation Theory / 8.4:
Hebbian-Based Maximum Eigenfilter / 8.5:
Hebbian-Based Principal-Components Analysis / 8.6:
Case Study: Image Coding / 8.7:
Kernel Principal-Components Analysis / 8.8:
Basic Issues Involved in the Coding of Natural Images / 8.9:
Kernel Hebbian Algorithm / 8.10:
Self-Organizing Maps / 8.11:
Two Basic Feature-Mapping Models / 9.1:
Self-Organizing Map / 9.3:
Properties of the Feature Map / 9.4:
Computer Experiments I: Disentangling Lattice Dynamics Using SOM / 9.5:
Contextual Maps / 9.6:
Hierarchical Vector Quantization / 9.7:
Kernel Self-Organizing Map / 9.8:
Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM / 9.9:
Relationship Between Kernel SOM and Kullback-Leibler Divergence / 9.10:
Information-Theoretic Learning Models / 9.11:
Entropy / 10.1:
Maximum-Entropy Principle / 10.3:
Mutual Information / 10.4:
Kullback-Leibler Divergence / 10.5:
Copulas / 10.6:
Mutual Information as an Objective Function to be Optimized / 10.7:
Maximum Mutual Information Principle / 10.8:
Infomax and Redundancy Reduction / 10.9:
Spatially Coherent Features / 10.10:
Spatially Incoherent Features / 10.11:
Independent-Components Analysis / 10.12:
Sparse Coding of Natural Images and Comparison with ICA Coding / 10.13:
Natural-Gradient Learning for Independent-Components Analysis / 10.14:
Maximum-Likelihood Estimation for Independent-Components Analysis / 10.15:
Maximum-Entropy Learning for Blind Source Separation / 10.16:
Maximization of Negentropy for Independent-Components Analysis / 10.17:
Coherent Independent-Components Analysis / 10.18:
Rate Distortion Theory and Information Bottleneck / 10.19:
Optimal Manifold Representation of Data / 10.20:
Stochastic Methods Rooted in Statistical Mechanics / 10.21:
Statistical Mechanics / 11.1:
Markov Chains / 11.3:
Metropolis Algorithm / 11.4:
Simulated Annealing / 11.5:
Gibbs Sampling / 11.6:
Boltzmann Machine / 11.7:
Logistic Belief Nets / 11.8:
Deep Belief Nets / 11.9:
Deterministic Annealing / 11.10:
Analogy of Deterministic Annealing with Expectation-Maximization Algorithm / 11.11:
Dynamic Programming / 11.12:
Markov Decision Process / 12.1:
Bellman's Optimality Criterion / 12.3:
Policy Iteration / 12.4:
Value Iteration / 12.5:
Approximate Dynamic Programming: Direct Methods / 12.6:
Temporal-Difference Learning / 12.7:
Q-Learning / 12.8:
Approximate Dynamic Programming: Indirect Methods / 12.9:
Least-Squares Policy Evaluation / 12.10:
Approximate Policy Iteration / 12.11:
Neurodynamics / 12.12:
Dynamic Systems / 13.1:
Stability of Equilibrium States / 13.3:
Attractors / 13.4:
Neurodynamic Models / 13.5:
Manipulation of Attractors as a Recurrent Network Paradigm / 13.6:
Hopfield Model / 13.7:
The Cohen-Grossberg Theorem / 13.8:
Brain-State-In-A-Box Model / 13.9:
Strange Attractors and Chaos / 13.10:
Dynamic Reconstruction of a Chaotic Process / 13.11:
Bayesian Filtering for State Estimation of Dynamic Systems / 13.12:
State-Space Models / 14.1:
Kalman Filters / 14.3:
The Divergence-Phenomenon and Square-Root Filtering / 14.4:
The Extended Kalman Filter / 14.5:
The Bayesian Filter / 14.6:
Cubature Kalman Filter: Building on the Kalman Filter / 14.7:
Particle Filters / 14.8:
Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters / 14.9:
Kalman Filtering in Modeling of Brain Functions / 14.10:
Dynamically Driven Recurrent Networks / 14.11:
Recurrent Network Architectures / 15.1:
Universal Approximation Theorem / 15.3:
Controllability and Observability / 15.4:
Computational Power of Recurrent Networks / 15.5:
Learning Algorithms / 15.6:
Back Propagation Through Time / 15.7:
Real-Time Recurrent Learning / 15.8:
Vanishing Gradients in Recurrent Networks / 15.9:
Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators / 15.10:
Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor / 15.11:
Adaptivity Considerations / 15.12:
Case Study: Model Reference Applied to Neurocontrol / 15.13:
Bibliography / 15.14:
Index
10. Book
An Information-Theoretic Approach to Neural Computing / Gustavo Deco, Dragan Obradovic
Publication info: New York : Springer, c1996. xiii, 261 p. ; 25 cm
Series: Perspectives in neural computing