1.
Book
International Conference on Artificial Neural Networks ; Institution of Electrical Engineers
Publication: London : Institution of Electrical Engineers, 1999  2v.(xxix,1028p.) ; 30cm
Series: IEE conference publication
2.
Book
edited by Cornelius T. Leondes
Publication: San Diego : Academic Press, c1998  xxix, 460 p. ; 24 cm
Series: Neural network systems techniques and applications ; vol. 1
3.
Book
Genevieve B. Orr, Klaus-Robert Müller (eds.)
Publication: Berlin : Springer, c1998  vi, 432 p. ; 24 cm
Series: Lecture notes in computer science ; 1524
Table of contents:
Introduction
Speeding Learning
Preface
Efficient BackProp / Yann LeCun ; Leon Bottou ; Genevieve B. Orr ; Klaus-Robert Müller / 1:
Regularization Techniques to Improve Generalization
Early Stopping - But When? / Lutz Prechelt / 2:
A Simple Trick for Estimating the Weight Decay Parameter / Thorsteinn S. Rögnvaldsson / 3:
Controlling the Hyperparameter Search in MacKay's Bayesian Neural Network Framework / Tony Plate / 4:
Adaptive Regularization in Neural Network Modeling / Jan Larsen ; Claus Svarer ; Lars Nonboe Andersen ; Lars Kai Hansen / 5:
Large Ensemble Averaging / David Horn ; Ury Naftaly ; Nathan Intrator / 6:
Improving Network Models and Algorithmic Tricks
Square Unit Augmented, Radially Extended, Multilayer Perceptrons / Gary William Flake / 7:
A Dozen Tricks with Multitask Learning / Rich Caruana / 8:
Solving the Ill-Conditioning in Neural Network Learning / Patrick van der Smagt ; Gerd Hirzinger / 9:
Centering Neural Network Gradient Factors / Nicol N. Schraudolph / 10:
Avoiding Roundoff Error in Backpropagating Derivatives / 11:
Representing and Incorporating Prior Knowledge in Neural Network Training
Transformation Invariance in Pattern Recognition - Tangent Distance and Tangent Propagation / Patrice Y. Simard ; Yann A. LeCun ; John S. Denker ; Bernard Victorri / 12:
Combining Neural Networks and Context-Driven Search for On-Line, Printed Handwriting Recognition in the Newton / Larry S. Yaeger ; Brandyn J. Webb ; Richard F. Lyon / 13:
Neural Network Classification and Prior Class Probabilities / Steve Lawrence ; Ian Burns ; Andrew Back ; Ah Chung Tsoi ; C. Lee Giles / 14:
Applying Divide and Conquer to Large Scale Pattern Recognition Tasks / Jurgen Fritsch ; Michael Finke / 15:
Tricks for Time Series
Forecasting the Economy with Neural Nets: A Survey of Challenges and Solutions / John Moody / 16:
How to Train Neural Networks / Ralph Neuneier ; Hans Georg Zimmermann / 17:
Author Index
Subject Index
4.
Book
Witold Pedrycz
Publication: Boca Raton, Fla. : CRC Press, c1998  284 p. ; 26 cm
5.
Book
Gustavo Deco, Dragan Obradovic
Publication: New York : Springer, c1996  xiii, 261 p. ; 25 cm
Series: Perspectives in neural computing
6.
Book
edited by Lakhmi C. Jain, V. Rao Vemuri
Publication: Boca Raton, Fla. : CRC Press, c1999  325 p. ; 25 cm
Series: The CRC Press international series on computational intelligence / series editor L. C. Jain
7.
Book
edited by Omid Omidvar, Patrick van der Smagt
Publication: San Diego ; Tokyo : Academic Press, c1997  xvii, 346 p. ; 24 cm
Table of contents:
Neural Network Sonar as a Perceptual Modality for Robotics / W.T. Miller III ; A.L. Kun
Dynamic Balance of a Biped Walking Robot / P. van der Smagt ; F. Groen
Visual Feedback in Motion / D. DeMers ; K. Kreutz-Delgado
Inverse Kinematics of Dextrous Manipulators / Y. Jin ; T. Pipe ; A. Winfield
Stable Manipulator Trajectory Control Using Neural Networks / P. Gaudiano ; F.H. Guenther ; E. Zalama
The Neural Dynamics Approach to Sensory-Motor Control / A. Buhlmeier ; G. Manteuffel
Operant Conditioning in Robots / B. Hallam ; J. Hallam ; G. Hayes
A Dynamic Net for Robot Control / Ben Kröse ; J. van Dam
Neural Vehicles / J. Heikkonen ; P. Koikkalainen
Self-Organization and Autonomous Robots
8.
Book
Martin Beckerman
Publication: New York : Wiley, 1997  xviii, 427 p. ; 25 cm
Series: Adaptive and learning systems for signal processing, communications, and control
9.
Book
Jason Kingdon
Publication: London : Springer, 1997  xii, 227 p. ; 24 cm
Series: Perspectives in neural computing
10.
Book
Hung T. Nguyen and Elbert A. Walker
Publication: Boca Raton, Fla. ; Tokyo : CRC Press, c1997  266 p. ; 25 cm
11.
Book
M. Vidyasagar
Publication: London : Springer, c1997  xviii, 383 p. ; 25 cm
Series: Communications and control engineering
12.
Book
Carl G. Looney
Publication: New York : Oxford University Press, 1997  xix, 458 p. ; 25 cm
Table of contents:
Preface
List of Tables
Fundamentals of Pattern Recognition / Part I:
Basic Concepts of Pattern Recognition / 0:
Decision-Theoretic Algorithms / 1:
Structural Pattern Recognition / 2:
Introductory Neural Networks / Part II:
Artificial Neural Network Structures / 3:
Supervised Training via Error Backpropagation: Derivations / 4:
Advanced Fundamentals of Neural Networks / Part III:
Acceleration and Stabilization of Supervised Gradient Training of MLPs / 5:
Supervised Training via Strategic Search / 6:
Advances in Network Algorithms for Classification and Recognition / 7:
Recurrent Neural Networks / 8:
Neural, Feature, and Data Engineering / Part IV:
Neural Engineering and Testing of FANNs / 9:
Feature and Data Engineering / 10:
Testing and Applications
Some Comparative Studies of Feedforward Artificial Neural Networks / 11:
Pattern Recognition Applications / 12:
13.
Book
Takeshi Furuhashi, Yoshiki Uchikawa, (eds.)
Publication: Berlin : Springer-Verlag, c1996  viii, 243 p. ; 24 cm
Series: Lecture notes in computer science ; 1152 . Lecture notes in artificial intelligence
14.
Book
Suganda Jutamulia, editor
Publication: Bellingham, Wash. : SPIE Optical Engineering Press, c1994  xviii, 692 p. ; 29 cm
Series: SPIE milestone series / Brian J. Thompson, general editor ; v. MS 96
15.
Book
Kenneth Hunt, George Irwin and Kevin Warwick (eds.)
Publication: Berlin ; New York : Springer, c1995  278 p. ; 24 cm
Series: Advances in industrial control
16.
Book
B.D. Ripley
Publication: New York : Cambridge University Press, 1996  xi, 403 p. ; 26 cm
Table of contents:
Introduction and examples / 1:
Statistical decision theory / 2:
Linear discriminant analysis / 3:
Flexible discriminants / 4:
Feed-forward neural networks / 5:
Non-parametric methods / 6:
Tree-structured classifiers / 7:
Belief networks / 8:
Unsupervised methods / 9:
Finding good pattern features / 10:
Statistical sidelines / Appendix:
Glossary
References
Author index
Subject index
17.
Book
Takeshi Furuhashi, (ed.)
Publication: Berlin ; New York ; Tokyo : Springer, c1995  viii, 223 p. ; 24 cm
Series: Lecture notes in computer science ; 1011 . Lecture notes in artificial intelligence
18.
Book
N. K. Bose, P. Liang
Publication: New York ; Tokyo : McGraw-Hill, c1996  xxxiii, 478 p. ; 25 cm
Series: McGraw-Hill series in electrical and computer engineering ; . Communications and signal processing
19.
Book
edited by A.M.S. Zalzala and A.S. Morris
Publication: New York ; Tokyo : Ellis Horwood, 1996  viii, 278 p. ; 25 cm
20.
Book
David M. Skapura
Publication: New York, N.Y. : ACM Press , Reading, Mass. : Addison-Wesley, c1996  xiii, 286 p. ; 24 cm
Table of contents:
Foundations
Motivation
Neural-Network Fundamentals
Single Neuron Computations
Network Computations
Network Simulation
Foundations Summary
Suggested Readings
Bibliography
Paradigms
The Backpropagation Network
The Counterpropagation Network
Adaptive Resonance Theory
The Multidirectional Associative Memory
The Hopfield Memory
Network-Learning Summary
Application Design
Developing a Data Representation
Pattern Representation Methods
Exemplar Analysis
Training and Performance Evaluation
A Practical Example
Application-Design Summary
Associative Memories
Associative-Memory Definitions
Character Recognition
State-Space Search
Image Interpolation
Diagnostic Aids
Associative-Memory Summary
Business and Financial Applications
Financial Modeling
Market Prediction
Bond Rating
Predicting Commodity Futures
Financial-Applications Summary
Pattern Classification
NETtalk
Radar-Signature Classifier
Prostate-Cancer Detection
Pattern-Classification Summary
Image Processing
Image-Processing Networks
Gender Recognition from Facial Images
Imagery Feature Discovery
Aircraft Tracking in Video Imagery
Image-Processing Summary
Process Control and Robotics
Control Theory
Cart/Pole Balancer
Bipedal-Locomotion Control
Robotic Manipulator Control
Control-Application Summary
Fuzzy Neural Systems
Fuzzy Logic
Implementation of a Fuzzy Network
Fuzzy Neural Inference
Fuzzy Control of BPN Learning
Fuzzy Neural-System Summary
Answers to Selected Exercises
Index
21.
Book
Larry R. Medsker
Publication: Boston : Kluwer Academic, c1994  240 p. ; 25 cm
22.
Book
P.J.G. Lisboa, M.J. Taylor
Publication: New York : Ellis Horwood, 1993  307 p. ; 25 cm
Series: Ellis Horwood workshop series
23.
Book
P.J. Braspenning, F. Thuijsman, A.J.M.M. Weijters, (eds.)
Publication: Berlin ; New York : Springer, c1995  vii, 293 p. ; 24 cm
Series: Lecture notes in computer science ; 931
24.
Book
Duc Truong Pham and Liu Xing
Publication: London ; Tokyo : Springer-Verlag, c1995  xiv, 238 p. ; 24 cm
25.
Book
James M. Bower and David Beeman
Publication: Santa Clara, Calif. : TELOS, Springer-Verlag, c1995  xx, 409 p. ; 24 cm
26.
Book
Stig I. Andersson (ed.)
Publication: Berlin ; New York : Springer-Verlag, c1995  vi, 260 p. ; 24 cm
Series: Lecture notes in computer science ; 888
27.
Book
by Bing J. Sheu, Joongho Choi ; with special assistance from Robert C. Chang ... [et al.]
Publication: Boston : Kluwer Academic Publishers, c1995  xix, 559 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; SECS 304
28.
Book
Bart Kosko
Publication: Englewood Cliffs, N.J. : Prentice Hall, c1992  xxvii, 449 p. ; 24 cm
29.
Book
Bahram Nabet, Robert B. Pinter
Publication: Boca Raton : CRC Press, c1991  xi, 182 p. ; 25 cm
30.
Book
LiMin Fu
Publication: New York : McGraw-Hill, c1994  xix, 460 p. ; 25 cm
Series: McGraw-Hill computer science series
31.
Book
Derong Liu and Anthony N. Michel
Publication: London ; Berlin ; New York : Springer-Verlag, c1994  xiv, 191 p. ; 24 cm
Series: Lecture notes in control and information sciences ; 195
32.
Book
edited by Vladimir Cherkassky, Jerome H. Friedman, Harry Wechsler
Publication: Berlin ; Tokyo : Springer-Verlag, c1994  xii, 394 p. ; 25 cm
Series: NATO ASI series ; Series F . Computer and systems sciences ; v. 136
33.
Book
Albert Nigrin
Publication: Cambridge, Mass. : MIT Press, c1993  xvii, 413 p. ; 24 cm
34.
Book
edited by Daniel Gardner
Publication: Cambridge, Mass. : MIT Press, c1993  xii, 227 p. ; 26 cm
Series: Computational neuroscience
Bradford book
35.
Book
Alan Murray and Lionel Tarassenko
Publication: London : Chapman & Hall, 1994  xiii, 147 p. ; 24 cm
Series: Chapman & Hall neural computing ; 2
36.
Book
by Hervé A. Bourlard, Nelson Morgan ; foreword by Richard Lippmann
Publication: Boston : Kluwer Academic Publishers, c1994  xxviii, 312 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; SECS 247 . VLSI, computer architecture, and digital signal processing
Table of contents:
List of Figures
List of Tables
Notation
Foreword
Preface
Background / I:
Introduction / 1:
Statistical Pattern Classification / 2:
Hidden Markov Models / 3:
Multilayer Perceptrons / 4:
Hybrid HMM/MLP Systems / II:
Speech Recognition using ANNs / 5:
Statistical Inference in MLPs / 6:
The Hybrid HMM/MLP Approach / 7:
Experimental Systems / 8:
Context-Dependent MLPs / 9:
System Tradeoffs / 10:
Training Hardware and Software / 11:
Additional Topics / III:
Cross-Validation in MLP Training / 12:
HMM/MLP and Predictive Models / 13:
Feature Extraction by MLP / 14:
Finale / IV:
Final System Overview / 15:
Conclusions / 16:
Bibliography
Index
Acronyms
37.
Book
edited by Richard J. Mammone
Publication: London ; New York : Chapman & Hall, 1994  xx, 586 p.
Series: Chapman & Hall neural computing ; 4
38.
Book
Steven K. Rogers, Matthew Kabrisky ; contributing authors, Dennis W. Ruck and Gregory L. Tarr
Publication: Bellingham, Wash., USA : SPIE Optical Engineering Press, c1991  xi, 220 p. ; 26 cm
Series: Tutorial texts in optical engineering ; v. TT 4
39.
Book
Margaret Euphrasia Sereno
Publication: Cambridge, Mass. : MIT Press, c1993  vi, 181 p. ; 24 cm
Series: Neural network modeling and connectionism
Bradford book
40.
Book
A. Cichocki, R. Unbehauen
Publication: New York : J. Wiley, 1993  xvii, 526 p. ; 24 cm
Table of contents:
Mathematical Preliminaries of Neurocomputing
Architectures and Electronic Implementation of Neural Network Models
Unconstrained Optimization and Learning Algorithms
Neural Networks for Linear, Quadratic Programming and Linear Complementarity Problems
A Neural Network Approach to the On-Line Solution of a System of Linear Algebraic Equations and Related Problems
Neural Networks for Matrix Algebra Problems
Neural Networks for Continuous, Nonlinear, Constrained Optimization Problems
Neural Networks for Estimation, Identification and Prediction
Neural Networks for Discrete and Combinatorial Optimization Problems
Appendices
Subject Index
41.
Book
edited by Eric Goles and Servet Martínez
Publication: Dordrecht ; Boston : Kluwer Academic Publishers, c1992  x, 207 p. ; 25 cm
Series: Mathematics and its applications ; v. 75
42.
Book
edited by Harry Wechsler
Publication: Boston : Academic Press, c1992  xix, 363 p. ; 24 cm
Series: Neural networks for perception / edited by Harry Wechsler ; v. 2
43.
Book
Yi-Tong Zhou, Rama Chellappa
Publication: New York ; Tokyo : Springer-Verlag, c1992  xi, 170 p. ; 24 cm
Series: Research notes in neural computing ; v. 5
44.
Book
edited by M.A. Arbib and J.A. Robinson
Publication: Cambridge, MA : MIT Press, c1990  x, 345 p. ; 24 cm
45.
Book
Pierre Peretto
Publication: Cambridge : Cambridge University Press, c1992  xviii, 473 p. ; 24 cm
Series: Collection Aléa-Saclay : monographs and texts in statistical physics ; 2
Table of contents:
Preface
Acknowledgments
Introduction / 1:
The biology of neural networks: a few features for the sake of non-biologists / 2:
The dynamics of neural networks: a stochastic approach / 3:
Hebbian models of associative memory / 4:
Temporal sequences of patterns / 5:
The problem of learning in neural networks / 6:
Learning dynamics in 'visible' neural networks / 7:
Solving the problem of credit assignment / 8:
Self-organization / 9:
Neurocomputation / 10:
Neurocomputers / 11:
A critical view of the modeling of neural networks / 12:
References
Index
46.
Book
A.F. Rocha
Publication: Berlin ; New York : Springer-Verlag, c1992  xv, 393 p. ; 25 cm
Series: Lecture notes in computer science ; 638 . Lecture notes in artificial intelligence
47.
Book
Bart Kosko, editor
Publication: Englewood Cliffs, N.J. : Prentice Hall, c1992  xv, 399 p. ; 25 cm
48.
Book
Paolo Antognetti and Veljko Milutinović, editors
Publication: Englewood Cliffs, N.J. : Prentice Hall, 1991  4 v. ; 24 cm
Series: Prentice Hall advanced reference series ; . Engineering
49.
Book
by David P. Morgan, Christopher L. Scofield ; foreword by Leon N. Cooper
Publication: Boston : Kluwer Academic Publishers, c1991  xvi, 391 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; . VLSI, computer architecture, and digital signal processing
50.
Book
E. Domany, J.L. van Hemmen, K. Schulten, (eds.) ; [contributors, V. Braitenberg ... et al.]
Publication: Berlin ; Tokyo : Springer-Verlag, c1991  xvi, 347 p. ; 25 cm
Series: Physics of neural networks
51.
Book
Hermann Haken
Publication: Berlin ; Tokyo : Springer-Verlag, c1991  ix, 225 p. ; 25 cm
Series: Springer series in synergetics ; v. 50
52.
Book
edited by R. Linggard, D.J. Myers and C. Nightingale
Publication: London ; Tokyo : Chapman & Hall, c1992  xii, 442 p. ; 24 cm
Series: BT telecommunications series ; 1
53.
Book
edited by Gail A. Carpenter and Stephen Grossberg
Publication: Cambridge, Mass. ; London : MIT Press, c1992  467 p. ; 26 cm
54.
Book
Marilyn McCord Nelson and W.T. Illingworth
Publication: Reading, Mass. : Addison-Wesley, c1991  xxii, 344 p. ; 25 cm
55.
Book
C. Lee Giles, Marco Gori, eds
Publication: Berlin ; New York : Springer, c1998  xii, 434 p. ; 24 cm
Series: Lecture notes in computer science ; 1387 . Lecture notes in artificial intelligence
56.
Book
edited by Leon O. Chua ... [et al.]
Publication: Boston : Kluwer Academic Publishers, c1998  103 p. ; 27 cm
Table of contents:
Guest Editorial / L. Chua ; E. Pierzchala ; G. Gulak ; A. Rodriguez-Vazquez
A 16 x 16 Cellular Neural Network Universal Chip: The First Complete Single-Chip Dynamic Computer Array with Distributed Memory and with Gray-Scale Input-Output / J. M. Cruz ; L. O. Chua
A 6 x 6 Cells Interconnection-Oriented Programmable Chip for CNN / M. Salerno ; F. Sargeni ; Vincenzo Bonaiuto
Analog VLSI Design Constraints of Programmable Cellular Neural Networks / P. Kinget ; M. Steyaert
Focal-Plane and Multiple Chip VLSI Approaches to CNNs / M. Anguita ; F. J. Pelayo ; E. Ros ; D. Palomar ; A. Prieto
Architecture and Design of 1-D Enhanced Cellular Neural Network Processors for Signal Detection / M. Y. Wang ; B. J. Sheu ; T. W. Berger ; W. C. Young ; A. K. Cho
Analog VLSI Circuits for Competitive Learning Networks / H. C. Card ; D. K. McNeill ; C. R. Schneider
Design of Neural Networks Based on Wave-Parallel Computing Technique / Y. Yuminaka ; Y. Sasaki ; T. Aoki ; T. Higuchi
57.
Book
Stephen I. Gallant
Publication: Cambridge, Mass. : MIT Press, c1993  xvi, 365 p. ; 24 cm
Series: Bradford book
Table of contents:
Foreword
Basics / I:
Introduction and Important Definitions / 1:
Why Connectionist Models? / 1.1:
The Grand Goals of Al and Its Current Impasse / 1.1.1:
The Computational Appeal of Neural Networks / 1.1.2:
The Structure of Connectionist Models / 1.2:
Network Properties / 1.2.1:
Cell Properties / 1.2.2:
Dynamic Properties / 1.2.3:
Learning Properties / 1.2.4:
Two Fundamental Models: Multilayer Perceptrons (MLP's) and Backpropagation Networks (BPN's) / 1.3:
Multilayer Perceptrons (MLP's) / 1.3.1:
Backpropagation Networks (BPN's) / 1.3.2:
Gradient Descent / 1.4:
The Algorithm / 1.4.1:
Practical Problems / 1.4.2:
Comments / 1.4.3:
Historic and Bibliographic Notes / 1.5:
Early Work / 1.5.1:
The Decline of the Perceptron / 1.5.2:
The Rise of Connectionist Research / 1.5.3:
Other Bibliographic Notes / 1.5.4:
Exercises / 1.6:
Programming Project / 1.7:
Representation Issues / 2:
Representing Boolean Functions / 2.1:
Equivalence of {+1, -1,0} and {1,0} Forms / 2.1.1:
Single-Cell Models / 2.1.2:
Nonseparable Functions / 2.1.3:
Representing Arbitrary Boolean Functions / 2.1.4:
Representing Boolean Functions Using Continuous Connectionist Models / 2.1.5:
Distributed Representations / 2.2:
Definition / 2.2.1:
Storage Efficiency and Resistance to Error / 2.2.2:
Superposition / 2.2.3:
Learning / 2.2.4:
Feature Spaces and ISA Relations / 2.3:
Feature Spaces / 2.3.1:
Concept-Function Unification / 2.3.2:
ISA Relations / 2.3.3:
Binding / 2.3.4:
Representing Real-Valued Functions / 2.4:
Approximating Real Numbers by Collections of Discrete Cells / 2.4.1:
Precision / 2.4.2:
Approximating Real Numbers by Collections of Continuous Cells / 2.4.3:
Example: Taxtime! / 2.5:
Programming Projects / 2.6:
Learning In Single-Layer Models / II:
Perceptron Learning and the Pocket Algorithm / 3:
Perceptron Learning for Separable Sets of Training Examples / 3.1:
Statement of the Problem / 3.1.1:
Computing the Bias / 3.1.2:
The Perceptron Learning Algorithm / 3.1.3:
Perceptron Convergence Theorem / 3.1.4:
The Perceptron Cycling Theorem / 3.1.5:
The Pocket Algorithm for Nonseparable Sets of Training Examples / 3.2:
Problem Statement / 3.2.1:
Perceptron Learning Is Poorly Behaved / 3.2.2:
The Pocket Algorithm / 3.2.3:
Ratchets / 3.2.4:
Examples / 3.2.5:
Noisy and Contradictory Sets of Training Examples / 3.2.6:
Rules / 3.2.7:
Implementation Considerations / 3.2.8:
Proof of the Pocket Convergence Theorem / 3.2.9:
Khachiyan's Linear Programming Algorithm / 3.3:
Winner-Take-All Groups or Linear Machines / 3.4:
Generalizes Single-Cell Models / 4.1:
Perceptron Learning for Winner-Take-All Groups / 4.2:
The Pocket Algorithm for Winner-Take-All Groups / 4.3:
Kessler's Construction, Perceptron Cycling, and the Pocket Algorithm Proof / 4.4:
Independent Training / 4.5:
Autoassociators and One-Shot Learning / 4.6:
Linear Autoassociators and the Outer-Product Training Rule / 5.1:
Anderson's BSB Model / 5.2:
Hopfield's Model / 5.3:
Energy / 5.3.1:
The Traveling Salesman Problem / 5.4:
The Cohen-Grossberg Theorem / 5.5:
Kanerva's Model / 5.6:
Autoassociative Filtering for Feedforward Networks / 5.7:
Concluding Remarks / 5.8:
Mean Squared Error (MSE) Algorithms / 5.9:
Motivation / 6.1:
MSE Approximations / 6.2:
The Widrow-Hoff Rule or LMS Algorithm / 6.3:
Number of Training Examples Required / 6.3.1:
Adaline / 6.4:
Adaptive Noise Cancellation / 6.5:
Decision-Directed Learning / 6.6:
Unsupervised Learning / 6.7:
Introduction / 7.1:
No Teacher / 7.1.1:
Clustering Algorithms / 7.1.2:
k-Means Clustering / 7.2:
Topology-Preserving Maps / 7.2.1:
Example / 7.3.1:
Demonstrations / 7.3.4:
Dimensionality, Neighborhood Size, and Final Comments / 7.3.5:
Art1 / 7.4:
Important Aspects of the Algorithm / 7.4.1:
Art2 / 7.4.2:
Using Clustering Algorithms for Supervised Learning / 7.6:
Labeling Clusters / 7.6.1:
ARTMAP or Supervised ART / 7.6.2:
Learning In Multilayer Models / 7.7:
The Distributed Method and Radial Basis Functions / 8:
Rosenblatt's Approach / 8.1:
The Distributed Method / 8.2:
Cover's Formula / 8.2.1:
Robustness-Preserving Functions / 8.2.2:
Hepatobiliary Data / 8.3:
Artificial Data / 8.3.2:
How Many Cells? / 8.4:
Pruning Data / 8.4.1:
Leave-One-Out / 8.4.2:
Radial Basis Functions / 8.5:
A Variant: The Anchor Algorithm / 8.6:
Scaling, Multiple Outputs, and Parallelism / 8.7:
Scaling Properties / 8.7.1:
Multiple Outputs and Parallelism / 8.7.2:
A Computational Speedup for Learning / 8.7.3:
Computational Learning Theory and the BRD Algorithm / 8.7.4:
Introduction to Computational Learning Theory / 9.1:
PAC-Learning / 9.1.1:
Bounded Distributed Connectionist Networks / 9.1.2:
Probabilistic Bounded Distributed Concepts / 9.1.3:
A Learning Algorithm for Probabilistic Bounded Distributed Concepts / 9.2:
The BRD Theorem / 9.3:
Polynomial Learning / 9.3.1:
Noisy Data and Fallback Estimates / 9.4:
Vapnik-Chervonenkis Bounds / 9.4.1:
Hoeffding and Chernoff Bounds / 9.4.2:
Pocket Algorithm / 9.4.3:
Additional Training Examples / 9.4.4:
Bounds for Single-Layer Algorithms / 9.5:
Fitting Data by Limiting the Number of Iterations / 9.6:
Discussion / 9.7:
Exercise / 9.8:
Constructive Algorithms / 9.9:
The Tower and Pyramid Algorithms / 10.1:
The Tower Algorithm / 10.1.1:
Proof of Convergence / 10.1.2:
A Computational Speedup / 10.1.4:
The Pyramid Algorithm / 10.1.5:
The Cascade-Correlation Algorithm / 10.2:
The Tiling Algorithm / 10.3:
The Upstart Algorithm / 10.4:
Other Constructive Algorithms and Pruning / 10.5:
Easy Learning Problems / 10.6:
Decomposition / 10.6.1:
Expandable Network Problems / 10.6.2:
Limits of Easy Learning / 10.6.3:
Backpropagation / 10.7:
The Backpropagation Algorithm / 11.1:
Statement of the Algorithm / 11.1.1:
A Numerical Example / 11.1.2:
Derivation / 11.2:
Practical Considerations / 11.3:
Determination of Correct Outputs / 11.3.1:
Initial Weights / 11.3.2:
Choice of r / 11.3.3:
Momentum / 11.3.4:
Network Topology / 11.3.5:
Local Minima / 11.3.6:
Activations in [0,1] versus [-1, 1] / 11.3.7:
Update after Every Training Example / 11.3.8:
Other Squashing Functions / 11.3.9:
NP-Completeness / 11.4:
Overuse / 11.5:
Interesting Intermediate Cells / 11.5.2:
Continuous Outputs / 11.5.3:
Probability Outputs / 11.5.4:
Using Backpropagation to Train Multilayer Perceptrons / 11.5.5:
Backpropagation: Variations and Applications / 11.6:
NETtalk / 12.1:
Input and Output Representations / 12.1.1:
Experiments / 12.1.2:
Backpropagation through Time / 12.1.3:
Handwritten Character Recognition / 12.3:
Neocognitron Architecture / 12.3.1:
The Network / 12.3.2:
Robot Manipulator with Excess Degrees of Freedom / 12.3.3:
The Problem / 12.4.1:
Training the Inverse Network / 12.4.2:
Plan Units / 12.4.3:
Simulated Annealing and Boltzmann Machines / 12.4.4:
Simulated Annealing / 13.1:
Boltzmann Machines / 13.2:
The Boltzmann Model / 13.2.1:
Boltzmann Learning / 13.2.2:
The Boltzmann Algorithm and Noise Clamping / 13.2.3:
Example: The 4-2-4 Encoder Problem / 13.2.4:
Remarks / 13.3:
Neural Network Expert Systems / 13.4:
Expert Systems and Neural Networks / 14:
Expert Systems / 14.1:
What Is an Expert System? / 14.1.1:
Why Expert Systems? / 14.1.2:
Historically Important Expert Systems / 14.1.3:
Critique of Conventional Expert Systems / 14.1.4:
Neural Network Decision Systems / 14.2:
Example: Diagnosis of Acute Coronary Occlusion / 14.2.1:
Example: Autonomous Navigation / 14.2.2:
Other Examples / 14.2.3:
Decision Systems versus Expert Systems / 14.2.4:
MACIE, and an Example Problem / 14.3:
Diagnosis and Treatment of Acute Sarcophagal Disease / 14.3.1:
Network Generation / 14.3.2:
Sample Run of Macie / 14.3.3:
Real-Valued Variables and Winner-Take-All Groups / 14.3.4:
Not-Yet-Known versus Unavailable Variables / 14.3.5:
Applicability of Neural Network Expert Systems / 14.4:
Details of the MACIE System / 14.5:
Inferencing and Forward Chaining / 15.1:
Discrete Multilayer Perceptron Models / 15.1.1:
Continuous Variables / 15.1.2:
Winner-Take-All Groups / 15.1.3:
Using Prior Probabilities for More Aggressive Inferencing / 15.1.4:
Confidence Estimation / 15.2:
A Confidence Heuristic Prior to Inference / 15.2.1:
Confidence in Inferences / 15.2.2:
Information Acquisition and Backward Chaining / 15.3:
Concluding Comment / 15.4:
Noise, Redundancy, Fault Detection, and Bayesian Decision Theory / 15.5:
The High Tech Lemonade Corporation's Problem / 16.1:
The Deep Model and the Noise Model / 16.2:
Generating the Expert System / 16.3:
Probabilistic Analysis / 16.4:
Noisy Single-Pattern Boolean Fault Detection Problems / 16.5:
Convergence Theorem / 16.6:
Extracting Rules from Networks / 16.7:
Why Rules? / 17.1:
What Kind of Rules? / 17.2:
Criteria / 17.2.1:
Inference Justifications versus Rule Sets / 17.2.2:
Which Variables in Conditions / 17.2.3:
Inference Justifications / 17.3:
MACIE's Algorithm / 17.3.1:
The Removal Algorithm / 17.3.2:
Key Factor Justifications / 17.3.3:
Justifications for Continuous Models / 17.3.4:
Rule Sets / 17.4:
Limiting the Number of Conditions / 17.4.1:
Approximating Rules / 17.4.2:
Conventional + Neural Network Expert Systems / 17.5:
Debugging an Expert System Knowledge Base / 17.5.1:
The Short-Rule Debugging Cycle / 17.5.2:
Appendix Representation Comparisons / 17.6:
DNF Expressions and Polynomial Representability / A.1:
DNF Expressions / A.1.1:
Polynomial Representability / A.1.2:
Space Comparison of MLP and DNF Representations / A.1.3:
Speed Comparison of MLP and DNF Representations / A.1.4:
MLP versus DNF Representations / A.1.5:
Decision Trees / A.2:
Representing Decision Trees by MLP's / A.2.1:
Speed Comparison / A.2.2:
Decision Trees versus MLP's / A.2.3:
p-l Diagrams / A.3:
Symmetric Functions and Depth Complexity / A.4:
Bibliography / A.5:
Index
58.
Book
Tomas Hrycej
Publication: New York, NY : Wiley, c1992  xiii, 235 p. ; 25 cm
Series: Sixth-generation computer technology series
59.
Book
edited by Alan F. Murray
Publication: Dordrecht ; Boston : Kluwer Academic Publishers, c1995  xii, 322 p. ; 25 cm
60.
Book
by Anne-Johan Annema
Publication: Boston : Kluwer Academic Publishers, c1995  xiii, 238 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; Analog circuits and signal processing
61.
Book
edited by J.G. Taylor
Publication: Chichester : J. Wiley, 1996  xxvii, 293 p. ; 25 cm
62.
Book
edited by Wolfgang Maass, Christopher M. Bishop
Publication: Cambridge, Mass. : MIT Press, c1999  xxix, 377 p. ; 26 cm
Table of contents:
Foreword
Neural Pulse Coding
Spike Timing
Population Codes
Hippocampal Place Field
Hardware Models
References
Preface
The Isaac Newton Institute
Overview of the Book
Acknowledgments
Contributors
Basic Concepts and Models / Part I:
Spiking Neurons / 1:
The Problem of Neural Coding / 1.1:
Motivation / 1.1.1:
Rate Codes / 1.1.2:
Rate as a Spike Count (Average over Time) / 1.1.2.1:
Rate as a Spike Density (Average over Several Runs) / 1.1.2.2:
Rate as Population Activity (Average over Several Neurons) / 1.1.2.3:
Candidate Pulse Codes / 1.1.3:
Time-to-First-Spike / 1.1.3.1:
Phase / 1.1.3.2:
Correlations and Synchrony / 1.1.3.3:
Stimulus Reconstruction and Reverse Correlation / 1.1.3.4:
Discussion: Spikes or Rates? / 1.1.4:
Neuron Models / 1.2:
Simple Spiking Neuron Model / 1.2.1:
First Steps towards Coding by Spikes / 1.2.2:
Threshold-Fire Models / 1.2.3:
Spike Response Model -- Further Details / 1.2.3.1:
Integrate-and-Fire Model / 1.2.3.2:
Models of Noise / 1.2.3.3:
Conductance-Based Models / 1.2.4:
Hodgkin-Huxley Model / 1.2.4.1:
Relation to the Spike Response Model / 1.2.4.2:
Compartmental Models / 1.2.4.3:
Rate Models / 1.2.5:
Conclusions / 1.3:
Computing with Spiking Neurons / 2:
Introduction / 2.1:
A Formal Computational Model for a Network of Spiking Neurons / 2.2:
McCulloch-Pitts Neurons versus Spiking Neurons / 2.3:
Computing with Temporal Patterns / 2.4:
Coincidence Detection / 2.4.1:
RBF-Units in the Temporal Domain / 2.4.2:
Computing a Weighted Sum in Temporal Coding / 2.4.3:
Universal Approximation of Continuous Functions with Spiking Neurons / 2.4.4:
Other Computations with Temporal Patterns in Networks of Spiking Neurons / 2.4.5:
Computing with a Space-Rate Code / 2.5:
Computing with Firing Rates / 2.6:
Computing with Firing Rates and Temporal Correlations / 2.7:
Networks of Spiking Neurons for Storing and Retrieving Information / 2.8:
Computing on Spike Trains / 2.9:
Pulse-Based Computation in VLSI Neural Networks / 2.10:
Background / 3.1:
Pulsed Coding: A VLSI Perspective / 3.2:
Pulse Amplitude Modulation / 3.2.1:
Pulse Width Modulation / 3.2.2:
Pulse Frequency Modulation / 3.2.3:
Phase or Delay Modulation / 3.2.4:
Noise, Robustness, Accuracy and Speed / 3.2.5:
A MOSFET Introduction / 3.3:
Subthreshold Circuits for Neural Networks / 3.3.1:
Pulse Generation in VLSI / 3.4:
Pulse Intercommunication / 3.4.1:
Pulsed Arithmetic in VLSI / 3.5:
Addition of Pulse Stream Signals / 3.5.1:
Multiplication of Pulse Stream Signals / 3.5.2:
MOS Transconductance Multiplier / 3.5.3:
MOSFET Analog Multiplier / 3.5.4:
Learning in Pulsed Systems / 3.6:
Summary and Issues Raised / 3.7:
Encoding Information in Neuronal Activity / 4:
Synchronization and Oscillations / 4.1:
Temporal Binding / 4.3:
Phase Coding / 4.4:
Dynamic Range and Firing Rate Codes / 4.5:
Interspike Interval Variability / 4.6:
Synapses and Rate Coding / 4.7:
Summary and Implications / 4.8:
Implementations / Part II:
Building Silicon Nervous Systems with Dendritic Tree Neuromorphs / 5:
Why Spikes? / 5.1:
Dendritic Processing of Spikes / 5.1.2:
Tunability / 5.1.3:
Implementation in VLSI / 5.2:
Artificial Dendrites / 5.2.1:
Synapses / 5.2.2:
Dendritic Non-Linearities / 5.2.3:
Spike-Generating Soma / 5.2.4:
Excitability Control / 5.2.5:
Spike Distribution -- Virtual Wires / 5.2.6:
Neuromorphs in Action / 5.3:
Feedback to Threshold-Setting Synapses / 5.3.1:
Discrimination of Complex Spatio-Temporal Patterns / 5.3.2:
Processing of Temporally Encoded Information / 5.3.3:
A Pulse-Coded Communications Infrastructure for Neuromorphic Systems / 5.4:
Neuromorphic Computational Nodes / 6.1:
Neuromorphic aVLSI Neurons / 6.3:
Address Event Representation (AER) / 6.4:
Implementations of AER / 6.5:
Silicon Cortex / 6.6:
Basic Layout / 6.6.1:
Functional Tests of Silicon Cortex / 6.7:
An Example Neuronal Network / 6.7.1:
An Example of Sensory Input to SCX / 6.7.2:
Future Research on AER Neuromorphic Systems / 6.8:
Acknowledgements
Analog VLSI Pulsed Networks for Perceptive Processing / 7:
Analog Perceptive Nets Communication Requirements / 7.1:
Coding Information with Pulses / 7.2.1:
Multiplexing of the Signals Issued by Each Neuron / 7.2.2:
Non-Arbitered PFM Communication / 7.2.3:
Analysis of the NAPFM Communication Systems / 7.3:
Statistical Assumptions / 7.3.1:
Detection / 7.3.2:
Detection by Time-Windowing / 7.3.2.1:
Direct Interpulse Time Measurement / 7.3.2.2:
Performance / 7.3.3:
Data Dependency of System Performance / 7.3.3.1:
Discussion / 7.3.5:
Detection by Direct Interpulse Time Measurement / 7.3.5.1:
Address Coding / 7.4:
Silicon Retina Equipped with the NAPFM Communication System / 7.5:
Circuit Description / 7.5.1:
Noise Measurement Results / 7.5.2:
Projective Field Generation / 7.6:
Overview / 7.6.1:
Anisotropic Current Pulse Spreading in a Nonlinear Network / 7.6.2:
Analysis of the Spatial Response of the Nonlinear Network / 7.6.3:
Analysis of the Size and Shape of the Bubbles Generable by the Nonlinear Network / 7.6.4:
Description of the Integrated Circuit for Orientation Enhancement / 7.7:
System Measurement Results / 7.7.1:
Other Applications / 7.7.4:
Weighted Projective Field Generation / 7.7.4.1:
Complex Projective Field Generation / 7.7.4.2:
Display Interface / 7.8:
Conclusion / 7.9:
Preprocessing for Pulsed Neural VLSI Systems / 8:
A Sound Segmentation System / 8.1:
Signal Processing in Analog VLSI / 8.3:
Continuous Time Active Filters / 8.3.1:
Sampled Data Active Switched Capacitor (SC) Filters / 8.3.2:
Sampled Data Active Switched Current (SI) Filters / 8.3.3:
Palmo -- Pulse Based Signal Processing / 8.3.4:
Basic Palmo Concepts / 8.4.1:
The Palmo Signal Representation / 8.4.1.1:
The Analog Palmo Cell / 8.4.1.2:
A Palmo Signal Processing System / 8.4.1.3:
Sources of Harmonic Distortion in a Palmo System / 8.4.1.4:
A CMOS Analog Palmo Cell Implementation / 8.4.2:
The Analog Palmo Cell: Details of Circuit Operation / 8.4.2.1:
Interconnecting Analog Palmo Cells / 8.4.3:
Results from a Palmo VLSI Device / 8.4.4:
Digital Processing of Palmo Signals / 8.4.5:
CMOS Analog Palmo Cell: Performance / 8.4.6:
Further Work / 8.5:
Digital Simulation of Spiking Neural Networks / 8.7:
Implementation Issues of Pulse-Coded Neural Networks / 9.1:
Discrete-Time Simulation / 9.2.1:
Requisite Arithmetic Precision / 9.2.2:
Basic Procedures of Network Computation / 9.2.3:
Programming Environment / 9.3:
Concepts of Efficient Simulation / 9.4:
Mapping Neural Networks on Parallel Computers / 9.5:
Neuron-Parallelism / 9.5.1:
Synapse-Parallelism / 9.5.2:
Pattern-Parallelism / 9.5.3:
Partitioning of the Network / 9.5.4:
Performance Study / 9.6:
Single PE Workstations / 9.6.1:
Neurocomputer / 9.6.2:
Parallel Computers / 9.6.3:
Results of the Performance Study / 9.6.4:
Design and Analysis of Pulsed Neural Systems / 9.6.5:
Populations of Spiking Neurons / 10:
Model / 10.1:
Population Activity Equation / 10.3:
Integral Equation for the Dynamics / 10.3.1:
Normalization / 10.3.2:
Noise-Free Population Dynamics / 10.4:
Locking / 10.5:
Locking Condition / 10.5.1:
Graphical Interpretation / 10.5.2:
Transients / 10.6:
Incoherent Firing / 10.7:
Determination of the Activity / 10.7.1:
Stability of Asynchronous Firing / 10.7.2:
Collective Excitation Phenomena and Their Applications / 10.8:
Two Variable Formulation of IAF Neurons / 11.1:
Synchronization of Pulse Coupled Oscillators / 11.2:
Clustering via Temporal Segmentation / 11.3:
Limits on Temporal Segmentation / 11.4:
Image Analysis / 11.5:
Image Segmentation / 11.5.1:
Edge Detection / 11.5.2:
Solitary Waves / 11.6:
The Importance of Noise / 11.7:
Acknowledgment / 11.8:
Computing and Learning with Dynamic Synapses / 12:
Biological Data on Dynamic Synapses / 12.1:
Quantitative Models / 12.3:
On the Computational Role of Dynamic Synapses / 12.4:
Implications for Learning in Pulsed Neural Nets / 12.5:
Stochastic Bit-Stream Neural Networks / 12.6:
Basic Neural Modelling / 13.1:
Feedforward Networks and Learning / 13.3:
Probability Level Learning / 13.3.1:
Bit-Stream Level Learning / 13.3.2:
Generalization Analysis / 13.4:
Recurrent Networks / 13.5:
Applications to Graph Colouring / 13.6:
Hardware Implementation / 13.7:
The Stochastic Neuron / 13.7.1:
Calculating Output Derivatives / 13.7.2:
Generating Stochastic Bit-Streams / 13.7.3:
Hebbian Learning of Pulse Timing in the Barn Owl Auditory System / 13.7.4:
Hebbian Learning / 14.1:
Review of Standard Formulations / 14.2.1:
Spike-Based Learning / 14.2.2:
Example / 14.2.3:
Learning Window / 14.2.4:
Barn Owl Auditory System / 14.3:
The Localization Task / 14.3.1:
Auditory Localization Pathway / 14.3.2:
Phase Locking / 14.4:
Neuron Model / 14.4.1:
Phase Locking -- Schematic / 14.4.2:
Simulation Results / 14.4.3:
Delay Tuning by Hebbian Learning / 14.5:
Selection of Delays / 14.5.1:
63.

Book
Pierre Baldi, Søren Brunak
Publication info: Cambridge, Mass. : The MIT Press, 1998  xviii, 351 p., [8] p. of plates ; 24 cm
Series: Adaptive computation and machine learning
Bradford book
Table of contents:
Series Foreword
Preface
Introduction / 1:
Biological Data in Digital Symbol Sequences / 1.1:
Genomes--Diversity, Size, and Structure / 1.2:
Proteins and Proteomes / 1.3:
On the Information Content of Biological Sequences / 1.4:
Prediction of Molecular Function and Structure / 1.5:
Machine Learning Foundations: The Probabilistic Framework / 2:
Introduction: Bayesian Modeling / 2.1:
The Cox-Jaynes Axioms / 2.2:
Bayesian Inference and Induction / 2.3:
Model Structures: Graphical Models and Other Tricks / 2.4:
Summary / 2.5:
Probabilistic Modeling and Inference: Examples / 3:
The Simplest Sequence Models / 3.1:
Statistical Mechanics / 3.2:
Machine Learning Algorithms / 4:
Dynamic Programming / 4.1:
Gradient Descent / 4.3:
EM/GEM Algorithms / 4.4:
Markov Chain Monte Carlo Methods / 4.5:
Simulated Annealing / 4.6:
Evolutionary and Genetic Algorithms / 4.7:
Learning Algorithms: Miscellaneous Aspects / 4.8:
Neural Networks: The Theory / 5:
Universal Approximation Properties / 5.1:
Priors and Likelihoods / 5.3:
Learning Algorithms: Backpropagation / 5.4:
Neural Networks: Applications / 6:
Sequence Encoding and Output Interpretation / 6.1:
Prediction of Protein Secondary Structure / 6.2:
Prediction of Signal Peptides and Their Cleavage Sites / 6.3:
Applications for DNA and RNA Nucleotide Sequences / 6.4:
Hidden Markov Models: The Theory / 7:
Prior Information and Initialization / 7.1:
Likelihood and Basic Algorithms / 7.3:
Learning Algorithms / 7.4:
Applications of HMMs: General Aspects / 7.5:
Hidden Markov Models: Applications / 8:
Protein Applications / 8.1:
DNA and RNA Applications / 8.2:
Conclusion: Advantages and Limitations of HMMs / 8.3:
Hybrid Systems: Hidden Markov Models and Neural Networks / 9:
Introduction to Hybrid Models / 9.1:
The Single-Model Case / 9.2:
The Multiple-Model Case / 9.3:
Simulation Results / 9.4:
Probabilistic Models of Evolution: Phylogenetic Trees / 9.5:
Introduction to Probabilistic Models of Evolution / 10.1:
Substitution Probabilities and Evolutionary Rates / 10.2:
Rates of Evolution / 10.3:
Data Likelihood / 10.4:
Optimal Trees and Learning / 10.5:
Parsimony / 10.6:
Extensions / 10.7:
Stochastic Grammars and Linguistics / 11:
Introduction to Formal Grammars / 11.1:
Formal Grammars and the Chomsky Hierarchy / 11.2:
Applications of Grammars to Biological Sequences / 11.3:
Likelihood / 11.4:
Applications of SCFGs / 11.6:
Experiments / 11.8:
Future Directions / 11.9:
Internet Resources and Public Databases / 12:
A Rapidly Changing Set of Resources / 12.1:
Databases over Databases and Tools / 12.2:
Databases over Databases / 12.3:
Databases / 12.4:
Sequence Similarity Searches / 12.5:
Alignment / 12.6:
Selected Prediction Servers / 12.7:
Molecular Biology Software Links / 12.8:
Ph.D. Courses over the Internet / 12.9:
HMM/NN Simulator / 12.10:
Statistics / A:
Decision Theory and Loss Functions / A.1:
Quadratic Loss Functions / A.2:
The Bias/Variance Trade-off / A.3:
Combining Estimators / A.4:
Error Bars / A.5:
Sufficient Statistics / A.6:
Exponential Family / A.7:
Gaussian Process Models / A.8:
Variational Methods / A.9:
Information Theory, Entropy, and Relative Entropy / B:
Entropy / B.1:
Relative Entropy / B.2:
Mutual Information / B.3:
Jensen's Inequality / B.4:
Maximum Entropy / B.5:
Minimum Relative Entropy / B.6:
Probabilistic Graphical Models / C:
Notation and Preliminaries / C.1:
The Undirected Case: Markov Random Fields / C.2:
The Directed Case: Bayesian Networks / C.3:
HMM Technicalities, Scaling, Periodic Architectures, State Functions, and Dirichlet Mixtures / D:
Scaling / D.1:
Periodic Architectures / D.2:
State Functions: Bendability / D.3:
Dirichlet Mixtures / D.4:
List of Main Symbols and Abbreviations / E:
References
Index
64.

Book
edited by Omid Omidvar, Judith Dayhoff
Publication info: San Diego, Calif. : Academic Press, c1998  xvi, 351 p. ; 24 cm
65.

Book
edited by A.B. Bulsari
Publication info: Amsterdam ; New York : Elsevier, 1995  ix, 680 p. ; 25 cm
Series: Computer-aided chemical engineering ; 6
66.

Book
edited by Gail A. Carpenter and Stephen Grossberg
Publication info: Cambridge, Mass. : MIT Press, c1991  691 p. ; 26 cm
Table of contents:
List Of Authors
Editorial Preface
Neural Network Models for Pattern Recognition and Associative Memory / Chapter 1:
Preface
Abstract
Introduction / 1:
The McCulloch-Pitts Neuron / 2:
Adaptive Filter Formalism / 3:
Logical Calculus and Invariant Patterns / 4:
Perceptrons and Back-Coupled Error Correction / 5:
Adaline and Madaline / 6:
Multi-Level Perceptrons: Early Back Propagation / 7:
Later Back Propagation / 8:
Hebbian Learning / 9:
The Learning Matrix / 10:
Linear Associative Memory (LAM) / 11:
Real-Time Models and Embedding Fields / 12:
Instars and Outstars / 13:
Additive and Shunting Activation Equations / 14:
Learning Equations / 15:
Learning Space-Time Patterns: The Avalanche / 16:
Adaptive Coding and Category Formation / 17:
Shunting Competitive Networks / 18:
Competitive Learning / 19:
Computational Maps / 20:
Instability of Computational Maps / 21:
Adaptive Resonance Theory (ART) / 22:
ART for Associative Memory / 23:
Cognitron and Neocognitron / 24:
Simulated Annealing / 25:
Conclusion / 26:
Bibliography
References
Nonlinear Neural Networks: Principles, Mechanisms, and Architectures / Chapter 2:
Reference
Interdisciplinary Studies during the Nineteenth Century: Helmholtz, Maxwell, and Mach
The Schism between Physics and Psychology
The Nonlinear, Nonlocal, and Nonstationary Phenomena of Mind and Brain
Color Theory / A:
Top-Down Learning, Expectation, and Matching / B:
The Nature of an Enduring Synthesis
Sources of Neural Network Research: Binary, Linear, Continuous-Nonlinear
Binary
Linear
Continuous-Nonlinear / C:
Nonlinear Feedback between Fast Distributed STM Processing and Slow Associative LTM Processing
Principles, Mechanisms, and Architectures
Content-Addressable Memory Storage: A General STM Model and Liapunov Method
Other Liapunov Methods
Testing the Global Consistency of Decisions in Competitive Systems
Stable Production Strategies for a Competitive Market
Sensitive Variable-Load Parallel Processing by Shunting Cooperative-Competitive Networks: Automa...
Physiological Interpretation of Shunting Mechanisms as a Membrane Equation
Sigmoid Feedback, Contrast Enhancement, and Short Term Memory Storage by Shunting Feedback Netwo...
Competitive Learning Models
Stable Self-Organization of Pattern Recognition Codes
Internally Regulated Learning and Performance in Neural Models of Sensory-Motor Control: Adaptiv...
External Error Signals for Learning Adaptive Movement Gains: Push-Pull Opponent Processing
Match-Invariants: Internally Regulated Learning of an Invariant Self-Regulating Target Position ...
Presynaptic Competition for Long Term Memory: Self-Regulating Competitive Learning
Neural Pattern Discrimination / Chapter 3:
Connection with Learning Theory
Local Temporal Discrimination
Choices between Incompatible Behaviors: "Majority Rule" in Non-Recurrent "Interference Patterns"
Local Temporal Generalization: Variable Velocities of Motor Performance
Why are Sensory Pathways in Different Modalities Anatomically Different if Universal Discriminato...
Unselective Filtering of Spatial Patterns by Excitatory Networks
Two Stages of Non-Recurrent Inhibition for Pattern Discrimination
Specific versus Nonspecific Inhibitory Interneurons, Inhibition at the Axon Hillock, Presynaptic ...
Pattern Normalization in Type I Networks
High-Band Filters
Discrimination of Space-Time Patterns
Velocity and Orientation Detectors
Alternative Mechanisms of Pattern Normalization: Saturating Potentials in an On-Off Field, or Lo...
Proof of Proposition 1 / Appendix A:
Proof of Lemma 2 / Appendix B:
Proof of Theorem 1 / Appendix C:
Proof of Lemma 3 / Appendix D:
Proof of Proposition 3 / Appendix E:
Proof of Corollary 1 / Appendix F:
Proof of Proposition 4 / Appendix G:
Neural Expectation: Cerebellar and Retinal Analogs of Cells Fired by Learnable or Unlearned Pattern Classes / Chapter 4:
Theoretical Review
Retinal Analog of R Cells
A Learnable Preset Mechanism: Subtractive Case
Cerebellar Analogs of U Cells
A Learnable Present Mechanism: Multiplicative Case
Self-Organization of Orientation Sensitive Cells in the Striate Cortex / Chapter 5:
The Model / I:
The Elements / a:
The Wiring of the Model / b:
The Afferent Organization / c:
The Learning Principle / d:
The Function of the Model / III:
Basic Equations of Evolution
Specification of Details
The Procedure of the Numerical Calculations
The Results without Learning
The Results after a Learning Phase / e:
The Effect of Non-Standard Stimuli
The Sensitivity to Nonspecific Input / g:
Redundancy of Information Storage / h:
Discussion / IV:
Structure of the Model and Generalizations
Comparison with Experiments
Adaptive Pattern Classification and Universal Recoding, I: Parallel Development and Coding of Neural Feature Detectors / Chapter 6:
The Tuning Process
Ritualistic Pattern Classification
Shunts versus Additive Interactions as Mechanisms of Pattern Classification
What Do Retinal Amacrine Cells Do?
Arousal as a Tuning Mechanism
Arousal as a Search Mechanism
Development of an STM Code
Appendix
The "Neural" Phonetic Typewriter / Chapter 7:
Why is speech recognition difficult?
The promise of neural computers
Acoustic preprocessing
Vector quantization
The neural network
Shortcut learning algorithm
Phonotopic maps
Postprocessing in symbolic form
Hardware implementations and performance
Counterpropagation Networks / Chapter 8:
Counterpropagation Network
CPN Error Analysis
CPN Variants and Evolutes
Adaptive Pattern Classification and Universal Recoding, II: Feedback, Expectation, Olfaction, and Illusions / V:
Adaptive Resonance: Stable Coding and Reset of STM
Adaptive Resonance in Reinforcement, Motivation, and Attention
Search and Lock Mechanism
Olfactory Coding and Learned Expectation
Modulation of Nonspecific Arousal by a Learned Expectation Mechanism
Universal Recoding
Search
Slow Noradrenergic Transmitter Accumulation-Depletion as a Search Mechanism
Spatial Frequency Adaptation
Afterimages
A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine / Chapter 10:
Introduction: Self-Organization of Neural Recognition Codes
Self-Scaling Computational Units, Self-Adjusting Memory Search, Direct Access, and Attentional Vi...
Bottom-Up Adaptive Filtering and Contrast-Enhancement in Short Term Memory
Top-Down Template Matching and Stabilization of Code Learning
Interactions between Attentional and Orienting Subsystems: STM Reset and Search
Attentional Gain Control and Attentional Priming
Matching: The 2/3 Rule
Code Instability and Code Stability
Using Context to Distinguish Signal from Noise in Patterns of Variable Complexity
Vigilance Level Tunes Categorical Coarseness: Disconfirming Feedback
Rapid Classification of an Arbitrary Type Font
Network Equations: Interactions between Short Term Memory and Long Term Memory Patterns
Direct Access to Subset and Superset Patterns
Weber Law Rule and Associative Decay Rule for Bottom-Up LTM Traces
Template Learning Rule and Associative Decay Rule for Top-Down LTM Traces
Direct Access to Nodes Coding Perfectly Learned Patterns
Initial Strengths of LTM Traces
Summary of the Model
Order of Search and Stable Choices in Short-Term Memory
Stable Category Learning
Critical Feature Patterns and Prototypes
Direct Access after Learning Self-Stabilizes
Order of Search: Mathematical Analysis
Order of Search: Computer Simulations
Biasing the Network towards Uncommitted Nodes
Computer Simulation of Self-Scaling Computational Units: Weighing the Evidence
Concluding Remarks: Self-Stabilization and Unitization within Associative Networks / 27:
Variations on Adaptive Resonance / Chapter 11:
Background
Adaptive Thresholding
Learning with Iterative Short-Time Pattern Presentations
Continuous ARC Operation
Conclusions
ART 2: Self-Organization of Stable Category Recognition Codes for Analog Input Patterns / Chapter 12:
Adaptive Resonance Architectures
ART 1: Binary Input Patterns
ART 2: Analog Input Patterns
ART 2 Design Principles
ART 2 STM Equations: F1
ART 2 STM Equations: F2
ART 2 LTM Equations
ART 2 Reset Equations: The Orienting Subsystem
The Match-Reset Tradeoff: Choice of Top-down Initial LTM Values
Learning Increases Mismatch Sensitivity and Confirms Category Choice
Choosing a New Category: Bottom-up LTM Initial Values
The Stability-Plasticity Tradeoff
Alternative ART 2 Architectures
Adaptive Bidirectional Associative Memories / Chapter 13:
Introduction: Storing Data Pairs in Associative Memory Matrices
Discrete Bidirectional Associative Memory (BAM) Stability
BAM Correlation Encoding
Continuous BAMs
Adaptive BAMs
ART 3: Hierarchical Search Using Chemical Transmitters in Self-Organizing Pattern Recognition Architectures / Chapter 14:
Introduction: Distributed Search of ART Network Hierarchies
An ART Search Cycle
ART 2: Three-Layer Competitive Fields
ART Bidirectional Hierarchies and Homology of Fields
ART Cascade
Search in an ART Hierarchy
A New Role for Chemical Transmitters in ART Search
Equations for Transmitter Production, Release, and Inactivation
Alternative ART 3 Systems
Transmitter Release Rate
System Dynamics at Input Onset: An Approximately Linear Filter
System Dynamics after Intrafield Feedback: Amplification of Transmitter Release by Postsynaptic ...
System Dynamics during Reset: Inactivation of Bound Transmitter Channels
Parametric Robustness of the Search Process
Summary of System Dynamics during a Mismatch-Reset Cycle
Automatic STM Reset by Real-Time Input Sequences
Reinforcement Feedback
Notation for Hierarchies
Trade-Off between Weight Size and Pattern Match
ART 3 Simulations: Mismatch Reset and Input Reset of STM Choices
Search Time Invariance at Different Vigilance Values
Reinforcement Reset
Input Hysteresis Simulation
Distributed Code Simulation
Alternative ART 3 Model Simulation
Simulation Equations
ARTMAP: Supervised Real-Time Learning and Classification of Nonstationary Data by a Self-Organizing Neural Network / Chapter 15:
The ARTMAP system
ARTMAP simulations: Distinguishing edible and poisonous mushrooms
ART modules ARTa and ARTb
The Map Field
Simulation Algorithms
Neuronal Activity as a Shaping Factor in the Self-Organization of Neuron Assemblies / Chapter 16:
A Teleological Argument
Evidence for a Central Control of Local Hebbian Modifications
The Gating Mechanism
Functional Implications of Developmental Plasticity
Probing Cognitive Processes Through The Structure of Event-Related Potentials: An Experimental and Theoretical Analysis / Chapter 17:
Attentional Subsystem and Orienting Subsystem
STM Reset and Search
Matching via the 2/3 Rule
ERP Components
Experimental Paradigm
Experimental Results: ERP Profiles
Comparison of ERP Profiles with Adaptive Resonance Theory Mechanisms
Conclusion: The Relationship of Learning to ERPs
Unitization, Automaticity, Temporal Order, and Word Recognition / Chapter 18:
The Word Length Effect
Unitization and Psychological Progress
The Temporal Chunking Problem
All Letters are Sublists
Expectancy Learning and Priming
The McClelland and Rumelhart Model
The Schneider and Shiffrin Model
Parallel Processing and Unlimited Capacity
The Functional Unit of Cognitive Processing: Not Spreading Activation
Capacity versus Matching
Adaptive Filter: The Processing Bridge between Sublist Masking and Temporal Order Information ov...
The LTM Invariance Principle: Temporal Order Information without a Serial Buffer
Spatial Frequency Analysis of Temporal Order Information
Speech Perception and Production by a Self-Organizing Neural Network / Chapter 19:
The Learning of Language Units
Low Stages of Processing: Circular Reactions and the Emerging Auditory and Motor Codes
The Vector Integration to Endpoint Model
Self-Stabilization of Imitation via Motor-to-Auditory Priming
Higher Stages of Processing: Context-Sensitive Chunking and Unitization of the Emerging Auditory ...
Masking Fields
Neural Dynamics of Adaptive Timing and Temporal Discrimination During Associative Learning / Chapter 20:
Introduction: Timing the Expected Delay of a Goal Object in a Spatially Distributed and Nonstatio...
Timing the Balance between Exploration for Novel Rewards and Consummation of Expected Rewards
Distinguishing Expected Nonoccurrences from Unexpected Nonoccurrences: Inhibiting the Negative Co...
Spectral Timing Model / Part I:
Spectral Timing Model: An Application of Gated Dipole Theory
Spectral Timing Equations
The Activation Spectrum
The Habituation Spectrum
The Gated Signal Spectrum
Temporally Selective Associative Learning / D:
The Doubly Gated Signal Spectrum / E:
The Output Signal / F:
Effect of Increasing ISI and US Intensity
Comparison with Nictitating Membrane Conditioning Data
Inverted U in Learning as a Function of ISI
Multiple Timing Peaks
Effect of Increasing US Duration
Effect of Increasing CS Intensity
Timed Gating of Read-Out From the Orienting Subsystem / Part II:
Locating the Timing Circuit within a Self-Organizing Sensory-Cogni-tive and Cognitive-Reinforcem...
Cognitive-Reinforcement Circuit
The Gated Dipole Opponent Process
Adaptive Timing as Spectral Conditioned Reinforcer Learning
Timed Inhibition of the Orienting Subsystem by Drive Representations
Timed Activation of the Hippocampus and the Contingent Negative Variation
Effect of CS Intensity on Timed Motor Behavior
Spatial Coding of Stimulus Intensity by a PTS Shift Map
Effect of Drugs on Timed Motor Behavior
Concluding Remarks: Timing Paradox and Multiple Types of Timing Circuits
Author Index
Subject Index
67.

Book
Frank C. Hoppensteadt, Eugene M. Izhikevich
Publication info: New York : Springer, c1997  xvi, 400 p. ; 25 cm
Series: Applied mathematical sciences ; v. 126
68.

Book
R. Caminiti, P.B. Johnson, Y. Burnod, (eds.)
Publication info: Berlin ; New York : Springer-Verlag, c1992  338 p. ; 25 cm
Series: Experimental brain research series ; 22
69.

Book
Paul John Werbos
Publication info: New York : J. Wiley & Sons, c1994  xii, 319 p. ; 25 cm
Series: Adaptive and learning systems for signal processing, communications, and control
A Wiley-Interscience publication
Table of contents:
Thesis
Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences
Dynamic Feedback, Statistical Estimation, and Systems Optimization: General Techniques
The Multivariate ARMA(1,1) Model: Its Significance and Estimation
Simulation Studies of Techniques of Time-Series Analysis
General Applications of These Ideas: Practical Hazards and New Possibilities
Nationalism and Social Communications: A Test Case for Mathematical Approaches
Applications and Extensions
Forms of Backpropagation for Sensitivity Analysis, Optimization, and Neural Networks
Backpropagation Through Time: What It Does and How to Do It
Neurocontrol: Where It Is Going and Why It Is Crucial
Neural Networks and the Human Mind: New Mathematics Fits Humanistic Insight
Index
70.

Book
Teuvo Kohonen
Publication info: Berlin ; New York : Springer, c1995  ix, 362 p. ; 25 cm
Series: Springer series in information sciences ; 30
Table of contents:
Mathematical Preliminaries / 1:
Mathematical Concepts and Notations / 1.1:
Vector Space Concepts / 1.1.1:
Matrix Notations / 1.1.2:
Eigenvectors and Eigenvalues of Matrices / 1.1.3:
Further Properties of Matrices / 1.1.4:
On Matrix Differential Calculus / 1.1.5:
Distance Measures for Patterns / 1.2:
Measures of Similarity and Distance in Vector Spaces / 1.2.1:
Measures of Similarity and Distance Between Symbol Strings / 1.2.2:
Averages Over Nonvectorial Variables / 1.2.3:
Statistical Pattern Analysis / 1.3:
Basic Probabilistic Concepts / 1.3.1:
Projection Methods / 1.3.2:
Supervised Classification / 1.3.3:
Unsupervised Classification / 1.3.4:
The Subspace Methods of Classification / 1.4:
The Basic Subspace Method / 1.4.1:
Adaptation of a Model Subspace to Input Subspace / 1.4.2:
The Learning Subspace Method (LSM) / 1.4.3:
Vector Quantization / 1.5:
Definitions / 1.5.1:
Derivation of the VQ Algorithm / 1.5.2:
Point Density in VQ / 1.5.3:
Dynamically Expanding Context / 1.6:
Setting Up the Problem / 1.6.1:
Automatic Determination of Context-Independent Productions / 1.6.2:
Conflict Bit / 1.6.3:
Construction of Memory for the Context-Dependent Productions / 1.6.4:
The Algorithm for the Correction of New Strings / 1.6.5:
Estimation Procedure for Unsuccessful Searches / 1.6.6:
Practical Experiments / 1.6.7:
Neural Modeling / 2:
Models, Paradigms, and Methods / 2.1:
A History of Some Main Ideas in Neural Modeling / 2.2:
Issues on Artificial Intelligence / 2.3:
On the Complexity of Biological Nervous Systems / 2.4:
What the Brain Circuits Are Not / 2.5:
Relation Between Biological and Artificial Neural Networks / 2.6:
What Functions of the Brain Are Usually Modeled? / 2.7:
When Do We Have to Use Neural Computing? / 2.8:
Transformation, Relaxation, and Decoder / 2.9:
Categories of ANNs / 2.10:
A Simple Nonlinear Dynamic Model of the Neuron / 2.11:
Three Phases of Development of Neural Models / 2.12:
Learning Laws / 2.13:
Hebb's Law / 2.13.1:
The Riccati-Type Learning Law / 2.13.2:
The PCA-Type Learning Law / 2.13.3:
Some Really Hard Problems / 2.14:
Brain Maps / 2.15:
The Basic SOM / 3:
A Qualitative Introduction to the SOM / 3.1:
The Original Incremental SOM Algorithm / 3.2:
The "Dot-Product SOM" / 3.3:
Other Preliminary Demonstrations of Topology-Preserving Mappings / 3.4:
Ordering of Reference Vectors in the Input Space / 3.4.1:
Demonstrations of Ordering of Responses in the Output Space / 3.4.2:
Basic Mathematical Approaches to Self-Organization / 3.5:
One-Dimensional Case / 3.5.1:
Constructive Proof of Ordering of Another One-Dimensional SOM / 3.5.2:
The Batch Map / 3.6:
Initialization of the SOM Algorithms / 3.7:
On the "Optimal" Learning-Rate Factor / 3.8:
Effect of the Form of the Neighborhood Function / 3.9:
Does the SOM Algorithm Ensue from a Distortion Measure? / 3.10:
An Attempt to Optimize the SOM / 3.11:
Point Density of the Model Vectors / 3.12:
Earlier Studies / 3.12.1:
Numerical Check of Point Densities in a Finite One-Dimensional SOM / 3.12.2:
Practical Advice for the Construction of Good Maps / 3.13:
Examples of Data Analyses Implemented by the SOM / 3.14:
Attribute Maps with Full Data Matrix / 3.14.1:
Case Example of Attribute Maps Based on Incomplete Data Matrices (Missing Data): "Poverty Map" / 3.14.2:
Using Gray Levels to Indicate Clusters in the SOM / 3.15:
Interpretation of the SOM Mapping / 3.16:
"Local Principal Components" / 3.16.1:
Contribution of a Variable to Cluster Structures / 3.16.2:
Speedup of SOM Computation / 3.17:
Shortcut Winner Search / 3.17.1:
Increasing the Number of Units in the SOM / 3.17.2:
Smoothing / 3.17.3:
Combination of Smoothing, Lattice Growing, and SOM Algorithm / 3.17.4:
Physiological Interpretation of SOM / 4:
Conditions for Abstract Feature Maps in the Brain / 4.1:
Two Different Lateral Control Mechanisms / 4.2:
The WTA Function, Based on Lateral Activity Control / 4.2.1:
Lateral Control of Plasticity / 4.2.2:
Learning Equation / 4.3:
System Models of SOM and Their Simulations / 4.4:
Recapitulation of the Features of the Physiological SOM Model / 4.5:
Similarities Between the Brain Maps and Simulated Feature Maps / 4.6:
Magnification / 4.6.1:
Imperfect Maps / 4.6.2:
Overlapping Maps / 4.6.3:
Variants of SOM / 5:
Overview of Ideas to Modify the Basic SOM / 5.1:
Adaptive Tensorial Weights / 5.2:
Tree-Structured SOM in Searching / 5.3:
Different Definitions of the Neighborhood / 5.4:
Neighborhoods in the Signal Space / 5.5:
Dynamical Elements Added to the SOM / 5.6:
The SOM for Symbol Strings / 5.7:
Initialization of the SOM for Strings / 5.7.1:
The Batch Map for Strings / 5.7.2:
Tie-Break Rules / 5.7.3:
A Simple Example: The SOM of Phonemic Transcriptions / 5.7.4:
Operator Maps / 5.8:
Evolutionary-Learning SOM / 5.9:
Evolutionary-Learning Filters / 5.9.1:
Self-Organization According to a Fitness Function / 5.9.2:
Supervised SOM / 5.10:
The Adaptive-Subspace SOM (ASSOM) / 5.11:
The Problem of Invariant Features / 5.11.1:
Relation Between Invariant Features and Linear Subspaces / 5.11.2:
The ASSOM Algorithm / 5.11.3:
Derivation of the ASSOM Algorithm by Stochastic Approximation / 5.11.4:
ASSOM Experiments / 5.11.5:
Feedback-Controlled Adaptive-Subspace SOM (FASSOM) / 5.12:
Learning Vector Quantization / 6:
Optimal Decision / 6.1:
The LVQ1 / 6.2:
The Optimized-Learning-Rate LVQ1 (OLVQ1) / 6.3:
The Batch-LVQ1 / 6.4:
The Batch-LVQ1 for Symbol Strings / 6.5:
The LVQ2 (LVQ 2.1) / 6.6:
The LVQ3 / 6.7:
Differences Between LVQ1, LVQ2 and LVQ3 / 6.8:
General Considerations / 6.9:
The Hypermap-Type LVQ / 6.10:
The "LVQ-SOM" / 6.11:
Applications / 7:
Preprocessing of Optic Patterns / 7.1:
Blurring / 7.1.1:
Expansion in Terms of Global Features / 7.1.2:
Spectral Analysis / 7.1.3:
Expansion in Terms of Local Features (Wavelets) / 7.1.4:
Recapitulation of Features of Optic Patterns / 7.1.5:
Acoustic Preprocessing / 7.2:
Process and Machine Monitoring / 7.3:
Selection of Input Variables and Their Scaling / 7.3.1:
Analysis of Large Systems / 7.3.2:
Diagnosis of Speech Voicing / 7.4:
Transcription of Continuous Speech / 7.5:
Texture Analysis / 7.6:
Contextual Maps / 7.7:
Artificially Generated Clauses / 7.7.1:
Natural Text / 7.7.2:
Organization of Large Document Files / 7.8:
Statistical Models of Documents / 7.8.1:
Construction of Very Large WEBSOM Maps by the Projection Method / 7.8.2:
The WEBSOM of All Electronic Patent Abstracts / 7.8.3:
Robot-Arm Control / 7.9:
Simultaneous Learning of Input and Output Parameters / 7.9.1:
Another Simple Robot-Arm Control / 7.9.2:
Telecommunications / 7.10:
Adaptive Detector for Quantized Signals / 7.10.1:
Channel Equalization in the Adaptive QAM / 7.10.2:
Error-Tolerant Transmission of Images by a Pair of SOMs / 7.10.3:
The SOM as an Estimator / 7.11:
Symmetric (Autoassociative) Mapping / 7.11.1:
Asymmetric (Heteroassociative) Mapping / 7.11.2:
Software Tools for SOM / 8:
Necessary Requirements / 8.1:
Desirable Auxiliary Features / 8.2:
SOM Program Packages / 8.3:
SOM_PAK / 8.3.1:
SOM Toolbox / 8.3.2:
Nenet (Neural Networks Tool) / 8.3.3:
Viscovery SOMine / 8.3.4:
Examples of the Use of SOM_PAK / 8.4:
File Formats / 8.4.1:
Description of the Programs in SOM_PAK / 8.4.2:
A Typical Training Sequence / 8.4.3:
Neural-Networks Software with the SOM Option / 8.5:
Hardware for SOM / 9:
An Analog Classifier Circuit / 9.1:
Fast Digital Classifier Circuits / 9.2:
SIMD Implementation of SOM / 9.3:
Transputer Implementation of SOM / 9.4:
Systolic-Array Implementation of SOM / 9.5:
The COKOS Chip / 9.6:
The TInMANN Chip / 9.7:
NBISOM_25 Chip / 9.8:
An Overview of SOM Literature / 10:
Books and Review Articles / 10.1:
Early Works on Competitive Learning / 10.2:
Status of the Mathematical Analyses / 10.3:
Zero-Order Topology (Classical VQ) Results / 10.3.1:
Alternative Topological Mappings / 10.3.2:
Alternative Architectures / 10.3.3:
Functional Variants / 10.3.4:
Theory of the Basic SOM / 10.3.5:
The Learning Vector Quantization / 10.4:
Diverse Applications of SOM / 10.5:
Machine Vision and Image Analysis / 10.5.1:
Optical Character and Script Reading / 10.5.2:
Speech Analysis and Recognition / 10.5.3:
Acoustic and Musical Studies / 10.5.4:
Signal Processing and Radar Measurements / 10.5.5:
Industrial and Other Real-World Measurements / 10.5.6:
Process Control / 10.5.8:
Robotics / 10.5.9:
Electronic-Circuit Design / 10.5.10:
Physics / 10.5.11:
Chemistry / 10.5.12:
Biomedical Applications Without Image Processing / 10.5.13:
Neurophysiological Research / 10.5.14:
Data Processing and Analysis / 10.5.15:
Linguistic and AI Problems / 10.5.16:
Mathematical and Other Theoretical Problems / 10.5.17:
Applications of LVQ / 10.6:
Survey of SOM and LVQ Implementations / 10.7:
Glossary of "Neural" Terms / 11:
References
Index
71.

Book
Sigeru Omatu, Marzuki Khalid and Rubiyah Yusof
Publication info: New York : Springer, c1996  xiii, 255 p. ; 24 cm
Series: Advances in industrial control
72.

Book
edited by Erkki Oja and Samuel Kaski
Publication info: Amsterdam : Elsevier, 1999  ix, 390 p. ; 25 cm
Contents:
Selected papers only
Preface / Kohonen Maps
Analyzing and representing multidimensional quantitative and qualitative data: Demographic study of the Rhône valley. The domestic consumption of the Canadian families / M. Cottrell ; P. Gaubert ; P. Letremy ; P. Rousset
Value maps: Finding value in markets that are expensive / G.J. Deboeck
Data mining and knowledge discovery with emergent Self-Organizing Feature Maps for multivariate time series / A. Ultsch
Tree structured Self-Organizing Maps / P. Koikkalainen
On the optimization of Self-Organizing Maps by genetic algorithms / D. Polani
Self organization of a massive text document collection / T. Kohonen ; S. Kaski ; K. Lagus ; J. Salojärvi ; J. Honkela ; V. Paatero ; A. Saarela
Document classification with Self-Organizing Maps / D. Merkl
Navigation in databases using Self-Organizing Maps / S.A. Shumsky
Self-Organising Maps in computer aided design of electronic circuits / A. Hemani ; A. Postula
Modeling self-organization in the visual cortex / R. Miikkulainen ; J.A. Bednar ; Y. Choe ; J. Sirosh
A spatio-temporal memory based on SOMs with activity diffusion / N.R. Euliano ; J.C. Principe
Advances in modeling cortical maps / P.G. Morasso ; V. Sanguineti ; F. Frisone
Topology preservation in Self-Organizing Maps / T. Villmann
Second-order learning in Self-Organizing Maps / R. Der ; M. Herrmann
Energy functions for Self-Organizing Maps / T. Heskes
LVQ and single trial EEG classification / G. Pfurtscheller ; M. Pregenzer
Self-Organizing Map in categorization of voice qualities / L. Leinonen
Self-Organizing Map in analysis of large-scale industrial systems / O. Simula ; J. Ahola ; E. Alhoniemi ; J. Himberg ; J. Vesanto
Keyword index
73.

Book
Halbert White with A.R. Gallant ... [et al.]
Publication info: Cambridge, Mass., USA ; Oxford : Blackwell, 1992  x, 329 p.
74.

Book
N.B. Karayiannis, A.N. Venetsanopoulos
Publication info: Boston : Kluwer Academic, c1993  xii, 440 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; SECS 209
75.

Book
Jürgen Schürmann
Publication info: New York : Wiley, c1996  xvii, 373 p. ; 25 cm
Contents:
Statistical Decision Theory
Need for Approximations: Fundamental Approaches
Classification Based on Statistical Models Determined by First-and-Second Order Statistical Moments
Classification Based on Mean-Square Functional Approximations
Polynomial Regression
Multilayer Perceptron Regression
Radial Basis Functions
Measurements, Features, and Feature Selection
Reject Criteria and Classifier Performance
Combining Classifiers
Conclusion
STATMOD Program: Description of ftp Package
References
Index