1.

Book
edited by Erkki Oja and Samuel Kaski
Publication info: Amsterdam : Elsevier, 1999  ix, 390 p. ; 25 cm
Table of contents:
Selected papers only
Preface / Kohonen Maps
Analyzing and representing multidimensional quantitative and qualitative data: Demographic study of the Rhône valley. The domestic consumption of the Canadian families / M. Cottrell ; P. Gaubert ; P. Letremy ; P. Rousset
Value maps: Finding value in markets that are expensive / G.J. Deboeck
Data mining and knowledge discovery with emergent Self-Organizing Feature Maps for multivariate time series / A. Ultsch
Tree structured Self-Organizing Maps / P. Koikkalainen
On the optimization of Self-Organizing Maps by genetic algorithms / D. Polani
Self organization of a massive text document collection / T. Kohonen ; S. Kaski ; K. Lagus ; J. Salojärvi ; J. Honkela ; V. Paatero ; A. Saarela
Document classification with Self-Organizing Maps / D. Merkl
Navigation in databases using Self-Organizing Maps / S.A. Shumsky
Self-Organising Maps in computer aided design of electronic circuits / A. Hemani ; A. Postula
Modeling self-organization in the visual cortex / R. Miikkulainen ; J.A. Bednar ; Y. Choe ; J. Sirosh
A spatio-temporal memory based on SOMs with activity diffusion / N.R. Euliano ; J.C. Principe
Advances in modeling cortical maps / P.G. Morasso ; V. Sanguineti ; F. Frisone
Topology preservation in Self-Organizing Maps / T. Villmann
Second-order learning in Self-Organizing Maps / R. Der ; M. Herrmann
Energy functions for Self-Organizing Maps / T. Heskes
LVQ and single trial EEG classification / G. Pfurtscheller ; M. Pregenzer
Self-Organizing Map in categorization of voice qualities / L. Leinonen
Self-Organizing Map in analysis of large-scale industrial systems / O. Simula ; J. Ahola ; E. Alhoniemi ; J. Himberg ; J. Vesanto
Keyword index
2.

Book
Witold Pedrycz
Publication info: Boca Raton, Fla. : CRC Press, c1998  284 p. ; 26 cm
3.

Book
Pierre Baldi, Søren Brunak
Publication info: Cambridge, Mass. : The MIT Press, 1998  xviii, 351 p., [8] p. of plates ; 24 cm
Series: Adaptive computation and machine learning
Bradford book
Table of contents:
Series Foreword
Preface
Introduction / 1:
Biological Data in Digital Symbol Sequences / 1.1:
Genomes--Diversity, Size, and Structure / 1.2:
Proteins and Proteomes / 1.3:
On the Information Content of Biological Sequences / 1.4:
Prediction of Molecular Function and Structure / 1.5:
Machine Learning Foundations: The Probabilistic Framework / 2:
Introduction: Bayesian Modeling / 2.1:
The Cox-Jaynes Axioms / 2.2:
Bayesian Inference and Induction / 2.3:
Model Structures: Graphical Models and Other Tricks / 2.4:
Summary / 2.5:
Probabilistic Modeling and Inference: Examples / 3:
The Simplest Sequence Models / 3.1:
Statistical Mechanics / 3.2:
Machine Learning Algorithms / 4:
Dynamic Programming / 4.1:
Gradient Descent / 4.3:
EM/GEM Algorithms / 4.4:
Markov Chain Monte Carlo Methods / 4.5:
Simulated Annealing / 4.6:
Evolutionary and Genetic Algorithms / 4.7:
Learning Algorithms: Miscellaneous Aspects / 4.8:
Neural Networks: The Theory / 5:
Universal Approximation Properties / 5.1:
Priors and Likelihoods / 5.3:
Learning Algorithms: Backpropagation / 5.4:
Neural Networks: Applications / 6:
Sequence Encoding and Output Interpretation / 6.1:
Prediction of Protein Secondary Structure / 6.2:
Prediction of Signal Peptides and Their Cleavage Sites / 6.3:
Applications for DNA and RNA Nucleotide Sequences / 6.4:
Hidden Markov Models: The Theory / 7:
Prior Information and Initialization / 7.1:
Likelihood and Basic Algorithms / 7.3:
Learning Algorithms / 7.4:
Applications of HMMs: General Aspects / 7.5:
Hidden Markov Models: Applications / 8:
Protein Applications / 8.1:
DNA and RNA Applications / 8.2:
Conclusion: Advantages and Limitations of HMMs / 8.3:
Hybrid Systems: Hidden Markov Models and Neural Networks / 9:
Introduction to Hybrid Models / 9.1:
The Single-Model Case / 9.2:
The Multiple-Model Case / 9.3:
Simulation Results / 9.4:
Probabilistic Models of Evolution: Phylogenetic Trees / 10:
Introduction to Probabilistic Models of Evolution / 10.1:
Substitution Probabilities and Evolutionary Rates / 10.2:
Rates of Evolution / 10.3:
Data Likelihood / 10.4:
Optimal Trees and Learning / 10.5:
Parsimony / 10.6:
Extensions / 10.7:
Stochastic Grammars and Linguistics / 11:
Introduction to Formal Grammars / 11.1:
Formal Grammars and the Chomsky Hierarchy / 11.2:
Applications of Grammars to Biological Sequences / 11.3:
Likelihood / 11.4:
Applications of SCFGs / 11.6:
Experiments / 11.8:
Future Directions / 11.9:
Internet Resources and Public Databases / 12:
A Rapidly Changing Set of Resources / 12.1:
Databases over Databases and Tools / 12.2:
Databases over Databases / 12.3:
Databases / 12.4:
Sequence Similarity Searches / 12.5:
Alignment / 12.6:
Selected Prediction Servers / 12.7:
Molecular Biology Software Links / 12.8:
Ph.D. Courses over the Internet / 12.9:
HMM/NN Simulator / 12.10:
Statistics / A:
Decision Theory and Loss Functions / A.1:
Quadratic Loss Functions / A.2:
The Bias/Variance Trade-off / A.3:
Combining Estimators / A.4:
Error Bars / A.5:
Sufficient Statistics / A.6:
Exponential Family / A.7:
Gaussian Process Models / A.8:
Variational Methods / A.9:
Information Theory, Entropy, and Relative Entropy / B:
Entropy / B.1:
Relative Entropy / B.2:
Mutual Information / B.3:
Jensen's Inequality / B.4:
Maximum Entropy / B.5:
Minimum Relative Entropy / B.6:
Probabilistic Graphical Models / C:
Notation and Preliminaries / C.1:
The Undirected Case: Markov Random Fields / C.2:
The Directed Case: Bayesian Networks / C.3:
HMM Technicalities, Scaling, Periodic Architectures, State Functions, and Dirichlet Mixtures / D:
Scaling / D.1:
Periodic Architectures / D.2:
State Functions: Bendability / D.3:
Dirichlet Mixtures / D.4:
List of Main Symbols and Abbreviations / E:
References
Index
4.

Book
edited by Cornelius T. Leondes
Publication info: San Diego : Academic Press, c1998  xxix, 460 p. ; 24 cm
Series: Neural network systems techniques and applications ; vol. 1
5.

Book
edited by Omid Omidvar, Patrick van der Smagt
Publication info: San Diego ; Tokyo : Academic Press, c1997  xvii, 346 p. ; 24 cm
Table of contents:
Neural Network Sonar as a Perceptual Modality for Robotics / W.T. Miller III ; A.L. Kun
Dynamic Balance of a Biped Walking Robot / P. van der Smagt ; F. Groen
Visual Feedback in Motion / D. DeMers ; K. Kreutz-Delgado
Inverse Kinematics of Dextrous Manipulators / Y. Jin ; T. Pipe ; A. Winfield
Stable Manipulator Trajectory Control Using Neural Networks / P. Gaudiano ; F.H. Guenther ; E. Zalama
The Neural Dynamics Approach to Sensory-Motor Control / A. Buhlmeier ; G. Manteuffel
Operant Conditioning in Robots / B. Hallam ; J. Hallam ; G. Hayes
A Dynamic Net for Robot Control / B. Kröse ; J. van Dam
Neural Vehicles / J. Heikkonen ; P. Koikkalainen
Self-Organization and Autonomous Robots
6.

Book
James M. Bower and David Beeman
Publication info: Santa Clara, Calif. : TELOS, Springer-Verlag, c1995  xx, 409 p. ; 24 cm
7.

Book
edited by A.M.S. Zalzala and A.S. Morris
Publication info: New York ; Tokyo : Ellis Horwood, 1996  viii, 278 p. ; 25 cm
8.

Book
Kenneth Hunt, George Irwin and Kevin Warwick (eds.)
Publication info: Berlin ; New York : Springer, c1995  278 p. ; 24 cm
Series: Advances in industrial control
9.

Book
B.D. Ripley
Publication info: New York : Cambridge University Press, 1996  xi, 403 p. ; 26 cm
Table of contents:
Introduction and examples / 1:
Statistical decision theory / 2:
Linear discriminant analysis / 3:
Flexible discriminants / 4:
Feed-forward neural networks / 5:
Non-parametric methods / 6:
Tree-structured classifiers / 7:
Belief networks / 8:
Unsupervised methods / 9:
Finding good pattern features / 10:
Statistical sidelines / Appendix:
Glossary
References
Author index
Subject index
10.

Book
N. K. Bose, P. Liang
Publication info: New York ; Tokyo : McGraw-Hill, c1996  xxxiii, 478 p. ; 25 cm
Series: McGraw-Hill series in electrical and computer engineering ; Communications and signal processing
11.

Book
by Anne-Johan Annema
Publication info: Boston : Kluwer Academic Publishers, c1995  xiii, 238 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; Analog circuits and signal processing
12.

Book
Bahram Nabet, Robert B. Pinter
Publication info: Boca Raton : CRC Press, c1991  xi, 182 p. ; 25 cm
13.

Book
edited by Gail A. Carpenter and Stephen Grossberg
Publication info: Cambridge, Mass. ; London : MIT Press, c1992  467 p. ; 26 cm
14.

Book
Tomas Hrycej
Publication info: New York, NY : Wiley, c1992  xiii, 235 p. ; 25 cm
Series: Sixth-generation computer technology series
15.

Book
Paolo Antognetti and Veljko Milutinović, editors
Publication info: Englewood Cliffs, N.J. : Prentice Hall, 1991  4 v. ; 24 cm
Series: Prentice Hall advanced reference series ; Engineering
16.

Book
by David P. Morgan, Christopher L. Scofield ; foreword by Leon N. Cooper
Publication info: Boston : Kluwer Academic Publishers, c1991  xvi, 391 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; VLSI, computer architecture, and digital signal processing
17.

Book
edited by Harry Wechsler
Publication info: Boston : Academic Press, c1992  xix, 363 p. ; 24 cm
Series: Neural networks for perception / edited by Harry Wechsler ; v. 2
18.

Book
Yi-Tong Zhou, Rama Chellappa
Publication info: New York ; Tokyo : Springer-Verlag, c1992  xi, 170 p. ; 24 cm
Series: Research notes in neural computing ; v. 5
19.

Book
Halbert White with A.R. Gallant ... [et al.]
Publication info: Cambridge, Mass., USA ; Oxford : Blackwell, 1992  x, 329 p.
20.

Book
edited by M.A. Arbib and J.A. Robinson
Publication info: Cambridge, MA : MIT Press, c1990  x, 345 p. ; 24 cm
21.

Book
by Hervé A. Bourlard, Nelson Morgan ; foreword by Richard Lippmann
Publication info: Boston : Kluwer Academic Publishers, c1994  xxviii, 312 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; SECS 247. VLSI, computer architecture, and digital signal processing
Table of contents:
List of Figures
List of Tables
Notation
Foreword
Preface
Background / I:
Introduction / 1:
Statistical Pattern Classification / 2:
Hidden Markov Models / 3:
Multilayer Perceptrons / 4:
Hybrid HMM/MLP Systems / II:
Speech Recognition using ANNs / 5:
Statistical Inference in MLPs / 6:
The Hybrid HMM/MLP Approach / 7:
Experimental Systems / 8:
Context-Dependent MLPs / 9:
System Tradeoffs / 10:
Training Hardware and Software / 11:
Additional Topics / III:
Cross-Validation in MLP Training / 12:
HMM/MLP and Predictive Models / 13:
Feature Extraction by MLP / 14:
Finale / IV:
Final System Overview / 15:
Conclusions / 16:
Bibliography
Index
Acronyms
22.

Book
edited by Richard J. Mammone
Publication info: London ; New York : Chapman & Hall, 1994  xx, 586 p.
Series: Chapman & Hall neural computing ; 4
23.

Book
Albert Nigrin
Publication info: Cambridge, Mass. : MIT Press, c1993  xvii, 413 p. ; 24 cm
24.

Book
Paul John Werbos
Publication info: New York : J. Wiley & Sons, c1994  xii, 319 p. ; 25 cm
Series: Adaptive and learning systems for signal processing, communications, and control
A Wiley-Interscience publication
Table of contents:
Thesis
Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences
Dynamic Feedback, Statistical Estimation, and Systems Optimization: General Techniques
The Multivariate ARMA(1,1) Model: Its Significance and Estimation
Simulation Studies of Techniques of Time-Series Analysis
General Applications of These Ideas: Practical Hazards and New Possibilities
Nationalism and Social Communications: A Test Case for Mathematical Approaches
Applications and Extensions
Forms of Backpropagation for Sensitivity Analysis, Optimization, and Neural Networks
Backpropagation Through Time: What It Does and How to Do It
Neurocontrol: Where It Is Going and Why It Is Crucial
Neural Networks and the Human Mind: New Mathematics Fits Humanistic Insight
Index
25.

Book
P.J.G. Lisboa, M.J. Taylor
Publication info: New York : Ellis Horwood, 1993  307 p. ; 25 cm
Series: Ellis Horwood workshop series
26.

Book
Derong Liu and Anthony N. Michel
Publication info: London ; Berlin ; New York : Springer-Verlag, c1994  xiv, 191 p. ; 24 cm
Series: Lecture notes in control and information sciences ; 195
27.

Book
edited by Alan F. Murray
Publication info: Dordrecht ; Boston : Kluwer Academic Publishers, c1995  xii, 322 p. ; 25 cm
28.

Book
Simon Haykin
Publication info: Upper Saddle River, N.J. : Pearson Education, c2009  934 p. ; 23 cm
Table of contents:
Preface
Introduction
What is a Neural Network? / 1:
The Human Brain / 2:
Models of a Neuron / 3:
Neural Networks Viewed As Directed Graphs / 4:
Feedback / 5:
Network Architectures / 6:
Knowledge Representation / 7:
Learning Processes / 8:
Learning Tasks / 9:
Concluding Remarks / 10:
Notes and References
Rosenblatt's Perceptron / Chapter 1:
Perceptron / 1.1:
The Perceptron Convergence Theorem / 1.3:
Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment / 1.4:
Computer Experiment: Pattern Classification / 1.5:
The Batch Perceptron Algorithm / 1.6:
Summary and Discussion / 1.7:
Problems
Model Building through Regression / Chapter 2:
Linear Regression Model: Preliminary Considerations / 2.1:
Maximum a Posteriori Estimation of the Parameter Vector / 2.3:
Relationship Between Regularized Least-Squares Estimation and MAP Estimation / 2.4:
The Minimum-Description-Length Principle / 2.5:
Finite Sample-Size Considerations / 2.7:
The Instrumental-Variables Method / 2.8:
The Least-Mean-Square Algorithm / Chapter 3:
Filtering Structure of the LMS Algorithm / 3.1:
Unconstrained Optimization: a Review / 3.3:
The Wiener Filter / 3.4:
Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter / 3.5:
The Langevin Equation: Characterization of Brownian Motion / 3.7:
Kushner's Direct-Averaging Method / 3.8:
Statistical LMS Learning Theory for Small Learning-Rate Parameter / 3.9:
Computer Experiment I: Linear Prediction / 3.10:
Computer Experiment II: Pattern Classification / 3.11:
Virtues and Limitations of the LMS Algorithm / 3.12:
Learning-Rate Annealing Schedules / 3.13:
Multilayer Perceptrons / Chapter 4:
Some Preliminaries / 4.1:
Batch Learning and On-Line Learning / 4.3:
The Back-Propagation Algorithm / 4.4:
XOR Problem / 4.5:
Heuristics for Making the Back-Propagation Algorithm Perform Better / 4.6:
Back Propagation and Differentiation / 4.7:
The Hessian and Its Role in On-Line Learning / 4.9:
Optimal Annealing and Adaptive Control of the Learning Rate / 4.10:
Generalization / 4.11:
Approximations of Functions / 4.12:
Cross-Validation / 4.13:
Complexity Regularization and Network Pruning / 4.14:
Virtues and Limitations of Back-Propagation Learning / 4.15:
Supervised Learning Viewed as an Optimization Problem / 4.16:
Convolutional Networks / 4.17:
Nonlinear Filtering / 4.18:
Small-Scale Versus Large-Scale Learning Problems / 4.19:
Kernel Methods and Radial-Basis Function Networks / Chapter 5:
Cover's Theorem on the Separability of Patterns / 5.1:
The Interpolation Problem / 5.3:
Radial-Basis-Function Networks / 5.4:
K-Means Clustering / 5.5:
Recursive Least-Squares Estimation of the Weight Vector / 5.6:
Hybrid Learning Procedure for RBF Networks / 5.7:
Interpretations of the Gaussian Hidden Units / 5.8:
Kernel Regression and Its Relation to RBF Networks / 5.10:
Support Vector Machines / Chapter 6:
Optimal Hyperplane for Linearly Separable Patterns / 6.1:
Optimal Hyperplane for Nonseparable Patterns / 6.3:
The Support Vector Machine Viewed as a Kernel Machine / 6.4:
Design of Support Vector Machines / 6.5:
Regression: Robustness Considerations / 6.6:
Optimal Solution of the Linear Regression Problem / 6.9:
The Representer Theorem and Related Issues / 6.10:
Regularization Theory / Chapter 7:
Hadamard's Conditions for Well-Posedness / 7.1:
Tikhonov's Regularization Theory / 7.3:
Regularization Networks / 7.4:
Generalized Radial-Basis-Function Networks / 7.5:
The Regularized Least-Squares Estimator: Revisited / 7.6:
Additional Notes of Interest on Regularization / 7.7:
Estimation of the Regularization Parameter / 7.8:
Semisupervised Learning / 7.9:
Manifold Regularization: Preliminary Considerations / 7.10:
Differentiable Manifolds / 7.11:
Generalized Regularization Theory / 7.12:
Spectral Graph Theory / 7.13:
Generalized Representer Theorem / 7.14:
Laplacian Regularized Least-Squares Algorithm / 7.15:
Experiments on Pattern Classification Using Semisupervised Learning / 7.16:
Principal-Components Analysis / Chapter 8:
Principles of Self-Organization / 8.1:
Self-Organized Feature Analysis / 8.3:
Principal-Components Analysis: Perturbation Theory / 8.4:
Hebbian-Based Maximum Eigenfilter / 8.5:
Hebbian-Based Principal-Components Analysis / 8.6:
Case Study: Image Coding / 8.7:
Kernel Principal-Components Analysis / 8.8:
Basic Issues Involved in the Coding of Natural Images / 8.9:
Kernel Hebbian Algorithm / 8.10:
Self-Organizing Maps / Chapter 9:
Two Basic Feature-Mapping Models / 9.1:
Self-Organizing Map / 9.3:
Properties of the Feature Map / 9.4:
Computer Experiments I: Disentangling Lattice Dynamics Using SOM / 9.5:
Contextual Maps / 9.6:
Hierarchical Vector Quantization / 9.7:
Kernel Self-Organizing Map / 9.8:
Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM / 9.9:
Relationship Between Kernel SOM and Kullback-Leibler Divergence / 9.10:
Information-Theoretic Learning Models / Chapter 10:
Entropy / 10.1:
Maximum-Entropy Principle / 10.3:
Mutual Information / 10.4:
Kullback-Leibler Divergence / 10.5:
Copulas / 10.6:
Mutual Information as an Objective Function to be Optimized / 10.7:
Maximum Mutual Information Principle / 10.8:
Infomax and Redundancy Reduction / 10.9:
Spatially Coherent Features / 10.10:
Spatially Incoherent Features / 10.11:
Independent-Components Analysis / 10.12:
Sparse Coding of Natural Images and Comparison with ICA Coding / 10.13:
Natural-Gradient Learning for Independent-Components Analysis / 10.14:
Maximum-Likelihood Estimation for Independent-Components Analysis / 10.15:
Maximum-Entropy Learning for Blind Source Separation / 10.16:
Maximization of Negentropy for Independent-Components Analysis / 10.17:
Coherent Independent-Components Analysis / 10.18:
Rate Distortion Theory and Information Bottleneck / 10.19:
Optimal Manifold Representation of Data / 10.20:
Stochastic Methods Rooted in Statistical Mechanics / Chapter 11:
Statistical Mechanics / 11.1:
Markov Chains / 11.3:
Metropolis Algorithm / 11.4:
Simulated Annealing / 11.5:
Gibbs Sampling / 11.6:
Boltzmann Machine / 11.7:
Logistic Belief Nets / 11.8:
Deep Belief Nets / 11.9:
Deterministic Annealing / 11.10:
Analogy of Deterministic Annealing with Expectation-Maximization Algorithm / 11.11:
Dynamic Programming / Chapter 12:
Markov Decision Process / 12.1:
Bellman's Optimality Criterion / 12.3:
Policy Iteration / 12.4:
Value Iteration / 12.5:
Approximate Dynamic Programming: Direct Methods / 12.6:
Temporal-Difference Learning / 12.7:
Q-Learning / 12.8:
Approximate Dynamic Programming: Indirect Methods / 12.9:
Least-Squares Policy Evaluation / 12.10:
Approximate Policy Iteration / 12.11:
Neurodynamics / Chapter 13:
Dynamic Systems / 13.1:
Stability of Equilibrium States / 13.3:
Attractors / 13.4:
Neurodynamic Models / 13.5:
Manipulation of Attractors as a Recurrent Network Paradigm / 13.6:
Hopfield Model / 13.7:
The Cohen-Grossberg Theorem / 13.8:
Brain-State-In-A-Box Model / 13.9:
Strange Attractors and Chaos / 13.10:
Dynamic Reconstruction of a Chaotic Process / 13.11:
Bayesian Filtering for State Estimation of Dynamic Systems / Chapter 14:
State-Space Models / 14.1:
Kalman Filters / 14.3:
The Divergence-Phenomenon and Square-Root Filtering / 14.4:
The Extended Kalman Filter / 14.5:
The Bayesian Filter / 14.6:
Cubature Kalman Filter: Building on the Kalman Filter / 14.7:
Particle Filters / 14.8:
Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters / 14.9:
Kalman Filtering in Modeling of Brain Functions / 14.10:
Dynamically Driven Recurrent Networks / Chapter 15:
Recurrent Network Architectures / 15.1:
Universal Approximation Theorem / 15.3:
Controllability and Observability / 15.4:
Computational Power of Recurrent Networks / 15.5:
Learning Algorithms / 15.6:
Back Propagation Through Time / 15.7:
Real-Time Recurrent Learning / 15.8:
Vanishing Gradients in Recurrent Networks / 15.9:
Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators / 15.10:
Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor / 15.11:
Adaptivity Considerations / 15.12:
Case Study: Model Reference Applied to Neurocontrol / 15.13:
Bibliography
Index
29.

Book
Wenjun Zhang
Publication info: Singapore : World Scientific, c2010  xiii, 296 p. ; 24 cm
Table of contents:
Preface
Introduction / Chapter 1:
Computational Ecology / 1:
Artificial Neural Networks and Ecological Applications / 2:
Artificial Neural Networks: Principles, Theories and Algorithms / Part I:
Feedforward Neural Networks / Chapter 2:
Linear Separability and Perceptron
Some Analogies of Multilayer Feedforward Networks
Functionability of Multilayer Feedforward Networks / 3:
Linear Neural Networks / Chapter 3:
LMS Rule
Radial Basis Function Neural Networks / Chapter 4:
Theory of RBF Neural Network
Regularized RBF Neural Network
RBF Neural Network Learning
Probabilistic Neural Network / 4:
Generalized Regression Neural Network / 5:
Functional Link Neural Network / 6:
Wavelet Neural Network / 7:
BP Neural Network / Chapter 5:
BP Algorithm
BP Theorem
BP Training
Limitations and Improvements of BP Algorithm
Self-Organizing Neural Networks / Chapter 6:
Self-Organizing Feature Map Neural Network
Self-Organizing Competitive Learning Neural Network
Hamming Neural Network
WTA Neural Network
LVQ Neural Network
Adaptive Resonance Theory
Feedback Neural Networks / Chapter 7:
Elman Neural Network
Hopfield Neural Networks
Simulated Annealing
Boltzmann Machine
Design and Customization of Artificial Neural Networks / Chapter 8:
Mixture of Experts
Hierarchical Mixture of Experts
Neural Network Controller
Customization of Neural Networks
Learning Theory, Architecture Choice and Interpretability of Neural Networks / Chapter 9:
Learning Theory
Architecture Choice
Interpretability of Neural Networks
Mathematical Foundations of Artificial Neural Networks / Chapter 10:
Bayesian Methods
Randomization, Bootstrap and Monte Carlo Techniques
Stochastic Process and Stochastic Differential Equation
Interpolation
Function Approximation
Optimization Methods
Manifold and Differential Geometry
Functional Analysis / 8:
Algebraic Topology / 9:
Motion Stability / 10:
Entropy of a System / 11:
Distance or Similarity Measures / 12:
Matlab Neural Network Toolkit / Chapter 11:
Functions of Perceptron
Functions of Linear Neural Networks
Functions of BP Neural Network
Functions of Self-Organizing Neural Networks
Functions of Radial Basis Neural Networks
Functions of Probabilistic Neural Network
Function of Generalized Regression Neural Network
Functions of Hopfield Neural Network
Function of Elman Neural Network
Applications of Artificial Neural Networks in Ecology / Part II:
Dynamic Modeling of Survival Process / Chapter 12:
Model Description
Data Description
Results
Discussion
Simulation of Plant Growth Process / Chapter 13:
Data Source
Simulation of Food Intake Dynamics / Chapter 14:
Species Richness Estimation and Sampling Data Documentation / Chapter 15:
Estimation of Plant Species Richness on Grassland
Documentation of Sampling Data of Invertebrates
Modeling Arthropod Abundance from Plant Composition of Grassland Community / Chapter 16:
Pattern Recognition and Classification of Ecosystems and Functional Groups / Chapter 17:
Modeling Spatial Distribution of Arthropods / Chapter 18:
Risk Assessment of Species Invasion and Establishment / Chapter 19:
Invasion Risk Assessment Based on Species Assemblages
Determination of Abiotic Factors Influencing Species Invasion
Prediction of Surface Ozone / Chapter 20:
BP Prediction of Daily Total Ozone
MLP Prediction of Hourly Ozone Levels
Modeling Dispersion and Distribution of Oxide and Nitrate Pollutants / Chapter 21:
Modeling Nitrogen Dioxide Dispersion
Simulation of Nitrate Distribution in Ground Water
Modeling Terrestrial Biomass / Chapter 22:
Estimation of Aboveground Grassland Biomass
Estimation of Trout Biomass
References
Index
30.

Book
by Bing J. Sheu, Joongho Choi ; with special assistance from Robert C. Chang ... [et al.]
Publication info: Boston : Kluwer Academic Publishers, c1995  xix, 559 p. ; 25 cm
Series: The Kluwer international series in engineering and computer science ; SECS 304
31.

Book
Stephen I. Gallant
Publication info: Cambridge, Mass. : MIT Press, c1993  xvi, 365 p. ; 24 cm
Series: Bradford book
Table of contents:
Foreword
Basics / I:
Introduction and Important Definitions / 1:
Why Connectionist Models? / 1.1:
The Grand Goals of Al and Its Current Impasse / 1.1.1:
The Computational Appeal of Neural Networks / 1.1.2:
The Structure of Connectionist Models / 1.2:
Network Properties / 1.2.1:
Cell Properties / 1.2.2:
Dynamic Properties / 1.2.3:
Learning Properties / 1.2.4:
Two Fundamental Models: Multilayer Perceptrons (MLP's) and Backpropagation Networks (BPN's) / 1.3:
Multilayer Perceptrons (MLP's) / 1.3.1:
Backpropagation Networks (BPN's) / 1.3.2:
Gradient Descent / 1.4:
The Algorithm / 1.4.1:
Practical Problems / 1.4.2:
Comments / 1.4.3:
Historic and Bibliographic Notes / 1.5:
Early Work / 1.5.1:
The Decline of the Perceptron / 1.5.2:
The Rise of Connectionist Research / 1.5.3:
Other Bibliographic Notes / 1.5.4:
Exercises / 1.6:
Programming Project / 1.7:
Representation Issues / 2:
Representing Boolean Functions / 2.1:
Equivalence of {+1, -1,0} and {1,0} Forms / 2.1.1:
Single-Cell Models / 2.1.2:
Nonseparable Functions / 2.1.3:
Representing Arbitrary Boolean Functions / 2.1.4:
Representing Boolean Functions Using Continuous Connectionist Models / 2.1.5:
Distributed Representations / 2.2:
Definition / 2.2.1:
Storage Efficiency and Resistance to Error / 2.2.2:
Superposition / 2.2.3:
Learning / 2.2.4:
Feature Spaces and ISA Relations / 2.3:
Feature Spaces / 2.3.1:
Concept-Function Unification / 2.3.2:
ISA Relations / 2.3.3:
Binding / 2.3.4:
Representing Real-Valued Functions / 2.4:
Approximating Real Numbers by Collections of Discrete Cells / 2.4.1:
Precision / 2.4.2:
Approximating Real Numbers by Collections of Continuous Cells / 2.4.3:
Example: Taxtime! / 2.5:
Programming Projects / 2.6:
Learning In Single-Layer Models / II:
Perceptron Learning and the Pocket Algorithm / 3:
Perceptron Learning for Separable Sets of Training Examples / 3.1:
Statement of the Problem / 3.1.1:
Computing the Bias / 3.1.2:
The Perceptron Learning Algorithm / 3.1.3:
Perceptron Convergence Theorem / 3.1.4:
The Perceptron Cycling Theorem / 3.1.5:
The Pocket Algorithm for Nonseparable Sets of Training Examples / 3.2:
Problem Statement / 3.2.1:
Perceptron Learning Is Poorly Behaved / 3.2.2:
The Pocket Algorithm / 3.2.3:
Ratchets / 3.2.4:
Examples / 3.2.5:
Noisy and Contradictory Sets of Training Examples / 3.2.6:
Rules / 3.2.7:
Implementation Considerations / 3.2.8:
Proof of the Pocket Convergence Theorem / 3.2.9:
Khachiyan's Linear Programming Algorithm / 3.3:
Winner-Take-All Groups or Linear Machines / 4:
Generalizes Single-Cell Models / 4.1:
Perceptron Learning for Winner-Take-All Groups / 4.2:
The Pocket Algorithm for Winner-Take-All Groups / 4.3:
Kessler's Construction, Perceptron Cycling, and the Pocket Algorithm Proof / 4.4:
Independent Training / 4.5:
Autoassociators and One-Shot Learning / 5:
Linear Autoassociators and the Outer-Product Training Rule / 5.1:
Anderson's BSB Model / 5.2:
Hopfield's Model / 5.3:
Energy / 5.3.1:
The Traveling Salesman Problem / 5.4:
The Cohen-Grossberg Theorem / 5.5:
Kanerva's Model / 5.6:
Autoassociative Filtering for Feedforward Networks / 5.7:
Concluding Remarks / 5.8:
Mean Squared Error (MSE) Algorithms / 6:
Motivation / 6.1:
MSE Approximations / 6.2:
The Widrow-Hoff Rule or LMS Algorithm / 6.3:
Number of Training Examples Required / 6.3.1:
Adaline / 6.4:
Adaptive Noise Cancellation / 6.5:
Decision-Directed Learning / 6.6:
Unsupervised Learning / 7:
Introduction / 7.1:
No Teacher / 7.1.1:
Clustering Algorithms / 7.1.2:
k-Means Clustering / 7.2:
Topology-Preserving Maps / 7.3:
Example / 7.3.1:
Demonstrations / 7.3.4:
Dimensionality, Neighborhood Size, and Final Comments / 7.3.5:
Art1 / 7.4:
Important Aspects of the Algorithm / 7.4.1:
Art2 / 7.5:
Using Clustering Algorithms for Supervised Learning / 7.6:
Labeling Clusters / 7.6.1:
ARTMAP or Supervised ART / 7.6.2:
Learning In Multilayer Models / III:
The Distributed Method and Radial Basis Functions / 8:
Rosenblatt's Approach / 8.1:
The Distributed Method / 8.2:
Cover's Formula / 8.2.1:
Robustness-Preserving Functions / 8.2.2:
Hepatobiliary Data / 8.3:
Artificial Data / 8.3.2:
How Many Cells? / 8.4:
Pruning Data / 8.4.1:
Leave-One-Out / 8.4.2:
Radial Basis Functions / 8.5:
A Variant: The Anchor Algorithm / 8.6:
Scaling, Multiple Outputs, and Parallelism / 8.7:
Scaling Properties / 8.7.1:
Multiple Outputs and Parallelism / 8.7.2:
A Computational Speedup for Learning / 8.7.3:
Computational Learning Theory and the BRD Algorithm / 9:
Introduction to Computational Learning Theory / 9.1:
PAC-Learning / 9.1.1:
Bounded Distributed Connectionist Networks / 9.1.2:
Probabilistic Bounded Distributed Concepts / 9.1.3:
A Learning Algorithm for Probabilistic Bounded Distributed Concepts / 9.2:
The BRD Theorem / 9.3:
Polynomial Learning / 9.3.1:
Noisy Data and Fallback Estimates / 9.4:
Vapnik-Chervonenkis Bounds / 9.4.1:
Hoeffding and Chernoff Bounds / 9.4.2:
Pocket Algorithm / 9.4.3:
Additional Training Examples / 9.4.4:
Bounds for Single-Layer Algorithms / 9.5:
Fitting Data by Limiting the Number of Iterations / 9.6:
Discussion / 9.7:
Exercise / 9.8:
Constructive Algorithms / 10:
The Tower and Pyramid Algorithms / 10.1:
The Tower Algorithm / 10.1.1:
Proof of Convergence / 10.1.2:
A Computational Speedup / 10.1.4:
The Pyramid Algorithm / 10.1.5:
The Cascade-Correlation Algorithm / 10.2:
The Tiling Algorithm / 10.3:
The Upstart Algorithm / 10.4:
Other Constructive Algorithms and Pruning / 10.5:
Easy Learning Problems / 10.6:
Decomposition / 10.6.1:
Expandable Network Problems / 10.6.2:
Limits of Easy Learning / 10.6.3:
Backpropagation / 11:
The Backpropagation Algorithm / 11.1:
Statement of the Algorithm / 11.1.1:
A Numerical Example / 11.1.2:
Derivation / 11.2:
Practical Considerations / 11.3:
Determination of Correct Outputs / 11.3.1:
Initial Weights / 11.3.2:
Choice of r / 11.3.3:
Momentum / 11.3.4:
Network Topology / 11.3.5:
Local Minima / 11.3.6:
Activations in [0,1] versus [-1, 1] / 11.3.7:
Update after Every Training Example / 11.3.8:
Other Squashing Functions / 11.3.9:
NP-Completeness / 11.4:
Overuse / 11.5:
Interesting Intermediate Cells / 11.5.2:
Continuous Outputs / 11.5.3:
Probability Outputs / 11.5.4:
Using Backpropagation to Train Multilayer Perceptrons / 11.5.5:
Backpropagation: Variations and Applications / 12:
NETtalk / 12.1:
Input and Output Representations / 12.1.1:
Experiments / 12.1.2:
Backpropagation through Time / 12.1.3:
Handwritten Character Recognition / 12.3:
Neocognitron Architecture / 12.3.1:
The Network / 12.3.2:
Robot Manipulator with Excess Degrees of Freedom / 12.3.3:
The Problem / 12.4.1:
Training the Inverse Network / 12.4.2:
Plan Units / 12.4.3:
Simulated Annealing and Boltzmann Machines / 13:
Simulated Annealing / 13.1:
Boltzmann Machines / 13.2:
The Boltzmann Model / 13.2.1:
Boltzmann Learning / 13.2.2:
The Boltzmann Algorithm and Noise Clamping / 13.2.3:
Example: The 4-2-4 Encoder Problem / 13.2.4:
Remarks / 13.3:
Neural Network Expert Systems / IV:
Expert Systems and Neural Networks / 14:
Expert Systems / 14.1:
What Is an Expert System? / 14.1.1:
Why Expert Systems? / 14.1.2:
Historically Important Expert Systems / 14.1.3:
Critique of Conventional Expert Systems / 14.1.4:
Neural Network Decision Systems / 14.2:
Example: Diagnosis of Acute Coronary Occlusion / 14.2.1:
Example: Autonomous Navigation / 14.2.2:
Other Examples / 14.2.3:
Decision Systems versus Expert Systems / 14.2.4:
MACIE, and an Example Problem / 14.3:
Diagnosis and Treatment of Acute Sarcophagal Disease / 14.3.1:
Network Generation / 14.3.2:
Sample Run of Macie / 14.3.3:
Real-Valued Variables and Winner-Take-All Groups / 14.3.4:
Not-Yet-Known versus Unavailable Variables / 14.3.5:
Applicability of Neural Network Expert Systems / 14.4:
Details of the MACIE System / 15:
Inferencing and Forward Chaining / 15.1:
Discrete Multilayer Perceptron Models / 15.1.1:
Continuous Variables / 15.1.2:
Winner-Take-All Groups / 15.1.3:
Using Prior Probabilities for More Aggressive Inferencing / 15.1.4:
Confidence Estimation / 15.2:
A Confidence Heuristic Prior to Inference / 15.2.1:
Confidence in Inferences / 15.2.2:
Information Acquisition and Backward Chaining / 15.3:
Concluding Comment / 15.4:
Noise, Redundancy, Fault Detection, and Bayesian Decision Theory / 16:
The High Tech Lemonade Corporation's Problem / 16.1:
The Deep Model and the Noise Model / 16.2:
Generating the Expert System / 16.3:
Probabilistic Analysis / 16.4:
Noisy Single-Pattern Boolean Fault Detection Problems / 16.5:
Convergence Theorem / 16.6:
Extracting Rules from Networks / 17:
Why Rules? / 17.1:
What Kind of Rules? / 17.2:
Criteria / 17.2.1:
Inference Justifications versus Rule Sets / 17.2.2:
Which Variables in Conditions / 17.2.3:
Inference Justifications / 17.3:
MACIE's Algorithm / 17.3.1:
The Removal Algorithm / 17.3.2:
Key Factor Justifications / 17.3.3:
Justifications for Continuous Models / 17.3.4:
Rule Sets / 17.4:
Limiting the Number of Conditions / 17.4.1:
Approximating Rules / 17.4.2:
Conventional + Neural Network Expert Systems / 17.5:
Debugging an Expert System Knowledge Base / 17.5.1:
The Short-Rule Debugging Cycle / 17.5.2:
Representation Comparisons / Appendix:
DNF Expressions and Polynomial Representability / A.1:
DNF Expressions / A.1.1:
Polynomial Representability / A.1.2:
Space Comparison of MLP and DNF Representations / A.1.3:
Speed Comparison of MLP and DNF Representations / A.1.4:
MLP versus DNF Representations / A.1.5:
Decision Trees / A.2:
Representing Decision Trees by MLP's / A.2.1:
Speed Comparison / A.2.2:
Decision Trees versus MLP's / A.2.3:
p-l Diagrams / A.3:
Symmetric Functions and Depth Complexity / A.4:
Bibliography / A.5:
Index
32.

Book
Larry R. Medsker
Publication info: Boston : Kluwer Academic, c1994  240 p. ; 25 cm
33.

Book
Bart Kosko, editor
Publication info: Englewood Cliffs, N.J. : Prentice Hall, c1992  xv, 399 p. ; 25 cm