Foreword
Preface
List of Tables
List of Figures
1 Introduction
1.1 Pattern Recognition in Brief
1.2.1 Data acquisition
1.2.2 Feature selection/extraction
1.2.3 Classification
1.3 Knowledge Discovery in Databases (KDD)
1.4 Data Mining
1.4.1 Data mining tasks
1.4.2 Data mining tools
1.4.3 Applications of data mining
1.5 Different Perspectives of Data Mining
1.5.1 Database perspective
1.5.2 Statistical perspective
1.5.3 Pattern recognition perspective
1.5.4 Research issues and challenges
1.6 Scaling Pattern Recognition Algorithms to Large Data Sets
1.6.1 Data reduction
1.6.2 Dimensionality reduction
1.6.3 Active learning
1.6.4 Data partitioning
1.6.5 Granular computing
1.6.6 Efficient search algorithms
1.7 Significance of Soft Computing in KDD
1.8 Scope of the Book
2 Multiscale Data Condensation
2.1 Data Condensation Algorithms
2.2.1 Condensed nearest neighbor rule
2.2.2 Learning vector quantization
2.2.3 Astrahan's density-based method
2.3 Multiscale Representation of Data
2.4 Nearest Neighbor Density Estimate
2.5 Multiscale Data Condensation Algorithm
2.6 Experimental Results and Comparisons
2.6.1 Density estimation
2.6.2 Test of statistical significance
2.6.3 Classification: Forest cover data
2.6.4 Clustering: Satellite image data
2.6.5 Rule generation: Census data
2.6.6 Study on scalability
2.6.7 Choice of scale parameter
2.7 Summary
3 Unsupervised Feature Selection
3.1 Feature Extraction
3.3 Feature Selection
3.3.1 Filter approach
3.3.2 Wrapper approach
3.4 Feature Selection Using Feature Similarity (FSFS)
3.4.1 Feature similarity measures
3.4.2 Feature selection through clustering
3.5 Feature Evaluation Indices
3.5.1 Supervised indices
3.5.2 Unsupervised indices
3.5.3 Representation entropy
3.6 Comparison: Classification and clustering performance
3.6.2 Redundancy reduction: Quantitative study
3.6.3 Effect of cluster size
4 Active Learning Using Support Vector Machine
4.1 Support Vector Machine
4.3 Incremental Support Vector Learning with Multiple Points
4.4 Statistical Query Model of Learning
4.4.1 Query strategy
4.4.2 Confidence factor of support vector set
4.5 Learning Support Vectors with Statistical Queries
4.6 Experimental Results and Comparison
4.6.1 Classification accuracy and training time
4.6.2 Effectiveness of the confidence factor
4.6.3 Margin distribution
5 Rough-fuzzy Case Generation
5.1 Soft Granular Computing
5.3 Rough Sets
5.3.1 Information systems
5.3.2 Indiscernibility and set approximation
5.3.3 Reducts
5.3.4 Dependency rule generation
5.4 Linguistic Representation of Patterns and Fuzzy Granulation
5.5 Rough-fuzzy Case Generation Methodology
5.5.1 Thresholding and rule generation
5.5.2 Mapping dependency rules to cases
5.5.3 Case retrieval
6 Rough-fuzzy Clustering
6.1 Clustering Methodologies
6.3 Algorithms for Clustering Large Data Sets
6.3.1 CLARANS: Clustering large applications based upon randomized search
6.3.2 BIRCH: Balanced iterative reducing and clustering using hierarchies
6.3.3 DBSCAN: Density-based spatial clustering of applications with noise
6.3.4 STING: Statistical information grid
6.4 CEMMiSTRI: Clustering using EM, minimal spanning tree and rough-fuzzy initialization
6.4.1 Mixture model estimation via the EM algorithm
6.4.2 Rough set initialization of mixture parameters
6.4.3 Mapping reducts to mixture parameters
6.4.4 Graph-theoretic clustering of Gaussian components
6.5 Multispectral Image Segmentation
6.6.1 Discretization of image bands
6.6.2 Integration of EM, MST and rough sets
6.6.3 Index for segmentation quality
6.6.4 Experimental results and comparison
7 Rough Self-Organizing Map
7.1 Self-Organizing Maps (SOM)
7.2.1 Learning
7.2.2 Effect of neighborhood
7.3 Incorporation of Rough Sets in SOM (RSOM)
7.3.1 Unsupervised rough set rule generation
7.3.2 Mapping rough set rules to network weights
7.4 Rule Generation and Evaluation
7.4.1 Extraction methodology
7.4.2 Evaluation indices
7.5 Clustering and quantization error
7.5.2 Performance of rules
8 Classification, Rule Generation and Evaluation using Modular Rough-fuzzy MLP
8.1 Ensemble Classifiers
8.3 Association Rules
8.3.1 Rule generation algorithms
8.3.2 Rule interestingness
8.4 Classification Rules
8.5 Rough-fuzzy MLP
8.5.1 Fuzzy MLP
8.5.2 Rough set knowledge encoding
8.6 Modular Evolution of Rough-fuzzy MLP
8.6.1 Algorithm
8.6.2 Evolutionary design
8.7 Rule Extraction and Quantitative Evaluation
8.7.1 Rule extraction methodology
8.7.2 Quantitative measures
8.8 Rule extraction
A Role of Soft-Computing Tools in KDD
A.1 Fuzzy Sets
A.1.1 Clustering
A.1.2 Association rules
A.1.3 Functional dependencies
A.1.4 Data summarization
A.1.5 Web application
A.1.6 Image retrieval
A.2 Neural Networks
A.2.1 Clustering and self organization
A.2.3 Regression
A.3 Neuro-fuzzy Computing
A.4 Genetic Algorithms
A.5 Other Hybridizations
B Data Sets Used in Experiments
References
Index
About the Authors