An Overview of Language Processing / 1: |
Linguistics and Language Processing / 1.1: |
Applications of Language Processing / 1.2: |
The Different Domains of Language Processing / 1.3: |
Phonetics / 1.4: |
Lexicon and Morphology / 1.5: |
Syntax / 1.6: |
Syntax as Defined by Noam Chomsky / 1.6.1: |
Syntax as Relations and Dependencies / 1.6.2: |
Semantics / 1.7: |
Discourse and Dialogue / 1.8: |
Why Speech and Language Processing Are Difficult / 1.9: |
Ambiguity / 1.9.1: |
Models and Their Implementation / 1.9.2: |
An Example of Language Technology in Action: the Persona Project / 1.10: |
Overview of Persona / 1.10.1: |
The Persona's Modules / 1.10.2: |
Further Reading / 1.11: |
Corpus Processing Tools / 2: |
Corpora / 2.1: |
Types of Corpora / 2.1.1: |
Corpora and Lexicon Building / 2.1.2: |
Corpora as Knowledge Sources for the Linguist / 2.1.3: |
Finite-State Automata / 2.2: |
A Description / 2.2.1: |
Mathematical Definition of Finite-State Automata / 2.2.2: |
Finite-State Automata in Prolog / 2.2.3: |
Deterministic and Nondeterministic Automata / 2.2.4: |
Building a Deterministic Automaton from a Nondeterministic One / 2.2.5: |
Searching a String with a Finite-State Automaton / 2.2.6: |
Operations on Finite-State Automata / 2.2.7: |
Regular Expressions / 2.3: |
Repetition Metacharacters / 2.3.1: |
The Longest Match / 2.3.2: |
Character Classes / 2.3.3: |
Nonprintable Symbols or Positions / 2.3.4: |
Union and Boolean Operators / 2.3.5: |
Operator Combination and Precedence / 2.3.6: |
Programming with Regular Expressions / 2.4: |
Perl / 2.4.1: |
Matching / 2.4.2: |
Substitutions / 2.4.3: |
Translating Characters / 2.4.4: |
String Operators / 2.4.5: |
Back References / 2.4.6: |
Finding Concordances / 2.5: |
Concordances in Prolog / 2.5.1: |
Concordances in Perl / 2.5.2: |
Approximate String Matching / 2.6: |
Edit Operations / 2.6.1: |
Minimum Edit Distance / 2.6.2: |
Searching Edits in Prolog / 2.6.3: |
Encoding, Entropy, and Annotation Schemes / 3: |
Encoding Texts / 3.1: |
Character Sets / 3.2: |
Representing Characters / 3.2.1: |
Unicode / 3.2.2: |
The Unicode Encoding Schemes / 3.2.3: |
Locales and Word Order / 3.3: |
Presenting Time, Numerical Information, and Ordered Words / 3.3.1: |
The Unicode Collation Algorithm / 3.3.2: |
Markup Languages / 3.4: |
A Brief Background / 3.4.1: |
An Outline of XML / 3.4.2: |
Writing a DTD / 3.4.3: |
Writing an XML Document / 3.4.4: |
Namespaces / 3.4.5: |
Codes and Information Theory / 3.5: |
Entropy / 3.5.1: |
Huffman Encoding / 3.5.2: |
Cross Entropy / 3.5.3: |
Perplexity and Cross Perplexity / 3.5.4: |
Entropy and Decision Trees / 3.6: |
Decision Trees / 3.6.1: |
Inducing Decision Trees Automatically / 3.6.2: |
Counting Words / 4: |
Counting Words and Word Sequences / 4.1: |
Words and Tokens / 4.2: |
What Is a Word? / 4.2.1: |
Breaking a Text into Words: Tokenization / 4.2.2: |
Tokenizing Texts / 4.3: |
Tokenizing Texts in Prolog / 4.3.1: |
Tokenizing Texts in Perl / 4.3.2: |
N-grams / 4.4: |
Some Definitions / 4.4.1: |
Counting Unigrams in Prolog / 4.4.2: |
Counting Unigrams with Perl / 4.4.3: |
Counting Bigrams with Perl / 4.4.4: |
Probabilistic Models of a Word Sequence / 4.5: |
The Maximum Likelihood Estimation / 4.5.1: |
Using ML Estimates with Nineteen Eighty-Four / 4.5.2: |
Smoothing N-gram Probabilities / 4.6: |
Sparse Data / 4.6.1: |
Laplace's Rule / 4.6.2: |
Good-Turing Estimation / 4.6.3: |
Using N-grams of Variable Length / 4.7: |
Linear Interpolation / 4.7.1: |
Back-off / 4.7.2: |
Quality of a Language Model / 4.8: |
Intuitive Presentation / 4.8.1: |
Entropy Rate / 4.8.2: |
Perplexity / 4.8.3: |
Collocations / 4.9: |
Word Preference Measurements / 4.9.1: |
Extracting Collocations with Perl / 4.9.2: |
Application: Retrieval and Ranking of Documents on the Web / 4.10: |
Words, Parts of Speech, and Morphology / 5: |
Words / 5.1: |
Parts of Speech / 5.1.1: |
Features / 5.1.2: |
Two Significant Parts of Speech: The Noun and the Verb / 5.1.3: |
Lexicons / 5.2: |
Encoding a Dictionary / 5.2.1: |
Building a Trie in Prolog / 5.2.2: |
Finding a Word in a Trie / 5.2.3: |
Morphology / 5.3: |
Morphemes / 5.3.1: |
Morphs / 5.3.2: |
Inflection and Derivation / 5.3.3: |
Language Differences / 5.3.4: |
Morphological Parsing / 5.4: |
Two-Level Model of Morphology / 5.4.1: |
Interpreting the Morphs / 5.4.2: |
Finite-State Transducers / 5.4.3: |
Conjugating a French Verb / 5.4.4: |
Prolog Implementation / 5.4.5: |
Operations on Finite-State Transducers / 5.4.6: |
Morphological Rules / 5.5: |
Two-Level Rules / 5.5.1: |
Rules and Finite-State Transducers / 5.5.2: |
Rule Composition: An Example with French Irregular Verbs / 5.5.3: |
Application Examples / 5.6: |
Part-of-Speech Tagging Using Rules / 6: |
Resolving Part-of-Speech Ambiguity / 6.1: |
A Manual Method / 6.1.1: |
Which Method to Use to Automatically Assign Parts of Speech / 6.1.2: |
Tagging with Rules / 6.2: |
Brill's Tagger / 6.2.1: |
Implementation in Prolog / 6.2.2: |
Deriving Rules Automatically / 6.2.3: |
Confusion Matrices / 6.2.4: |
Unknown Words / 6.3: |
Standardized Part-of-Speech Tagsets / 6.4: |
Multilingual Part-of-Speech Tags / 6.4.1: |
Parts of Speech for English / 6.4.2: |
An Annotation Scheme for Swedish / 6.4.3: |
Part-of-Speech Tagging Using Stochastic Techniques / 7: |
The Noisy Channel Model / 7.1: |
Presentation / 7.1.1: |
The N-gram Approximation / 7.1.2: |
Tagging a Sentence / 7.1.3: |
The Viterbi Algorithm: An Intuitive Presentation / 7.1.4: |
Markov Models / 7.2: |
Markov Chains / 7.2.1: |
Hidden Markov Models / 7.2.2: |
Three Fundamental Algorithms to Solve Problems with HMMs / 7.2.3: |
The Forward Procedure / 7.2.4: |
Viterbi Algorithm / 7.2.5: |
The Backward Procedure / 7.2.6: |
The Forward-Backward Algorithm / 7.2.7: |
Tagging with Decision Trees / 7.3: |
An Application of the Noisy Channel Model: Spell Checking / 7.4: |
A Second Application: Language Models for Machine Translation / 7.6: |
Parallel Corpora / 7.6.1: |
Alignment / 7.6.2: |
Translation / 7.6.3: |
Phrase-Structure Grammars in Prolog / 8: |
Using Prolog to Write Phrase-Structure Grammars / 8.1: |
Representing Chomsky's Syntactic Formalism in Prolog / 8.2: |
Constituents / 8.2.1: |
Tree Structures / 8.2.2: |
Phrase-Structure Rules / 8.2.3: |
The Definite Clause Grammar (DCG) Notation / 8.2.4: |
Parsing with DCGs / 8.3: |
Translating DCGs into Prolog Clauses / 8.3.1: |
Parsing and Generation / 8.3.2: |
Left-Recursive Rules / 8.3.3: |
Parsing Ambiguity / 8.4: |
Using Variables / 8.5: |
Gender and Number Agreement / 8.5.1: |
Obtaining the Syntactic Structure / 8.5.2: |
Application: Tokenizing Texts Using DCG Rules / 8.6: |
Word Breaking / 8.6.1: |
Recognition of Sentence Boundaries / 8.6.2: |
Semantic Representation / 8.7: |
λ-Calculus / 8.7.1: |
Embedding λ-Expressions into DCG Rules / 8.7.2: |
Semantic Composition of Verbs / 8.7.3: |
An Application of Phrase-Structure Grammars and a Worked Example / 8.8: |
Partial Parsing / 9: |
Is Syntax Necessary? / 9.1: |
Word Spotting and Template Matching / 9.2: |
ELIZA / 9.2.1: |
Word Spotting in Prolog / 9.2.2: |
Multiword Detection / 9.3: |
Multiwords / 9.3.1: |
A Standard Multiword Annotation / 9.3.2: |
Detecting Multiwords with Rules / 9.3.3: |
Running the Program / 9.3.4: |
Noun Groups and Verb Groups / 9.4: |
Groups Versus Recursive Phrases / 9.4.1: |
DCG Rules to Detect Noun Groups / 9.4.2: |
DCG Rules to Detect Verb Groups / 9.4.3: |
Running the Rules / 9.4.4: |
Group Detection as a Tagging Problem / 9.5: |
Tagging Gaps / 9.5.1: |
Tagging Words / 9.5.2: |
Using Symbolic Rules / 9.5.3: |
Using Statistical Tagging / 9.5.4: |
Cascading Partial Parsers / 9.6: |
Elementary Analysis of Grammatical Functions / 9.7: |
Main Functions / 9.7.1: |
Extracting Other Groups / 9.7.2: |
An Annotation Scheme for Groups in French / 9.8: |
Application: The FASTUS System / 9.9: |
The Message Understanding Conferences / 9.9.1: |
The Syntactic Layers of the FASTUS System / 9.9.2: |
Evaluation of Information Extraction Systems / 9.9.3: |
Syntactic Formalisms / 10: |
Introduction / 10.1: |
Chomsky's Grammar in Syntactic Structures / 10.2: |
Constituency: A Formal Definition / 10.2.1: |
Transformations / 10.2.2: |
Transformations and Movements / 10.2.3: |
Gap Threading / 10.2.4: |
Gap Threading to Parse Relative Clauses / 10.2.5: |
Standardized Phrase Categories for English / 10.3: |
Unification-Based Grammars / 10.4: |
Representing Features in Prolog / 10.4.1: |
A Formalism for Features and Rules / 10.4.3: |
Features Organization / 10.4.4: |
Features and Unification / 10.4.5: |
A Unification Algorithm for Feature Structures / 10.4.6: |
Dependency Grammars / 10.5: |
Properties of a Dependency Graph / 10.5.1: |
Valence / 10.5.3: |
Dependencies and Functions / 10.5.4: |
Parsing Techniques / 11: |
Bottom-up Parsing / 11.2: |
The Shift-Reduce Algorithm / 11.2.1: |
Implementing Shift-Reduce Parsing in Prolog / 11.2.2: |
Differences Between Bottom-up and Top-down Parsing / 11.2.3: |
Chart Parsing / 11.3: |
Backtracking and Efficiency / 11.3.1: |
Structure of a Chart / 11.3.2: |
The Active Chart / 11.3.3: |
Modules of an Earley Parser / 11.3.4: |
The Earley Algorithm in Prolog / 11.3.5: |
The Earley Parser to Handle Left-Recursive Rules and Empty Symbols / 11.3.6: |
Probabilistic Parsing of Context-Free Grammars / 11.4: |
A Description of PCFGs / 11.5: |
The Bottom-up Chart / 11.5.1: |
The Cocke-Younger-Kasami Algorithm in Prolog / 11.5.2: |
Adding Probabilities to the CYK Parser / 11.5.3: |
Parser Evaluation / 11.6: |
Constituency-Based Evaluation / 11.6.1: |
Dependency-Based Evaluation / 11.6.2: |
Performance of PCFG Parsing / 11.6.3: |
Parsing Dependencies / 11.7: |
Dependency Rules / 11.7.1: |
Extending the Shift-Reduce Algorithm to Parse Dependencies / 11.7.2: |
Nivre's Parser in Prolog / 11.7.3: |
Finding Dependencies Using Constraints / 11.7.4: |
Parsing Dependencies Using Statistical Techniques / 11.7.5: |
Semantics and Predicate Logic / 12: |
Language Meaning and Logic: An Illustrative Example / 12.1: |
Formal Semantics / 12.3: |
First-Order Predicate Calculus to Represent the State of Affairs / 12.4: |
Variables and Constants / 12.4.1: |
Predicates / 12.4.2: |
Querying the Universe of Discourse / 12.5: |
Mapping Phrases onto Logical Formulas / 12.6: |
Representing Nouns and Adjectives / 12.6.1: |
Representing Noun Groups / 12.6.2: |
Representing Verbs and Prepositions / 12.6.3: |
The Case of Determiners / 12.7: |
Determiners and Logic Quantifiers / 12.7.1: |
Translating Sentences Using Quantifiers / 12.7.2: |
A General Representation of Sentences / 12.7.3: |
Compositionality to Translate Phrases to Logical Forms / 12.8: |
Translating the Noun Phrase / 12.8.1: |
Translating the Verb Phrase / 12.8.2: |
Augmenting the Database and Answering Questions / 12.9: |
Declarations / 12.9.1: |
Questions with Existential and Universal Quantifiers / 12.9.2: |
Prolog and Unknown Predicates / 12.9.3: |
Other Determiners and Questions / 12.9.4: |
Application: The Spoken Language Translator / 12.10: |
Translating Spoken Sentences / 12.10.1: |
Compositional Semantics / 12.10.2: |
Semantic Representation Transfer / 12.10.3: |
Lexical Semantics / 13: |
Beyond Formal Semantics / 13.1: |
La langue et la parole / 13.1.1: |
Language and the Structure of the World / 13.1.2: |
Lexical Structures / 13.2: |
Some Basic Terms and Concepts / 13.2.1: |
Ontological Organization / 13.2.2: |
Lexical Classes and Relations / 13.2.3: |
Semantic Networks / 13.2.4: |
Building a Lexicon / 13.3: |
The Lexicon and Word Senses / 13.3.1: |
Verb Models / 13.3.2: |
Definitions / 13.3.3: |
An Example of Exhaustive Lexical Organization: WordNet / 13.4: |
Nouns / 13.4.1: |
Adjectives / 13.4.2: |
Verbs / 13.4.3: |
Automatic Word Sense Disambiguation / 13.5: |
Senses as Tags / 13.5.1: |
Associating a Word with a Context / 13.5.2: |
Guessing the Topic / 13.5.3: |
Naive Bayes / 13.5.4: |
Using Constraints on Verbs / 13.5.5: |
Using Dictionary Definitions / 13.5.6: |
An Unsupervised Algorithm to Tag Senses / 13.5.7: |
Senses and Languages / 13.5.8: |
Case Grammars / 13.6: |
Cases in Latin / 13.6.1: |
Cases and Thematic Roles / 13.6.2: |
Parsing with Cases / 13.6.3: |
Semantic Grammars / 13.6.4: |
Extending Case Grammars / 13.7: |
FrameNet / 13.7.1: |
A Statistical Method to Identify Semantic Roles / 13.7.2: |
An Example of Case Grammar Application: EVAR / 13.8: |
EVAR's Ontology and Syntactic Classes / 13.8.1: |
Cases in EVAR / 13.8.2: |
Discourse / 14: |
Discourse: A Minimalist Definition / 14.2: |
A Description of Discourse / 14.2.1: |
Discourse Entities / 14.2.2: |
References: An Application-Oriented View / 14.3: |
References and Noun Phrases / 14.3.1: |
Finding Names - Proper Nouns / 14.3.2: |
Coreference / 14.4: |
Anaphora / 14.4.1: |
Solving Coreferences in an Example / 14.4.2: |
A Standard Coreference Annotation / 14.4.3: |
References: A More Formal View / 14.5: |
Generating Discourse Entities: The Existential Quantifier / 14.5.1: |
Retrieving Discourse Entities: Definite Descriptions / 14.5.2: |
Generating Discourse Entities: The Universal Quantifier / 14.5.3: |
Centering: A Theory on Discourse Structure / 14.6: |
Solving Coreferences / 14.7: |
A Simplistic Method: Using Syntactic and Semantic Compatibility / 14.7.1: |
Solving Coreferences with Shallow Grammatical Information / 14.7.2: |
Salience in a Multimodal Context / 14.7.3: |
Using a Machine-Learning Technique to Resolve Coreferences / 14.7.4: |
More Complex Phenomena: Ellipses / 14.7.5: |
Discourse and Rhetoric / 14.8: |
Ancient Rhetoric: An Outline / 14.8.1: |
Rhetorical Structure Theory / 14.8.2: |
Types of Relations / 14.8.3: |
Implementing Rhetorical Structure Theory / 14.8.4: |
Events and Time / 14.9: |
Events / 14.9.1: |
Event Types / 14.9.2: |
Temporal Representation of Events / 14.9.3: |
Events and Tenses / 14.9.4: |
TimeML, an Annotation Scheme for Time and Events / 14.10: |
Dialogue / 15: |
Why a Dialogue? / 15.1: |
Simple Dialogue Systems / 15.3: |
Dialogue Systems Based on Automata / 15.3.1: |
Dialogue Modeling / 15.3.2: |
Speech Acts: A Theory of Language Interaction / 15.4: |
Speech Acts and Human-Machine Dialogue / 15.5: |
Speech Acts as a Tagging Model / 15.5.1: |
Speech Acts Tags Used in the SUNDIAL Project / 15.5.2: |
Dialogue Parsing / 15.5.3: |
Interpreting Speech Acts / 15.5.4: |
EVAR: A Dialogue Application Using Speech Acts / 15.5.5: |
Taking Beliefs and Intentions into Account / 15.6: |
Representing Mental States / 15.6.1: |
The STRIPS Planning Algorithm / 15.6.2: |
Causality / 15.6.3: |
An Introduction to Prolog / A: |
A Short Background / A.1: |
Basic Features of Prolog / A.2: |
Facts / A.2.1: |
Terms / A.2.2: |
Queries / A.2.3: |
Logical Variables / A.2.4: |
Shared Variables / A.2.5: |
Data Types in Prolog / A.2.6: |
Rules / A.2.7: |
Running a Program / A.3: |
Unification / A.4: |
Substitution and Instances / A.4.1: |
Terms and Unification / A.4.2: |
The Herbrand Unification Algorithm / A.4.3: |
Example / A.4.4: |
The Occurs-Check / A.4.5: |
Resolution / A.5: |
Modus Ponens / A.5.1: |
A Resolution Algorithm / A.5.2: |
Derivation Trees and Backtracking / A.5.3: |
Tracing and Debugging / A.6: |
Cuts, Negation, and Related Predicates / A.7: |
Cuts / A.7.1: |
Negation / A.7.2: |
The once/1 Predicate / A.7.3: |
Lists / A.8: |
Some List-Handling Predicates / A.9: |
The member/2 Predicate / A.9.1: |
The append/3 Predicate / A.9.2: |
The delete/3 Predicate / A.9.3: |
The intersection/3 Predicate / A.9.4: |
The reverse/2 Predicate / A.9.5: |
The Mode of an Argument / A.9.6: |
Operators and Arithmetic / A.10: |
Operators / A.10.1: |
Arithmetic Operations / A.10.2: |
Comparison Operators / A.10.3: |
Lists and Arithmetic: The length/2 Predicate / A.10.4: |
Lists and Comparison: The quicksort/2 Predicate / A.10.5: |
Some Other Built-in Predicates / A.11: |
Type Predicates / A.11.1: |
Term Manipulation Predicates / A.11.2: |
Handling Run-Time Errors and Exceptions / A.12: |
Dynamically Accessing and Updating the Database / A.13: |
Accessing a Clause: The clause/2 Predicate / A.13.1: |
Dynamic and Static Predicates / A.13.2: |
Adding a Clause: The asserta/1 and assertz/1 Predicates / A.13.3: |
Removing Clauses: The retract/1 and abolish/2 Predicates / A.13.4: |
Handling Unknown Predicates / A.13.5: |
All-Solutions Predicates / A.14: |
Fundamental Search Algorithms / A.15: |
Representing the Graph / A.15.1: |
Depth-First Search / A.15.2: |
Breadth-First Search / A.15.3: |
A* Search / A.15.4: |
Input/Output / A.16: |
Reading and Writing Characters with Edinburgh Prolog / A.16.1: |
Reading and Writing Terms with Edinburgh Prolog / A.16.2: |
Opening and Closing Files with Edinburgh Prolog / A.16.3: |
Reading and Writing Characters with Standard Prolog / A.16.4: |
Reading and Writing Terms with Standard Prolog / A.16.5: |
Opening and Closing Files with Standard Prolog / A.16.6: |
Writing Loops / A.16.7: |
Developing Prolog Programs / A.17: |
Presentation Style / A.17.1: |
Improving Programs / A.17.2: |
Index |
References |