Preface
Preface to the Second Edition
1  Introduction
   Mathematical Formulation
   Example: A Transportation Problem
   Continuous versus Discrete Optimization
   Constrained and Unconstrained Optimization
   Global and Local Optimization
   Stochastic and Deterministic Optimization
   Convexity
   Optimization Algorithms
   Notes and References
2  Fundamentals of Unconstrained Optimization
   2.1  What Is a Solution?
      Recognizing a Local Minimum
      Nonsmooth Problems
   2.2  Overview of Algorithms
      Two Strategies: Line Search and Trust Region
      Search Directions for Line Search Methods
      Models for Trust-Region Methods
      Scaling
   Exercises
3  Line Search Methods
   3.1  Step Length
      The Wolfe Conditions
      The Goldstein Conditions
      Sufficient Decrease and Backtracking
   3.2  Convergence of Line Search Methods
   3.3  Rate of Convergence
      Convergence Rate of Steepest Descent
      Newton's Method
      Quasi-Newton Methods
   3.4  Newton's Method with Hessian Modification
      Eigenvalue Modification
      Adding a Multiple of the Identity
      Modified Cholesky Factorization
      Modified Symmetric Indefinite Factorization
   3.5  Step-Length Selection Algorithms
      Interpolation
      Initial Step Length
      A Line Search Algorithm for the Wolfe Conditions
4  Trust-Region Methods
   Outline of the Trust-Region Approach
   4.1  Algorithms Based on the Cauchy Point
      The Cauchy Point
      Improving on the Cauchy Point
      The Dogleg Method
      Two-Dimensional Subspace Minimization
   4.2  Global Convergence
      Reduction Obtained by the Cauchy Point
      Convergence to Stationary Points
   4.3  Iterative Solution of the Subproblem
      The Hard Case
      Proof of Theorem 4.1
      Convergence of Algorithms Based on Nearly Exact Solutions
   4.4  Local Convergence of Trust-Region Newton Methods
   4.5  Other Enhancements
      Trust Regions in Other Norms
5  Conjugate Gradient Methods
   5.1  The Linear Conjugate Gradient Method
      Conjugate Direction Methods
      Basic Properties of the Conjugate Gradient Method
      A Practical Form of the Conjugate Gradient Method
      Preconditioning
      Practical Preconditioners
   5.2  Nonlinear Conjugate Gradient Methods
      The Fletcher-Reeves Method
      The Polak-Ribière Method and Variants
      Quadratic Termination and Restarts
      Behavior of the Fletcher-Reeves Method
      Numerical Performance
6  Quasi-Newton Methods
   6.1  The BFGS Method
      Properties of the BFGS Method
      Implementation
   6.2  The SR1 Method
      Properties of SR1 Updating
   6.3  The Broyden Class
   6.4  Convergence Analysis
      Global Convergence of the BFGS Method
      Superlinear Convergence of the BFGS Method
      Convergence Analysis of the SR1 Method
7  Large-Scale Unconstrained Optimization
   7.1  Inexact Newton Methods
      Local Convergence of Inexact Newton Methods
      Line Search Newton-CG Method
      Trust-Region Newton-CG Method
      Preconditioning the Trust-Region Newton-CG Method
      Trust-Region Newton-Lanczos Method
   7.2  Limited-Memory Quasi-Newton Methods
      Limited-Memory BFGS
      Relationship with Conjugate Gradient Methods
      General Limited-Memory Updating
      Compact Representation of BFGS Updating
      Unrolling the Update
   7.3  Sparse Quasi-Newton Updates
   7.4  Algorithms for Partially Separable Functions
   7.5  Perspectives and Software
8  Calculating Derivatives
   8.1  Finite-Difference Derivative Approximations
      Approximating the Gradient
      Approximating a Sparse Jacobian
      Approximating the Hessian
      Approximating a Sparse Hessian
   8.2  Automatic Differentiation
      An Example
      The Forward Mode
      The Reverse Mode
      Vector Functions and Partial Separability
      Calculating Jacobians of Vector Functions
      Calculating Hessians: Forward Mode
      Calculating Hessians: Reverse Mode
      Current Limitations
9  Derivative-Free Optimization
   9.1  Finite Differences and Noise
   9.2  Model-Based Methods
      Interpolation and Polynomial Bases
      Updating the Interpolation Set
      A Method Based on Minimum-Change Updating
   9.3  Coordinate and Pattern-Search Methods
      Coordinate Search Method
      Pattern-Search Methods
   9.4  A Conjugate-Direction Method
   9.5  Nelder-Mead Method
   9.6  Implicit Filtering
10  Least-Squares Problems
   10.1  Background
   10.2  Linear Least-Squares Problems
   10.3  Algorithms for Nonlinear Least-Squares Problems
      The Gauss-Newton Method
      Convergence of the Gauss-Newton Method
      The Levenberg-Marquardt Method
      Implementation of the Levenberg-Marquardt Method
      Convergence of the Levenberg-Marquardt Method
      Methods for Large-Residual Problems
   10.4  Orthogonal Distance Regression
11  Nonlinear Equations
   11.1  Local Algorithms
      Newton's Method for Nonlinear Equations
      Broyden's Method
      Tensor Methods
   11.2  Practical Methods
      Merit Functions
   11.3  Continuation/Homotopy Methods
      Motivation
      Practical Continuation Methods
12  Theory of Constrained Optimization
   Local and Global Solutions
   Smoothness
   12.1  Examples
      A Single Equality Constraint
      A Single Inequality Constraint
      Two Inequality Constraints
   12.2  Tangent Cone and Constraint Qualifications
   12.3  First-Order Optimality Conditions
   12.4  First-Order Optimality Conditions: Proof
      Relating the Tangent Cone and the First-Order Feasible Direction Set
      A Fundamental Necessary Condition
      Farkas' Lemma
      Proof of Theorem 12.1
   12.5  Second-Order Conditions
      Second-Order Conditions and Projected Hessians
   12.6  Other Constraint Qualifications
   12.7  A Geometric Viewpoint
   12.8  Lagrange Multipliers and Sensitivity
   12.9  Duality
13  Linear Programming: The Simplex Method
   Linear Programming
   13.1  Optimality and Duality
      Optimality Conditions
      The Dual Problem
   13.2  Geometry of the Feasible Set
      Bases and Basic Feasible Points
      Vertices of the Feasible Polytope
   13.3  The Simplex Method
      Outline
      A Single Step of the Method
   13.4  Linear Algebra in the Simplex Method
   13.5  Other Important Details
      Pricing and Selection of the Entering Index
      Starting the Simplex Method
      Degenerate Steps and Cycling
   13.6  The Dual Simplex Method
   13.7  Presolving
   13.8  Where Does the Simplex Method Fit?
14  Linear Programming: Interior-Point Methods
   14.1  Primal-Dual Methods
      The Central Path
      Central Path Neighborhoods and Path-Following Methods
   14.2  Practical Primal-Dual Algorithms
      Corrector and Centering Steps
      Step Lengths
      Starting Point
      A Practical Algorithm
      Solving the Linear Systems
   14.3  Other Primal-Dual Algorithms and Extensions
      Other Path-Following Methods
      Potential-Reduction Methods
      Extensions
15  Fundamentals of Algorithms for Nonlinear Constrained Optimization
   15.1  Categorizing Optimization Algorithms
   15.2  The Combinatorial Difficulty of Inequality-Constrained Problems
   15.3  Elimination of Variables
      Simple Elimination Using Linear Constraints
      General Reduction Strategies for Linear Constraints
      Effect of Inequality Constraints
   15.4  Merit Functions and Filters
      Filters
   15.5  The Maratos Effect
   15.6  Second-Order Correction and Nonmonotone Techniques
      Nonmonotone (Watchdog) Strategy
16  Quadratic Programming
   16.1  Equality-Constrained Quadratic Programs
      Properties of Equality-Constrained QPs
   16.2  Direct Solution of the KKT System
      Factoring the Full KKT System
      Schur-Complement Method
      Null-Space Method
   16.3  Iterative Solution of the KKT System
      CG Applied to the Reduced System
      The Projected CG Method
   16.4  Inequality-Constrained Problems
      Optimality Conditions for Inequality-Constrained Problems
      Degeneracy
   16.5  Active-Set Methods for Convex QPs
      Specification of the Active-Set Method for Convex QP
      Further Remarks on the Active-Set Method
      Finite Termination of Active-Set Algorithm on Strictly Convex QPs
      Updating Factorizations
   16.6  Interior-Point Methods
      Solving the Primal-Dual System
      Step Length Selection
      A Practical Primal-Dual Method
   16.7  The Gradient Projection Method
      Cauchy Point Computation
      Subspace Minimization
17  Penalty and Augmented Lagrangian Methods
   17.1  The Quadratic Penalty Method
      Algorithmic Framework
      Convergence of the Quadratic Penalty Method
      Ill Conditioning and Reformulations
   17.2  Nonsmooth Penalty Functions
      A Practical ℓ₁ Penalty Method
      A General Class of Nonsmooth Penalty Methods
   17.3  Augmented Lagrangian Method: Equality Constraints
      Motivation and Algorithmic Framework
      Properties of the Augmented Lagrangian
   17.4  Practical Augmented Lagrangian Methods
      Bound-Constrained Formulation
      Linearly Constrained Formulation
      Unconstrained Formulation
18  Sequential Quadratic Programming
   18.1  Local SQP Method
      SQP Framework
      Inequality Constraints
   18.2  Preview of Practical SQP Methods
      IQP and EQP
      Enforcing Convergence
   18.3  Algorithmic Development
      Handling Inconsistent Linearizations
      Full Quasi-Newton Approximations
      Reduced-Hessian Quasi-Newton Approximations
      Second-Order Correction
   18.4  A Practical Line Search SQP Method
   18.5  Trust-Region SQP Methods
      A Relaxation Method for Equality-Constrained Optimization
      Sℓ₁QP (Sequential ℓ₁ Quadratic Programming)
      Sequential Linear-Quadratic Programming (SLQP)
      A Technique for Updating the Penalty Parameter
   18.6  Nonlinear Gradient Projection
19  Interior-Point Methods for Nonlinear Programming
   19.1  Two Interpretations
   19.2  A Basic Interior-Point Algorithm
   19.3  Primal vs. Primal-Dual System
      Updating the Barrier Parameter
      Handling Nonconvexity and Singularity
      Step Acceptance: Merit Functions and Filters
      Quasi-Newton Approximations
      Feasible Interior-Point Methods
   19.4  A Line Search Interior-Point Method
   19.5  A Trust-Region Interior-Point Method
      An Algorithm for Solving the Barrier Problem
      Step Computation
      Lagrange Multiplier Estimates and Step Acceptance
      Description of a Trust-Region Interior-Point Method
   19.6  The Primal Log-Barrier Method
   19.7  Global Convergence Properties
      Failure of the Line Search Approach
      Modified Line Search Methods
      Global Convergence of the Trust-Region Approach
   19.8  Superlinear Convergence
A  Background Material
   A.1  Elements of Linear Algebra
      Vectors and Matrices
      Norms
      Subspaces
      Eigenvalues, Eigenvectors, and the Singular-Value Decomposition
      Determinant and Trace
      Matrix Factorizations: Cholesky, LU, QR
      Symmetric Indefinite Factorization
      Sherman-Morrison-Woodbury Formula
      Interlacing Eigenvalue Theorem
      Error Analysis and Floating-Point Arithmetic
      Conditioning and Stability
   A.2  Elements of Analysis, Geometry, Topology
      Sequences
      Rates of Convergence
      Topology of the Euclidean Space Rⁿ
      Convex Sets in Rⁿ
      Continuity and Limits
      Derivatives
      Directional Derivatives
      Mean Value Theorem
      Implicit Function Theorem
      Order Notation
      Root-Finding for Scalar Equations
B  A Regularization Procedure
References
Index