1.

Book
Multiobjective Optimization: Principles and Case Studies / Yann Collette, Patrick Siarry
Publication: Berlin ; New York : Springer, c2003. ix, 293 p. ; 24 cm
Series: Decision Engineering
Table of contents:
Introduction: Multiobjective Optimization and Domination
Scalar Methods
Interactive Methods
Fuzzy Methods
Methods which use a Metaheuristic
Decision Aid Methods
Performance Measurement
Test Functions of Multiobjective Optimization Methods
Attempt to Classify Multiobjective Optimization Methods
Qualification of Scientific Software / Case Study No.1:
Study of the Extension of a Telecommunication Network / Case Study No.2:
Multicriteria Decision Tools to Deal with Bids / Case Study No.3:
Conclusion
References
Index
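The first entry above opens with multiobjective optimization and Pareto domination. As a minimal illustration of that concept (my own sketch, not code from the book; all names are hypothetical): under minimization, one objective vector dominates another when it is no worse in every component and strictly better in at least one.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every component, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Filter a finite set of objective vectors down to the non-dominated ones."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 4) is dominated by (2, 3); (5, 5) is dominated by every other point.
front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
```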
2.

Book
Nonlinear Optimization / Andrzej Ruszczyński
Publication: Princeton, N.J. : Princeton University Press, c2006. xii, 448 p. ; 24 cm
3.

Book
Numerical Optimization / Jorge Nocedal, Stephen J. Wright
Publication: New York : Springer, c2006. xxii, 664 p. ; 24 cm
Series: Springer series in operations research and financial engineering
Table of contents:
Preface
Preface to the Second Edition
Introduction / 1:
Mathematical Formulation
Example: A Transportation Problem
Continuous versus Discrete Optimization
Constrained and Unconstrained Optimization
Global and Local Optimization
Stochastic and Deterministic Optimization
Convexity
Optimization Algorithms
Notes and References
Fundamentals of Unconstrained Optimization / 2:
What Is a Solution? / 2.1:
Recognizing a Local Minimum
Nonsmooth Problems
Overview of Algorithms / 2.2:
Two Strategies: Line Search and Trust Region
Search Directions for Line Search Methods
Models for Trust-Region Methods
Scaling
Exercises
Line Search Methods / 3:
Step Length / 3.1:
The Wolfe Conditions
The Goldstein Conditions
Sufficient Decrease and Backtracking
Convergence of Line Search Methods / 3.2:
Rate of Convergence / 3.3:
Convergence Rate of Steepest Descent
Newton's Method
Quasi-Newton Methods
Newton's Method with Hessian Modification / 3.4:
Eigenvalue Modification
Adding a Multiple of the Identity
Modified Cholesky Factorization
Modified Symmetric Indefinite Factorization
Step-Length Selection Algorithms / 3.5:
Interpolation
Initial Step Length
A Line Search Algorithm for the Wolfe Conditions
Trust-Region Methods / 4:
Outline of the Trust-Region Approach
Algorithms Based on the Cauchy Point / 4.1:
The Cauchy Point
Improving on the Cauchy Point
The Dogleg Method
Two-Dimensional Subspace Minimization
Global Convergence / 4.2:
Reduction Obtained by the Cauchy Point
Convergence to Stationary Points
Iterative Solution of the Subproblem / 4.3:
The Hard Case
Proof of Theorem 4.1
Convergence of Algorithms Based on Nearly Exact Solutions
Local Convergence of Trust-Region Newton Methods / 4.4:
Other Enhancements / 4.5:
Trust Regions in Other Norms
Conjugate Gradient Methods / 5:
The Linear Conjugate Gradient Method / 5.1:
Conjugate Direction Methods
Basic Properties of the Conjugate Gradient Method
A Practical Form of the Conjugate Gradient Method
Preconditioning
Practical Preconditioners
Nonlinear Conjugate Gradient Methods / 5.2:
The Fletcher-Reeves Method
The Polak-Ribière Method and Variants
Quadratic Termination and Restarts
Behavior of the Fletcher-Reeves Method
Numerical Performance
Quasi-Newton Methods / 6:
The BFGS Method / 6.1:
Properties of the BFGS Method
Implementation
The SR1 Method / 6.2:
Properties of SR1 Updating
The Broyden Class / 6.3:
Convergence Analysis / 6.4:
Global Convergence of the BFGS Method
Superlinear Convergence of the BFGS Method
Convergence Analysis of the SR1 Method
Large-Scale Unconstrained Optimization / 7:
Inexact Newton Methods / 7.1:
Local Convergence of Inexact Newton Methods
Line Search Newton-CG Method
Trust-Region Newton-CG Method
Preconditioning the Trust-Region Newton-CG Method
Trust-Region Newton-Lanczos Method
Limited-Memory Quasi-Newton Methods / 7.2:
Limited-Memory BFGS
Relationship with Conjugate Gradient Methods
General Limited-Memory Updating
Compact Representation of BFGS Updating
Unrolling the Update
Sparse Quasi-Newton Updates / 7.3:
Algorithms for Partially Separable Functions / 7.4:
Perspectives and Software / 7.5:
Calculating Derivatives / 8:
Finite-Difference Derivative Approximations / 8.1:
Approximating the Gradient
Approximating a Sparse Jacobian
Approximating the Hessian
Approximating a Sparse Hessian
Automatic Differentiation / 8.2:
An Example
The Forward Mode
The Reverse Mode
Vector Functions and Partial Separability
Calculating Jacobians of Vector Functions
Calculating Hessians: Forward Mode
Calculating Hessians: Reverse Mode
Current Limitations
Derivative-Free Optimization / 9:
Finite Differences and Noise / 9.1:
Model-Based Methods / 9.2:
Interpolation and Polynomial Bases
Updating the Interpolation Set
A Method Based on Minimum-Change Updating
Coordinate and Pattern-Search Methods / 9.3:
Coordinate Search Method
Pattern-Search Methods
A Conjugate-Direction Method / 9.4:
Nelder-Mead Method / 9.5:
Implicit Filtering / 9.6:
Least-Squares Problems / 10:
Background / 10.1:
Linear Least-Squares Problems / 10.2:
Algorithms for Nonlinear Least-Squares Problems / 10.3:
The Gauss-Newton Method
Convergence of the Gauss-Newton Method
The Levenberg-Marquardt Method
Implementation of the Levenberg-Marquardt Method
Convergence of the Levenberg-Marquardt Method
Methods for Large-Residual Problems
Orthogonal Distance Regression / 10.4:
Nonlinear Equations / 11:
Local Algorithms / 11.1:
Newton's Method for Nonlinear Equations
Broyden's Method
Tensor Methods
Practical Methods / 11.2:
Merit Functions
Continuation/Homotopy Methods / 11.3:
Motivation
Practical Continuation Methods
Theory of Constrained Optimization / 12:
Local and Global Solutions
Smoothness
Examples / 12.1:
A Single Equality Constraint
A Single Inequality Constraint
Two Inequality Constraints
Tangent Cone and Constraint Qualifications / 12.2:
First-Order Optimality Conditions / 12.3:
First-Order Optimality Conditions: Proof / 12.4:
Relating the Tangent Cone and the First-Order Feasible Direction Set
A Fundamental Necessary Condition
Farkas' Lemma
Proof of Theorem 12.1
Second-Order Conditions / 12.5:
Second-Order Conditions and Projected Hessians
Other Constraint Qualifications / 12.6:
A Geometric Viewpoint / 12.7:
Lagrange Multipliers and Sensitivity / 12.8:
Duality / 12.9:
Linear Programming: The Simplex Method / 13:
Linear Programming
Optimality and Duality / 13.1:
Optimality Conditions
The Dual Problem
Geometry of the Feasible Set / 13.2:
Bases and Basic Feasible Points
Vertices of the Feasible Polytope
The Simplex Method / 13.3:
Outline
A Single Step of the Method
Linear Algebra in the Simplex Method / 13.4:
Other Important Details / 13.5:
Pricing and Selection of the Entering Index
Starting the Simplex Method
Degenerate Steps and Cycling
The Dual Simplex Method / 13.6:
Presolving / 13.7:
Where Does the Simplex Method Fit? / 13.8:
Linear Programming: Interior-Point Methods / 14:
Primal-Dual Methods / 14.1:
The Central Path
Central Path Neighborhoods and Path-Following Methods
Practical Primal-Dual Algorithms / 14.2:
Corrector and Centering Steps
Step Lengths
Starting Point
A Practical Algorithm
Solving the Linear Systems
Other Primal-Dual Algorithms and Extensions / 14.3:
Other Path-Following Methods
Potential-Reduction Methods
Extensions
Fundamentals of Algorithms for Nonlinear Constrained Optimization / 15:
Categorizing Optimization Algorithms / 15.1:
The Combinatorial Difficulty of Inequality-Constrained Problems / 15.2:
Elimination of Variables / 15.3:
Simple Elimination using Linear Constraints
General Reduction Strategies for Linear Constraints
Effect of Inequality Constraints
Merit Functions and Filters / 15.4:
Filters
The Maratos Effect / 15.5:
Second-Order Correction and Nonmonotone Techniques / 15.6:
Nonmonotone (Watchdog) Strategy
Quadratic Programming / 16:
Equality-Constrained Quadratic Programs / 16.1:
Properties of Equality-Constrained QPs
Direct Solution of the KKT System / 16.2:
Factoring the Full KKT System
Schur-Complement Method
Null-Space Method
Iterative Solution of the KKT System / 16.3:
CG Applied to the Reduced System
The Projected CG Method
Inequality-Constrained Problems / 16.4:
Optimality Conditions for Inequality-Constrained Problems
Degeneracy
Active-Set Methods for Convex QPs / 16.5:
Specification of the Active-Set Method for Convex QP
Further Remarks on the Active-Set Method
Finite Termination of Active-Set Algorithm on Strictly Convex QPs
Updating Factorizations
Interior-Point Methods / 16.6:
Solving the Primal-Dual System
Step Length Selection
A Practical Primal-Dual Method
The Gradient Projection Method / 16.7:
Cauchy Point Computation
Subspace Minimization
Penalty and Augmented Lagrangian Methods / 17:
The Quadratic Penalty Method / 17.1:
Algorithmic Framework
Convergence of the Quadratic Penalty Method
Ill Conditioning and Reformulations
Nonsmooth Penalty Functions / 17.2:
A Practical ℓ₁ Penalty Method
A General Class of Nonsmooth Penalty Methods
Augmented Lagrangian Method: Equality Constraints / 17.3:
Motivation and Algorithmic Framework
Properties of the Augmented Lagrangian
Practical Augmented Lagrangian Methods / 17.4:
Bound-Constrained Formulation
Linearly Constrained Formulation
Unconstrained Formulation
Sequential Quadratic Programming / 18:
Local SQP Method / 18.1:
SQP Framework
Inequality Constraints
Preview of Practical SQP Methods / 18.2:
IQP and EQP
Enforcing Convergence
Algorithmic Development / 18.3:
Handling Inconsistent Linearizations
Full Quasi-Newton Approximations
Reduced-Hessian Quasi-Newton Approximations
Second-Order Correction
A Practical Line Search SQP Method / 18.4:
Trust-Region SQP Methods / 18.5:
A Relaxation Method for Equality-Constrained Optimization
Sℓ₁QP (Sequential ℓ₁ Quadratic Programming)
Sequential Linear-Quadratic Programming (SLQP)
A Technique for Updating the Penalty Parameter
Nonlinear Gradient Projection / 18.6:
Interior-Point Methods for Nonlinear Programming / 19:
Two Interpretations / 19.1:
A Basic Interior-Point Algorithm / 19.2:
Primal vs. Primal-Dual System / 19.3:
Updating the Barrier Parameter
Handling Nonconvexity and Singularity
Step Acceptance: Merit Functions and Filters
Quasi-Newton Approximations
Feasible Interior-Point Methods
A Line Search Interior-Point Method / 19.4:
A Trust-Region Interior-Point Method / 19.5:
An Algorithm for Solving the Barrier Problem
Step Computation
Lagrange Multipliers Estimates and Step Acceptance
Description of a Trust-Region Interior-Point Method
The Primal Log-Barrier Method / 19.6:
Global Convergence Properties / 19.7:
Failure of the Line Search Approach
Modified Line Search Methods
Global Convergence of the Trust-Region Approach
Superlinear Convergence / 19.8:
Background Material / A:
Elements of Linear Algebra / A.1:
Vectors and Matrices
Norms
Subspaces
Eigenvalues, Eigenvectors, and the Singular-Value Decomposition
Determinant and Trace
Matrix Factorizations: Cholesky, LU, QR
Symmetric Indefinite Factorization
Sherman-Morrison-Woodbury Formula
Interlacing Eigenvalue Theorem
Error Analysis and Floating-Point Arithmetic
Conditioning and Stability
Elements of Analysis, Geometry, Topology / A.2:
Sequences
Rates of Convergence
Topology of the Euclidean Space Rⁿ
Convex Sets in Rⁿ
Continuity and Limits
Derivatives
Directional Derivatives
Mean Value Theorem
Implicit Function Theorem
Order Notation
Root-Finding for Scalar Equations
A Regularization Procedure / B:
References
Index
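Chapter 3 of the Nocedal–Wright entry lists "Sufficient Decrease and Backtracking". As a hedged sketch of that idea (a generic textbook construction, not the authors' code; all function names are mine): a backtracking search shrinks a trial step until the Armijo sufficient-decrease condition f(x + αp) ≤ f(x) + cα∇f(x)ᵀp holds.

```python
def backtracking(f, grad, x, p, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink alpha geometrically until the Armijo condition
    f(x + alpha*p) <= f(x) + c*alpha*grad(x).p is satisfied."""
    fx = f(x)
    # directional derivative; negative when p is a descent direction
    slope = sum(g * pi for g, pi in zip(grad(x), p))
    while f([xi + alpha * pi for xi, pi in zip(x, p)]) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# Usage: steepest descent with backtracking on f(x, y) = x^2 + 10*y^2.
f = lambda v: v[0] ** 2 + 10 * v[1] ** 2
grad = lambda v: [2 * v[0], 20 * v[1]]

x = [1.0, 1.0]
for _ in range(200):
    p = [-g for g in grad(x)]                  # steepest-descent direction
    a = backtracking(f, grad, x, p)
    x = [xi + a * pi for xi, pi in zip(x, p)]
```

After a couple of hundred iterations the iterate sits very close to the minimizer at the origin; the c = 1e-4 and ρ = 0.5 defaults are the conventional textbook choices, not prescribed values.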
4.

Book
Mathematical Optimization and Economic Theory / Michael D. Intriligator
Publication: Philadelphia, Pa. : Society for Industrial and Applied Mathematics, c2002. xix, 508 p. ; 23 cm
Series: Classics in applied mathematics ; 39
Table of contents:
Preface to the Classics Edition
Preface
Introduction / Part 1:
Economizing and the Economy / 1:
The Economizing Problem / 1.1:
Institutions of the Economy / 1.2:
Economics / 1.3:
Static Optimization / Part 2:
The Mathematical Programming Problem / 2:
Formal Statement of the Problem / 2.1:
Types of Maxima, the Weierstrass Theorem, and the Local-Global Theorem / 2.2:
Geometry of the Problem / 2.3:
Classical Programming / 3:
The Unconstrained Case / 3.1:
The Method of Lagrange Multipliers / 3.2:
The Interpretation of the Lagrange Multipliers / 3.3:
Problems
Nonlinear Programming / 4:
The Case of No Inequality Constraints / 4.1:
The Kuhn-Tucker Conditions / 4.2:
The Kuhn-Tucker Theorem / 4.3:
Solution Algorithms / 4.4:
Linear Programming / 5:
The Dual Problems of Linear Programming / 5.1:
The Lagrangian Approach; Existence, Duality and Complementary Slackness Theorems / 5.2:
The Interpretation of the Dual / 5.3:
The Simplex Algorithm / 5.4:
Game Theory / 6:
Classification and Description of Games / 6.1:
Two-person, Zero-sum Games / 6.2:
Two-person Nonzero-sum Games / 6.3:
Cooperative Games / 6.4:
Games With Infinitely Many Players / 6.5:
Applications of Static Optimization / Part 3:
Theory of the Household / 7:
Commodity Space / 7.1:
The Preference Relation / 7.2:
The Neoclassical Problem of the Household / 7.3:
Comparative Statics of the Household / 7.4:
Revealed Preference / 7.5:
von Neumann-Morgenstern Utility / 7.6:
Theory of the Firm / 8:
The Production Function / 8.1:
The Neoclassical Theory of the Firm / 8.2:
Comparative Statics of the Firm / 8.3:
Imperfect Competition: Monopoly and Monopsony / 8.4:
Competition Among the Few: Oligopoly and Oligopsony / 8.5:
General Equilibrium / 9:
The Classical Approach: Counting Equations and Unknowns / 9.1:
The Input-Output Linear Programming Approach / 9.2:
The Neoclassical Excess Demand Approach / 9.3:
Stability of Equilibrium / 9.4:
The von Neumann Model of an Expanding Economy / 9.5:
Welfare Economics / 10:
The Geometry of the Problem in the 2 x 2 x 2 Case / 10.1:
Competitive Equilibrium and Pareto Optimality / 10.2:
Market Failure / 10.3:
Optimality Over Time / 10.4:
Dynamic Optimization / Part 4:
The Control Problem / 11:
Some Special Cases / 11.1:
Types of Control / 11.3:
The Control Problem as One of Programming in an Infinite Dimensional Space; the Generalized Weierstrass Theorem / 11.4:
Calculus of Variations / 12:
Euler Equation / 12.1:
Necessary Conditions / 12.2:
Transversality Condition / 12.3:
Constraints / 12.4:
Dynamic Programming / 13:
The Principle of Optimality and Bellman's Equation / 13.1:
Dynamic Programming and the Calculus of Variations / 13.2:
Dynamic Programming Solution of Multistage Optimization Problems / 13.3:
Maximum Principle / 14:
Costate Variables, the Hamiltonian, and the Maximum Principle / 14.1:
The Interpretation of the Costate Variables / 14.2:
The Maximum Principle and the Calculus of Variations / 14.3:
The Maximum Principle and Dynamic Programming / 14.4:
Examples / 14.5:
Differential Games / 15:
Two-Person Deterministic Continuous Differential Games / 15.1:
Two-Person Zero-Sum Differential Games / 15.2:
Pursuit Games / 15.3:
Coordination Differential Games / 15.4:
Noncooperative Differential Games / 15.5:
Applications of Dynamic Optimization / Part 5:
Optimal Economic Growth / 16:
The Neoclassical Growth Model / 16.1:
Neoclassical Optimal Economic Growth / 16.2:
The Two Sector Growth Model / 16.3:
Heterogeneous Capital Goods / 16.4:
Appendices
Analysis / Appendix A:
Sets / A.1:
Relations and Functions / A.2:
Metric Spaces / A.3:
Vector Spaces / A.4:
Convex Sets and Functions / A.5:
Differential Calculus / A.6:
Differential Equations / A.7:
Matrices / Appendix B:
Basic Definitions and Examples / B.1:
Some Special Matrices / B.2:
Matrix Relations and Operations / B.3:
Scalar Valued Functions Defined on Matrices / B.4:
Inverse Matrix / B.5:
Linear Equations and Linear Inequalities / B.6:
Linear Transformations; Characteristic Roots and Vectors / B.7:
Quadratic Forms / B.8:
Matrix Derivatives / B.9:
Index
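Section 3.2 of the Intriligator entry covers the method of Lagrange multipliers. A minimal worked example (my own illustration, not from the book): to minimize x² + y² subject to x + y = 1, form L = x² + y² + λ(x + y − 1) and solve the first-order stationarity system 2x + λ = 0, 2y + λ = 0, x + y = 1, which here is a small linear system.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (M[r][n] - sum(M[r][c] * sol[c] for c in range(r + 1, n))) / M[r][r]
    return sol

# Stationarity of L = x^2 + y^2 + lam*(x + y - 1):
#   2x + lam = 0,  2y + lam = 0,  x + y = 1
x, y, lam = solve3([[2.0, 0.0, 1.0], [0.0, 2.0, 1.0], [1.0, 1.0, 0.0]], [0.0, 0.0, 1.0])
```

The solution x = y = 1/2 with λ = −1 illustrates the multiplier's interpretation as the sensitivity of the optimal value to the constraint level.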
5.

Book
Handbook of Applied Optimization / edited by Panos M. Pardalos and Mauricio G.C. Resende
Publication: New York ; Tokyo : Oxford University Press, 2002. xviii, 1095 p. ; 26 cm
6.

Book
Foundations of Optimization / Osman Güler
Publication: New York, NY ; London : Springer, c2010. xviii, 439 p. ; 25 cm
Series: Graduate texts in mathematics ; 258
Table of contents:
Calculus in Vector Spaces
Unconstrained Optimization
Convex Analysis
Theory of Convex Polyhedra
Some Basic Optimization Algorithms
Theory of Lagrange Multipliers
Semi-infinite Programming
Duality Theory and Convex Programming
References
Index
7.

Book
Optimization: Theory and Practice / Wilhelm Forst, Dieter Hoffmann
Publication: New York : Springer, c2010. xviii, 402 p. ; 27 cm
Series: Springer undergraduate texts in mathematics and technology
8.

Book
Arithmetic Optimization Techniques for Hardware and Software Design / Ryan Kastner, Anup Hosangadi, Farzan Fallah
Publication: Cambridge, UK : Cambridge University Press, 2010. viii, 187 p. ; 26 cm
Table of contents:
List of abbreviations
Preface
Introduction / 1:
Overview / 1.1:
Salient features of this book / 1.2:
Organization / 1.3:
Target audience / 1.4:
Use of polynomial expressions and linear systems / 2:
Chapter overview / 2.1:
Approximation algorithms / 2.2:
Computer graphics / 2.3:
Digital signal processing (DSP) / 2.4:
Cryptography / 2.5:
Address calculation in data intensive applications / 2.6:
Summary / 2.7:
Software compilation / 3:
Basic software compiler structure / 3.1:
Algebraic transformations in optimizing software compilers / 3.3:
Hardware synthesis / 4:
Hardware synthesis design flow / 4.1:
System specification / 4.3:
Program representation / 4.4:
Algorithmic optimization / 4.5:
Resource allocation / 4.6:
Operation scheduling / 4.7:
Resource binding / 4.8:
Case study: FIR filter / 4.9:
Fundamentals of digital arithmetic / 5:
Basic number representation / 5.1:
Two-operand addition / 5.3:
Multiple-operand addition / 5.4:
Polynomial expressions / 6:
Problem formulation / 6.1:
Related optimization techniques / 6.4:
Algebraic optimization of arithmetic expressions / 6.5:
Experimental results / 6.6:
Optimal solutions for reducing the number of operations in arithmetic expressions / 6.7:
Linear systems / 7:
Linear system basics / 7.1:
Single-constant multiplication (SCM) / 7.3:
Multiple-constant multiplication (MCM) / 7.5:
Overview of linear system optimizations / 7.6:
Transformation of a linear system into a polynomial expression / 7.7:
Optimization for synthesis using two-operand adders / 7.8:
FIR filter optimization / 7.9:
Synthesis for multiple-operand addition / 7.10:
Delay-aware optimization / 7.11:
Software optimization / 7.12:
Index
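The Kastner–Hosangadi–Fallah entry is about cutting the number of operations in arithmetic expressions. A classic instance of such an algebraic optimization (my own illustration, not taken from the book) is Horner's rule, which evaluates a degree-n polynomial with n multiply-adds instead of the roughly 2n−1 multiplications of the naive term-by-term form.

```python
def horner(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + ... + coeffs[n]*x**n using the
    factored form a0 + x*(a1 + x*(a2 + ...)): n multiplies, n adds."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def naive(coeffs, x):
    """Term-by-term evaluation; computing x**i alone costs about i multiplies."""
    return sum(a * x ** i for i, a in enumerate(coeffs))

value = horner([1, 2, 3], 2)   # 1 + 2*2 + 3*2**2
```

The same factor-and-share idea (common-subexpression elimination over algebraic identities) is what the book generalizes to multivariate polynomials and constant-multiplication blocks.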
9.

Book
Optimization Methods and Applications / edited by Xiaoqi Yang, Kok Lay Teo and Lou Caccetta
Publication: Dordrecht : Kluwer Academic Publishers, c2001. xxxvii, 412 p. ; 25 cm
Series: Applied optimization ; vol. 52
10.

Book
Optimization of Chemical Processes / Thomas F. Edgar, David M. Himmelblau, Leon S. Lasdon
Publication: New York : McGraw-Hill, 2001. xv, 651 p. ; 24 cm
Series: McGraw-Hill chemical engineering series
McGraw-Hill higher education