1. Vectors
Basics
Scalar Product, Norms and Angles
Projection on a line
Orthogonalization: the Gram-Schmidt procedure
Hyperplanes and half-spaces
Linear Functions
Application: data visualization by projection on a line
Exercises
Image Compression via Least-Squares
2. Matrices
Matrix Products
Special Matrices
Control of a unit mass
The QR decomposition of a matrix
Matrix Inverses
Linear Maps
Matrix Norms
Applications
3. Linear Equations
4. Least-Squares and Variants
Ordinary Least-Squares Problem
Variants of the Least-Squares Problem
Applications of Least-Squares
5. Eigenvalues
Definitions
Spectral Theorem
Positive Semi-Definite Matrices
Principal Component Analysis
6. Singular Values
Least-Squares and SVD
Applications of SVD
7. Overview
8. Optimization Models
9. Solving triangular systems of equations: backward substitution example
Set of solutions to the least-squares problem via QR decomposition
Regularized Least-Squares Problem
A two-dimensional toy optimization problem
Linearly Constrained Least-Squares Problems
10. Convexity
11. LP and QP
12. SOCP
13. Robust LP
14. GP
15. SDP
16. Non-Convex Problems
17. Weak Duality
18. Strong Duality
19. Antenna Arrays
20. Localization
21. Circuit Design
Temperatures at different airports
Bag-of-words representation of text
Basis in high dimension
Dimension of an affine subspace
Bag-of-words representation of text: measure of document similarity
Gram matrix
Rate of return of a financial portfolio
Beer-Lambert Law in Absorption Spectrometry
Two orthogonal vectors
Cauchy-Schwarz Inequality
Dual Norm
Definition: vector norm
Sample and weighted mean, expected value
Sample variance and standard deviation
Lines in high dimension
Optimal set of Least-Squares via SVD
Euclidean projection on a set
Dimension of hyperplanes
A hyperplane in 3D
Linear maps: equivalent definitions
Gradient of a function
Gradient of a linear function
Linearization of a non-linear function
Log-Sum-Exp (LSE) Function and Properties
Hessian of a Function
Power laws
Power law model fitting
A symmetric matrix
Edge weight matrix of a graph
Kernel Least-Squares
Laplacian matrix of a graph
Quadratic functions in two variables
Hessian of a quadratic function
Representation of a two-variable quadratic function
A diagonal matrix and its associated quadratic form
A squared linear function
Quadratic Approximation of the Log-Sum-Exp Function
A theorem on positive semidefinite forms and eigenvalues
Sample covariance matrix
Determinant of a square matrix
Rayleigh quotients
The SVD theorem
Largest singular value norm of a matrix
Matrix Properties via SVD
Solving Linear Equations via SVD
Eigenvalue Decomposition of a Symmetric Matrix
Spectral theorem: eigenvalue decomposition for symmetric matrices
Low-rank approximation of a matrix
Applications of SVD: market data analysis
Linear regression via least-squares
Gauss’ Bet
Auto-Regressive (AR) models for time-series prediction
Portfolio Optimization via Linearly Constrained Least-Squares
Nomenclature
Global vs. local minima
Nomenclature of a toy 2D optimization problem
A toy 2D optimization problem: geometric view via the epigraph form
Sensitivity Analysis
Some Limitations of OLS
Low-rank approximation of a 4 × 5 matrix via its SVD
Singular value decomposition of a 4 × 5 matrix
Pseudo-Inverse of a Matrix
Pseudo-inverse of a 4 × 5 matrix via its SVD
Applications of SVD: image compression
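Several of the example pages listed above (such as "Low-rank approximation of a matrix" and "Applications of SVD: image compression") lend themselves to short numerical experiments. As an illustrative sketch only, not code from the book, here is a NumPy rank-k approximation built from the truncated SVD; the 4 × 5 matrix mirrors the size used in the examples above, though its entries are made up for this demo:

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A in the Frobenius norm,
    obtained by truncating the singular value decomposition."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# A 4 x 5 matrix of the size used in the book's examples
# (entries here are arbitrary, chosen so that A has rank 2).
A = np.arange(20, dtype=float).reshape(4, 5)

A1 = low_rank_approx(A, 1)
# By the Eckart-Young theorem, the Frobenius-norm error of the best
# rank-1 approximation equals the second singular value of A.
err = np.linalg.norm(A - A1)
```

The same function, applied column-block by column-block to a grayscale image array, gives the image-compression demo referenced in the list: storing `U[:, :k]`, `s[:k]`, and `Vt[:k, :]` costs k(m + n + 1) numbers instead of mn.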
Hyper-Textbook: Optimization Models and Applications Copyright © by L. El Ghaoui. All Rights Reserved.