Properties

  • Existence: the range and rank of a matrix
  • Unicity: the nullspace of a matrix
  • Fundamental theorem of linear algebra
  • Matrix inverses

Consider the linear equation in x \in \mathbb{R}^{n}:

    \[Ax=y,\]

where A \in \mathbb{R}^{m \times n} and y \in \mathbb{R}^{m} are given.

 

Existence: range and rank of a matrix

Range

The range (or, image) of an m \times n matrix A is defined as the following subset of \mathbb{R}^{m}:

    \[R(A) := \{Ax : x \in \mathbb{R}^{n} \}.\]

The range describes the vectors y=Ax that can be attained in the output space by an arbitrary choice of a vector x in the input space. The range is simply the span of the columns of A.

If y \notin R(A), we say that the linear equation Ax=y is infeasible.

The Matlab function orth accepts a matrix A as input, and returns a matrix U, the columns of which span the range of the matrix A, and are mutually orthogonal. Hence, U^{T}U=I_r, where r is the dimension of the range.

Matlab syntax
>> U = orth(A); % columns of U span the range of A, and U'*U = identity

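The following is a sketch of the same computation in Python with NumPy (the text's examples use Matlab); range_basis is a hypothetical helper built from the singular value decomposition, not a library function, and the matrix is an arbitrary example:

```python
import numpy as np

def range_basis(A, tol=1e-10):
    # Orthonormal basis of R(A), analogous to Matlab's orth: keep the
    # left singular vectors whose singular values exceed the tolerance.
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))  # numerical rank of A
    return U[:, :r]

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])  # second column = 2 * first, so the rank is 1

U = range_basis(A)
print(U.shape)   # (3, 1): the range is one-dimensional
print(U.T @ U)   # identity of size r = 1
```

Projecting the columns of A onto the span of U recovers them exactly, confirming that the columns of U span the range.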

Rank

The dimension of the range is called the rank of the matrix. As we will see later, the rank cannot exceed either dimension of the matrix A: r \le \min(m, n). A matrix is said to be full rank if r = \min(m, n).

Matlab syntax
>> r = rank(A); % r is the rank of A

Note that the rank is a very ‘‘brittle’’ notion, in that small changes in the entries of the matrix can dramatically change its rank. For example, random matrices, such as those generated using the Matlab command rand, are full rank with probability one. We will later develop a better, more numerically reliable notion.

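To illustrate this brittleness (sketched in Python with NumPy rather than Matlab), an exactly rank-one matrix jumps to full rank under a tiny perturbation; the specific matrix and perturbation are arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])         # second row = 2 * first: rank 1

E = np.array([[0.0, 0.0],
              [0.0, 1e-10]])       # a tiny perturbation of one entry

r0 = np.linalg.matrix_rank(A)      # 1
r1 = np.linalg.matrix_rank(A + E)  # 2: the rank jumped
print(r0, r1)
```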

Full row rank matrices

The matrix A is said to be full row rank (or, onto) if the range is the whole output space, \mathbb{R}^{m}. The name ‘‘full row rank’’ comes from the fact that the rank equals the row dimension of A.

An equivalent condition for A to be full row rank is that the square, m \times m matrix AA^{T} is invertible, meaning that it has full rank, m. (Proof)
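A quick numerical check of this condition, sketched in Python with NumPy (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])  # 2 x 3, rank 2: full row rank

G = A @ A.T                      # the 2 x 2 matrix A*A'
print(np.linalg.matrix_rank(A))  # 2, equal to m
print(np.linalg.det(G))          # nonzero (equals 3): A*A' is invertible
```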

Unicity: nullspace of a matrix

Nullspace
The nullspace (or, kernel) of an m \times n matrix A is the following subspace of \mathbb{R}^{n}:

    \[N(A) := \{x \in \mathbb{R}^{n} : Ax=0 \}.\]

The nullspace describes the ambiguity in x given y=Ax: any z \in N(A) will be such that A(x+z)=y, so x cannot be determined by the sole knowledge of y if the nullspace is not reduced to the singleton \{0\}.

The Matlab function null accepts a matrix A as input, and returns a matrix U, the columns of which span the nullspace of the matrix A, and are mutually orthogonal. Hence, U^{T}U=I_q, where q is the dimension of the nullspace.
Matlab syntax
>> U = null(A); % columns of U span the nullspace of A, and U'*U = identity
Example: Nullspace of a simple matrix.
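A NumPy sketch of the same computation (null_basis is a hypothetical helper mirroring Matlab's null, built from the SVD; the matrix is an arbitrary example):

```python
import numpy as np

def null_basis(A, tol=1e-10):
    # Orthonormal basis of N(A): the right singular vectors
    # associated with (numerically) zero singular values.
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))     # numerical rank of A
    return Vt[r:].T              # n - r orthonormal columns

A = np.array([[1.0, 2.0, 3.0]])  # 1 x 3, rank 1, so dim N(A) = 2
Z = null_basis(A)
print(Z.shape)                   # (3, 2)
print(np.linalg.norm(A @ Z))     # ~0: every column z of Z satisfies A z = 0
```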
Full column rank matrices
The matrix A is said to be full column rank (or, one-to-one) if its nullspace is the singleton \{0\}. In this case, if we denote by a_i the n columns of A, the equation

    \[(A x=) \sum_{i=1}^n a_i x_i=0\]

has x=0 as the unique solution. Hence, A is one-to-one if and only if its columns are independent.
The term “one-to-one” comes from the fact that for such matrices, the condition y=A x uniquely determines x, since A x_1=y and A x_2=y implies A\left(x_1-x_2\right)=0, so that the solution is unique: x_1=x_2. The name “full column rank” comes from the fact that the rank equals the column dimension of A.

An equivalent condition for A to be full column rank is that the square, n \times n matrix A^{T}A is invertible, meaning that it has full rank, n. (Proof)
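A NumPy sketch checking this condition and the resulting uniqueness (the matrix and vector are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])      # 3 x 2 with linearly independent columns

G = A.T @ A                     # the 2 x 2 matrix A'A
print(np.linalg.det(G))         # nonzero (equals 3): A'A is invertible

x0 = np.array([1.0, -2.0])
y = A @ x0                      # a feasible right-hand side
x = np.linalg.solve(G, A.T @ y) # recover x from y via (A'A)^{-1} A' y
print(np.allclose(x, x0))       # True: the solution is unique
```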

Example: Nullspace of a transpose incidence matrix.

Fundamental theorem of linear algebra

A basic result of linear algebra is that the nullspace of an m \times n matrix A and the range of its transpose A^{T} are orthogonal sets, in the sense that any two vectors, one chosen in each set, are orthogonal. Further, their dimensions sum up to n. That is, the two sets form an orthogonal decomposition of the whole space \mathbb{R}^{n}.

Fundamental theorem of linear algebra

Let A \in \mathbb{R}^{m \times n}. The sets N(A) and R(A^{T}) form an orthogonal decomposition of \mathbb{R}^{n}, in the sense that any vector x \in \mathbb{R}^{n} can be written as

    \[x=y+z, y \in N(A), z \in R(A^{T}), y^{T}z=0.\]

Proof: the proof relies on the Singular Value Decomposition, seen later.


We will see later that the rank of a matrix is equal to that of its transpose. Thus, a consequence of the theorem is that

    \[\operatorname{Rank}(A) + \dim N(A) = n.\]

The interpretation of the above is that n is the number of degrees of freedom in the input x; \operatorname{Rank}(A) gives the number of degrees of freedom that remain in the output, while \dim N(A) counts the number of dimensions of x that are “crushed” to zero by the mapping x \rightarrow Ax.
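The decomposition in the theorem can be computed explicitly. Here is a NumPy sketch in which z is the projection of x onto R(A^{T}), computed (for a full row rank A) via the formula z = A^{T}(AA^{T})^{-1}Ax; the matrix and vector are arbitrary examples:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0]])  # 1 x 3; R(A') is spanned by (1, 1, 0)
x = np.array([3.0, 1.0, 2.0])

# Project x onto R(A'): z = A' (A A')^{-1} A x; the remainder y lies in N(A).
z = A.T @ np.linalg.solve(A @ A.T, A @ x)
y = x - z

print(np.linalg.norm(A @ y))     # ~0: y is in the nullspace of A
print(abs(y @ z))                # ~0: y and z are orthogonal
print(np.allclose(x, y + z))     # True: x = y + z
```

Here Rank(A) = 1 and dim N(A) = 2, so the two dimensions indeed sum to n = 3.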

Matrix inverses

Left and right inverses

It can be shown that an m \times n matrix A is full row rank if and only if it has a right inverse, that is, there exists a matrix B such that AB=I_m, where I_m is the m \times m identity matrix.

It can be shown that an m \times n matrix A is full column rank if and only if it has a left inverse, that is, there exists a matrix B such that BA=I_n, where I_n is the n \times n identity matrix. Hence, for a one-to-one matrix, the equation y=Ax has at most one solution; when a solution exists, it is given by x=By.
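These inverses admit explicit constructions: when A is full column rank, B=(A^{T}A)^{-1}A^{T} is one left inverse, and when A is full row rank, B=A^{T}(AA^{T})^{-1} is one right inverse. A NumPy sketch with arbitrary example matrices:

```python
import numpy as np

# Full column rank A: B = (A'A)^{-1} A' is one left inverse (B A = I_n).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])             # 3 x 2, full column rank
B = np.linalg.solve(A.T @ A, A.T)      # 2 x 3
print(np.allclose(B @ A, np.eye(2)))   # True

# Full row rank C: D = C' (C C')^{-1} is one right inverse (C D = I_m).
C = A.T                                # 2 x 3, full row rank
D = C.T @ np.linalg.inv(C @ C.T)       # 3 x 2
print(np.allclose(C @ D, np.eye(2)))   # True
```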

Invertible matrices

A square, n \times n matrix is full row rank if and only if it is full column rank. In this case, we simply say that the matrix is invertible. Then, there exists a unique left inverse and a unique right inverse, and both are equal to a single matrix, called the inverse of A, and denoted A^{-1}. The inverse satisfies A A^{-1}=A^{-1} A=I_n.
For an invertible n \times n matrix, the nullspace is the zero subspace \{0\}, and the range is the whole space, \mathbb{R}^{n}. In addition, the equation y=Ax has a unique solution, x=A^{-1}y, for every y.

There is a closed-form expression of the inverse, based on the notion of determinant. This expression is useful for theoretical reasons but never used in practice. Later we will see how to compute the matrix inverse in a numerically more efficient way.

A useful property is the expression for the inverse of a product of two square, invertible matrices A, B: (AB)^{-1}=B^{-1}A^{-1}.
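A quick numerical confirmation in NumPy (random matrices, which are invertible with probability one):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # note the reversed order
print(np.allclose(lhs, rhs))               # True
```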

License

Hyper-Textbook: Optimization Models and Applications Copyright © by L. El Ghaoui. All Rights Reserved.
