- The lines intersect at a single point. Then the system has **a unique solution** corresponding to that point.
- The lines are parallel (and distinct) and so do not intersect. Then the system has **no solution**.
- The lines are identical. Then the system has **infinitely many** solutions---one for each point on the (common) line.

Definition 1.1 **Elementary Operations**

The following operations, called **elementary operations**, can routinely be performed on systems of linear equations to produce equivalent systems.

- Interchange two equations.
- Multiply one equation by a nonzero number.
- Add a multiple of one equation to a different equation.

Theorem 1.1.1

Suppose that a sequence of elementary operations is performed on a system of linear equations. Then the resulting system has the same set of solutions as the original, so the two systems are equivalent.

Definition 1.2 Elementary Row Operations

The following are called **elementary row operations** on a matrix.

- Interchange two rows.
- Multiply one row by a nonzero number.
- Add a multiple of one row to a different row.
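As an illustration (not part of the text's development), the three elementary row operations can be sketched as small Python functions acting on a matrix stored as a list of rows; the helper names `swap`, `scale`, and `add_multiple` are ours, and exact fractions are used to avoid rounding.

```python
from fractions import Fraction

def swap(M, i, j):
    """Interchange rows i and j (the first operation)."""
    M[i], M[j] = M[j], M[i]

def scale(M, i, k):
    """Multiply row i by a nonzero number k (the second operation)."""
    assert k != 0
    M[i] = [k * entry for entry in M[i]]

def add_multiple(M, i, j, k):
    """Add k times row j to a *different* row i (the third operation)."""
    assert i != j
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]

# Apply the third operation to a small 2 x 3 matrix: R2 <- R2 - 2 R1
M = [[Fraction(1), Fraction(2), Fraction(3)],
     [Fraction(2), Fraction(5), Fraction(7)]]
add_multiple(M, 1, 0, -2)
```

Because each operation is reversible (by an operation of the same type), performing them never changes the solution set, which is the content of Theorem 1.1.1.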

Example 1.1.3 Find all solutions to the following system of equations.

\begin{equation*}
\arraycolsep=1pt
\begin{array}{rlrlrcr}
3x & + & 4y & + & z & = & 1 \\
2x & + & 3y & & & = & 0 \\
4x & + & 3y & - & z & = & -2
\end{array}
\end{equation*}
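A candidate solution to a system like this can always be checked by substitution. The values below were obtained by hand elimination (a check we supply, not the text's worked solution); the code simply verifies that they satisfy all three equations exactly.

```python
from fractions import Fraction as F

# Candidate solution from hand elimination (our illustration):
# x = -3/7, y = 2/7, z = 8/7
x, y, z = F(-3, 7), F(2, 7), F(8, 7)

checks = [
    3*x + 4*y + z == 1,      # first equation
    2*x + 3*y == 0,          # second equation
    4*x + 3*y - z == -2,     # third equation
]
```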

Definition 1.3 **Row-Echelon Form** (Reduced)

A matrix is said to be in **row-echelon form** (and will be called a **row-echelon matrix**) if it satisfies the following three conditions:

1. All **zero rows** (consisting entirely of zeros) are at the bottom.
2. The first nonzero entry from the left in each nonzero row is a $1$, called the **leading** $1$ for that row.
3. Each leading $1$ is to the right of all leading $1$s in the rows above it.

A row-echelon matrix is said to be in **reduced row-echelon form** (and will be called a **reduced row-echelon matrix**) if, in addition, it satisfies the following condition:

4. Each leading $1$ is the only nonzero entry in its column.

\begin{equation*}
\left[ \begin{array}{rrrrrrr}
\multicolumn{1}{r|}{0} & 1 & * & * & * & * & * \\
\cline{2-3}
0 & 0 & \multicolumn{1}{r|}{0} & 1 & * & * & * \\
\cline{4-4}
0 & 0 & 0 & \multicolumn{1}{r|}{0} & 1 & * & * \\
\cline{5-6}
0 & 0 & 0 & 0 & 0 & \multicolumn{1}{r|}{0} & 1 \\
\cline{7-7}
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array} \right] \end{equation*}

Theorem 1.2.1

Every matrix can be brought to (reduced) row-echelon form by a sequence of elementary row operations.

Gaussian Algorithm

Step 1. If the matrix consists entirely of zeros, stop---it is already in row-echelon form.
Step 2. Otherwise, find the first column from the left containing a nonzero entry (call it $a$), and move the row containing that entry to the top position.
Step 3. Now multiply the new top row by $1/a$ to create a leading $1$.
Step 4. By subtracting multiples of that row from rows below it, make each entry below the leading $1$ zero. This completes the first row, and all further row operations are carried out on the remaining rows.
Step 5. Repeat steps 1--4 on the matrix consisting of the remaining rows.
The process stops when either no rows remain at step 5 or the remaining rows consist entirely of zeros.
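Steps 1--4, repeated on the remaining rows as in Step 5, can be sketched in Python. This is our own minimal rendering of the algorithm (the function name `row_echelon` is an assumption), using exact fractions so the leading $1$s come out clean.

```python
from fractions import Fraction

def row_echelon(A):
    """Carry a matrix to row-echelon form: Steps 1-4 of the Gaussian
    algorithm, repeated on the remaining rows (Step 5)."""
    M = [[Fraction(e) for e in row] for row in A]
    rows, cols = len(M), len(M[0])
    top = 0                                    # first of the "remaining" rows
    for col in range(cols):
        # Step 2: first column from the left with a nonzero entry a
        pivot = next((r for r in range(top, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue                           # column of zeros; try the next
        M[top], M[pivot] = M[pivot], M[top]    # move that row to the top position
        a = M[top][col]
        M[top] = [e / a for e in M[top]]       # Step 3: create a leading 1
        for r in range(top + 1, rows):         # Step 4: clear entries below it
            f = M[r][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[top])]
        top += 1                               # this row is now complete
        if top == rows:
            break                              # stop: no rows remain
    return M

R = row_echelon([[0, 2, 4], [1, 1, 3], [0, 0, 0]])
```

On the sample input, the algorithm interchanges the first two rows (Step 2), scales to create each leading $1$, and leaves the zero row at the bottom.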

Example 1.2.2 Solve the following system of equations.

\begin{equation*}
\arraycolsep=1pt
\begin{array}{rlrlrcr}
3x & + & y & - & 4z & = & -1 \\
x & & & + & 10z & = & 5 \\
4x & + & y & + & 6z & = & 1
\end{array}
\end{equation*}

Gaussian Elimination

To solve a system of linear equations proceed as follows:

- Carry the augmented matrix\index{augmented matrix}\index{matrix!augmented matrix} to a reduced row-echelon matrix using elementary row operations.
- If a row $\left[ \begin{array}{cccccc} 0 & 0 & 0 & \cdots & 0 & 1 \end{array} \right]$ occurs, the system is inconsistent.
- Otherwise, assign the nonleading variables (if any) as parameters, and use the equations corresponding to the reduced row-echelon matrix to solve for the leading variables in terms of the parameters.

Definition 1.4 Rank of a matrix

The **rank** of a matrix $A$ is the number of leading $1$s in any row-echelon matrix to which $A$ can be carried by row operations.

Example 1.2.5

Compute the rank of $A =
\left[ \begin{array}{rrrr}
1 & 1 & -1 & 4 \\
2 & 1 & 3 & 0 \\
0 & 1 & -5 & 8
\end{array} \right]$.
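By Definition 1.4, the rank can be computed by carrying $A$ to row-echelon form and counting the leading $1$s. The sketch below (our illustration; the name `rank` is an assumption) does this with exact arithmetic, applied to the matrix of Example 1.2.5.

```python
from fractions import Fraction

def rank(A):
    """Count the leading 1s produced by reduction to row-echelon form."""
    M = [[Fraction(e) for e in row] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0                                  # number of leading 1s found so far
    for col in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        a = M[r][col]
        M[r] = [e / a for e in M[r]]       # create a leading 1
        for i in range(r + 1, rows):
            f = M[i][col]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

# The matrix of Example 1.2.5: row reduction leaves one zero row, so rank 2
r = rank([[1, 1, -1, 4], [2, 1, 3, 0], [0, 1, -5, 8]])
```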

Theorem 1.2.2

Suppose a system of $m$ equations in $n$ variables is **consistent**, and that the rank of the augmented matrix is $r$.

- The set of solutions involves exactly $n - r$ parameters.
- If $r < n$, the system has infinitely many solutions.
- If $r = n$, the system has a unique solution.

- **No solution**. This occurs when a row $\left[ \begin{array}{ccccc} 0 & 0 & \cdots & 0 & 1 \end{array} \right]$ occurs in the row-echelon form. This is the case where the system is inconsistent.
- **Unique solution**. This occurs when every variable is a leading variable.
- **Infinitely many solutions**. This occurs when the system is consistent and there is at least one nonleading variable, so at least one parameter is involved.

Example 1.3.1

Show that the following homogeneous system has nontrivial solutions.
\begin{equation*}
\arraycolsep=1pt
\begin{array}{rlrlrlrcr}
x_1 & - & x_2 & + & 2x_3 & - & x_4 & = & 0 \\
2x_1 & + &2x_2 & & & + & x_4 & = & 0 \\
3x_1 & + & x_2 & + & 2x_3 & - & x_4 & = & 0
\end{array}
\end{equation*}

Theorem 1.3.1

If a homogeneous system of linear equations has more variables than equations, then it has a nontrivial solution (in fact, infinitely many).

Example 1.3.2

We call the graph of an equation $ax^2 + bxy + cy^2 + dx + ey + f = 0$ a **conic** if the numbers $a$, $b$, and $c$ are not all zero. Show that there is at least one conic through any five points in the plane that are not all on a line.

Example 1.3.3

If $\vect{x} =
\left[ \begin{array}{r}
3 \\
-2 \\
\end{array} \right]$ and $\vect{y} = \left[ \begin{array}{r}
-1 \\
1 \\
\end{array} \right]$
then $ 2\vect{x} + 5\vect{y} =
\left[ \begin{array}{r}
6 \\
-4 \\
\end{array} \right]
+
\left[ \begin{array}{r}
-5 \\
5 \\
\end{array} \right]
=
\left[ \begin{array}{r}
1 \\
1 \\
\end{array} \right]$.

Example 1.3.4

Let $ \vect{x} =
\left[ \begin{array}{r}
1 \\
0 \\
1
\end{array} \right], \vect{y} =
\left[ \begin{array}{r}
2 \\
1 \\
0
\end{array} \right]$
and $\vect{z} =
\left[ \begin{array}{r}
3 \\
1 \\
1
\end{array} \right]$. If $\vect{v} =
\left[ \begin{array}{r}
0 \\
-1 \\
2
\end{array} \right]$
and $\vect{w} =
\left[ \begin{array}{r}
1 \\
1 \\
1
\end{array} \right]$,
determine whether $\vect{v}$ and $\vect{w}$ are linear combinations of $\vect{x}$, $\vect{y}$ and $\vect{z}$.

Example 1.3.5

Solve the homogeneous system with coefficient matrix
\begin{equation*}
A =
\left[ \begin{array}{rrrr}
1 & -2 & 3 & -2 \\
-3 & 6 & 1 & 0 \\
-2 & 4 & 4 & -2 \\
\end{array} \right]
\end{equation*}

Definition 1.5 Basic Solutions

The Gaussian algorithm systematically produces solutions to any homogeneous linear system, called **basic solutions**, one for every parameter.

Convention:

Any nonzero scalar multiple of a basic solution will still be called a basic solution.

Theorem 1.3.2

Let $A$ be an $m \times n$ matrix of rank $r$, and consider the homogeneous system in $n$ variables with $A$ as coefficient matrix. Then:

- The system has exactly $n-r$ basic solutions, one for each parameter.
- Every solution is a linear combination of these basic solutions.

Example 1.3.6

Find basic solutions of the homogeneous system with coefficient matrix $A$, and express every solution as a linear combination of the basic solutions, where
\begin{equation*}
A = \left[ \begin{array}{rrrrr}
1 & -3 & 0 & 2 & 2 \\
-2 & 6 & 1 & 2 & -5 \\
3 & -9 & -1 & 0 & 7 \\
-3 & 9 & 2 & 6 & -8
\end{array} \right]
\end{equation*}

Arthur Cayley (1821-1895) showed his mathematical talent early and graduated from Cambridge in 1842 as senior wrangler. With no employment in mathematics in view, he took legal training and worked as a lawyer while continuing to do mathematics, publishing nearly 300 papers in fourteen years. Finally, in 1863, he accepted the Sadlerian professorship in Cambridge and remained there for the rest of his life, valued for his administrative and teaching skills as well as for his scholarship. His mathematical achievements were of the first rank. In addition to originating matrix theory and the theory of determinants, he did fundamental work in group theory, in higher-dimensional geometry, and in the theory of invariants. He was one of the most prolific mathematicians of all time and produced 966 papers.

- If a matrix has size $m \times n$, it has $m$ rows and $n$ columns.
- If we speak of the $(i, j)$-entry of a matrix, it lies in row $i$ and column $j$.
- If an entry is denoted $a_{ij}$, the first subscript $i$ refers to the row and the second subscript $j$ to the column in which $a_{ij}$ lies.

Two matrices $A$ and $B$ are called **equal** (written $A = B$) if and only if:

- They have the same size.
- Corresponding entries are equal.

Example 2.1.1

Given $A = \left[ \begin{array}{cc}
a & b \\
c & d
\end{array}
\right]$, $B = \left[ \begin{array}{rrr}
1 & 2 & -1 \\
3 & 0 & 1
\end{array}
\right]$ and
$C = \left[ \begin{array}{rr}
1 & 0 \\
-1 & 2
\end{array}
\right]$,
discuss the possibility that $A = B$, $B = C$, $A = C$.

Definition 2.1 Matrix Addition

If $A$ and $B$ are matrices of the same size, their **sum** $A + B$ is the matrix formed by adding corresponding entries.

Example 2.1.2

If $A = \left[ \begin{array}{rrr}
2 & 1 & 3 \\
-1 & 2 & 0
\end{array} \right]$
and $B = \left[ \begin{array}{rrr}
1 & 1 & -1 \\
2 & 0 & 6
\end{array} \right]$,
compute $A + B$.

Example 2.1.3

Find $a$, $b$, and $c$ if $\left[ \begin{array}{ccc}
a & b & c
\end{array} \right] + \left[
\begin{array}{ccc}
c & a & b
\end{array} \right]
= \left[ \begin{array}{ccc}
3 & 2 & -1
\end{array} \right]$.

Example 2.1.4

Let $A = \left[ \begin{array}{rrr}
3 & -1 & 0 \\
1 & 2 & -4
\end{array} \right]$, $B = \left[ \begin{array}{rrr}
1 & -1 & 1 \\
-2 & 0 & 6
\end{array} \right]$, $C = \left[ \begin{array}{rrr}
1 & 0 & -2 \\
3 & 1 & 1
\end{array} \right]$.

Compute $-A$, $A - B$, and $A + B - C$.

Example 2.1.5

Solve
$\left[ \begin{array}{rr}
3 & 2 \\
-1 & 1
\end{array} \right] + X = \left[ \begin{array}{rr}
1 & 0 \\
-1 & 2
\end{array} \right]$
where $X$ is a matrix.

Definition 2.2 Matrix Scalar Multiplication

More generally, if $A$ is any matrix and $k$ is any number, the **scalar multiple** $kA$ is the matrix obtained from $A$ by multiplying each entry of $A$ by $k$.

Example 2.1.6

If $A = \left[ \begin{array}{rrr}
3 & -1 & 4 \\
2 & 0 & 1
\end{array} \right]$
and $ B = \left[ \begin{array}{rrr}
1 & 2 & -1 \\
0 & 3 & 2
\end{array} \right],$
compute $5A$, $\frac{1}{2}B$, and $3A - 2B$.

Example 2.1.7

If $kA = 0$, show that either $k = 0$ or $A = 0$.

Theorem 2.1.1

Let $A$, $B$, and $C$ denote arbitrary $m \times n$ matrices where $m$ and $n$ are fixed. Let $k$ and $p$ denote arbitrary real numbers. Then

- $A + B = B + A$.
- $A + (B + C) = (A + B) + C$.
- There is an $m \times n$ matrix $0$, such that $0 + A = A$ for each $A$.
- For each $A$ there is an $m \times n$ matrix, $-A$, such that $A + (-A) = 0$.
- $k(A + B) = kA + kB$.
- $(k + p)A = kA + pA$.
- $(kp)A = k(pA)$.
- $1A = A$.

Example 2.1.8

Simplify $2(A + 3C) - 3(2C - B) - 3 \left[ 2(2A + B - 4C) - 4(A - 2C) \right]$ where $A, B$ and $C$ are all matrices of the same size.


Definition 2.3 Transpose of a Matrix

If $A$ is an $m \times n$ matrix, the **transpose** of $A$, written $A^{T}$, is the $n \times m$ matrix whose rows are just the columns of $A$ in the same order.

Example 2.1.9

Write down the transpose of each of the following matrices.
\begin{equation*}
A = \left[ \begin{array}{r}
1 \\
3 \\
2
\end{array} \right] \quad
B = \left[ \begin{array}{rrr}
5 & 2 & 6
\end{array} \right] \quad
C = \left[ \begin{array}{rr}
1 & 2 \\
3 & 4 \\
5 & 6
\end{array} \right] \quad
D = \left[ \begin{array}{rrr}
3 & 1 & -1 \\
1 & 3 & 2 \\
-1 & 2 & 1
\end{array} \right]
\end{equation*}

Theorem 2.1.2

Let $A$ and $B$ denote matrices of the same size, and let $k$ denote a scalar.

- If $A$ is an $m \times n$ matrix, then $A^{T}$ is an $n \times m$ matrix.
- $(A^{T})^{T} = A$.
- $(kA)^{T} = kA^{T}$.
- $(A + B)^{T} = A^{T} + B^{T}$.

Example 2.1.10

Solve for $A$ if $\left(2A^{T} - 3 \left[ \begin{array}{rr}
1 & 2 \\
-1 & 1
\end{array} \right] \right)^{T} = \left[ \begin{array}{rr}
2 & 3 \\
-1 & 2
\end{array} \right]$.


The matrix $D = \left[ \begin{array}{rr}
1 & 2 \\
2 & 5 \end{array}\right]$ in Example 2.1.9 has the property that $D = D^{T}$. Such matrices are important; a matrix $A$ is called Example 2.1.11

If $A$ and $B$ are symmetric $n \times n$ matrices, show that $A + B$ is symmetric.

Example 2.1.12

Suppose a square matrix $A$ satisfies $A = 2A^{T}$. Show that necessarily $A = 0$.

Definition 2.4 The set $\mathbb{R}^n$ of ordered $n$-tuples of real numbers

Let $\mathbb{R}$ denote the set of all real numbers. The set of all ordered $n$-tuples from $\mathbb{R}$ has a special notation:

\begin{equation*}
\mathbb{R}^{n} \mbox{ denotes the set of all ordered }n\mbox{-tuples of real numbers.}
\end{equation*}

Example 2.2.1

Write the system
$\left\lbrace
\arraycolsep=1pt
\begin{array}{rrrrrrr}
3x_{1} & + & 2x_{2} & - & 4x_{3} & = & 0 \\
x_{1} & - & 3x_{2} & + & x_{3} & = & 3 \\
& & x_{2} & - & 5x_{3} & = & -1
\end{array} \right.$
in the form given in (2.4).

Definition 2.5 Matrix-Vector Multiplication

Let $A = \left[ \begin{array}{cccc}
\textbf{a}_{1} &\textbf{a}_{2} & \cdots & \textbf{a}_{n}
\end{array} \right]$ be an $m \times n$ matrix, written in terms of its columns $\textbf{a}_{1}, \textbf{a}_{2}, \dots, \textbf{a}_{n}$. If $\textbf{x} = \left[ \begin{array}{c}
x_{1} \\
x_{2} \\
\vdots \\
x_{n}
\end{array} \right]$
is any $n$-vector, the product $A\textbf{x}$ is defined to be the $m$-vector given by:
\begin{equation*}
A\textbf{x} = x_{1}\textbf{a}_{1} + x_{2}\textbf{a}_{2} + \cdots + x_{n}\textbf{a}_{n}
\end{equation*}
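Definition 2.5 says $A\textbf{x}$ is built column by column: each column $\textbf{a}_{j}$ of $A$ is scaled by $x_{j}$ and the results are added. A minimal sketch (the function name `mat_vec` is our assumption):

```python
def mat_vec(A, x):
    """A x as the linear combination x1*a1 + ... + xn*an of the columns of A."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):              # take column a_j, scaled by x_j ...
        for i in range(m):
            result[i] += x[j] * A[i][j]   # ... and accumulate into the sum
    return result

# 2*[1,2,0] + 5*[0,1,3] = [2, 9, 15]
v = mat_vec([[1, 0], [2, 1], [0, 3]], [2, 5])
```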

Theorem 2.2.1

- Every system of linear equations has the form $A\textbf{x} = \textbf{b}$ where $A$ is the coefficient matrix, $\textbf{b}$ is the constant matrix, and $\textbf{x}$ is the matrix of variables.
- The system $A\textbf{x} = \textbf{b}$ is consistent if and only if $\textbf{b}$ is a linear combination of the columns of $A$.
- If $\textbf{a}_{1}, \textbf{a}_{2}, \dots, \textbf{a}_{n}$ are the columns of $A$ and if $\textbf{x} = \left[ \begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \end{array} \right]$, then $\textbf{x}$ is a solution to the linear system $A\textbf{x} = \textbf{b}$ if and only if $x_{1}, x_{2}, \dots, x_{n}$ are a solution of the vector equation \begin{equation*} x_{1}\textbf{a}_{1} + x_{2}\textbf{a}_{2} + \cdots + x_{n}\textbf{a}_{n} = \textbf{b} \end{equation*}


Example 2.2.2

If $A = \left[ \begin{array}{rrrr}
2 & -1 & 3 & 5 \\
0 & 2 & -3 & 1 \\
-3 & 4 & 1 & 2
\end{array} \right]$ and
$\textbf{x} = \left[ \begin{array}{r}
2 \\
1 \\
0 \\
-2
\end{array} \right]$, compute $A\textbf{x}$.


Example 2.2.3

Given columns $\textbf{a}_{1}$, $\textbf{a}_{2}$, $\textbf{a}_{3}$, and $\textbf{a}_{4}$ in $\mathbf{R}^3$, write $2\textbf{a}_{1} - 3\textbf{a}_{2} + 5\textbf{a}_{3} + \textbf{a}_{4}$ in the form $A\textbf{x}$ where $A$ is a matrix and $\textbf{x}$ is a vector.

Example 2.2.4

Let $A = \left[ \begin{array}{cccc}
\textbf{a}_{1} & \textbf{a}_{2} & \textbf{a}_{3} & \textbf{a}_{4}
\end{array} \right]$ be the $3 \times 4$ matrix given in terms of its columns
$\textbf{a}_{1} = \left[ \begin{array}{r}
2 \\
0 \\
-1
\end{array} \right]$, $\textbf{a}_{2} = \left[ \begin{array}{r}
1 \\
1 \\
1
\end{array} \right]$, $\textbf{a}_{3} = \left[ \begin{array}{r}
3 \\
-1 \\
-3
\end{array} \right]$, and $\textbf{a}_{4} = \left[ \begin{array}{r}
3 \\
1 \\
0
\end{array} \right]$.
In each case below, either express $\textbf{b}$ as a linear combination of $\textbf{a}_{1}$, $\textbf{a}_{2}$, $\textbf{a}_{3}$, and $\textbf{a}_{4}$, or show that it is not such a linear combination. Explain what your answer means for the corresponding system $A\textbf{x} = \textbf{b}$ of linear equations.
1. $\textbf{b} = \left[ \begin{array}{r}
1 \\
2 \\
3
\end{array} \right] $
2. $\textbf{b} = \left[ \begin{array}{r}
4 \\
2 \\
1
\end{array} \right]$

Example 2.2.5

Taking $A$ to be the zero matrix, we have $0\textbf{x} = \textbf{0}$ for all vectors $\textbf{x}$ by Definition 2.5 because every column of the zero matrix is zero. Similarly, $A\textbf{0} = \textbf{0}$ for all matrices $A$ because every entry of the zero vector is zero.

Example 2.2.6

If $I = \left[ \begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array} \right]$, show that $I\textbf{x} = \textbf{x}$ for any vector $\textbf{x}$ in $\mathbf{R}^3$.

Theorem 2.2.2

Let $A$ and $B$ be $m \times n$ matrices, and let $\textbf{x}$ and $\textbf{y}$ be $n$-vectors in $\mathbb{R}^n$. Then:

- $A(\textbf{x} + \textbf{y}) = A\textbf{x} + A\textbf{y}$.
- $A(a\textbf{x}) = a(A\textbf{x}) = (aA)\textbf{x}$ for all scalars $a$.
- $(A + B)\textbf{x} = A\textbf{x} + B\textbf{x}$.

Theorem 2.2.3

Suppose $\textbf{x}_{1}$ is any particular solution to the system $A\textbf{x} = \textbf{b}$ of linear equations. Then every solution $\textbf{x}_{2}$ to $A\textbf{x} = \textbf{b}$ has the form
\begin{equation*}
\textbf{x}_{2} = \textbf{x}_{0} + \textbf{x}_{1}
\end{equation*}
for some solution $\textbf{x}_{0}$ of the associated homogeneous system $A\textbf{x} = \textbf{0}$.

Example 2.2.7

Express every solution to the following system as the sum of a specific solution plus a solution to the associated homogeneous system.
\begin{equation*}
\arraycolsep=1pt
\begin{array}{rrrrrrrrr}
x_{1} & - & x_{2} & - & x_{3} & + & 3x_{4} & = & 2 \\
2x_{1} & - & x_{2} & - & 3x_{3}& + & 4x_{4} & = & 6 \\
x_{1} & & & - & 2x_{3} & + & x_{4} & = & 4
\end{array}
\end{equation*}

Theorem 2.2.4

Let $A\textbf{x} = \textbf{b}$ be a system of equations with augmented matrix $\left[ \begin{array}{c|c} A & \textbf{b}
\end{array}\right]$. Write $\text{rank} A = r$.

1. $\text{rank} \left[ \begin{array}{c|c} A & \textbf{b}
\end{array}\right]$ is either $r$ or $r+1$.

2. The system is consistent if and only if $\text{rank} \left[ \begin{array}{c|c} A & \textbf{b}
\end{array}\right] = r$.

3. The system is inconsistent if and only if $\text{rank} \left[ \begin{array}{c|c} A & \textbf{b}
\end{array}\right] = r+1$.

Definition 2.6 Dot Product in $\mathbb{R}^n$

If $(a_{1}, a_{2}, \dots, a_{n})$ and $(b_{1}, b_{2}, \dots, b_{n})$ are two ordered $n$-tuples, their $\textbf{dot product}$ is defined to be the number
\begin{equation*}
a_{1}b_{1} + a_{2}b_{2} + \dots + a_{n}b_{n}
\end{equation*}
obtained by multiplying corresponding entries and adding the results.


Theorem 2.2.5 Dot Product Rule

Let $A$ be an $m \times n$ matrix and let $\textbf{x}$ be an $n$-vector. Then each entry of the vector $A\textbf{x}$ is the dot product of the corresponding row of $A$ with $\textbf{x}$.
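The Dot Product Rule gives a second, row-by-row way to compute $A\textbf{x}$. A short sketch (our helper names `dot` and `mat_vec` are assumptions):

```python
def dot(a, b):
    """Dot product: multiply corresponding entries and add the results."""
    return sum(x * y for x, y in zip(a, b))

def mat_vec(A, x):
    """Each entry of A x is the dot product of the corresponding row of A with x."""
    return [dot(row, x) for row in A]

# Row 1: 2*4 + (-1)*1 = 7;  Row 2: 0*4 + 3*1 = 3
row_check = mat_vec([[2, -1], [0, 3]], [4, 1])
```

Both this row description and the column description of Definition 2.5 produce the same vector, which is exactly what Theorem 2.2.5 asserts.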

Example 2.2.8

If $A = \left[ \begin{array}{rrrr}
2 & -1 & 3 & 5 \\
0 & 2 & -3 & 1 \\
-3 & 4 & 1 & 2
\end{array} \right]$
and $\textbf{x} = \left[ \begin{array}{r}
2 \\
1 \\
0 \\
-2
\end{array} \right]$, compute $A\textbf{x}$.

Example 2.2.9

Write the following system of linear equations in the form $A\textbf{x} = \textbf{b}$.
\begin{equation*}
\arraycolsep=1pt
\begin{array}{rrrrrrrrrrr}
5x_{1} & - & x_{2} & + & 2x_{3} & + & x_{4} & - & 3x_{5} & = & 8 \\
x_{1} & + & x_{2} & + & 3x_{3} & - & 5x_{4} & + & 2x_{5} & = & -2 \\
-x_{1} & + & x_{2} & - & 2x_{3} & & & - & 3x_{5} & = & 0
\end{array}
\end{equation*}

Example 2.2.10

If $A$ is the zero $m \times n$ matrix, then $A\textbf{x} = \textbf{0}$ for each $n$-vector $\textbf{x}$.

Definition 2.7 The Identity Matrix

For each $n \geq 2$, the $\textbf{identity matrix}$ $I_{n}$ is the $n \times n$ matrix with 1s on the main diagonal (upper left to lower right), and zeros elsewhere.

Example 2.2.11

For each $n \geq 2$ we have $I_{n}\textbf{x} = \textbf{x}$ for each $n$-vector $\textbf{x}$ in $\mathbf{R}^n$.

Example 2.2.12

Let $A = \left[ \begin{array}{cccc}
\textbf{a}_{1} & \textbf{a}_{2} & \cdots & \textbf{a}_{n}
\end{array} \right]$ be any $m \times n$ matrix with columns $\textbf{a}_{1}, \textbf{a}_{2}, \dots, \textbf{a}_{n}$. If $\textbf{e}_{j}$ denotes column $j$ of the $n \times n$ identity matrix $I_{n}$, then $A\textbf{e}_{j} = \textbf{a}_{j}$ for each $j = 1, 2, \dots, n$.

Theorem 2.2.6

Let $A$ and $B$ be $m \times n$ matrices. If $A\textbf{x} = B\textbf{x}$ for all $\textbf{x}$ in $\mathbf{R}^n$, then $A = B$.

Definition 2.9 Matrix Multiplication

Let $A$ be an $m \times n$ matrix, let $B$ be an $n \times k$ matrix, and write $B = \left[ \begin{array}{cccc}
\textbf{b}_{1} & \textbf{b}_{2} & \cdots & \textbf{b}_{k}
\end{array} \right]$ where $\textbf{b}_{j}$ is column $j$ of $B$ for each $j$. The product matrix $AB$ is the $m \times k$ matrix defined as follows:
\begin{equation*}
AB = A \left[ \begin{array}{cccc} \textbf{b}_{1} & \textbf{b}_{2} & \cdots & \textbf{b}_{k} \end{array} \right] = \left[ \begin{array}{cccc} A\textbf{b}_{1} & A\textbf{b}_{2} & \cdots & A\textbf{b}_{k} \end{array} \right]
\end{equation*}
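Definition 2.9 reduces matrix multiplication to the matrix-vector product of Section 2.2: column $j$ of $AB$ is $A\textbf{b}_{j}$. A sketch of this column-at-a-time computation (the names `mat_vec` and `mat_mul` are our assumptions):

```python
def mat_vec(A, x):
    """A x, entry by entry, via the dot product rule."""
    return [sum(r * s for r, s in zip(row, x)) for row in A]

def mat_mul(A, B):
    """Column j of AB is A times column j of B (Definition 2.9)."""
    k = len(B[0])
    cols = [mat_vec(A, [row[j] for row in B]) for j in range(k)]
    # `cols` holds the columns of AB; assemble them back into rows
    return [[cols[j][i] for j in range(k)] for i in range(len(A))]

P = mat_mul([[1, 2], [3, 4]],
            [[0, 1], [1, 0]])
```

Multiplying by this $B$ on the right interchanges the columns of $A$, as the result shows.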

Theorem 2.3.1

Let $A$ be an $m \times n$ matrix and let $B$ be an $n \times k$ matrix. Then the product matrix $AB$ is $m \times k$ and satisfies
\begin{equation*}
A(B\vec{x}) = (AB)\vec{x} \quad \mbox{ for all } \vec{x} \mbox{ in } \mathbf{R}^{k}
\end{equation*}

Example 2.3.1

Compute $AB$ if $A = \left[ \begin{array}{rrr}
2 & 3 & 5 \\
1 & 4 & 7 \\
0 & 1 & 8
\end{array} \right]$
and
$B = \left[\begin{array}{rr}
8 & 9 \\
7 & 2 \\
6 & 1
\end{array} \right]$.

Theorem 2.3.2 Dot Product Rule

Let $A$ and $B$ be matrices of sizes $m \times n$ and $n \times k$, respectively. Then the $(i, j)$-entry of $AB$ is the dot
product of row $i$ of $A$ with column $j$ of $B$.

Example 2.3.3

Compute $AB$ if $A = \left[ \begin{array}{rrr}
2 & 3 & 5 \\
1 & 4 & 7 \\
0 & 1 & 8
\end{array} \right]$
and $B = \left[ \begin{array}{rr}
8 & 9 \\
7 & 2 \\
6 & 1
\end{array} \right]$.

Example 2.3.4

Compute the $(1, 3)$- and $(2, 4)$-entries of $AB$ where
\begin{equation*}
A = \left[ \begin{array}{rrr}
3 & -1 & 2 \\
0 & 1 & 4
\end{array} \right] \mbox{ and } B = \left[ \begin{array}{rrrr}
2 & 1 & 6 & 0 \\
0 & 2 & 3 & 4 \\
-1 & 0 & 5 & 8
\end{array} \right].
\end{equation*}
Then compute $AB$.

Example 2.3.5

If $A = \left[ \begin{array}{ccc}
1 & 3 & 2\end{array}\right]$ and $B = \left[ \begin{array}{r}
5 \\
6 \\
4
\end{array} \right]$, compute $A^{2}$, $AB$, $BA$, and $B^{2}$ when they are defined.

Example 2.3.6

Let $A = \left[ \begin{array}{rr}
6 & 9 \\
-4 & -6
\end{array} \right]$ and $B = \left[ \begin{array}{rr}
1 & 2 \\
-1 & 0
\end{array} \right]$. Compute $A^{2}$, $AB$, $BA$.

Example 2.3.7

If $A$ is any matrix, then $IA = A$ and $AI = A$, where $I$ denotes an identity matrix of a size such that the multiplications are defined.

Theorem 2.3.3

Assume that $a$ is any scalar, and that $A$, $B$, and $C$ are matrices of sizes such that the indicated matrix products are defined. Then:
1. $IA = A$ and $AI = A$ where $I$ denotes an identity matrix.
2. $A(BC) = (AB)C$.
3. $A(B + C) = AB + AC$.
4. $(B + C)A = BA + CA$.
5. $a(AB) = (aA)B = A(aB)$.
6. $(AB)^{T} = B^{T}A^{T}$.

Example 2.3.8

Simplify the expression $A(BC - CD) + A(C - B)D - AB(C - D)$.

Example 2.3.9

Suppose that $A$, $B$, and $C$ are $n \times n$ matrices and that both $A$ and $B$ commute with $C$; that is, $AC = CA$ and $BC = CB$. Show that $AB$ commutes with $C$.

Example 2.3.10

Show that $AB = BA$ if and only if $(A - B)(A + B) = A^{2} - B^{2}$.

Example 2.3.11

Consider a system $A\vec{x} = \vec{b}$ of linear equations where $A$ is an $m \times n$ matrix. Assume that a matrix $C$ exists such that $CA = I_{n}$. If the system $A\vec{x} = \vec{b}$ *has* a solution, show that this solution must be $C\vec{b}$. Give a condition guaranteeing that $C\vec{b}$ *is in fact* a solution.


Definition 2.11 Matrix Inverses

If $A$ is a square matrix, a matrix $B$ is called an **inverse** of $A$ if and only if
\begin{equation*}
AB = I \quad \mbox{ and } \quad BA = I
\end{equation*}
A matrix $A$ that has an inverse is called an $\textbf{invertible matrix}$.

Example 2.4.1

Show that $B = \left[ \begin{array}{rr}
-1 & 1 \\
1 & 0
\end{array} \right]$
is an inverse of $A = \left[ \begin{array}{rr}
0 & 1 \\
1 & 1
\end{array} \right]$.

Example 2.4.2

Show that $A = \left[ \begin{array}{rr}
0 & 0 \\
1 & 3
\end{array} \right]$
has no inverse.

Theorem 2.4.1

If $B$ and $C$ are both inverses of $A$, then $B = C$.

Example 2.4.3

If $A = \left[ \begin{array}{rr}
0 & -1 \\
1 & -1
\end{array} \right]$, show that $A^{3} = I$ and so find $A^{-1}$.

Example 2.4.4

If $A = \left[ \begin{array}{cc}
a & b \\
c & d
\end{array} \right]$, show that $A$ has an inverse if and only if $\func{det } A \neq 0$, and in this case
\begin{equation*}
A^{-1} = \frac{1}{\func{det } A} \func{adj } A
\end{equation*}
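For a $2 \times 2$ matrix, Example 2.4.4's formula is completely explicit: $\func{det } A = ad - bc$ and $\func{adj } A = \left[ \begin{array}{rr} d & -b \\ -c & a \end{array} \right]$. A sketch (the name `inverse_2x2` is our assumption), applied to the matrix of Example 2.4.1:

```python
from fractions import Fraction as F

def inverse_2x2(A):
    """2x2 inverse via A^{-1} = (1/det A) adj A, where det A = ad - bc
    and adj A = [[d, -b], [-c, a]]."""
    (a, b), (c, d) = A
    det = F(a * d - b * c)
    if det == 0:
        raise ValueError("det A = 0: matrix is not invertible")
    return [[d / det, -b / det],
            [-c / det, a / det]]

# The matrix A of Example 2.4.1; its inverse is the matrix B given there
Ainv = inverse_2x2([[0, 1], [1, 1]])
```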

Theorem 2.4.2

Suppose a system of $n$ equations in $n$ variables is written in matrix form as
\begin{equation*}
A\vec{x} = \vec{b}
\end{equation*}
If the $n \times n$ coefficient matrix $A$ is invertible, the system has the unique solution
\begin{equation*}
\vec{x} = A^{-1}\vec{b}
\end{equation*}

Example 2.4.5

Use Example 2.4.4 to solve the system $\left\lbrace \arraycolsep=1pt \begin{array}{rrrrr}
5x_{1} & - & 3x_{2} & = & -4 \\
7x_{1} & + & 4x_{2} & = & 8
\end{array} \right.$.

Matrix Inversion Algorithm

If $A$ is an invertible (square) matrix, there exists a sequence of elementary row operations that carries $A$ to the identity matrix $I$ of the same size, written $A \to I$. This same sequence of row operations carries $I$ to $A^{-1}$; that is, $I \to A^{-1}$. The algorithm can be summarized as follows:
\begin{equation*}
\left[ \begin{array}{cc}
A & I
\end{array} \right] \rightarrow
\left[ \begin{array}{cc}
I & A^{-1}
\end{array} \right]
\end{equation*}
where the row operations on $A$ and $I$ are carried out simultaneously.
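The algorithm can be sketched directly: build the augmented block $\left[ \begin{array}{cc} A & I \end{array} \right]$, reduce the left half to $I$, and read off $A^{-1}$ on the right. This is our own minimal rendering (the name `inverse` is an assumption); it raises an error when no pivot can be found, the case where $A^{-1}$ does not exist (Theorem 2.4.3).

```python
from fractions import Fraction

def inverse(A):
    """Row-reduce the augmented block [A | I] to [I | A^{-1}]."""
    n = len(A)
    # Build [A | I] with exact arithmetic
    M = [[Fraction(e) for e in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is not invertible")
        M[col], M[pivot] = M[pivot], M[col]
        a = M[col][col]
        M[col] = [e / a for e in M[col]]      # create the leading 1
        for r in range(n):                    # clear the rest of the column
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]             # right half is A^{-1}

Inv = inverse([[1, 2], [3, 4]])
```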

Example 2.4.6

Use the inversion algorithm to find the inverse of the matrix
\begin{equation*}
A = \left[ \begin{array}{rrr}
2 & 7 & 1 \\
1 & 4 & -1 \\
1 & 3 & 0
\end{array} \right]
\end{equation*}

Theorem 2.4.3

If $A$ is an $n \times n$ matrix, either $A$ can be reduced to $I$ by elementary row operations or it cannot. In the
first case, the algorithm produces $A^{-1}$; in the second case, $A^{-1}$ does not exist.

Example 2.4.7: Cancellation Laws

Let $A$ be an invertible matrix. Show that:
1. If $AB = AC$, then $B = C$.
2. If $BA = CA$, then $B = C$.

Example 2.4.8

If $A$ is an invertible matrix, show that the transpose $A^{T}$ is also invertible. Show further that the inverse of $A^{T}$ is just the transpose of $A^{-1}$; in symbols, $(A^{T})^{-1} = (A^{-1})^{T}$.

Example 2.4.9

If $A$ and $B$ are invertible $n \times n$ matrices, show that their product $AB$ is also invertible and $(AB)^{-1} = B^{-1}A^{-1}$.

Theorem 2.4.4

All the following matrices are square matrices of the same size.
1. $I$ is invertible and $I^{-1} = I$.
2. If $A$ is invertible, so is $A^{-1}$, and $(A^{-1})^{-1} = A$.
3. If $A$ and $B$ are invertible, so is $AB$, and $(AB)^{-1} = B^{-1}A^{-1}$.
4. If $A_{1}, A_{2}, \dots, A_{k}$ are all invertible, so is their product $A_{1}A_{2} \cdots A_{k}$, and
\begin{equation*}
(A_{1}A_{2} \cdots A_{k})^{-1} = A_{k}^{-1} \cdots A_{2}^{-1}A_{1}^{-1}.
\end{equation*}
5. If $A$ is invertible, so is $A^k$ for any $k \geq 1$, and $(A^{k})^{-1} = (A^{-1})^{k}$.
6. If $A$ is invertible and $a \neq 0$ is a number, then $aA$ is invertible and $(aA)^{-1} = \frac{1}{a}A^{-1}$.
7. If $A$ is invertible, so is its transpose $A^{T}$, and $(A^{T})^{-1} = (A^{-1})^{T}$.

Corollary 2.4.1

A square matrix $A$ is invertible if and only if $A^{T}$ is invertible.

Example 2.4.10

Find $A$ if $(A^{T} - 2I)^{-1} = \left[ \begin{array}{rr}
2 & 1 \\
-1 & 0
\end{array} \right]$.

Theorem 2.4.5 Inverse Theorem

The following conditions are equivalent for an $n \times n$ matrix $A$:
1. $A$ is invertible.
2. The homogeneous system $A\vec{x} = \vec{0}$ has only the trivial solution $\vec{x} = \vec{0}$.
3. $A$ can be carried to the identity matrix $I_{n}$ by elementary row operations.
4. The system $A\vec{x} = \vec{b}$ has at least one solution $\vec{x}$ for every choice of column $\vec{b}$.
5. There exists an $n \times n$ matrix $C$ such that $AC = I_{n}$.

Corollary 2.4.1

If $A$ and $C$ are square matrices such that $AC = I$, then also $CA = I$. In particular, both $A$ and $C$ are invertible, $C = A^{-1}$, and $A = C^{-1}$.

Corollary 2.4.2

An $n \times n$ matrix $A$ is invertible if and only if $\func{rank }A = n$.

Example 3.1.1

\begin{align*}
\func{det}\left[ \begin{array}{rrr}
2 & 3 & 7 \\
-4 & 0 & 6 \\
1 & 5 & 0
\end{array} \right]
&= 2 \left| \begin{array}{rr}
0 & 6 \\
5 & 0
\end{array} \right|
- 3 \left| \begin{array}{rr}
-4 & 6 \\
1 & 0
\end{array} \right|
+ 7 \left| \begin{array}{rr}
-4 & 0 \\
1 & 5
\end{array} \right| \\
&= 2 (-30) - 3(-6) + 7(-20) \\
&= -182
\end{align*}

Definition 3.1 Cofactors of a matrix

Assume that determinants of $(n - 1) \times (n - 1)$ matrices have been defined. Given the $n \times n$ matrix $A$, let
$A_{ij}$ denote the $(n - 1) \times (n - 1)$ matrix obtained from $A$ by deleting row $i$ and column $j$.
Then the $(i,j)$-**cofactor** $c_{ij}(A)$ is the scalar defined by
\begin{equation*}
c_{ij}(A) = (-1)^{i+j} \func{det}(A_{ij})
\end{equation*}
Here $(-1)^{i+j}$ is called the **sign** of the $(i, j)$-position.

Example 3.1.2

Find the cofactors of positions $(1, 2), (3, 1)$, and $(2, 3)$ in the following matrix.
\begin{equation*}
A = \left[ \begin{array}{rrr}
3 & -1 & 6 \\
5 & 2 & 7 \\
8 & 9 & 4
\end{array} \right] \end{equation*}

Definition 3.2 Cofactor expansion of a Matrix

Assume that determinants of $(n - 1) \times (n - 1)$ matrices have been defined. If $A = \left[ a_{ij} \right]$ is $n \times n$, define
\begin{equation*}
\func{det } A = a_{11}c_{11}(A) + a_{12}c_{12}(A) + \cdots + a_{1n}c_{1n}(A)
\end{equation*}
This is called the **cofactor expansion** of $\func{det } A$ along row $1$.

Theorem 3.1.1 Cofactor Expansion Theorem

The determinant of an $n \times n$ matrix $A$ can be computed by using the cofactor expansion along any row or column of $A$. That is, $\func{det } A$ can be computed by multiplying each entry of the row or column by the corresponding cofactor and adding the results.
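The expansion along row $1$ translates directly into a recursive sketch (the name `det` is our assumption): each term is the sign $(-1)^{1+j}$, times the entry $a_{1j}$, times the determinant of the minor $A_{1j}$. Applied to the matrix of Example 3.1.1, it reproduces the value $-182$ computed there.

```python
def det(A):
    """Determinant by cofactor expansion along row 1 (Definition 3.2);
    by the theorem, any row or column would give the same value."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A_{1,j+1}: delete row 1 and column j+1
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)   # sign * entry * det(minor)
    return total

# The matrix of Example 3.1.1, whose determinant was computed to be -182
d = det([[2, 3, 7], [-4, 0, 6], [1, 5, 0]])
```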

Example 3.1.3

Compute the determinant of
$
A = \left[ \begin{array}{rrr}
3 & 4 & 5 \\
1 & 7 & 2 \\
9 & 8 & -6
\end{array}
\right]$.

Example 3.1.4

Compute $\func{det } A$ where
$A = \left[ \begin{array}{rrrr}
3 & 0 & 0 & 0 \\
5 & 1 & 2 & 0 \\
2 & 6 & 0 & -1 \\
-6 & 3 & 1 & 0
\end{array}
\right]$.

Theorem 3.1.2

Let $A$ denote an $n \times n$ matrix.

- If $A$ has a row or column of zeros, $\func{det } A = 0$.
- If two distinct rows (or columns) of $A$ are interchanged, the determinant of the resulting matrix is $- \func{det } A$.
- If a row (or column) of $A$ is multiplied by a constant $u$, the determinant of the resulting matrix is $u(\func{det } A)$.
- If two distinct rows (or columns) of $A$ are identical, $\func{det } A = 0$.
- If a multiple of one row of $A$ is added to a different row (or if a multiple of a column is added to a different column), the determinant of the resulting matrix is $\func{det } A$.

Example 3.1.5

Evaluate $\func{det } A$ when
$A = \left[ \begin{array}{rrr}
1 & -1 & 3 \\
1 & 0 & -1 \\
2 & 1 & 6
\end{array}
\right]$.

Example 3.1.6

If $\func{det} \left[ \begin{array}{rrr}
a & b & c \\
p & q & r \\
x & y & z
\end{array}
\right] = 6$,
evaluate $\func{det } A$ where $A = \left[ \begin{array}{ccc}
a+x & b+y & c+z \\
3x & 3y & 3z \\
-p & -q & -r
\end{array}\right]$.

Example 3.1.7

Find the values of $x$ for which $\func{det } A = 0$, where
$A = \left[ \begin{array}{ccc}
1 & x & x \\
x & 1 & x \\
x & x & 1
\end{array}
\right]$.

Example 3.1.8

If $a_1$, $a_2$, and $a_3$ are given show that
\begin{equation*}
\func{det}\left[ \begin{array}{ccc}
1 & a_1 & a_1^2 \\
1 & a_2 & a_2^2 \\
1 & a_3 & a_3^2
\end{array}
\right] = (a_3-a_1)(a_3-a_2)(a_2-a_1)
\end{equation*}

Theorem 3.1.3

If $A$ is an $n \times n$ matrix, then $\func{det}(uA) = u^n \func{det } A$ for any number $u$.

Example 3.1.9

Evaluate $\func{det } A$ if
$A = \left[ \begin{array}{rrrr}
a & 0 & 0 & 0 \\
u & b & 0 & 0 \\
v & w & c & 0 \\
x & y & z & d
\end{array} \right]$.

Theorem 3.1.4

If $A$ is a square triangular matrix, then $\func{det } A$ is the product of the entries on the main diagonal.

Theorem 3.2.1 Product Theorem

If $A$ and $B$ are $n \times n$ matrices, then $\func{det}(AB) = \func{det } A \func{det } B$.
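A quick numeric sanity check of the Product Theorem in the $2 \times 2$ case can be sketched as follows (the helper names `det2` and `matmul2` are ours, chosen for illustration):

```python
def det2(M):
    # 2 x 2 determinant: ad - bc
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    # Standard 2 x 2 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# The theorem says det(AB) = det(A) det(B).
check = det2(matmul2(A, B)) == det2(A) * det2(B)
```

A single numeric instance is of course not a proof, but it is a useful guard against arithmetic slips when computing by hand.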

Example 3.2.1

If $A = \left[ \begin{array}{rr}
a & b \\
-b & a
\end{array} \right]$ and $B = \left[ \begin{array}{rr}
c & d \\
-d & c
\end{array} \right]$
then $AB = \left[ \begin{array}{cc}
ac-bd & ad+bc \\
-(ad+bc) & ac-bd
\end{array} \right]$.
Hence $\func{det } A \func{det } B = \func{det}(AB)$ gives the identity
\begin{equation*}
(a^2 + b^2)(c^2+d^2) = (ac-bd)^2 + (ad+bc)^2
\end{equation*}
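This identity can be spot-checked numerically over a range of integers; a minimal Python sketch (the `lhs`/`rhs` helpers are hypothetical names for the two sides):

```python
def lhs(a, b, c, d):
    # (a^2 + b^2)(c^2 + d^2)
    return (a * a + b * b) * (c * c + d * d)

def rhs(a, b, c, d):
    # (ac - bd)^2 + (ad + bc)^2
    return (a * c - b * d) ** 2 + (a * d + b * c) ** 2

# Check the identity on a grid of small integer values.
ok = all(lhs(a, b, c, d) == rhs(a, b, c, d)
         for a in range(-3, 4) for b in range(-3, 4)
         for c in range(-3, 4) for d in range(-3, 4))
```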

Theorem 3.2.2

An $n \times n$ matrix $A$ is invertible if and only if $\func{det } A \neq 0$. When this is the case,
$ \func{det} (A^{-1}) = \frac{1}{\func{det } A}$

Example 3.2.2

For which values of $c$ does $A = \left[ \begin{array}{rcr}
1 & 0 & -c \\
-1 & 3 & 1 \\
0 & 2c & -4
\end{array} \right]$
have an inverse?

Example 3.2.3

If a product $A_1A_2\cdots A_k$ of square matrices is invertible, show that each $A_i$ is invertible.

Theorem 3.2.3

If $A$ is any square matrix, $\func{det } A^T = \func{det } A$.

Example 3.2.4

If $\func{det } A = 2$ and $\func{det } B = 5$, calculate $\func{det}(A^3 B^{-1}A^TB^2)$.

Example 3.2.5

A square matrix is called $\textbf{orthogonal}$ if $A^{-1} = A^T$. What are the possible values of $\func{det } A$ if $A$ is orthogonal?

Definition 3.3 Adjugate of a Matrix

The $\textbf{adjugate}$ of $A$, denoted $\func{adj}(A)$, is the transpose of the cofactor matrix $\left[ c_{ij}(A) \right]$; in symbols,
\begin{equation*}
\func{adj}(A) = \left[ c_{ij}(A) \right]^T
\end{equation*}

Example 3.2.6

Compute the adjugate of $A = \left[ \begin{array}{rrr}
1 & 3 & -2 \\
0 & 1 & 5 \\
-2 & -6 & 7
\end{array}\right]$
and calculate $A (\func{adj } A)$ and $(\func{adj } A)A$.

Theorem 3.2.4 Adjugate formula

If $A$ is any square matrix, then
\begin{equation*}
A(\func{adj } A) = (\func{det } A)I = (\func{adj } A)A
\end{equation*}
In particular, if $\func{det } A \neq 0$, the inverse of $A$ is given by
\begin{equation*}
A^{-1} =\frac{1}{\func{det } A}\func{adj } A
\end{equation*}
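The adjugate formula can be verified numerically for the matrix of Example 3.2.6. The sketch below uses illustrative helpers `det3`, `adjugate3`, and `matmul3` (our names, not from the text); all arithmetic is exact integer arithmetic:

```python
def det3(M):
    # Cofactor expansion of a 3 x 3 determinant along row 1.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adjugate3(M):
    # adj(A) is the transpose of the cofactor matrix [c_ij(A)].
    cof = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            minor = [[M[r][s] for s in range(3) if s != j]
                     for r in range(3) if r != i]
            m = minor[0][0] * minor[1][1] - minor[0][1] * minor[1][0]
            cof[i][j] = (-1) ** (i + j) * m
    return [[cof[j][i] for j in range(3)] for i in range(3)]

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 3, -2], [0, 1, 5], [-2, -6, 7]]
d = det3(A)
prod = matmul3(A, adjugate3(A))  # should equal (det A) I
```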

Example 3.2.7

Find the $(2, 3)$-entry of $A^{-1}$ if $A = \left[ \begin{array}{rrr}
2 & 1 & 3 \\
5 & -7 & 1 \\
3 & 0 & -6
\end{array}\right]$.

Example 3.2.8

If $A$ is $n \times n$, $n \geq 2$, show that $\func{det}(\func{adj }A) = (\func{det }A)^{n-1}$.

Theorem 3.2.5 Cramer's Rule

If $A$ is an invertible $n \times n$ matrix, the solution to the system
\begin{equation*}
A\vec{x}=\vec{b}
\end{equation*}
of $n$ equations in the variables $x_{1}, x_{2}, \dots, x_{n}$ is given by
\begin{equation*}
x_1 = \frac{\func{det } A_1}{\func{det } A}, \; x_2 = \frac{\func{det } A_2}{\func{det } A}, \;\cdots, \; x_n = \frac{\func{det } A_n}{\func{det } A}
\end{equation*}
where, for each $k$, $A_k$ is the matrix obtained from $A$ by replacing column $k$ by $\vec{b}$.

Example 3.2.9

Find $x_{1}$, given the following system of equations.
\begin{equation*}
\arraycolsep=1pt
\begin{array}{rrrrrrr}
5x_1 & + & x_2 & - & x_3 & = & 4\\
9x_1& + & x_2 & - & x_3 & = & 1 \\
x_1& - & x_2 & + & 5x_3 & = & 2 \\
\end{array}
\end{equation*}
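Cramer's Rule applied to this system can be sketched in Python using exact rational arithmetic (the `det3` helper is an illustrative name, not from the text):

```python
from fractions import Fraction

def det3(M):
    # 3 x 3 determinant by cofactor expansion along row 1.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[5, 1, -1],
     [9, 1, -1],
     [1, -1, 5]]
b = [4, 1, 2]

# A_1 is A with column 1 replaced by b; then x_1 = det(A_1) / det(A).
A1 = [[b[r]] + A[r][1:] for r in range(3)]
x1 = Fraction(det3(A1), det3(A))
```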

Example 3.3.1

Consider the evolution of the population of a species of birds. Because the number of males and females are nearly equal, we count only females. We assume that each female remains a juvenile for one year and then becomes an adult, and that only adults have offspring. We make three assumptions about reproduction and survival rates:

- The number of juvenile females hatched in any year is twice the number of adult females alive the year before (we say the $\textbf{reproduction rate}$ is $2$).
- Half of the adult females in any year survive to the next year (the $\textbf{adult survival rate}$ is $\frac{1}{2}$).
- One-quarter of the juvenile females in any year survive into adulthood (the $\textbf{juvenile survival rate}$ is $\frac{1}{4}$).

Theorem 3.3.1

If $A = PDP^{-1}$ then $A^{k} = PD^{k}P^{-1}$ for each $k = 1, 2, \dots$.

Definition 3.4 Eigenvalues and Eigenvectors of a Matrix

If $A$ is an $n \times n$ matrix, a number $\lambda$ is called an $\textbf{eigenvalue}$ of $A$ if
\begin{equation*}
A\vec{x} = \lambda \vec{x} \mbox{ for some column } \vec{x} \neq \vec{0} \mbox{ in } \RR^n
\end{equation*}
In this case, $\vec{x}$ is called an $\textbf{eigenvector}$ of $A$ corresponding to the eigenvalue $\lambda$, or a $\lambda$-$\textbf{eigenvector}$ for short.

Example 3.3.2

If $A = \left[ \begin{array}{rr}
3 & 5 \\
1 & -1
\end{array}\right]$ and $\vec{x} = \left[ \begin{array}{r}
5 \\
1
\end{array}\right]$ then $A\vec{x} = 4 \vec{x}$ so $\lambda = 4$ is an eigenvalue of $A$ with corresponding eigenvector $\vec{x}$.

Definition 3.5 Characteristic Polynomial of a Matrix

If $A$ is an $n \times n$ matrix, the $\textbf{characteristic polynomial}$ $c_{A}(x)$ of $A$ is defined by
\begin{equation*}
c_A(x) = \func{det}(xI - A)
\end{equation*}

Theorem 3.3.2

Let $A$ be an $n \times n$ matrix.

- The eigenvalues $\lambda$ of $A$ are the roots of the characteristic polynomial $c_{A}(x)$ of $A$.
- The $\lambda$-eigenvectors $\vec{x}$ are the nonzero solutions to the homogeneous system $(\lambda I - A)\vec{x} = \vec{0}$.

Example 3.3.3

Find the characteristic polynomial of the matrix $A = \left[ \begin{array}{rr}
3 & 5 \\
1 & -1
\end{array}\right]$
discussed in Example 3.3.2, and then find all the eigenvalues and their eigenvectors.
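For a $2 \times 2$ matrix $\left[ \begin{array}{cc} a & b \\ c & d \end{array}\right]$, expanding $\func{det}(xI - A)$ gives $c_A(x) = x^2 - (a+d)x + (ad - bc)$, so the eigenvalues are the roots of a quadratic. A quick numeric sketch for the matrix above (variable names are ours):

```python
import math

A = [[3, 5], [1, -1]]
tr = A[0][0] + A[1][1]                       # trace a + d
dt = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant ad - bc

# c_A(x) = x^2 - tr*x + dt; find its roots by the quadratic formula.
disc = math.sqrt(tr * tr - 4 * dt)
eigenvalues = sorted([(tr - disc) / 2, (tr + disc) / 2])
```

Each eigenvalue $\lambda$ found this way can then be substituted into $(\lambda I - A)\vec{x} = \vec{0}$ to find the corresponding eigenvectors.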

Example 3.3.5

If $A$ is a square matrix, show that $A$ and $A^{T}$ have the same characteristic polynomial, and hence the same eigenvalues.

Theorem 4.1.1

Let $\vec{v} = \left[
\begin{array}{c}
x \\
y \\
z
\end{array}
\right]$ be a vector.

- $|| \vec{v} || = \sqrt{x^2 + y^2 + z^2}$.
- $\vec{v} = \vec{0}$ if and only if $|| \vec{v} || = 0$
- $|| a \vec{v} || = |a| || \vec{v} ||$ for all scalars $a$.

- In Figure 4.1.2, $|| \vec{v}||$ is the hypotenuse of the right triangle $OQP$, and so $|| \vec{v} ||^2 = h^{2} + z^{2}$ by Pythagoras' theorem. But $h$ is the hypotenuse of the right triangle $ORQ$, so $h^{2} = x^{2} + y^{2}$. Now (1) follows by eliminating $h^{2}$ and taking positive square roots.
- If $|| \vec{v} || = 0$, then $x^{2} + y^{2} + z^{2} = 0$ by (1). Because squares of real numbers are nonnegative, it follows that $x = y = z = 0$, and hence that $\vec{v} = \vec{0}$. The converse is because $|| \vec{0}|| = 0$.
- We have $a\vec{v} = \left[ \begin{array}{ccc} ax & ay & az \end{array}\right]^{T}$ so (1) gives \begin{equation*} || a\vec{v}||^2 = (ax)^2 + (ay)^2 + (az)^2 = a^{2}|| \vec{v}||^2 \end{equation*} Hence $|| a\vec{v} || = \sqrt{a^2} || \vec{v} ||$, and we are done because $\sqrt{a^2} = |a|$ for any real number $a$.
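Properties (1) and (3) are easy to check numerically; a minimal Python sketch with illustrative `norm` and `scale` helpers (our names, not from the text):

```python
import math

def norm(v):
    # ||v|| = sqrt(x^2 + y^2 + z^2), property (1)
    return math.sqrt(sum(c * c for c in v))

def scale(a, v):
    # The scalar multiple a*v, componentwise.
    return tuple(a * c for c in v)

v = (2, -3, 3)
# Property (3): ||a v|| = |a| ||v||, here with a = -2.
check = abs(norm(scale(-2, v)) - 2 * norm(v)) < 1e-9
```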

Example 4.1.1

If $\vec{v} = \left[
\begin{array}{r}
2 \\
-3 \\
3
\end{array}
\right]$
then $|| \vec{v} || = \sqrt{4 + 9 + 9} = \sqrt{22}$. Similarly if $\vec{v} = \left[
\begin{array}{r}
2 \\
-3
\end{array}
\right]$
in 2-space then $|| \vec{v} || = \sqrt{4+9} = \sqrt{13}$.

Theorem 4.1.2

Let $\vec{v} \neq \vec{0}$ and $\vec{w} \neq \vec{0}$ be vectors in $\mathbb{R}^3$. Then $\vec{v} = \vec{w}$ as matrices if and only if $\vec{v}$ and $\vec{w}$ have the same direction and the same length.

Definition 4.1 Geometric vectors

Suppose that $A$ and $B$ are any two points in $\mathbb{R}^3$. In Figure 4.1.4 the line segment from $A$ to $B$ is denoted $\vec{AB}$ and is called the $\textbf{geometric vector}$ from $A$ to $B$. Point $A$ is called the $\textbf{tail}$ of $\vec{AB}$, $B$ is called the $\textbf{tip}$, and the $\textbf{length}$ is denoted $|| \vec{AB} ||$.

The Parallelogram Law

In the parallelogram determined by two vectors $\vec{v}$ and $\vec{w}$, the vector $\vec{v} + \vec{w}$ is the diagonal with the same tail as $\vec{v}$ and $\vec{w}$.

Theorem 4.1.3

If $\vec{v}$ and $\vec{w}$ have a common tail, then $\vec{v} - \vec{w}$ is the vector from the tip of $\vec{w}$ to the tip of $\vec{v}$.

Theorem 4.1.4

Let $P_{1}(x_{1}, y_{1}, z_{1})$ and $P_{2}(x_{2}, y_{2}, z_{2})$ be two points. Then:

- $\vec{P_{1}P_{2}} = \left[ \begin{array}{c} x_{2} - x_{1} \\ y_{2} - y_{1} \\ z_{2} - z_{1} \end{array} \right]$.
- The distance between $P_{1}$ and $P_{2}$ is $\sqrt{(x_{2} - x_{1})^2 + (y_{2} - y_{1})^2 + (z_{2} - z_{1})^2}.$

Example 4.1.3

The distance between $P_{1}(2, -1, 3)$ and $P_{2}(1, 1, 4)$ is $\sqrt{(-1)^2 + (2)^2 + (1)^2} = \sqrt{6}$, and the vector from $P_{1}$ to $P_{2}$ is
$\vec{P_{1}P_{2}} = \left[
\begin{array}{r}
-1 \\
2 \\
1
\end{array}
\right]$.

Scalar Multiple Law

If $a$ is a real number and $\vec{v} \neq \vec{0}$ is a vector then:

- The length of $a\vec{v}$ is $|| a\vec{v} || = |a| || \vec{v}||$.
- If $a\vec{v} \neq \vec{0}$, the direction of $a\vec{v}$ is the same as $\vec{v}$ if $a>0$; opposite to $\vec{v}$ if $a<0.$

Example 4.1.4

If $\vec{v} \neq \vec{0}$ show that $\frac{1}{|| \vec{v} ||}\vec{v}$ is the unique unit vector in the same direction as $\vec{v}$.

Definition 4.2 Parallel vectors in $\mathbb{R}^3$

Two nonzero vectors are called $\textbf{parallel}$ if they have the same or opposite direction.

Theorem 4.1.5

Two nonzero vectors $\vec{v}$ and $\vec{w}$ are parallel if and only if one is a scalar multiple of the other.

Example 4.1.5

Given points $P(2, -1, 4)$, $Q(3, -1, 3)$, $A(0, 2, 1)$, and $B(1, 3, 0)$, determine if $\vec{PQ}$ and $\vec{AB}$ are parallel.

Definition 4.3 Direction Vector of a Line

We call a nonzero vector $\vec{d} \neq \vec{0}$ a **direction vector** for the line if it is parallel to $\vec{AB}$ for some pair of distinct points $A$ and $B$ on the line.

Vector Equation of a line

The line parallel to $\vec{d} \neq \vec{0}$ through the point with vector $\vec{p}_{0}$ is given by
\begin{equation*}
\vec{p} = \vec{p}_{0} + t\vec{d} \quad t \mbox{ any scalar}
\end{equation*}
In other words, the point $P$ with vector $\vec{p}$ is on this line if and only if a real number $t$ exists such that $\vec{p} = \vec{p}_{0} + t\vec{d}$.

Parametric Equations of a line

The line through $P_{0}(x_{0}, y_{0}, z_{0})$ with direction vector
$\vec{d} = \left[
\begin{array}{c}
a \\
b \\
c
\end{array}
\right]
\neq \vec{0}$ is given by
\begin{equation*}
\begin{array}{ll}
x = x_{0} + ta &\\
y = y_{0} + tb & t \mbox{ any scalar}\\
z = z_{0} + tc &
\end{array}
\end{equation*}
In other words, the point $P(x, y, z)$ is on this line if and only if a real number $t$ exists such that $x = x_{0} + ta$, $y = y_{0} + tb$, and $z = z_{0} + tc$.

Example 4.1.6

Find the equations of the line through the points $P_{0}(2, 0, 1)$ and $P_{1}(4, -1, 1)$.

Example 4.1.7

Determine whether the following lines intersect and, if so, find the point of intersection.
\begin{equation*}
\begin{array}{lcl}
x = 1 - 3t & & x = -1 + s\\
y = 2 + 5t & & y = 3 - 4s\\
z = 1 + t & & z = 1 -s
\end{array}
\end{equation*}
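One way to check for an intersection numerically: equate the $x$- and $y$-equations, solve the resulting $2 \times 2$ linear system for $t$ and $s$, and then verify that the $z$-coordinates also agree. A Python sketch of this approach (helper names are ours):

```python
def point1(t):
    # Line 1: (x, y, z) = (1, 2, 1) + t(-3, 5, 1)
    return (1 - 3 * t, 2 + 5 * t, 1 + t)

def point2(s):
    # Line 2: (x, y, z) = (-1, 3, 1) + s(1, -4, -1)
    return (-1 + s, 3 - 4 * s, 1 - s)

# From x: 1 - 3t = -1 + s  =>  3t + s  = 2
# From y: 2 + 5t = 3 - 4s  =>  5t + 4s = 1
# Solve this 2 x 2 system by Cramer's rule.
D  = 3 * 4 - 1 * 5    # coefficient determinant
Dt = 2 * 4 - 1 * 1    # replace column of t-coefficients with (2, 1)
Ds = 3 * 1 - 2 * 5    # replace column of s-coefficients with (2, 1)
t, s = Dt / D, Ds / D

# The lines intersect iff all three coordinates agree at these parameters.
intersect = point1(t) == point2(s)
```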

Definition 4.4 Dot Product in $\mathbb{R}^3$

Given vectors
$\vec{v} = \left[
\begin{array}{c}
x_{1} \\
y_{1}\\
z_{1}
\end{array} \right]$ and
$\vec{w} = \left[
\begin{array}{c}
x_{2} \\
y_{2}\\
z_{2}
\end{array} \right]$, their **dot product** $\vec{v} \cdot \vec{w}$ is the number defined by
\begin{equation*}
\vec{v} \cdot \vec{w} = x_{1}x_{2} + y_{1}y_{2} + z_{1}z_{2} = \vec{v}^T\vec{w}
\end{equation*}

Example 4.2.1

If $\vec{v} = \left[
\begin{array}{r}
2 \\
-1 \\
3
\end{array} \right]$
and $\vec{w} = \left[
\begin{array}{r}
1 \\
4\\
-1
\end{array} \right]$, then $\vec{v} \cdot \vec{w} = 2 \cdot 1 + (-1) \cdot 4 + 3 \cdot (-1) = -5$.

Theorem 4.2.1

Let $\vec{u}$, $\vec{v}$, and $\vec{w}$ denote vectors in $\mathbb{R}^3$ (or $\mathbb{R}^2$).

- $\vec{v} \cdot \vec{w}$ is a real number.
- $\vec{v} \cdot \vec{w} = \vec{w} \cdot \vec{v}$.
- $\vec{v} \cdot \vec{0} = 0 = \vec{0} \cdot \vec{v}$.
- $\vec{v} \cdot \vec{v} = ||\vec{v}||^{2}$.
- $(k\vec{v}) \cdot \vec{w} = k(\vec{v} \cdot \vec{w}) = \vec{v} \cdot (k\vec{w})$ for all scalars $k$.
- $\vec{u} \cdot (\vec{v} \pm \vec{w}) = \vec{u} \cdot \vec{v} \pm \vec{u} \cdot \vec{w}.$

Example 4.2.2

Verify that $||\vec{v} -3\vec{w}||^{2} = 1$ when $||\vec{v}||= 2$, $||\vec{w}|| = 1$, and $\vec{v} \cdot \vec{w} = 2$.

Law of Cosines

If a triangle has sides $a$, $b$, and $c$, and if $\theta$ is the interior angle opposite $c$ then
\begin{equation*}
c^2 = a^2 + b^2 -2ab \cos\theta
\end{equation*}

Theorem 4.2.2

Let $\vec{v}$ and $\vec{w}$ be nonzero vectors. If $\theta$ is the angle between $\vec{v}$ and $\vec{w}$, then
\begin{equation*}
\vec{v} \cdot \vec{w} = || \vec{v} || || \vec{w} || \cos\theta
\end{equation*}

Example 4.2.3

Compute the angle between
$\vec{u} = \left[
\begin{array}{r}
-1 \\
1 \\
2
\end{array}
\right]$ and
$\vec{v} = \left[
\begin{array}{r}
2 \\
1 \\
-1
\end{array}
\right]$.
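Since $\vec{u}$ and $\vec{v}$ are nonzero, Theorem 4.2.2 can be rearranged to $\theta = \cos^{-1}\left( \frac{\vec{u} \cdot \vec{v}}{|| \vec{u} || \, || \vec{v} ||} \right)$. A Python sketch of this computation, with illustrative `dot`, `norm`, and `angle` helpers (our names):

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

def angle(v, w):
    # theta = arccos( v.w / (||v|| ||w||) ), for nonzero v and w.
    return math.acos(dot(v, w) / (norm(v) * norm(w)))

u = (-1, 1, 2)
v = (2, 1, -1)
theta = angle(u, v)  # in radians
```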

Definition 4.5 Orthogonal Vectors in $\mathbb{R}^3$

Two vectors $\vec{v}$ and $\vec{w}$ are said to be $\textbf{orthogonal}$ if $\vec{v} = \vec{0}$ or $\vec{w} = \vec{0}$ or the angle between them is $\frac{\pi}{2}$.

Theorem 4.2.3

Two vectors $\vec{v}$ and $\vec{w}$ are orthogonal if and only if $\vec{v} \cdot \vec{w} = 0$.

Example 4.2.4

Show that the points $P(3, -1, 1)$, $Q(4, 1, 4)$, and $R(6, 0, 4)$ are the vertices of a right triangle.
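By Theorem 4.2.3 it suffices to find a vertex where the two side vectors meeting there have dot product zero. A Python sketch of this check (the `dot` and `vec` helper names are ours):

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

def vec(P, Q):
    # Geometric vector from point P to point Q, as a coordinate tuple.
    return tuple(q - p for p, q in zip(P, Q))

P, Q, R = (3, -1, 1), (4, 1, 4), (6, 0, 4)

# The angle at a vertex is a right angle iff the two side vectors
# emanating from it are orthogonal (dot product zero).
at_P = dot(vec(P, Q), vec(P, R))
at_Q = dot(vec(Q, P), vec(Q, R))
at_R = dot(vec(R, P), vec(R, Q))
```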