4 Vector Geometry

4.1 Vectors and Lines

In this chapter we study the geometry of 3-dimensional space. We view a point in 3-space as an arrow from the origin to that point. Doing so provides a “picture” of the point that is truly worth a thousand words.

Vectors in \mathbb{R}^3

Introduce a coordinate system in 3-dimensional space in the usual way. First, choose a point O called the \textit{origin}, then choose three mutually perpendicular lines through O, called the x, y, and z \textit{axes}, and establish a number scale on each axis with zero at the origin. Given a point P in 3-space we associate three numbers x, y, and z with P, as described in Figure 4.1.1.

Cartesian Coordinates

These numbers are called the \textit{coordinates} of P, and we denote the point as (x, y, z), or P(x, y, z) to emphasize the label P. The result is called a \textit{cartesian} coordinate system for 3-space, and the resulting description of 3-space is called \textit{cartesian geometry}.

As in the plane, we introduce vectors by identifying each point P(x, y, z) with the vector
\vec{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] in \mathbb{R}^3, represented by the \textbf{arrow} from the origin to P as in Figure 4.1.1. Informally, we say that the point P has vector \vec{v}, and that vector \vec{v} has point P. In this way 3-space is identified with \mathbb{R}^3, and this identification will be made throughout this chapter, often without comment. In particular, the terms “vector” and “point” are interchangeable. The resulting description of 3-space is called \textbf{vector geometry}. Note that the origin is \vec{0} = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right].

 

Length and direction

We are going to discuss two fundamental geometric properties of vectors in \mathbb{R}^3: length and direction. First, if \vec{v} is a vector with point P, the \textbf{length} of vector \vec{v} is defined to be the distance from the origin to P, that is the length of the arrow representing \vec{v}. The following properties of length will be used frequently.

Theorem 4.1.1

Let \vec{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] be a vector.

  1. || \vec{v} || = \sqrt{x^2 + y^2 + z^2}.
  2. \vec{v} = \vec{0} if and only if || \vec{v} || = 0.
  3. || a \vec{v} || = |a| || \vec{v} || for all scalars a.

Proof:


Let \vec{v} have point P(x, y, z).

  1. In Figure 4.1.2, || \vec{v}|| is the hypotenuse of the right triangle OQP, and so || \vec{v} ||^2 = h^{2} + z^{2} by Pythagoras’ theorem. But h is the hypotenuse of the right triangle ORQ, so h^{2} = x^{2} + y^{2}. Now (1) follows by eliminating h^{2} and taking positive square roots.
  2. If || \vec{v} || = 0, then x^{2} + y^{2} + z^{2} = 0 by (1). Because squares of real numbers are nonnegative, it follows that x = y = z = 0, and hence that \vec{v} = \vec{0}. The converse holds because || \vec{0} || = 0.
  3. We have a\vec{v} = \left[ \begin{array}{ccc} ax & ay & az \end{array}\right]^{T} so (1) gives

        \begin{equation*} || a\vec{v}||^2 = (ax)^2 + (ay)^2 + (az)^2 = a^{2}|| \vec{v}||^2 \end{equation*}

    Hence || a\vec{v} || = \sqrt{a^2} ||  \vec{v} ||, and we are done because \sqrt{a^2} = |a| for any real number a.

Example 4.1.1

If \vec{v} = \left[ \begin{array}{r} 2 \\ -3 \\ 3 \end{array} \right]
then || \vec{v} || = \sqrt{4 + 9 + 9} = \sqrt{22}. Similarly if \vec{v} = \left[ \begin{array}{r} 2 \\ -3 \end{array} \right]
in 2-space then || \vec{v} || = \sqrt{4+9} = \sqrt{13}.
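For readers who like to check such computations numerically, here is a minimal Python sketch (using the NumPy library, which is assumed to be installed; it is an illustration only, not part of the development) that reproduces Example 4.1.1 and checks property (3) of Theorem 4.1.1.

    import numpy as np

    v = np.array([2, -3, 3])
    print(np.sqrt(np.sum(v**2)))     # 4.690... = sqrt(22), by Theorem 4.1.1(1)
    print(np.linalg.norm(v))         # the same value, via NumPy's built-in norm

    a = -2.0                         # property (3): ||a v|| = |a| ||v||
    print(np.isclose(np.linalg.norm(a * v), abs(a) * np.linalg.norm(v)))   # True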

When we view two nonzero vectors as arrows emanating from the origin, it is clear geometrically what we mean by saying that they have the same or opposite \textbf{direction}. This leads to a fundamental new description of vectors.

 

 

 

Theorem 4.1.2

Let \vec{v} \neq \vec{0} and \vec{w} \neq \vec{0} be vectors in \mathbb{R}^3. Then \vec{v} = \vec{w} as matrices if and only if \vec{v} and \vec{w} have the same direction and the same length.

Proof:

If \vec{v} = \vec{w}, they clearly have the same direction and length. Conversely, let \vec{v} and \vec{w} be vectors with points P(x, y, z) and Q(x_{1}, y_{1}, z_{1}) respectively. If \vec{v} and \vec{w} have the same length and direction then, geometrically, P and Q must be the same point.


Hence x = x_{1}, y = y_{1}, and z = z_{1}, that is \vec{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = \left[ \begin{array}{c} x_{1} \\ y_{1} \\ z_{1} \end{array} \right] = \vec{w}.

Note that a vector’s length and direction do \textit{not} depend on the choice of coordinate system in \mathbb{R}^3. Such descriptions are important in applications because physical laws are often stated in terms of vectors, and these laws cannot depend on the particular coordinate system used to describe the situation.

Geometric Vectors

If A and B are distinct points in space, the arrow from A to B has length and direction.

Figure 4.1.4: The vector between A and B

Hence,

Definition 4.1 Geometric vectors

Suppose that A and B are any two points in \mathbb{R}^3. In Figure 4.1.4 the line segment from A to B is denoted \vec{AB} and is called the \textbf{geometric vector} from A to B. Point A is called the \textbf{tail} of \vec{AB}, B is called the \textbf{tip}, and the \textbf{length} is denoted || \vec{AB} ||.

Note that if \vec{v} is any vector in \mathbb{R}^3 with point P then \vec{v} = \vec{OP} is itself a geometric vector where O is the origin. Referring to \vec{AB} as a “vector” seems justified by Theorem 4.1.2 because it has a direction (from A to B) and a length || \vec{AB} ||. However there appears to be a problem because two geometric vectors can have the same length and direction even if the tips and tails are different.

Figure 4.1.5: Equal vectors

For example, \vec{AB} and \vec{PQ} in Figure 4.1.5 have the same length \sqrt{5} and the same direction (1 unit left and 2 units up) so, by Theorem 4.1.2, they are the same vector! The best way to understand this apparent paradox is to see \vec{AB} and \vec{PQ} as different \textit{representations} of the same underlying vector \left[ \begin{array}{r} -1 \\ 2 \end{array} \right]. Once this is understood, the phenomenon is a great benefit because, thanks to Theorem 4.1.2, it means that the same geometric vector can be positioned anywhere in space; what is important is the length and direction, not the location of the tip and tail. This ability to move geometric vectors about is very useful.

The Parallelogram Law


We now give an intrinsic description of the sum of two vectors \vec{v} and \vec{w} in \mathbb{R}^3, that is a description that depends only on the lengths and directions of \vec{v} and \vec{w} and not on the choice of coordinate system. Using Theorem 4.1.2 we can think of these vectors as having a common tail A. If their tips are P and Q respectively, then they both lie in a plane \mathcal{P} containing A, P, and Q, as shown in Figure 4.1.6. The vectors \vec{v} and \vec{w} create a parallelogram in \mathcal{P}, shaded in Figure 4.1.6, called the parallelogram \textbf{determined} by \vec{v} and \vec{w}.

 

If we now choose a coordinate system in the plane \mathcal{P} with A as origin, then the parallelogram law in the plane shows that their sum \vec{v} + \vec{w} is the diagonal of the parallelogram they determine with tail A. This is an intrinsic description of the sum \vec{v} + \vec{w} because it makes no reference to coordinates. This discussion proves:

The Parallelogram Law

In the parallelogram determined by two vectors \vec{v} and \vec{w}, the vector \vec{v} + \vec{w} is the diagonal with the same tail as \vec{v} and \vec{w}.

Figure 4.1.7: The parallelogram law

Because a vector can be positioned with its tail at any point, the parallelogram law leads to another way to view vector addition. In Figure 4.1.7 (a) the sum \vec{v} + \vec{w} of two vectors \vec{v} and \vec{w} is shown as given by the parallelogram law. If \vec{w} is moved so its tail coincides with the tip of \vec{v} (shown in (b)) then the sum \vec{v} + \vec{w} is seen as “first \vec{v} and then \vec{w}.” Similarly, moving the tail of \vec{v} to the tip of \vec{w} shows in (c) that \vec{v} + \vec{w} is “first \vec{w} and then \vec{v}.” This will be referred to as the \textbf{tip-to-tail rule}, and it gives a graphic illustration of why \vec{v} + \vec{w} = \vec{w} + \vec{v}.

Since \vec{AB} denotes the vector from a point A to a point B, the tip-to-tail rule takes the easily remembered form

    \begin{equation*} \vec{AB} + \vec{BC} = \vec{AC} \end{equation*}

for any points A, B, and C.

 

Figure 4.1.8: The sum of three vectors

 

 

One reason for the importance of the tip-to-tail rule is that it means two or more vectors can be added by placing them tip-to-tail in sequence. This gives a useful “picture” of the sum of several vectors, and is illustrated for three vectors in Figure 4.1.8 where \vec{u} + \vec{v} + \vec{w} is viewed as first \vec{u}, then \vec{v}, then \vec{w}.

 

 

Figure 4.1.9: Sum and difference of vectors

 

There is a simple geometrical way to visualize the (matrix) \textbf{difference} \vec{v} - \vec{w} of two vectors. If \vec{v} and \vec{w} are positioned so that they have a common tail A, and if B and C are their respective tips, then the tip-to-tail rule gives \vec{w} + \vec{CB} = \vec{v}. Hence \vec{v} - \vec{w} = \vec{CB} is the vector from the tip of \vec{w} to the tip of \vec{v}. Thus both \vec{v} - \vec{w} and \vec{v} + \vec{w} appear as diagonals in the parallelogram determined by \vec{v} and \vec{w} (see Figure 4.1.9).

 

Theorem 4.1.3

If \vec{v} and \vec{w} have a common tail, then \vec{v} - \vec{w} is the vector from the tip of \vec{w} to the tip of \vec{v}.

One of the most useful applications of vector subtraction is that it gives a simple formula for the vector from one point to another, and for the distance between the points.

Theorem 4.1.4

Let P_{1}(x_{1}, y_{1}, z_{1}) and P_{2}(x_{2}, y_{2}, z_{2}) be two points. Then:

  1. \vec{P_{1}P}_{2} = \left[ \begin{array}{c} x_{2} - x_{1} \\ y_{2} - y_{1} \\ z_{2} - z_{1} \end{array} \right].
  2. The distance between P_{1} and P_{2} is \sqrt{(x_{2} - x_{1})^2 + (y_{2} - y_{1})^2 + (z_{2} - z_{1})^2}.

Can you prove these results?

 

 

Example 4.1.3

The distance between P_{1}(2, -1, 3) and P_{2}(1, 1, 4) is \sqrt{(-1)^2 + (2)^2 + (1)^2} = \sqrt{6}, and the vector from P_{1} to P_{2} is
\vec{P_{1}P}_{2} = \left[ \begin{array}{r} -1 \\ 2 \\ 1 \end{array} \right].
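The computation in Example 4.1.3 can be checked with a short NumPy sketch (an illustration only, not part of the development): subtracting the coordinate vectors gives \vec{P_{1}P}_{2}, and its norm is the distance.

    import numpy as np

    P1 = np.array([2, -1, 3])
    P2 = np.array([1, 1, 4])

    v = P2 - P1                  # vector from P1 to P2, Theorem 4.1.4(1)
    print(v)                     # [-1  2  1]
    print(np.linalg.norm(v))     # 2.449... = sqrt(6), Theorem 4.1.4(2)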

The next theorem tells us what happens to the length and direction of a scalar multiple of a given vector.

Scalar Multiple Law

If a is a real number and \vec{v} \neq \vec{0} is a vector then:

  • The length of a\vec{v} is || a\vec{v} || = |a| || \vec{v}||.
  • If a\vec{v} \neq \vec{0}, the direction of a\vec{v} is the same as \vec{v} if a>0; opposite to \vec{v} if a<0.

Proof:

The first statement follows from Theorem 4.1.1.

To prove the second statement, let O denote the origin in \mathbb{R}^3. Let \vec{v} have point P, and choose any plane containing O and P. If we set up a coordinate system in this plane with O as origin, then \vec{v} = \vec{OP} so the result follows from the scalar multiple law in the plane.

A vector \vec{u} is called a \textbf{unit vector} if || \vec{u} || = 1. Then
\vec{i} = \left[ \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right],  \vec{j} = \left[ \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right], and \vec{k} = \left[ \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right]
are unit vectors, called the \textbf{coordinate} vectors.

Example 4.1.4

If \vec{v} \neq \vec{0} show that \frac{1}{|| \vec{v} ||} \vec{v} is the unique unit vector in the same direction as \vec{v}.

Solution:
The vectors in the same direction as \vec{v} are the scalar multiples a\vec{v} where a > 0. But || a\vec{v} || = |a| || \vec{v} || = a || \vec{v} || when a > 0, so a\vec{v} is a unit vector if and only if a = \frac{1}{|| \vec{v} ||}.
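A quick numerical illustration of Example 4.1.4 (a NumPy sketch, not part of the text): dividing a nonzero vector by its length produces a unit vector in the same direction.

    import numpy as np

    v = np.array([2.0, -3.0, 3.0])
    u = v / np.linalg.norm(v)        # the unit vector (1/||v||) v
    print(np.linalg.norm(u))         # 1.0, so u is a unit vector
    print(u * np.linalg.norm(v))     # recovers v, so u has the same direction as v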

 

 

 

Definition 4.2 Parallel vectors in \mathbb{R}^3

Two nonzero vectors are called \textbf{parallel} if they have the same or opposite direction.

Theorem 4.1.5

Two nonzero vectors \vec{v} and \vec{w} are parallel if and only if one is a scalar multiple of the other.

 

 

Example 4.1.5

Given points P(2, -1, 4), Q(3, -1, 3), A(0, 2, 1), and B(1, 3, 0), determine if \vec{PQ} and \vec{AB} are parallel.

Solution:

By Theorem 4.1.4, \vec{PQ} = (1, 0, -1) and \vec{AB} = (1, 1, -1). If \vec{PQ} = t\vec{AB}
then (1, 0, -1) = (t, t, -t), so 1 = t and 0 = t, which is impossible. Hence \vec{PQ} is \textit{not} a scalar multiple of \vec{AB}, so these vectors are not parallel by Theorem 4.1.5.
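One way to carry out the test in Example 4.1.5 numerically is to check whether the matrix with columns \vec{PQ} and \vec{AB} has rank 1, which happens exactly when one vector is a scalar multiple of the other. The following NumPy sketch is an illustration only.

    import numpy as np

    P, Q = np.array([2, -1, 4]), np.array([3, -1, 3])
    A, B = np.array([0, 2, 1]), np.array([1, 3, 0])

    PQ, AB = Q - P, B - A        # [1 0 -1] and [1 1 -1]
    rank = np.linalg.matrix_rank(np.column_stack([PQ, AB]))
    print(rank)                  # 2, so PQ is not a scalar multiple of AB: not parallel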

 

Lines in Space

These vector techniques can be used to give a very simple way of describing  straight lines in space. In order to do this, we first need a way to
specify the orientation of such a line.

Definition 4.3 Direction Vector of a Line

A nonzero vector \vec{d} \neq \vec{0} is called a \textbf{direction vector} for a line if it is parallel to \vec{AB} for some pair of distinct points A and B on the line.

Note that any nonzero scalar multiple of \vec{d} would also serve as a direction vector of the line.

We use the fact that there is exactly one line that passes through a particular point P_{0}(x_{0}, y_{0}, z_{0}) and has a given direction vector
\vec{d} = \left[ \begin{array}{c} a \\ b \\ c \end{array} \right]. We want to describe this line by giving a condition on x, y, and z under which the point P(x, y, z) lies on the line. Let
\vec{p}_{0} = \left[ \begin{array}{c} x_{0} \\ y_{0} \\ z_{0} \end{array} \right]
and  \vec{p} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] denote the vectors of P_{0} and P, respectively.

Figure 4.1.10: Direction vector of a line

Then

    \begin{equation*} \vec{p} = \vec{p}_{0} + \vec{P_{0}P} \end{equation*}

Hence P lies on the line if and only if \vec{P_{0}P} is parallel to \vec{d}—that is, if and only if \vec{P_{0}P} = t\vec{d} for some scalar t by Theorem 4.1.5. Thus \vec{p} is the vector of a point on the line if and only if \vec{p} = \vec{p}_{0} + t\vec{d} for some scalar t.

 

 

 

Vector Equation of a line

The line parallel to \vec{d} \neq \vec{0} through the point with vector \vec{p}_{0} is given by

    \begin{equation*} \vec{p} = \vec{p}_{0} + t\vec{d} \quad t \mbox{ any scalar} \end{equation*}

In other words, the point P with vector \vec{p} is on this line if and only if a real number t exists such that \vec{p} = \vec{p}_{0} + t\vec{d}.

 

In component form the vector equation becomes

    \begin{equation*} \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = \left[ \begin{array}{c} x_{0} \\ y_{0} \\ z_{0} \end{array} \right] + t \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \end{equation*}

Equating components gives a different description of the line.

Parametric Equations of a line

The line through P_{0}(x_{0}, y_{0}, z_{0}) with direction vector
\vec{d} = \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \neq \vec{0} is given by

    \begin{equation*} \begin{array}{ll} x = x_{0} + ta &\\ y = y_{0} + tb & t \mbox{ any scalar}\\ z = z_{0} + tc & \end{array} \end{equation*}

In other words, the point P(x, y, z) is on this line if and only if a real number t exists such that x = x_{0} + ta, y = y_{0} + tb, and z = z_{0} + tc.

 

 

Example 4.1.6

Find the equations of the line through the points P_{0}(2, 0, 1) and P_{1}(4, -1, 1).

Solution:

Let
\vec{d} = \vec{P_{0}P}_{1} = \left[ \begin{array}{c} 2 \\ 1 \\ 0 \end{array} \right]
denote the vector from P_{0} to P_{1}. Then \vec{d} is parallel to the line (P_{0} and P_{1} are on the line), so \vec{d} serves as a direction vector for the line. Using P_{0} as the point on the line leads to the parametric equations

    \begin{equation*} \begin{array}{ll} x = 2 + 2t &\\ y = -t & t \mbox{ a parameter}\\ z = 1 & \end{array} \end{equation*}

Note that if P_{1} is used (rather than P_{0}), the equations are

    \begin{equation*} \begin{array}{ll} x = 4 + 2s &\\ y = -1 - s & s \mbox{ a parameter}\\ z = 1 & \end{array} \end{equation*}

These are different from the preceding equations, but this is merely the result of a change of parameter. In fact, s = t - 1.
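The two parametrizations found in Example 4.1.6 can be compared numerically. In the sketch below (NumPy, for illustration only) the substitution s = t - 1 is checked at an arbitrary value of t.

    import numpy as np

    p0 = np.array([2, 0, 1])         # P0
    p1 = np.array([4, -1, 1])        # P1
    d = np.array([2, -1, 0])         # direction vector from P0 to P1

    t = 2.5
    point_from_P0 = p0 + t * d           # first parametrization
    point_from_P1 = p1 + (t - 1) * d     # second parametrization with s = t - 1
    print(np.allclose(point_from_P0, point_from_P1))   # True: same point on the same line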

Example 4.1.7

Determine whether the following lines intersect and, if so, find the point of intersection.

    \begin{equation*} \begin{array}{lcl} x = 1 - 3t & & x = -1 + s\\ y = 2 + 5t & & y = 3 - 4s\\ z = 1 + t & & z = 1 -s \end{array} \end{equation*}

Solution:
Suppose P(x, y, z) with vector \vec{p} lies on both lines. Then

    \begin{equation*} \left[ \begin{array}{c} 1 - 3t \\ 2 + 5t \\ 1 + t \end{array} \right] = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = \left[ \begin{array}{c} -1 + s \\ 3 - 4s \\ 1 - s \end{array} \right] \mbox{ for some } t \mbox{ and } s, \end{equation*}

where the first (second) equation is because P lies on the first (second) line. Hence the lines intersect if and only if the three equations

    \begin{align*} 1 - 3t & = -1 + s\\ 2 + 5t & = 3 - 4s\\ 1 + t & = 1 -s \end{align*}

have a solution. In this case, t = 1 and s = -1 satisfy all three equations, so the lines do intersect and the point of intersection is

    \begin{equation*} \vec{p} = \left[ \begin{array}{c} 1 - 3t \\ 2 + 5t \\ 1 + t \end{array} \right] = \left[ \begin{array}{r} -2 \\ 7 \\ 2 \end{array} \right] \end{equation*}

using t = 1. Of course, this point can also be found from
\vec{p} = \left[ \begin{array}{c} -1 + s \\ 3 - 4s \\ 1 - s \end{array} \right] using s = -1.
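The system in Example 4.1.7 can also be solved numerically: solve the first two equations for t and s, then verify that the third equation is satisfied. A minimal NumPy sketch (illustration only):

    import numpy as np

    # 1 - 3t = -1 + s  and  2 + 5t = 3 - 4s, rewritten as A @ [t, s] = b
    A = np.array([[-3.0, -1.0],
                  [ 5.0,  4.0]])
    b = np.array([-2.0, 1.0])
    t, s = np.linalg.solve(A, b)
    print(t, s)                          # 1.0 -1.0
    print(np.isclose(1 + t, 1 - s))      # True: the third equation also holds
    print(np.array([1, 2, 1]) + t * np.array([-3, 5, 1]))   # [-2.  7.  2.]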

 

 

 

4.2 Projections and Planes

Suppose a point P and a plane are given and it is desired to find the point Q that lies in the plane and is closest to P, as shown in Figure 4.2.1.

Figure 4.2.1: A point P and a plane

 

Clearly, what is required is to find the line through P that is perpendicular to the plane and then to obtain Q as the point of intersection of this line with the plane. Finding the line perpendicular to the plane requires a way to determine when two vectors are perpendicular. This can be done using the idea of the dot product of two vectors.

 

The Dot Product and Angles

Definition 4.4 Dot Product in \mathbb{R}^3

Given vectors
\vec{v} = \left[ \begin{array}{c} x_{1} \\ y_{1}\\ z_{1} \end{array} \right] and
\vec{w} = \left[ \begin{array}{c} x_{2} \\ y_{2}\\ z_{2} \end{array} \right], their \textbf{dot product} \vec{v} \cdot \vec{w} is the number defined by

    \begin{equation*} \vec{v} \cdot \vec{w} = x_{1}x_{2} + y_{1}y_{2} + z_{1}z_{2} = \vec{v}^T\vec{w} \end{equation*}

Because \vec{v} \cdot  \vec{w} is a number, it is sometimes called the scalar product of \vec{v} and \vec{w}.

Example 4.2.1

If  \vec{v} = \left[ \begin{array}{r} 2 \\ -1 \\ 3 \end{array} \right]
and \vec{w} = \left[ \begin{array}{r} 1 \\ 4\\ -1 \end{array} \right], then \vec{v} \cdot \vec{w} = 2 \cdot 1 + (-1) \cdot 4 + 3 \cdot (-1) = -5.

Theorem 4.2.1

Let \vec{u}, \vec{v}, and \vec{w} denote vectors in \mathbb{R}^3 (or \mathbb{R}^2).

  1. \vec{v} \cdot \vec{w} is a real number.
  2. \vec{v} \cdot \vec{w} = \vec{w} \cdot \vec{v}.
  3. \vec{v} \cdot \vec{0} = 0 = \vec{0} \cdot \vec{v}.
  4. \vec{v} \cdot \vec{v} = ||\vec{v}||^{2}.
  5. (k\vec{v}) \cdot \vec{w} = k(\vec{v} \cdot \vec{w}) = \vec{v} \cdot (k\vec{w}) for all scalars k.
  6. \vec{u} \cdot (\vec{v} \pm \vec{w}) = \vec{u} \cdot \vec{v} \pm \vec{u} \cdot \vec{w}.

The reader is invited to prove these properties using the definition of the dot product.

Example 4.2.2

Verify that ||\vec{v} -3\vec{w}||^{2} = 1 when ||\vec{v}||= 2, ||\vec{w}|| = 1, and \vec{v} \cdot \vec{w} = 2.

Solution:

We apply Theorem 4.2.1 several times:

    \begin{align*} || \vec{v} - 3\vec{w} || ^2 &= (\vec{v} - 3\vec{w}) \cdot(\vec{v} - 3\vec{w}) \\ &= \vec{v} \cdot (\vec{v} - 3\vec{w}) - 3\vec{w} \cdot(\vec{v} - 3\vec{w}) \\ &= \vec{v} \cdot \vec{v} - 3(\vec{v} \cdot \vec{w}) - 3(\vec{w} \cdot \vec{v}) + 9(\vec{w} \cdot \vec{w}) \\ &=|| \vec{v} ||^2 - 6(\vec{v} \cdot \vec{w}) + 9|| \vec{w} ||^2 \\ &= 4 - 12 + 9 = 1 \end{align*}

There is an intrinsic description of the dot product of two nonzero vectors in \mathbb{R}^3. To understand it we require the following result from trigonometry.

Law of Cosines

If a triangle has sides a, b, and c, and if \theta is the interior angle opposite c, then

    \begin{equation*} c^2 = a^2 + b^2 -2ab \cos\theta \end{equation*}

Figure 4.2.2: The law of cosines

Proof:

We prove it when \theta is acute, that is, 0 \leq \theta < \frac{\pi}{2}; the obtuse case is similar. In Figure 4.2.2 we have p = a \sin \theta and q = a \cos \theta.

Hence Pythagoras’ theorem gives

    \begin{align*} c^2 = p^2 + (b - q)^2 &= a^2\sin^2\theta + (b - a\cos\theta)^2 \\ &= a^2(\sin^2\theta + \cos^2\theta) +b^2 - 2ab\cos\theta \end{align*}

The law of cosines follows because \sin^{2} \theta + \cos^{2} \theta = 1 for any angle \theta.

 

Note that the law of cosines reduces to Pythagoras’ theorem if \theta is a right angle (because \cos\frac{\pi}{2} = 0).

Now let \vec{v} and \vec{w} be nonzero vectors positioned with a common tail. Then they determine a unique angle \theta in the range

    \begin{equation*} 0 \leq \theta \leq \pi \end{equation*}

This angle \theta will be called the angle between \vec{v} and \vec{w}. Clearly \vec{v} and \vec{w} are parallel if \theta is either 0 or \pi. Note that we do not define the angle between \vec{v} and \vec{w} if one of these vectors is \vec{0}.

The next result gives an easy way to compute the angle between two nonzero vectors using the dot product.

Theorem 4.2.2

Let \vec{v} and \vec{w} be nonzero vectors. If \theta is the angle between \vec{v} and \vec{w}, then

    \begin{equation*} \vec{v} \cdot \vec{w} = || \vec{v} || || \vec{w} || \cos\theta \end{equation*}

Figure 4.2.4: The law of cosines

Proof:

We calculate || \vec{v} - \vec{w}||^{2} in two ways. First apply the law of cosines to the triangle in Figure 4.2.4 to obtain:

    \begin{equation*} || \vec{v} - \vec{w} ||^2 = || \vec{v} ||^2 + || \vec{w} ||^2 - 2|| \vec{v} || || \vec{w} || \cos\theta \end{equation*}

 

On the other hand, we use Theorem 4.2.1:

    \begin{align*} || \vec{v} - \vec{w} ||^2 &= (\vec{v} - \vec{w}) \cdot (\vec{v} - \vec{w}) \\ &= \vec{v} \cdot \vec{v} - \vec{v} \cdot \vec{w} - \vec{w} \cdot \vec{v} + \vec{w} \cdot \vec{w} \\ &= || \vec{v} ||^2 - 2(\vec{v} \cdot \vec{w}) + || \vec{w} ||^2 \end{align*}

Comparing these we see that -2 ||\vec{v}|| ||\vec{w} || \cos \theta = -2(\vec{v} \cdot \vec{w}), and the result follows.

If \vec{v} and \vec{w} are nonzero vectors, Theorem 4.2.2 gives an intrinsic description of \vec{v} \cdot \vec{w} because || \vec{v}||, || \vec{w} ||, and the angle \theta between \vec{v} and \vec{w} do not depend on the choice of coordinate system. Moreover, since || \vec{v} || and || \vec{w}|| are nonzero (\vec{v} and \vec{w} are nonzero vectors), it gives a formula for the cosine of the angle \theta:

    \begin{equation*} \cos\theta = \frac{\vec{v} \cdot \vec{w}}{|| \vec{v} || || \vec{w} ||} \end{equation*}

Since 0 \leq \theta \leq \pi, this can be used to find \theta.

 

Example 4.2.3

Compute the angle between
\vec{u} = \left[ \begin{array}{r} -1 \\ 1 \\ 2 \end{array} \right]  and
\vec{v} = \left[ \begin{array}{r} 2 \\ 1 \\ -1 \end{array} \right].

Solution:

Compute \cos\theta = \frac{\vec{u} \cdot \vec{v}}{|| \vec{u} ||  || \vec{v} ||} = \frac{-2 + 1 - 2}{\sqrt{6}\sqrt{6}} = -\frac{1}{2}. Now recall that \cos\theta and \sin \theta are defined so that (\cos \theta, \sin \theta) is the point on the unit circle determined by the angle \theta (drawn counterclockwise, starting from the positive x axis). In the present case, we know that \cos \theta = -\frac{1}{2} and that 0 \leq \theta \leq \pi. Because \cos\frac{\pi}{3} = \frac{1}{2}, it follows that \theta = \frac{2\pi}{3}.
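The angle in Example 4.2.3 can be recovered numerically with arccos, which returns a value in the range 0 \leq \theta \leq \pi, exactly as required. A short NumPy sketch (illustration only):

    import numpy as np

    u = np.array([-1, 1, 2])
    v = np.array([2, 1, -1])

    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    print(cos_theta)             # -0.5
    print(np.arccos(cos_theta))  # 2.094... = 2*pi/3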

If \vec{v} and \vec{w} are nonzero, the formula for \cos\theta shows that \cos \theta has the same sign as \vec{v} \cdot \vec{w}, so

    \begin{equation*} \begin{array}{lll} \vec{v} \cdot \vec{w} > 0 & \mbox{if and only if } & \theta \mbox{ is acute } (0 \leq \theta < \frac{\pi}{2}) \\ \vec{v} \cdot \vec{w} < 0 & \mbox{if and only if } & \theta \mbox{ is obtuse } (\frac{\pi}{2} < \theta \leq \pi) \\ \vec{v} \cdot \vec{w} = 0 & \mbox{if and only if } & \theta = \frac{\pi}{2} \end{array} \end{equation*}

In this last case, the (nonzero) vectors are perpendicular. The following terminology is used in linear algebra:

 

 

 

Definition 4.5 Orthogonal Vectors in \mathbb{R}^3

Two vectors \vec{v} and \vec{w} are said to be \textbf{orthogonal} if \vec{v} = \vec{0} or \vec{w} = \vec{0} or the angle between them is \frac{\pi}{2}.

Since \vec{v} \cdot \vec{w} = 0 if either \vec{v} = \vec{0} or \vec{w} = \vec{0}, we have the following theorem:

Theorem 4.2.3

Two vectors \vec{v} and \vec{w} are orthogonal if and only if \vec{v} \cdot \vec{w} = 0.

 

 

Example 4.2.4

Show that the points P(3, -1, 1), Q(4, 1, 4), and R(6, 0, 4) are the vertices of a right triangle.

Solution:

The vectors along the sides of the triangle are

    \begin{equation*} \vec{PQ} = \left[ \begin{array}{r} 1 \\ 2 \\ 3 \end{array} \right],\ \vec{PR} = \left[ \begin{array}{r} 3 \\ 1 \\ 3 \end{array} \right], \mbox{ and } \vec{QR} = \left[ \begin{array}{r} 2 \\ -1 \\ 0 \end{array} \right] \end{equation*}

Evidently \vec{PQ} \cdot \vec{QR} = 2 - 2 + 0 = 0, so \vec{PQ} and \vec{QR} are orthogonal vectors. This means sides PQ and QR are perpendicular—that is, the angle at Q is a right angle.
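A numerical version of Example 4.2.4 simply checks the dot products of the pairs of sides; the zero identifies the right angle (Theorem 4.2.3). A NumPy sketch, for illustration only:

    import numpy as np

    P, Q, R = np.array([3, -1, 1]), np.array([4, 1, 4]), np.array([6, 0, 4])
    PQ, PR, QR = Q - P, R - P, R - Q

    print(np.dot(PQ, QR))                   # 0, so the angle at Q is a right angle
    print(np.dot(PQ, PR), np.dot(QR, PR))   # 14 and 5: the angles at P and R are not right angles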

 

 

Projections

In applications of vectors, it is frequently useful to write a vector as the sum of two orthogonal vectors.

Figure 4.2.5: Projection of \vec{u} on \vec{d}

If a nonzero vector \vec{d} is specified, the key idea is to be able to write an arbitrary vector \vec{u} as a sum of two vectors,

    \begin{equation*} \vec{u} = \vec{u}_{1} + \vec{u}_{2} \end{equation*}

where \vec{u}_{1} is parallel to \vec{d} and \vec{u}_{2} = \vec{u} - \vec{u}_{1} is orthogonal to \vec{d}. Suppose that \vec{u} and \vec{d} \neq \vec{0} emanate from a common tail Q (see Figure 4.2.5). Let P be the tip of \vec{u}, and let P_{1} denote the foot of the perpendicular from P to the line through Q parallel to \vec{d}.

Then \vec{u}_{1} = \vec{QP}_{1} has the required properties:

1. \vec{u}_{1} is parallel to \vec{d}.

2. \vec{u}_{2} = \vec{u} - \vec{u}_{1} is orthogonal to \vec{d}.

3. \vec{u} = \vec{u}_{1} + \vec{u}_{2}.

Definition 4.6 Projection in \mathbb{R}^3

The vector \vec{u}_{1} = \vec{QP}_{1} in Figure 4.2.5 is called the \textbf{projection} of \vec{u} on \vec{d}.

It is denoted

    \begin{equation*} \vec{u}_1 = proj_{\vec{d}} {\vec{u}} \end{equation*}

In Figure 4.2.5 (a) the vector \vec{u}_{1} = proj_{\vec{d}}{\vec{u}} has the same direction as \vec{d}; however, \vec{u}_{1} and \vec{d} have opposite directions if the angle between \vec{u} and \vec{d} is greater than \frac{\pi}{2} (see Figure 4.2.5 (b)). Note that the projection \vec{u}_1 = proj_{\vec{d}}{\vec{u}} is zero if and only if \vec{u} and \vec{d} are orthogonal.

Calculating the projection of \vec{u} on \vec{d} \neq \vec{0} is remarkably easy.

Theorem 4.2.4

Let \vec{u} and \vec{d} \neq \vec{0} be vectors.

  1. The projection of \vec{u} on \vec{d} is given by proj_{\vec{d}}{\vec{u}} = \frac{\vec{u} \cdot \vec{d}}{|| \vec{d} ||^2} \vec{d}.
  2. The vector \vec{u} - proj_{\vec{d}}{\vec{u}} is orthogonal to \vec{d}.

Proof:

The vector \vec{u}_{1} = proj_{\vec{d}}{\vec{u}} is parallel to \vec{d} and so has the form \vec{u}_{1} = t\vec{d} for some scalar t. The requirement that \vec{u} - \vec{u}_{1} and \vec{d} are orthogonal determines t. In fact, it means that (\vec{u} - \vec{u}_{1}) \cdot \vec{d} = 0 by Theorem 4.2.3. If \vec{u}_{1} = t\vec{d} is substituted here, the condition is

    \begin{equation*} 0 = (\vec{u} - t\vec{d}) \cdot \vec{d} = \vec{u} \cdot \vec{d} - t(\vec{d} \cdot \vec{d}) = \vec{u} \cdot \vec{d} - t||\vec{d}|| ^2 \end{equation*}

It follows that t = \frac{\vec{u} \cdot \vec{d}}{|| \vec{d} ||^2}, where the assumption that \vec{d} \neq \vec{0} guarantees that ||\vec{d} ||^{2} \neq 0.

Example 4.2.5

Find the projection of
\vec{u} = \left[ \begin{array}{r} 2 \\ -3 \\ 1 \end{array} \right]
on  \vec{d} = \left[ \begin{array}{r} 1\\ -1 \\ 3 \end{array} \right]
and express \vec{u} = \vec{u}_{1} + \vec{u}_{2} where \vec{u}_{1} is parallel to \vec{d} and \vec{u}_{2} is orthogonal to \vec{d}.

Solution:

The projection \vec{u}_{1} of \vec{u} on \vec{d} is

    \begin{equation*} \vec{u}_{1} = proj_{\vec{d}}{\vec{u}} = \frac{\vec{u} \cdot \vec{d}}{||  \vec{d}|| ^2}\vec{d} = \frac{2 + 3 + 3}{1^2 + (-1)^2 + 3^2} \left[ \begin{array}{r} 1\\ -1 \\ 3 \end{array} \right] = \frac{8}{11}\left[ \begin{array}{r} 1\\ -1 \\ 3 \end{array} \right] \end{equation*}

Hence \vec{u}_{2} = \vec{u} - \vec{u}_{1} = \frac{1}{11}\left[ \begin{array}{r} 14\\ -25 \\ -13 \end{array} \right], and this is orthogonal to \vec{d} by Theorem 4.2.4 (alternatively, observe that \vec{d} \cdot \vec{u}_{2} = 0). Since \vec{u} = \vec{u}_{1} + \vec{u}_{2}, we are done.
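Here is a NumPy sketch (illustration only) of the computation in Example 4.2.5; it also confirms the orthogonality promised by Theorem 4.2.4(2).

    import numpy as np

    u = np.array([2.0, -3.0, 1.0])
    d = np.array([1.0, -1.0, 3.0])

    u1 = (np.dot(u, d) / np.dot(d, d)) * d   # proj_d(u) = (8/11) d
    u2 = u - u1                              # the component of u orthogonal to d
    print(u1)                                # [ 0.7272... -0.7272...  2.1818...]
    print(np.isclose(np.dot(u2, d), 0.0))    # True: u2 is orthogonal to d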

 

Note that the idea of projections can be used to find the shortest distance from a point to a straight line in \mathbb{R}^3: if \vec{u} is the vector from any point on the line to the given point and \vec{d} is a direction vector of the line, that distance is || \vec{u} - proj_{\vec{d}}{\vec{u}} ||, the length of the component of \vec{u} orthogonal to the direction vector of the line.

 

 

Planes

Definition 4.7 Normal vector in a plane

A nonzero vector \vec{n} is called a \textbf{normal} for a plane if it is orthogonal to every vector in the plane.

For example, the unit vector \vec{k} = (0,0,1) is a normal vector for the x-y plane.

Figure 4.2.6: A plane with normal \vec{n}

Given a point P_{0} = P_{0}(x_{0}, y_{0}, z_{0}) and a nonzero vector \vec{n}, there is a unique plane through P_{0} with normal \vec{n}, shaded in Figure 4.2.6. A point P = P(x, y, z) lies on this plane if and only if the vector \vec{P_{0}P} is orthogonal to \vec{n}—that is, if and only if \vec{n} \cdot\vec{P_{0}P} = 0. Because \vec{P_{0}P} = \left[ \begin{array}{c} x - x_{0}\\ y - y_{0}\\ z - z_{0} \end{array} \right] this gives the following result:

 

Scalar equation of a plane

The plane through P_{0}(x_{0}, y_{0}, z_{0}) with normal \vec{n} = \left[ \begin{array}{c} a\\ b\\ c \end{array} \right] \neq \vec{0}
is given by

    \begin{equation*} a(x - x_{0}) + b(y - y_{0}) + c(z - z_{0}) = 0 \end{equation*}

In other words, a point P(x, y, z) is on this plane if and only if x, y, and z satisfy this equation.

Example 4.2.8

Find an equation of the plane through P_{0}(1, -1, 3) with \vec{n} = \left[ \begin{array}{r} 3\\ -1\\ 2 \end{array} \right]
as normal.

Solution:

Here the general scalar equation becomes

    \begin{equation*} 3(x - 1) - (y + 1) + 2(z - 3) = 0 \end{equation*}

This simplifies to 3x - y + 2z = 10.

If we write d = ax_{0} + by_{0} + cz_{0}, the scalar equation shows that every plane with normal \vec{n} = \left[ \begin{array}{r} a\\ b\\ c \end{array} \right]
has a linear equation of the form

(4.2)   \begin{equation*}  ax + by + cz = d \end{equation*}

for some constant d. Conversely, the graph of this equation is a plane with \vec{n} = \left[ \begin{array}{r} a\\ b\\ c \end{array} \right] as a normal vector (assuming that a, b, and c are not all zero).

 

Example 4.2.9

Find an equation of the plane through P_{0}(3, -1, 2) that is parallel to the plane with equation 2x - 3y = 6.

Solution:

The plane with equation 2x -3y = 6 has normal \vec{n} = \left[ \begin{array}{r} 2\\ -3\\ 0 \end{array} \right]. Because the two planes are parallel, \vec{n} serves as a normal for the plane we seek, so the equation is 2x - 3y = d for some d according to (4.2). Insisting that P_{0}(3, -1, 2) lies on the plane determines d; that is, d = 2 \cdot 3 - 3(-1) = 9. Hence, the equation is 2x - 3y = 9.

Consider points P_{0}(x_{0}, y_{0}, z_{0}) and P(x, y, z) with vectors \vec{p}_{0} = \left[ \begin{array}{r} x_{0}\\ y_{0}\\ z_{0} \end{array} \right]
and
\vec{p}= \left[ \begin{array}{r} x\\ y\\ z \end{array} \right].
Given a nonzero vector \vec{n}, the scalar equation of the plane through P_{0}(x_{0}, y_{0}, z_{0}) with normal \vec{n} = \left[ \begin{array}{r} a\\ b\\ c \end{array} \right] takes the vector form:

Vector Equation of a Plane

The plane with normal \vec{n} \neq \vec{0} through the point with vector \vec{p}_{0} is given by

    \begin{equation*} \vec{n} \cdot (\vec{p} - \vec{p}_{0}) = 0 \end{equation*}

In other words, the point with vector \vec{p} is on the plane if and only if \vec{p} satisfies this condition.

Moreover, Equation (4.2)  translates as follows:

Every plane with normal \vec{n} has vector equation \vec{n} \cdot \vec{p} = d for some number d.

Example 4.2.10

Find the shortest distance from the point P(2, 1, -3) to the plane with equation 3x - y + 4z = 1. Also find the point Q on this plane closest to P.

Solution:

The plane in question has normal \vec{n} = \left[ \begin{array}{r} 3\\ -1\\ 4 \end{array} \right]. Choose any point P_{0} on the plane—say P_{0}(0, -1, 0)—and let Q(x, y, z) be the point on the plane closest to P (see the diagram). The vector from P_{0} to P is \vec{u} = \left[ \begin{array}{r} 2\\ 2\\ -3 \end{array} \right]. Now erect \vec{n} with its tail at P_{0}. Then \vec{QP} = \vec{u}_{1} and \vec{u}_{1} is the projection of \vec{u} on \vec{n}:

    \begin{equation*} \vec{u}_{1} = \frac{\vec{n} \cdot \vec{u}}{|| \vec{n} ||^2}\vec{n} = \frac{-8}{26} \left[ \begin{array}{r} 3\\ -1 \\ 4 \end{array} \right] = \frac{-4}{13} \left[ \begin{array}{r} 3\\ -1 \\ 4 \end{array} \right] \end{equation*}

Hence the distance is || \vec{QP} || = || \vec{u}_{1} || = \frac{4\sqrt{26}}{13}. To calculate the point Q, let \vec{q} = \left[ \begin{array}{r} x\\ y \\ z \end{array} \right]
and
\vec{p}_{0} = \left[ \begin{array}{r} 0\\ -1 \\ 0 \end{array} \right]
be the vectors of Q and P_{0}. Then

    \begin{equation*} \vec{q} = \vec{p}_{0} + \vec{u} - \vec{u}_{1} =  \left[ \begin{array}{r} 0\\ -1 \\ 0 \end{array} \right] + \left[ \begin{array}{r} 2\\ 2 \\ -3 \end{array} \right] + \frac{4}{13} \left[ \begin{array}{r} 3\\ -1 \\ 4 \end{array} \right] = \left[ \def\arraystretch{1.5} \begin{array}{r} \frac{38}{13}\\ \frac{9}{13}\\ \frac{-23}{13} \end{array} \right] \end{equation*}

This gives the coordinates of Q(\frac{38}{13}, \frac{9}{13}, \frac{-23}{13}).
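The distance computation in Example 4.2.10 follows the same projection recipe, and a NumPy sketch (illustration only) reproduces it:

    import numpy as np

    n = np.array([3.0, -1.0, 4.0])       # normal of the plane 3x - y + 4z = 1
    P = np.array([2.0, 1.0, -3.0])
    P0 = np.array([0.0, -1.0, 0.0])      # a point on the plane

    u = P - P0
    u1 = (np.dot(n, u) / np.dot(n, n)) * n   # projection of u on n
    print(np.linalg.norm(u1))                # 1.568... = 4*sqrt(26)/13, the distance
    print(P - u1)                            # Q = [2.923...  0.692... -1.769...] = (38/13, 9/13, -23/13)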

 

 

The Cross Product

If P, Q, and R are three distinct points in \mathbb{R}^3 that are not all on some line, it is clear geometrically that there is a unique plane containing all three. The vectors \vec{PQ} and \vec{PR} both lie in this plane, so finding a normal amounts to finding a nonzero vector orthogonal to both \vec{PQ} and \vec{PR}. The cross product provides a systematic way to do this.

Definition 4.8 Cross Product

Given vectors \vec{v}_{1}= \left[ \begin{array}{c} x_{1}\\ y_{1} \\ z_{1} \end{array} \right] and  \vec{v}_{2}= \left[ \begin{array}{c} x_{2}\\ y_{2} \\ z_{2} \end{array} \right], define the cross product \vec{v}_{1} \times \vec{v}_{2} by

    \begin{equation*} \vec{v}_{1} \times \vec{v}_{2} = \left[ \begin{array}{c} y_{1}z_{2} - z_{1}y_{2}\\ -(x_{1}z_{2} - z_{1}x_{2}) \\ x_{1}y_{2} - y_{1}x_{2} \end{array} \right] \end{equation*}

 

Because it is a vector, \vec{v}_{1} \times \vec{v}_{2} is often called the vector product. There is an easy way to remember this definition using the coordinate vectors:

    \begin{equation*} \vec{i}= \left[ \begin{array}{c} 1\\ 0 \\ 0 \end{array} \right], \ \vec{j}= \left[ \begin{array}{c} 0\\ 1 \\ 0 \end{array} \right], \mbox{ and } \vec{k}= \left[ \begin{array}{c} 0\\ 0 \\ 1 \end{array} \right] \end{equation*}

They are vectors of length 1 pointing along the positive x, y, and z axes. The reason for the name is that any vector can be written as

    \begin{equation*} \left[ \begin{array}{c} x\\ y \\ z \end{array} \right] = x\vec{i} + y\vec{j} + z\vec{k} \end{equation*}

With this, the cross product can be described as follows:

Determinant form of the cross product

If \vec{v}_{1}= \left[ \begin{array}{c} x_{1}\\ y_{1}\\ z_{1} \end{array} \right] and \vec{v}_{2}= \left[ \begin{array}{c} x_{2}\\ y_{2}\\ z_{2} \end{array} \right] are two vectors, then

    \begin{equation*} \vec{v}_{1} \times \vec{v}_{2} = \func{det}\left[ \begin{array}{ccc} \vec{i} & x_{1} & x_{2}\\ \vec{j} & y_{1} & y_{2}\\ \vec{k} & z_{1} & z_{2} \end{array} \right] = \left| \begin{array}{cc} y_{1} & y_{2}\\ z_{1} & z_{2} \end{array} \right|\vec{i} - \left| \begin{array}{cc} x_{1} & x_{2}\\ z_{1} & z_{2} \end{array} \right|\vec{j} + \left| \begin{array}{cc} x_{1} & x_{2}\\ y_{1} & y_{2} \end{array} \right|\vec{k} \end{equation*}

where the determinant is expanded along the first column.

Example 4.2.11

If \vec{v} = \left[\begin{array}{r} 2\\ -1\\ 4 \end{array} \right] and \vec{w} = \left[ \begin{array}{r} 1\\ 3\\ 7 \end{array} \right], then

    \begin{align*} \vec{v} \times \vec{w} = \func{det}\left[ \begin{array}{rrr} \vec{i} & 2 & 1\\ \vec{j} & -1 & 3\\ \vec{k} & 4 & 7 \end{array} \right] &= \left| \begin{array}{rr} -1 & 3\\ 4 & 7 \end{array} \right|\vec{i} - \left| \begin{array}{rr} 2 & 1\\ 4 & 7 \end{array} \right|\vec{j} + \left| \begin{array}{rr} 2 & 1\\ -1 & 3 \end{array} \right|\vec{k}\\ &= -19\vec{i} - 10\vec{j} + 7\vec{k}\\ &= \left[ \begin{array}{r} -19\\ -10\\ 7 \end{array} \right] \end{align*}

Observe that \vec{v} \times \vec{w} is orthogonal to both \vec{v} and \vec{w} in Example 4.2.11. This holds in general as can be verified directly by computing \vec{v} \cdot (\vec{v} \times \vec{w}) and \vec{w} \cdot (\vec{v} \times \vec{w}), and is recorded as the first part of the following theorem. It will follow from a more general result which, together with the second part, will be proved later on.

Theorem 4.2.5

Let \vec{v} and \vec{w} be vectors in \mathbb{R}^3:

  1.  \vec{v} \times \vec{w} is a vector orthogonal to both \vec{v} and \vec{w}.
  2. If \vec{v} and \vec{w} are nonzero, then \vec{v} \times \vec{w} = \vec{0} if and only if \vec{v} and \vec{w} are parallel.

Recall that

    \begin{equation*} \vec{v} \cdot \vec{w} = 0 \quad \mbox{ if and only if }\vec{v}\mbox{ and }\vec{w}\mbox{ are orthogonal.} \end{equation*}

Example 4.2.12

Find the equation of the plane through P(1, 3, -2), Q(1, 1, 5), and R(2, -2, 3).

Solution:

The vectors
\vec{PQ} = \left[ \begin{array}{r} 0\\ -2\\ 7 \end{array} \right] and
\vec{PR} = \left[ \begin{array}{r} 1\\ -5\\ 5 \end{array} \right]
lie in the plane, so

    \begin{equation*} \vec{PQ} \times \vec{PR} = \func{det}\left[ \begin{array}{rrr} \vec{i} & 0 & 1\\ \vec{j} & -2 & -5\\ \vec{k} & 7 & 5 \end{array} \right] = 25\vec{i} + 7\vec{j} + 2\vec{k} = \left[ \begin{array}{r} 25\\ 7\\ 2 \end{array} \right] \end{equation*}

is a normal for the plane (being orthogonal to both \vec{PQ} and \vec{PR}). Hence the plane has equation

    \begin{equation*} 25x + 7y + 2z = d \quad \mbox{ for some number }d. \end{equation*}

Since P(1, 3, -2) lies in the plane we have 25 \cdot 1 + 7 \cdot 3 + 2(-2) = d. Hence d = 42 and the equation is 25x + 7y + 2z = 42. Can you verify that the same equation is obtained if \vec{QP} and \vec{QR}, or \vec{RP} and \vec{RQ}, are used as the vectors in the plane?
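The normal in Example 4.2.12 is exactly what numpy.cross computes, so the whole example can be checked in a few lines (a sketch for illustration only):

    import numpy as np

    P = np.array([1, 3, -2])
    Q = np.array([1, 1, 5])
    R = np.array([2, -2, 3])

    n = np.cross(Q - P, R - P)           # PQ x PR, a normal for the plane
    print(n)                             # [25  7  2]
    print(np.dot(n, P), np.dot(n, Q), np.dot(n, R))   # 42 42 42: all three points satisfy 25x + 7y + 2z = 42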

 

 

 

 

 

4.3 More on the Cross Product

The cross product \vec{v} \times \vec{w} of two \mathbb{R}^3  -vectors \vec{v} = \left[ \begin{array}{r} x_{1}\\ y_{1}\\ z_{1} \end{array} \right] and \vec{w} = \left[ \begin{array}{r} x_{2}\\ y_{2}\\ z_{2} \end{array} \right]
was defined in Section 4.2 where we observed that it can be best remembered using a determinant:

(4.3)   \begin{equation*}  \vec{v} \times \vec{w} = \func{det}\left[ \begin{array}{rrr} \vec{i} & x_{1} & x_{2}\\ \vec{j} & y_{1} & y_{2}\\ \vec{k} & z_{1} & z_{2} \end{array} \right] = \left| \begin{array}{rr} y_{1} & y_{2}\\ z_{1} & z_{2} \end{array} \right|\vec{i} - \left| \begin{array}{rr} x_{1} & x_{2}\\ z_{1} & z_{2} \end{array} \right|\vec{j} + \left| \begin{array}{rr} x_{1} & x_{2}\\ y_{1} & y_{2} \end{array} \right|\vec{k} \end{equation*}

Here \vec{i} = \left[ \begin{array}{r} 1\\ 0\\ 0 \end{array} \right], \vec{j} = \left[ \begin{array}{r} 0\\ 1\\ 0 \end{array} \right], and
\vec{k} = \left[ \begin{array}{r} 0\\ 0\\ 1 \end{array} \right] are the coordinate vectors, and the determinant is expanded along the first column. We observed (but did not prove) in Theorem 4.2.5 that \vec{v} \times \vec{w} is orthogonal to both \vec{v} and \vec{w}. This follows easily from the next result.

Theorem 4.3.1

If \vec{u} = \left[ \begin{array}{r} x_{0}\\ y_{0}\\ z_{0} \end{array} \right], \vec{v} = \left[ \begin{array}{r} x_{1}\\ y_{1}\\ z_{1} \end{array} \right], and  \vec{w} = \left[ \begin{array}{r} x_{2}\\ y_{2}\\ z_{2} \end{array} \right], then  \vec{u} \cdot (\vec{v} \times \vec{w}) = \func{det}\left[ \begin{array}{rrr} x_{0} & x_{1} & x_{2}\\ y_{0} & y_{1} & y_{2}\\ z_{0} & z_{1} & z_{2} \end{array} \right].

Proof:

Recall that \vec{u} \cdot (\vec{v} \times \vec{w}) is computed by multiplying corresponding components of \vec{u} and \vec{v} \times \vec{w} and then adding. Using equation (4.3), the result is:

    \begin{equation*} \vec{u} \cdot (\vec{v} \times \vec{w}) = x_{0}\left(\left| \begin{array}{rr} y_{1} & y_{2}\\ z_{1} & z_{2} \end{array} \right|\right) + y_{0}\left(- \left| \begin{array}{rr} x_{1} & x_{2}\\ z_{1} & z_{2} \end{array} \right|\right) +z_{0}\left( \left| \begin{array}{rr} x_{1} & x_{2}\\ y_{1} & y_{2} \end{array} \right|\right) = \func{det}\left[ \begin{array}{rrr} x_{0} & x_{1} & x_{2}\\ y_{0} & y_{1} & y_{2}\\ z_{0} & z_{1} & z_{2} \end{array} \right] \end{equation*}

where the last determinant is expanded along column 1.

The result in Theorem 4.3.1 can be succinctly stated as follows: If \vec{u}, \vec{v}, and \vec{w} are three vectors in \mathbb{R}^3, then

    \begin{equation*} \vec{u} \cdot(\vec{v} \times \vec{w}) = \func{det} \left[ \begin{array}{ccc} \vec{u} & \vec{v} & \vec{w}\end{array}\right] \end{equation*}

where \left[ \begin{array}{ccc} \vec{u} & \vec{v} & \vec{w}\end{array}\right] denotes the matrix with \vec{u}, \vec{v}, and \vec{w} as its columns. Now it is clear that \vec{v} \times \vec{w} is orthogonal to both \vec{v} and \vec{w} because the determinant of a matrix is zero if two columns are identical.

 

 

 

Because of (4.3) and Theorem 4.3.1, several of the following properties of the cross product follow from
properties of determinants (they can also be verified directly).

Theorem 4.3.2

Let \vec{u}, \vec{v}, and \vec{w} denote arbitrary vectors in \mathbb{R}^3 .

  1.  \vec{u} \times \vec{v} is a vector.
  2.  \vec{u} \times \vec{v} is orthogonal to both \vec{u} and \vec{v}.
  3. \vec{u} \times \vec{0} = \vec{0} = \vec{0} \times \vec{u}.
  4. \vec{u} \times \vec{u} = \vec{0}.
  5. \vec{u} \times \vec{v} = -(\vec{v} \times \vec{u}).
  6. (k\vec{u}) \times \vec{v} = k(\vec{u} \times \vec{v}) = \vec{u} \times (k\vec{v}) for any scalar k.
  7. \vec{u} \times (\vec{v} + \vec{w}) = (\vec{u} \times \vec{v}) + (\vec{u} \times \vec{w}).
  8. (\vec{v} + \vec{w}) \times \vec{u} = (\vec{v} \times \vec{u}) + (\vec{w} \times \vec{u}).

 

We have seen some of these results in the past; can you prove 6, 7, and 8?

 

 

We now come to a fundamental relationship between the dot and cross products.

Theorem 4.3.3 Lagrange Identity

If \vec{u} and \vec{v} are any two vectors in \mathbb{R}^3 , then

    \begin{equation*} || \vec{u} \times \vec{v} ||^2  = || \vec{u} ||^2 || \vec{v} || ^2 - (\vec{u} \cdot \vec{v})^2 \end{equation*}

Proof:

Given \vec{u} and \vec{v}, introduce a coordinate system and write
\vec{u} = \left[ \begin{array}{r} x_{1}\\ y_{1}\\ z_{1} \end{array} \right] and
\vec{v} = \left[ \begin{array}{r} x_{2}\\ y_{2}\\ z_{2} \end{array} \right] in component form. Then every term in the identity can be written out in terms of these components, and the identity follows by expanding both sides and comparing; the verification is routine but somewhat tedious.

An expression for the magnitude of the vector \vec{u} \times \vec{v} can be easily obtained from the Lagrange identity. If \theta is the angle between \vec{u} and \vec{v}, substituting \vec{u} \cdot \vec{v} = || \vec{u}|| ||\vec{v}||  \cos \theta into the Lagrange identity gives

    \begin{equation*} || \vec{u} \times \vec{v} ||^2 = || \vec{u} || ^2|| \vec{v} ||^2 - || \vec{u}||^2|| \vec{v} ||^2\cos^2\theta = || \vec{u} ||^2 || \vec{v} ||^2\sin^2\theta \end{equation*}

using the fact that 1 - \cos^{2} \theta = \sin^{2} \theta. But \sin \theta is nonnegative on the range 0 \leq \theta \leq \pi, so taking the positive square root of both sides gives

    \begin{equation*} || \vec{u} \times \vec{v} || = || \vec{u} || ||\vec{v} || \sin\theta. \end{equation*}

Figure 4.3.1: Area of the parallelogram determined by \vec{u} and \vec{v}

This expression for ||\vec{u} \times \vec{v}|| makes no reference to a coordinate system and, moreover, it has a nice geometrical interpretation. The parallelogram determined by the vectors \vec{u} and \vec{v} has base length || \vec{v}|| and altitude || \vec{u}|| \sin \theta . Hence the area of the parallelogram formed by \vec{u} and \vec{v} is

    \begin{equation*} (|| \vec{u} || \sin\theta) || \vec{v} || = || \vec{u} \times \vec{v}|| \end{equation*}

 

Theorem 4.3.4

If \vec{u} and \vec{v} are two nonzero vectors and \theta is the angle between \vec{u} and \vec{v}, then:

  1.  ||\vec{u} \times \vec{v}|| = || \vec{u}|| ||\vec{v}|| \sin \theta = the area of the parallelogram determined by \vec{u} and \vec{v}.
  2. \vec{u} and \vec{v} are parallel if and only if \vec{u} \times \vec{v} = \vec{0}.

Proof of 2:

By (1), \vec{u} \times \vec{v} = \vec{0} if and only if the area of the parallelogram is zero. The area vanishes if and only if \vec{u} and \vec{v} have the same or opposite direction—that is, if and only if they are parallel.

 

 

Example 4.3.1

Find the area of the triangle with vertices P(2, 1, 0), Q(3, -1, 1), and R(1, 0, 1).

Solution:

We have
\vec{RP} = \left[ \begin{array}{r} 1\\ 1\\ -1 \end{array} \right] and \vec{RQ} = \left[ \begin{array}{r} 2\\ -1\\ 0 \end{array} \right]. The area of the triangle is half the area of the parallelogram formed by these vectors, and so equals \frac{1}{2} || \vec{RP} \times \vec{RQ} ||. We have

    \begin{equation*} \vec{RP} \times \vec{RQ} = \func{det}\left[ \begin{array}{rrr} \vec{i} & 1 & 2\\ \vec{j} & 1 & -1\\ \vec{k} & -1 & 0 \end{array} \right] = \left[ \begin{array}{r} -1\\ -2\\ -3 \end{array} \right] \end{equation*}

so the area of the triangle is \frac{1}{2} ||\vec{RP} \times \vec{RQ} || = \frac{1}{2}\sqrt{1 + 4 + 9} = \frac{1}{2}\sqrt{14}.
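A NumPy sketch (illustration only) of Example 4.3.1, using the fact that the triangle's area is half the length of the cross product:

    import numpy as np

    P, Q, R = np.array([2, 1, 0]), np.array([3, -1, 1]), np.array([1, 0, 1])

    cross = np.cross(P - R, Q - R)       # RP x RQ
    print(cross)                         # [-1 -2 -3]
    print(0.5 * np.linalg.norm(cross))   # 1.870... = sqrt(14)/2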

Figure 4.3.2: The parallelepiped determined by \vec{u}, \vec{v}, and \vec{w}

If three vectors \vec{u}, \vec{v}, and \vec{w} are given, they determine a “squashed” rectangular solid called a parallelepiped (Figure 4.3.2), and it is often useful to be able to find the volume of such a solid. The base of the solid is the parallelogram determined by \vec{u} and \vec{v}, so it has area A = || \vec{u} \times \vec{v}||. The height of the solid is the length h of the projection of \vec{w} on \vec{u} \times \vec{v}. Hence

    \begin{equation*} h = \left| \frac{\vec{w} \cdot (\vec{u} \times \vec{v})}{|| \vec{u} \times \vec{v} ||^2}\right| || \vec{u} \times \vec{v} || = \frac{|\vec{w} \cdot (\vec{u} \times \vec{v})|}{|| \vec{u} \times \vec{v} ||} = \frac{|\vec{w} \cdot (\vec{u} \times \vec{v})|}{A} \end{equation*}

 

 

Thus the volume of the parallelepiped is hA = |\vec{w} \cdot (\vec{u} \times \vec{v})|. This proves

Theorem 4.3.5

The volume of the parallelepiped determined by three vectors \vec{w}, \vec{u}, and \vec{v} is given by |\vec{w} \cdot (\vec{u} \times \vec{v})|.

 

 

Example 4.3.2

Find the volume of the parallelepiped determined by the vectors

    \begin{equation*} \vec{w} = \left[ \begin{array}{r} 1\\ 2\\ -1 \end{array} \right], \vec{u} = \left[ \begin{array}{r} 1\\ 1\\ 0 \end{array} \right], \vec{v} = \left[ \begin{array}{r} -2\\ 0\\ 1 \end{array} \right] \end{equation*}

Solution:

By Theorem 4.3.1, \vec{w} \cdot (\vec{u} \times \vec{v}) = \func{det}\left[ \begin{array}{rrr} 1 & 1 & -2\\ 2 & 1 & 0\\ -1 & 0 & 1 \end{array} \right] = -3.
Hence the volume is |\vec{w} \cdot (\vec{u} \times \vec{v})| = |-3| = 3 by Theorem 4.3.5.
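A NumPy sketch (illustration only) of Example 4.3.2, computing the volume both as |\vec{w} \cdot (\vec{u} \times \vec{v})| and as the absolute value of the determinant from Theorem 4.3.1:

    import numpy as np

    w = np.array([1, 2, -1])
    u = np.array([1, 1, 0])
    v = np.array([-2, 0, 1])

    print(abs(np.dot(w, np.cross(u, v))))                   # 3
    print(abs(np.linalg.det(np.column_stack([w, u, v]))))   # 3.0 (up to rounding), the same volume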

We can now give an intrinsic description of the cross product \vec{u} \times \vec{v}.

 

 

Right-hand Rule

If the vector \vec{u} \times \vec{v} is grasped in the right hand and the fingers curl around from \vec{u} to \vec{v} through the angle \theta, the thumb points in the direction of \vec{u} \times \vec{v}.

To indicate why this is true, introduce coordinates in \mathbb{R}^3 as follows: Let \vec{u} and \vec{v} have a common tail O, choose the origin at O, choose the x axis so that \vec{u} points in the positive x direction, and then choose the y axis so that \vec{v} is in the xy plane and the positive y axis is on the same side of the x axis as \vec{v}. Then, in this system, \vec{u} and \vec{v} have component form
\vec{u} = \left[ \begin{array}{r} a\\ 0\\ 0 \end{array} \right] and \vec{v} = \left[ \begin{array}{r} b\\ c\\ 0 \end{array} \right]
where a > 0 and c > 0. Can you draw a graph based on the description here?

The right-hand rule asserts that \vec{u} \times \vec{v} should point in the positive z direction. But our definition of \vec{u} \times \vec{v} gives

    \begin{equation*} \vec{u} \times \vec{v} = \func{det}\left[ \begin{array}{rrr} \vec{i} & a & b\\ \vec{j} & 0 & c\\ \vec{k} & 0 & 0 \end{array} \right] = \left[ \begin{array}{c} 0\\ 0\\ ac \end{array} \right] = (ac)\vec{k} \end{equation*}

and (ac) \vec{k} has the positive z direction because ac > 0.

 

 

 

 

License


Linear Algebra with Applications Copyright © by Xinli Wang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
