Linear Functions

  • Linear and affine functions
  • First-order approximation of non-linear functions
  • Other sources of linear models

Linear and affine functions

Definition

Linear functions are functions which preserve scaling and addition of the input argument. Affine functions are ‘‘linear plus constant’’ functions.

Formal definition, linear and affine functions. A function f: \mathbb{R}^n \rightarrow \mathbb{R} is linear if and only if f preserves scaling and addition of its arguments:

  • for every x \in \mathbb{R}^n, and \alpha \in \mathbb{R}, f(\alpha x) = \alpha f(x); and
  • for every x_1, x_2 \in \mathbb{R}^n, f(x_1 + x_2) = f(x_1) + f(x_2).

A function f is affine if and only if the function \tilde{f}: \mathbb{R}^n \rightarrow \mathbb{R} with values \tilde{f}(x) = f(x) - f(0) is linear.

An alternative characterization of linear functions is given here.

Examples: Consider the functions f_1, f_2, f_3: \mathbb{R}^2 \rightarrow \mathbb{R} with values

  1. f_1(x) = 3.2 x_1 + 2 x_2,
  2. f_2(x) = 3.2 x_1 + 2 x_2 + 0.15,
  3. f_3(x) = 0.001 x_2^2 + 2.3 x_1 + 0.3 x_2.

The function f_1 is linear; f_2 is affine; and f_3 is neither.
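The three examples can be checked numerically by testing the two defining conditions at sample points (a minimal sketch; the helper name and the test points are illustrative choices, not part of the text):

```python
def f1(x):  # linear
    return 3.2 * x[0] + 2 * x[1]

def f2(x):  # affine: linear plus a constant
    return 3.2 * x[0] + 2 * x[1] + 0.15

def f3(x):  # neither: contains a quadratic term
    return 0.001 * x[1] ** 2 + 2.3 * x[0] + 0.3 * x[1]

def preserves_scaling_and_addition(f, x, y, alpha, tol=1e-9):
    """Check the two linearity conditions at the given sample points."""
    scaled = [alpha * xi for xi in x]
    summed = [xi + yi for xi, yi in zip(x, y)]
    return (abs(f(scaled) - alpha * f(x)) < tol
            and abs(f(summed) - f(x) - f(y)) < tol)

x, y, alpha = [1.0, -2.0], [0.5, 3.0], 2.5
print(preserves_scaling_and_addition(f1, x, y, alpha))  # True: f1 is linear
print(preserves_scaling_and_addition(f2, x, y, alpha))  # False: the constant 0.15 breaks scaling
print(preserves_scaling_and_addition(f3, x, y, alpha))  # False: the quadratic term breaks scaling
```

Of course, passing such a check at a few sample points does not prove linearity; it only illustrates how the definitions fail for f_2 and f_3.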

Connection with vectors via the scalar product

The following shows the connection between linear functions and scalar products.

Theorem: Representation of an affine function via the scalar product.

A function f: \mathbb{R}^n \rightarrow \mathbb{R} is affine if and only if it can be expressed via a scalar product:

f(x) = a^Tx+b,

for some unique pair (a,b), with a \in \mathbb{R}^n and b \in \mathbb{R}, given by a_i = f(e_i) - f(0), with e_i the i-th unit vector in \mathbb{R}^n, i = 1, \cdots, n, and b = f(0). The function is linear if and only if b = 0.

The theorem shows that a vector can be seen as a (linear) function from the ‘‘input’’ space \mathbb{R}^n to the ‘‘output’’ space \mathbb{R}. Both points of view (vectors as simple collections of numbers, or as linear functions) are useful.
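The construction in the theorem can be carried out directly: evaluate f at zero to get b, then at each unit vector to recover the components of a (a sketch; the affine function below is a hypothetical example):

```python
def f(x):
    """An affine function with a = [3.2, 2.0] and b = 0.15."""
    return 3.2 * x[0] + 2 * x[1] + 0.15

n = 2
b = f([0.0] * n)  # b = f(0)
a = []
for i in range(n):
    e_i = [0.0] * n
    e_i[i] = 1.0     # i-th unit vector
    a.append(f(e_i) - b)  # a_i = f(e_i) - f(0)

print(a, b)  # recovers a ≈ [3.2, 2.0] and b = 0.15
```

The n + 1 evaluations f(0), f(e_1), \cdots, f(e_n) thus determine an affine function completely, which reflects the uniqueness of the pair (a, b).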

Gradient of an affine function

The gradient of a function f: \mathbb{R}^n \rightarrow \mathbb{R} at a point x, denoted \nabla f(x), is the vector of first derivatives with respect to x_1, \cdots, x_n (see here for a formal definition and examples). When n=1 (there is only one input variable), the gradient is simply the derivative.

An affine function f: \mathbb{R}^n \rightarrow \mathbb{R}, with values f(x) = a^Tx+b has a very simple gradient: the constant vector a. That is, for an affine function f, we have for every x:

\nabla f(x) = a

Example: gradient of a linear function.
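The fact that the gradient of an affine function is the constant vector a can be illustrated with a central finite-difference check at two different points (a sketch; the values of a and b are illustrative):

```python
a = [3.2, 2.0]
b = 0.15

def f(x):
    """Affine function f(x) = a^T x + b."""
    return sum(ai * xi for ai, xi in zip(a, x)) + b

def numerical_gradient(f, x, h=1e-6):
    """Central finite-difference estimate of the gradient of f at x."""
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

# The estimated gradient is (numerically) the same vector a at both points:
print(numerical_gradient(f, [0.0, 0.0]))
print(numerical_gradient(f, [10.0, -5.0]))
```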

Interpretations

The interpretations of a and b are as follows.

  • The term b = f(0) is the constant term. For this reason, it is sometimes referred to as the bias, or intercept (as it is the point where the graph of f intercepts the vertical axis, if we were to plot it).
  • The terms a_j, j = 1, \cdots, n, which form the gradient of f, give the coefficients of influence of x_j on f. For example, if a_1 \gg a_3, then the first component of x has much greater influence on the value of f(x) than the third.

Example: Beer-Lambert law in absorption spectrometry.

First-order approximation of non-linear functions

Many functions are non-linear. A common engineering practice is to approximate a given non-linear map with a linear (or affine) one, by taking derivatives. This is the main reason why linearity is such a ubiquitous tool in engineering.

One-dimensional case

Consider a function of one variable f: \mathbb{R} \rightarrow \mathbb{R}, and assume it is differentiable everywhere. Then we can approximate the value of the function at a point x near a given point x_0 as follows:

f(x) \simeq l(x):= f(x_0) + f'(x_0)(x-x_0),

where f'(x_0) denotes the derivative of f at x_0.
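As a concrete sketch, take f(x) = e^x around x_0 = 0 (an illustrative choice; there f'(x_0) = e^{x_0} = 1), and compare f with its first-order approximation l at points increasingly far from x_0:

```python
import math

x0 = 0.0
f = math.exp
fprime_x0 = math.exp(x0)  # derivative of exp at x0

def l(x):
    """First-order approximation l(x) = f(x0) + f'(x0)(x - x0)."""
    return f(x0) + fprime_x0 * (x - x0)

for x in [0.01, 0.1, 0.5]:
    print(x, f(x), l(x), abs(f(x) - l(x)))
```

The approximation error grows as x moves away from x_0, consistent with the fact that l only matches f up to first order at x_0.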

Multi-dimensional case

With more than one variable, we have a similar result. Let us approximate a differentiable function f: \mathbb{R}^n \rightarrow \mathbb{R} near a point x_0 by an affine function l, so that f and l coincide up to and including the first derivatives. The corresponding approximation l is called the first-order approximation to f at x_0.

The approximating function l must be of the form

l(x) = a^Tx+b,

where a \in \mathbb{R}^n and b \in \mathbb{R}. Our condition that l coincides with f up to and including the first derivatives shows that we must have

\nabla l(x) = a = \nabla f(x_0), \quad l(x_0) = a^Tx_0 + b = f(x_0),

where \nabla f(x_0) is the gradient of f at x_0. Solving for a, b, we obtain the following result:

Theorem: First-order expansion of a function.

The first-order approximation of a differentiable function f at a point x_0 is of the form

f(x) \approx l(x) = f(x_0) + \nabla f(x_0)^T(x-x_0),

where \nabla f(x_0) \in \mathbb{R}^n is the gradient of f at x_0.
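The theorem can be illustrated on a hand-picked non-linear function of two variables, f(x) = x_1^2 + x_1 x_2, whose gradient \nabla f(x) = (2x_1 + x_2, x_1) is computed by hand (an illustrative choice, not an example from the text):

```python
def f(x):
    return x[0] ** 2 + x[0] * x[1]

def grad_f(x):
    """Gradient of f, computed by hand: (2*x1 + x2, x1)."""
    return [2 * x[0] + x[1], x[0]]

x0 = [1.0, 2.0]
f0, g0 = f(x0), grad_f(x0)

def l(x):
    """First-order approximation l(x) = f(x0) + grad_f(x0)^T (x - x0)."""
    return f0 + sum(gi * (xi - x0i) for gi, xi, x0i in zip(g0, x, x0))

x_near = [1.01, 2.02]
print(f(x_near), l(x_near))  # close for x near x0
```

Here f(x_0) = 3 and \nabla f(x_0) = (4, 1), so l(x) = 3 + 4(x_1 - 1) + (x_2 - 2); at the nearby point the approximation error is of second order in the perturbation.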

Example: a linear approximation to a non-linear function.

Other sources of linear models

Linearity can arise from a simple change of variables. This is best illustrated with a specific example.

Example: Power laws.
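To sketch the change-of-variables idea, consider a power law f(x) = c \, x_1^{a_1} x_2^{a_2} (the constants below are hypothetical). Taking logarithms turns it into a function that is affine in the new variables (\log x_1, \log x_2):

```python
import math

c, a1, a2 = 2.0, 0.5, -1.5  # hypothetical power-law constants

def f(x):
    """Power law f(x) = c * x1^a1 * x2^a2, for x1, x2 > 0."""
    return c * x[0] ** a1 * x[1] ** a2

x = [4.0, 2.0]
# In the log domain: log f(x) = log c + a1*log x1 + a2*log x2,
# an affine function of (log x1, log x2).
lhs = math.log(f(x))
rhs = math.log(c) + a1 * math.log(x[0]) + a2 * math.log(x[1])
print(abs(lhs - rhs) < 1e-9)  # True: the two sides agree
```

This is why, for instance, power-law models can be fitted with linear techniques after a log transformation of the data.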

License

Hyper-Textbook: Optimization Models and Applications Copyright © by L. El Ghaoui. All Rights Reserved.
