Markov Chains

We will now study stochastic processes: experiments in which the outcome of each event depends on the outcomes of previous events. Such a process or experiment is called a Markov chain or Markov process. The process was first studied by the Russian mathematician Andrei A. Markov in the early 1900s.

A small town is served by two telephone companies, Mama Bell and Papa Bell. Due to their aggressive sales tactics, each month 40% of Mama Bell customers switch to Papa Bell, while the other 60% stay with Mama Bell. On the other hand, 30% of Papa Bell customers switch to Mama Bell and 70% stay with Papa Bell. This information can be expressed in a matrix that lists the probabilities of going from one state into another. Such a matrix is called a transition matrix. With the states ordered Mama Bell, Papa Bell in both the rows (current company) and the columns (next month's company), the transition matrix is

T=\left[\begin{array}{cc} 0.60 & 0.40 \\0.30 & 0.70\end{array}\right]

The reader should observe that a transition matrix is always square, because every state must be represented by both a row and a column. All entries in a transition matrix are non-negative, as they represent probabilities. Furthermore, since each row accounts for all possible outcomes from that state, the entries of each row always sum to 1.
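As a quick check with the matrix T above, each row sums to 1, while the columns need not:

0.60 + 0.40 = 1 \quad \text{and} \quad 0.30 + 0.70 = 1, \qquad \text{but} \quad 0.60 + 0.30 = 0.90 \ne 1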

Example: Professor Symons either walks to school or rides his bicycle. If he walks one day, then the next day he is equally likely to walk or to ride. If he rides one day, then the probability that he walks the next day is 1/4. With the states ordered W (walk), B (bicycle), the transition matrix is

T=\left[\begin{array}{cc} 1/2 & 1/2 \\1/4 & 3/4\end{array}\right]

Suppose Professor Symons walks on Monday. What is the probability that he rides his bicycle on Wednesday? Let W denote that Professor Symons walks and B denote that he rides his bicycle. We use the following tree diagram to compute the probabilities.

 

[Tree diagram over Monday, Tuesday, and Wednesday: starting from W on Monday, the tree branches to W (probability 1/2) and B (probability 1/2) on Tuesday, and each Tuesday state branches again to W and B on Wednesday using the same transition probabilities.]
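Multiplying the probabilities along each branch and adding the branches that end in B gives the required probability (a sketch, assuming the transition probabilities stated above):

P(\text{B on Wednesday}) = \tfrac{1}{2}\cdot\tfrac{1}{2} + \tfrac{1}{2}\cdot\tfrac{3}{4} = \tfrac{1}{4} + \tfrac{3}{8} = \tfrac{5}{8}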

Certain Markov chains, called regular Markov chains, tend to stabilize in the long run. It so happens that the transition matrix we have used in the above examples is just such a Markov chain. The next example deals with the long-term trend or steady-state situation for that matrix. If we raise the walking/bicycling transition matrix T to higher and higher powers, its entries settle down: T^n approaches

\left[\begin{array}{cc} 1/3 & 2/3 \\1/3 & 2/3\end{array}\right]

a matrix whose rows are identical.

When this happens, we say that the system is in steady state, or a state of equilibrium. In this situation, all row vectors are equal. If the original matrix is an n by n matrix, we get n vectors that are all the same. We call this vector a fixed probability vector or the equilibrium vector E. In the above problem, the fixed probability vector E is \left[\begin{array}{cc} 1/3 & 2/3\end{array}\right]. Furthermore, if the equilibrium vector E is multiplied by the original matrix T, the result is the equilibrium vector E. That is,

ET = \left[\begin{array}{cc} 1/3 & 2/3\end{array}\right]\left[\begin{array}{cc} 1/2 & 1/2 \\1/4 & 3/4\end{array}\right] = \left[\begin{array}{cc} 1/3 & 2/3\end{array}\right] = E

Regular Markov Chains

A Markov chain reaches a state of equilibrium if it is a regular Markov chain. A Markov chain is said to be regular if some power of its transition matrix has only positive entries.
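For instance (an illustration, not one of the examples above), a transition matrix may contain zero entries and still be regular, provided some power of it is strictly positive:

\left[\begin{array}{cc} 0 & 1 \\1/2 & 1/2\end{array}\right]^2 = \left[\begin{array}{cc} 1/2 & 1/2 \\1/4 & 3/4\end{array}\right]

Since the square has only positive entries, this chain is regular.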

As we take higher powers T^n of the transition matrix T, then as n becomes large, T^n approaches a state of equilibrium. The equilibrium distribution vector E can be found by solving ET = E.
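To see this numerically, take the walking/bicycling matrix from earlier (a sketch; T^4 is rounded to four decimal places):

T^2 = \left[\begin{array}{cc} 0.375 & 0.625 \\0.3125 & 0.6875\end{array}\right], \qquad T^4 \approx \left[\begin{array}{cc} 0.3359 & 0.6641 \\0.3320 & 0.6680\end{array}\right]

Both rows are visibly approaching the equilibrium vector \left[\begin{array}{cc} 1/3 & 2/3\end{array}\right] \approx \left[\begin{array}{cc} 0.3333 & 0.6667\end{array}\right].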

Example: Suppose Mama Bell currently holds 20% of the telephone market and Papa Bell holds the other 80%. What share of the market will each company hold in the long run? The transition matrix T and the initial market-share vector M are

T=\left[\begin{array}{cc} 0.60 & 0.40 \\0.30 & 0.70\end{array}\right] and M=\left[\begin{array}{cc} 0.20 & 0.80\end{array}\right]
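One way to follow the trend is to track the market share month by month (a worked sketch using the given T and M):

MT = \left[\begin{array}{cc} 0.36 & 0.64\end{array}\right], \qquad MT^2 = \left[\begin{array}{cc} 0.408 & 0.592\end{array}\right], \ldots

and the products continue to move toward the equilibrium vector \left[\begin{array}{cc} 3/7 & 4/7\end{array}\right] \approx \left[\begin{array}{cc} 0.4286 & 0.5714\end{array}\right].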

 

Can the equilibrium vector E be found without raising the transition matrix to large powers? Yes: since ET = E and the matrix T is known, we can solve the equation ET = E directly for E. The example below illustrates this approach.
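As a sketch of this method for the matrix T above, write E = \left[\begin{array}{cc} e_1 & e_2\end{array}\right] with e_1 + e_2 = 1. The first component of ET = E gives

0.60 e_1 + 0.30 e_2 = e_1 \quad \Rightarrow \quad 0.30 e_2 = 0.40 e_1 \quad \Rightarrow \quad e_2 = \tfrac{4}{3} e_1

Substituting into e_1 + e_2 = 1 yields e_1 = 3/7 and e_2 = 4/7, so E = \left[\begin{array}{cc} 3/7 & 4/7\end{array}\right], in agreement with the powers of T computed above.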


Practice questions

1. A survey of American car buyers indicates that if a person buys a Ford, there is a 60% chance that their next purchase will be a Ford, while GM owners will buy a GM again with probability 0.80. Express the buying habits of these consumers in a transition matrix.

2. A hockey player decides to either shoot the puck (S) or pass it to a teammate (P) according to the following transition matrix.

Find the following:

3. The local police department conducts a campaign to reduce the rates of texting and driving in the community. The effects of the campaign are summarized in the transition matrix below:

If 35% of people in the community reported texting and driving before the campaign:

4. A large company conducted a training program with their employees to reduce the incidence of slips, trips and falls in the workplace. About 15% of workers reported a slip, trip or fall accident the previous year (year 1). After the training program (year 2), 75% of those who previously reported an accident reported no further accidents, while 5% of those who didn’t report a previous accident reported one this year.

 
