How does the age distribution change from one time period to the next? The model makes the following three assumptions:
Note that the total tree population does not change over time.
Assume db = 0.1, dy = 0.2, dm = 0.3, do = 0.4. Set this up as a Markov Chain. What is the steady state vector for the age distribution in the forest?
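A minimal sketch of the steady-state computation, assuming one common reading of the model (the assumptions themselves are listed separately): a tree that dies in any age class is replaced by a new baby tree, which keeps the total population constant, and a surviving tree advances one age class per period, with surviving old trees remaining old. The state names and matrix structure below follow that assumed reading, not anything stated in this problem.

```python
import numpy as np

# States: baby, young, mature, old. Assumed dynamics (see note above):
# a tree that dies in any class is replaced by a new baby tree, and a
# surviving tree advances one class per period (surviving old trees stay old).
db, dy, dm, do = 0.1, 0.2, 0.3, 0.4
P = np.array([
    [db, 1 - db, 0.0,    0.0   ],  # baby   -> replaced, or grows to young
    [dy, 0.0,    1 - dy, 0.0   ],  # young  -> replaced, or grows to mature
    [dm, 0.0,    0.0,    1 - dm],  # mature -> replaced, or grows to old
    [do, 0.0,    0.0,    1 - do],  # old    -> replaced, or stays old
])

# Steady state: left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
print(dict(zip(["baby", "young", "mature", "old"], pi.round(4))))
```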
Sally and Becky are playing tennis. When deuce is reached, the player who wins the next point has advantage. On the following point, that player either wins the game or the game returns to deuce. Suppose that at deuce, Sally has probability 2/3 of winning the next point and Becky has probability 1/3. When Sally has advantage, she has probability 3/4 of winning the next point; when Becky has advantage, she has probability 1/2 of winning the next point. Set this up as a Markov Chain.
If the game is at deuce, find the expected number of points until the game ends and the probability that Becky wins.
If Sally has advantage, what is the probability she eventually wins the game?
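One way to answer both questions is with the standard absorbing-chain decomposition: collect the transient-to-transient probabilities in Q and the transient-to-absorbing probabilities in R; the fundamental matrix N = (I - Q)^(-1) then gives N·1 as the expected number of points until the game ends and N·R as the probabilities of ending in each absorbing state. A sketch in Python (the state ordering is a choice made here, not part of the problem):

```python
import numpy as np

# Transient states: 0 = Deuce, 1 = Advantage Sally, 2 = Advantage Becky.
# Absorbing states:  Sally wins, Becky wins.
Q = np.array([[0,   2/3, 1/3],    # Deuce     -> Adv Sally (2/3), Adv Becky (1/3)
              [1/4, 0,   0  ],    # Adv Sally -> back to Deuce (1/4)
              [1/2, 0,   0  ]])   # Adv Becky -> back to Deuce (1/2)
R = np.array([[0,   0  ],         # Deuce     -> game cannot end on the next point
              [3/4, 0  ],         # Adv Sally -> Sally wins (3/4)
              [0,   1/2]])        # Adv Becky -> Becky wins (1/2)

# Fundamental matrix: N @ 1 = expected points to absorption,
# N @ R = probability of ending in each absorbing state.
N = np.linalg.inv(np.eye(3) - Q)
expected_points = N @ np.ones(3)
absorption = N @ R

print("Expected points from Deuce:", expected_points[0])
print("P(Becky wins | Deuce):     ", absorption[0, 1])
print("P(Sally wins | Adv Sally): ", absorption[1, 0])
```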
Create a Markov Chain transition matrix to represent this loaded die such that every state is equally likely in the steady state vector, but none of the individual rows of the transition matrix equals the steady state vector.
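One hypothetical construction (an illustration, not the only possible answer): any doubly stochastic matrix has the uniform vector as a stationary distribution, so a chain that either repeats the current face or steps to the next face, each with probability 1/2, makes every face equally likely in the long run while no individual row equals (1/6, ..., 1/6).

```python
import numpy as np

# A doubly stochastic 6x6 matrix (rows and columns all sum to 1): the
# uniform vector is stationary, and because this chain is irreducible
# and aperiodic it is the unique steady state. No row equals it, so
# consecutive rolls are strongly dependent.
P_loaded = np.zeros((6, 6))
for i in range(6):
    P_loaded[i, i] = 0.5              # repeat the current face half the time
    P_loaded[i, (i + 1) % 6] = 0.5    # otherwise move to the next face

# Check that the uniform vector is indeed stationary.
pi = np.full(6, 1 / 6)
assert np.allclose(pi @ P_loaded, pi)
```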
Generate 5 sequences of 100 rolls from this Markov chain. Then generate a mixture of 10 sequences, 5 of which are generated by the steady state Markov Chain and 5 of which are generated by your loaded Markov Chain, and send these to the rest of the class.
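A possible generator, continuing from the sketch above (`P_loaded` is the doubly stochastic matrix defined there) and reading "the steady state Markov Chain" as the chain whose every row equals the steady-state vector, so that its rolls are i.i.d.:

```python
import numpy as np

rng = np.random.default_rng(0)

def roll_sequence(P, n=100):
    """Simulate n rolls (faces 0-5) from a chain with transition matrix P."""
    state = rng.integers(6)               # arbitrary starting face
    seq = [state]
    for _ in range(n - 1):
        state = rng.choice(6, p=P[state])
        seq.append(state)
    return seq

# Steady-state chain: every row is the steady-state vector, so rolls are i.i.d.
P_fair = np.full((6, 6), 1 / 6)

loaded_seqs = [roll_sequence(P_loaded) for _ in range(5)]   # the 5 training sequences
mixture = ([roll_sequence(P_fair) for _ in range(5)] +
           [roll_sequence(P_loaded) for _ in range(5)])     # the 10 mixed sequences
rng.shuffle(mixture)                                        # hide which chain made which
```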
Write a program to discriminate between the data from the other groups in the class. Namely, read in the first 5 sequences to estimate the transition probabilities of their loaded die. Then read in the 10 mixed sequences and, for each one, use log probabilities to determine which Markov Chain generated it.
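A possible outline of such a program. It reuses `loaded_seqs` and `mixture` from the previous sketch as stand-ins for the sequences a group would actually read in from files, and the additive smoothing constant is a choice made here: estimate a transition matrix from the 5 training sequences, then score each mixed sequence under both the estimated loaded chain and the fair chain and keep whichever log likelihood is larger.

```python
import numpy as np

def estimate_transitions(seqs, k=6, smooth=1.0):
    """Estimate a k x k transition matrix from observed sequences (additive smoothing)."""
    counts = np.full((k, k), smooth)
    for seq in seqs:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, P):
    """Log probability of the observed transitions in seq under transition matrix P."""
    return sum(np.log(P[a, b]) for a, b in zip(seq, seq[1:]))

P_hat = estimate_transitions(loaded_seqs)      # learned from the other group's 5 sequences
P_fair = np.full((6, 6), 1 / 6)
for i, seq in enumerate(mixture):
    loaded_score = log_likelihood(seq, P_hat)
    fair_score = log_likelihood(seq, P_fair)
    print(f"sequence {i}: {'loaded' if loaded_score > fair_score else 'steady state'}")
```

Working with log probabilities avoids underflow from multiplying 100 small probabilities, and the smoothing keeps transitions that never appear in the training sequences from contributing log 0.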