Markov Matrix: Zombie Outbreak Modeling

Background

Markov chains are mathematical systems that describe changes from one state to another. When the probabilities of transitioning out of each state add up to 1 (e.g. whether it will rain tomorrow), the system can be modeled as a Markov matrix. For each day n+1, n+2, ... n+i there is a different probability of rain, until the chain reaches a steady state. The probability at step n depends only on step n-1, which depends only on n-2, and so on. A Markov matrix always has an eigenvalue of 1, and (as long as the chain is regular, meaning some power of the matrix has all positive entries) it will converge to a steady state.
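To make the rain example concrete, here is a minimal sketch of a two-state chain iterated to its steady state. The transition probabilities are made-up numbers for illustration, not from any weather data:

```python
# A two-state rain chain: the state is P(rain today).
# Both transition probabilities below are hypothetical.
P_RAIN_GIVEN_RAIN = 0.6  # if it rains today, 60% chance of rain tomorrow
P_RAIN_GIVEN_DRY = 0.2   # if it is dry today, 20% chance of rain tomorrow

def step(p_rain):
    """One day of the chain: tomorrow's probability depends only on today's."""
    return P_RAIN_GIVEN_RAIN * p_rain + P_RAIN_GIVEN_DRY * (1 - p_rain)

p = 1.0  # it is raining on day n
for _ in range(50):
    p = step(p)

# Converges to the fixed point 0.2 / (1 - 0.6 + 0.2) = 1/3, regardless of the start.
print(round(p, 4))  # 0.3333
```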

We can apply this knowledge to population changes where individuals move between groups but always add up to the same total population. This is different from predator/prey modeling, where reproduction and death change the totals.

Zombies!

obligatory xkcd

As a fun problem, let’s say there’s a zombie outbreak at your workplace. Waves occur every hour (t) where some humans become zombies. In this same hour the survivors will turn some zombies back into humans. Each action will cause a shift in the population, moving both zombies and humans. The total number of people at your workplace will stay the same because there is no true death or additions to the population.

Each new state is a linear combination of the current populations, encoded in the transition matrix Q:

                     Transition m.0        Transition m.1
human population     80% remain human      10% become human
zombie population    20% become zombies    90% remain zombies

Hourly (t) change in population
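Reading the table column by column gives the matrix itself. A minimal Python sketch, with no libraries assumed:

```python
# The table above as a matrix; entry Q[row][col] is the fraction of group `col`
# (0 = humans, 1 = zombies) that lands in group `row` after one hour t.
Q = [
    [0.8, 0.1],  # humans:  80% stay human, 10% of zombies become human
    [0.2, 0.9],  # zombies: 20% of humans become zombies, 90% stay zombies
]

# Each column sums to 1, which is what makes Q a Markov (column-stochastic) matrix.
for m in range(2):
    assert abs(Q[0][m] + Q[1][m] - 1.0) < 1e-12
```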

You’ll notice that the row n describing the change in human population sums to 90%, below 100%, while the zombie population row sums to 110%. This indicates that we will have more zombies than humans in our final state. The columns m, by contrast, each sum to exactly 100%: they show how each group's transitions play out during hour t.

When you compute the characteristic polynomial to find the eigenvalues λ, you will get two solutions, one of which will equal 1. This is because our transitions describe a closed system: the population is not growing overall, only shifting between the two groups.
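One way to check this: for any 2×2 matrix the characteristic polynomial is λ² − trace·λ + det = 0, which the quadratic formula solves directly. A sketch using the Q from the table above:

```python
import math

Q = [[0.8, 0.1], [0.2, 0.9]]

# Characteristic polynomial of a 2x2 matrix: lambda^2 - trace*lambda + det = 0
trace = Q[0][0] + Q[1][1]                    # 1.7
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]  # 0.72 - 0.02 = 0.7

disc = math.sqrt(trace ** 2 - 4 * det)
eigenvalues = sorted([(trace - disc) / 2, (trace + disc) / 2])

print([round(v, 6) for v in eigenvalues])  # [0.7, 1.0]
```

As expected for a Markov matrix, one eigenvalue is 1; the other (0.7) controls how fast the chain forgets its starting state.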

Brute Force Optimization with Markov Matrix

So you might think you can just keep plugging the results of y(t+1) and z(t+1) back into this equation and eventually get to your steady state through brute force. You would be correct:
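A minimal sketch of that brute-force loop in plain Python (the 90/10 starting split is a hypothetical example):

```python
Q = [[0.8, 0.1], [0.2, 0.9]]

def step(state):
    """One hour t: multiply the population vector [humans, zombies] by Q."""
    h, z = state
    return [Q[0][0] * h + Q[0][1] * z,
            Q[1][0] * h + Q[1][1] * z]

state = [90.0, 10.0]  # hypothetical office: 90 humans, 10 zombies
for t in range(200):
    prev, state = state, step(state)
    if abs(state[0] - prev[0]) < 1e-9:  # change per hour is tiny: call it converged
        break

# The total is conserved; the split settles at one third human, two thirds zombie.
print([round(x, 2) for x in state])  # [33.33, 66.67]
```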

However, since this is a Markov Matrix with λ = 1, we can optimize this by computing the steady state with a matrix power:
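A sketch of the matrix-power computation in plain Python, with no libraries assumed (numpy.linalg.matrix_power would do the same job):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    """A to the n-th power by repeated multiplication (n >= 1)."""
    result = A
    for _ in range(n - 1):
        result = matmul(result, A)
    return result

Q = [[0.8, 0.1], [0.2, 0.9]]
Qn = matpow(Q, 64)  # 64 is an arbitrary "large enough" n: the 0.7 mode decays as 0.7**n

# Every column of Q**n converges to the steady-state distribution [1/3, 2/3].
print([[round(x, 4) for x in row] for row in Qn])  # [[0.3333, 0.3333], [0.6667, 0.6667]]
```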

This new transition matrix is used to compute the final state:
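Applying that limiting matrix to a starting population might look like this (the 90/10 split is a hypothetical example):

```python
# In the limit, every column of Q**n is the eigenvector for lambda = 1,
# normalized to sum to 1: one third human, two thirds zombie.
Q_inf = [[1/3, 1/3],
         [2/3, 2/3]]

x0 = [90.0, 10.0]  # hypothetical starting population (totals 100)
final = [Q_inf[0][0] * x0[0] + Q_inf[0][1] * x0[1],
         Q_inf[1][0] * x0[0] + Q_inf[1][1] * x0[1]]

print([round(x, 2) for x in final])  # [33.33, 66.67]
```

Note that the final state depends only on the total population, not on the initial human/zombie split: every starting vector with the same total lands on the same steady state.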

Conclusion

While this is hardly a real-world problem, it has applications to reinforcement learning, which builds on Markov decision processes. The scope of the problem here is quite small, with only two states. This is the first step toward dealing with a larger problem space.

Additional reading: This article has more information on the mathematical theory behind this.