The purpose of this post is to present the very basics of potential theory for finite Markov chains. This post is by no means a complete presentation but rather aims to show that there are intuitive finite analogs of the potential kernels that arise when studying Markov chains on general state spaces. By presenting a piece of potential theory for Markov chains without the complications of measure theory I hope the reader will be able to appreciate the big picture of the general theory.

This post is inspired by a recent attempt by the HIPS group to read the book “General irreducible Markov chains and non-negative operators” by Nummelin.

Let $P$ be the transition matrix of a discrete-time Markov chain on a finite state space such that $P_{ij}$ is the probability of transitioning from state $i$ to state $j$. We call a Markov chain *absorbing* if there is at least one state such that the chain can never leave that state once entered. Such a state is called an *absorbing state*, and non-absorbing states are called *transient states*. An *ergodic* Markov chain is such that every state is reachable from every other state in one or more moves. A chain is called a *regular* Markov chain if all entries of $P^n$ are greater than zero for some $n$. We will focus on absorbing chains first, and then look at ergodic and regular chains.
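To make the definitions concrete, here is a minimal sketch (assuming NumPy) that checks them on a hypothetical gambler's-ruin chain; the chain and the absorbing/transient split are illustrative choices, not anything from the text:

```python
import numpy as np

# Hypothetical example: gambler's ruin on states {0, 1, 2, 3, 4} with a fair
# coin; the gambler stops upon reaching 0 (broke) or 4 (goal).
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],  # state 0 is absorbing: P[0, 0] = 1
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],  # state 4 is absorbing: P[4, 4] = 1
])

# A state is absorbing exactly when its self-transition probability is 1.
absorbing = [i for i in range(5) if P[i, i] == 1.0]
transient = [i for i in range(5) if P[i, i] < 1.0]

# This chain is absorbing but not regular: no power of P has all entries
# positive, since the rows of the absorbing states never change.
```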

By permuting the states of an absorbing chain so that the transient states come first, we can write the transition matrix of the absorbing chain as

$$
P = \begin{pmatrix} Q & R \\ \mathbf{0} & I \end{pmatrix} \qquad (1)
$$

The matrix $Q$ describes the transition probabilities between transient states, $R$ the transition probabilities from transient to absorbing states ($R$ should not be the matrix of all zeros), and $I$ is the identity matrix since the chain stays at absorbing states. The block $\mathbf{0}$ is all zeros because the chain can never leave an absorbing state.
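As a sketch (NumPy assumed), the permutation into the block form of (1) is just index reordering; the gambler's-ruin chain here is a hypothetical running example:

```python
import numpy as np

# Hypothetical gambler's-ruin chain on {0,...,4}; states 0 and 4 absorbing.
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

transient, absorbing = [1, 2, 3], [0, 4]
order = transient + absorbing
P_canon = P[np.ix_(order, order)]  # permute so transient states come first

k = len(transient)
Q = P_canon[:k, :k]  # transient -> transient block
R = P_canon[:k, k:]  # transient -> absorbing block (not all zeros)
# The bottom-right block of P_canon is the identity on the absorbing states.
```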

Notice that the iterates $Q^n \to \mathbf{0}$ as $n \to \infty$. We can see this by noting that the probability that the chain does not reach an absorbing state from transient state $i$ in one step is the sum of the $i$th row of $Q$; call it $q_i$. The probability of not being absorbed in $n$ steps is at most $(\max_i q_i)^n$, and since $\max_i q_i < 1$ this probability goes to $0$, so the individual entries of $Q^n$ converge to $0$ as well. (If some $q_i = 1$, the same argument applies to $Q^m$ for an $m$ large enough that every transient state can reach an absorbing state within $m$ steps.) We define the *fundamental matrix* for an absorbing Markov chain as
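This decay is easy to check numerically; a sketch assuming NumPy, where $Q$ is the transient block of a hypothetical fair gambler's-ruin chain on $\{0,\dots,4\}$:

```python
import numpy as np

# Transient-to-transient block Q of a hypothetical gambler's-ruin chain
# (fair coin, transient states 1, 2, 3; absorbing states 0 and 4).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

for n in [1, 10, 50]:
    Qn = np.linalg.matrix_power(Q, n)
    print(n, Qn.max())  # the largest entry of Q^n shrinks geometrically
```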

$$
N = (I - Q)^{-1} \qquad (2)
$$

Each entry of $N$, $N_{ij}$, can be interpreted as the expected number of times the chain is in state $j$ if it started in state $i$. Note that $N$ is the finite analog of the potential kernel from Nummelin, and it can be shown that $N = \sum_{n=0}^{\infty} Q^n$.
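A sketch of computing $N$ (NumPy assumed) for the hypothetical gambler's-ruin chain, checking it against a truncation of the series $\sum_{n} Q^n$:

```python
import numpy as np

# Transient block Q of a hypothetical fair gambler's-ruin chain on {0,...,4}.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix, eq. (2)

# N should agree with the (truncated) series sum_{n=0}^inf Q^n.
series = sum(np.linalg.matrix_power(Q, n) for n in range(200))

# N[i, j] is the expected number of visits to transient state j when the
# chain starts in transient state i.
print(N)
```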

The fundamental matrix, $N$, can be used to compute many interesting quantities of an absorbing Markov chain (which probably explains the name fundamental matrix). For instance, we can compute the expected time until the chain is absorbed as $t = N\mathbf{1}$, where $t$ is a vector whose entry $t_i$ is the expected number of steps until the chain reaches an absorbing state starting from state $i$, and $\mathbf{1}$ is a vector of ones. We can also compute the probability that a particular absorbing state is reached given a starting state. Let $B$ be a matrix where $B_{ij}$ is the probability of reaching absorbing state $j$ starting at transient state $i$. It turns out that $B = NR$, where $R$ is the block in the decomposition of $P$ above.
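Continuing the hypothetical gambler's-ruin sketch (NumPy assumed), $t = N\mathbf{1}$ and $B = NR$ reproduce the classical answers: starting with $i$ dollars out of $4$, the walk lasts $i(4-i)$ steps on average and reaches the goal with probability $i/4$:

```python
import numpy as np

# Q and R blocks of a hypothetical fair gambler's-ruin chain on {0,...,4}
# (transient states 1, 2, 3; absorbing states 0 and 4, in that order).
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)  # fundamental matrix

t = N @ np.ones(3)  # expected steps to absorption from states 1, 2, 3
B = N @ R           # B[i, j]: probability of ending in absorbing state j
```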

Now we turn our attention to ergodic and regular Markov chains. Let $P$ be the transition matrix of a regular Markov chain; then the iterates $P^n$ converge to a matrix $W$ such that all rows of $W$ are the same. Call the shared row $w$, so that $wP = w$. The vector $w$ is called the *stationary distribution* of the chain. We can also define a fundamental matrix for ergodic and regular Markov chains.
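A sketch (NumPy assumed) using the three-state "Land of Oz" weather chain from Grinstead and Snell as an example of a regular chain; the iterates of $P$ flatten out to a matrix with identical rows:

```python
import numpy as np

# The "Land of Oz" weather chain (states: rain, nice, snow) from
# Grinstead and Snell; it is regular since P^2 has all entries positive.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

W = np.linalg.matrix_power(P, 50)  # iterates converge to W
w = W[0]                           # shared row: the stationary distribution
print(w)                           # approximately [0.4, 0.2, 0.4]
```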

As a first attempt at the form of the fundamental matrix for an ergodic or regular chain we could try $(I - P)^{-1}$. However, $I - P$ does not have an inverse since its columns are linearly dependent: every row of $P$ sums to $1$, so $(I - P)\mathbf{1} = \mathbf{0}$. The reason we could define $N = (I - Q)^{-1}$ for absorbing chains was that the iterates $Q^n$ were decaying to $\mathbf{0}$. Since $P^n \to W$, we may instead try defining

$$
Z = \left(I - (P - W)\right)^{-1} = I + \sum_{n=1}^{\infty} \left(P^n - W\right) \qquad (3)
$$

The terms $P^n - W$ in this series decay to $\mathbf{0}$ and the series ends up converging. We then have that $Z = (I - P + W)^{-1}$ is the fundamental matrix of an ergodic or regular chain.
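A sketch of computing $Z$ (NumPy assumed) for the Land of Oz chain, checking it against a truncation of the series in (3) along with the identities $Z\mathbf{1} = \mathbf{1}$ and $wZ = w$:

```python
import numpy as np

# "Land of Oz" weather chain from Grinstead and Snell (a regular chain).
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

w = np.array([0.4, 0.2, 0.4])  # stationary distribution: w P = w
W = np.tile(w, (3, 1))         # limit of P^n: every row equals w

Z = np.linalg.inv(np.eye(3) - P + W)  # fundamental matrix, eq. (3)

# Z should agree with I + sum_{n=1}^inf (P^n - W), truncated here.
series = np.eye(3) + sum(np.linalg.matrix_power(P, n) - W
                         for n in range(1, 100))
```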

As in the absorbing case, $Z$ can be used to compute some interesting properties of the chain. For instance, the mean first passage time is the expected number of steps required to reach state $j$ from state $i$, which we denote by $m_{ij}$. It can be shown that $m_{ij} = (Z_{jj} - Z_{ij})/w_j$. The fundamental matrix also appears in the asymptotic variance in the Central Limit Theorem for Markov chains.
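A sketch (NumPy assumed) of the mean first passage times for the Land of Oz chain via $m_{ij} = (Z_{jj} - Z_{ij})/w_j$, cross-checked against the one-step recurrence $m_{ij} = 1 + \sum_{k \ne j} P_{ik} m_{kj}$:

```python
import numpy as np

# "Land of Oz" weather chain from Grinstead and Snell.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
w = np.array([0.4, 0.2, 0.4])  # stationary distribution
Z = np.linalg.inv(np.eye(3) - P + np.tile(w, (3, 1)))

# m[i, j] = (Z[j, j] - Z[i, j]) / w[j]; the diagonal comes out 0 since the
# formula measures first passage to a *different* state.
m = (np.diag(Z)[None, :] - Z) / w[None, :]
print(m)
```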

I highly recommend reading Chapter 11 from “Introduction to Probability” by Grinstead and Snell (available for free here) for a nice introduction to finite Markov chains. It is written at the undergraduate level, but it is very clear and contains some ideas and results that you probably didn’t see in your first treatment of Markov chains. Another nice reference for finite Markov chains that covers potential theory is “Finite Markov Chains” by Kemeny and Snell.

As a quick aside, notice that the fundamental matrices defined here are the inverses of matrices that look very similar to graph Laplacians. The short story is that potential theory for Markov chains (and processes more generally) is related to classical potential theory in physics. In fact, there is a deep connection between Markov chains and electric networks. See the book *Random Walks and Electric Networks* by Doyle and Snell for a treatment.
