Let G = (V, E) be a directed graph with vertices V and edges E. Multiple edges between the same pair of vertices are allowed. For concreteness’ sake, think of the vertices as routers, and the edges as (one-way) connections. Let [0,1] denote the set of probabilities, and let p : E -> [0,1] be a function which assigns some probability to each edge. Think of p(e) as the probability that a single message sent along the edge e from the source router will successfully reach the target router on the other end.

Suppose that when a router receives a message on an incoming connection, it immediately resends it on all outgoing connections. For vertices s and t, let P(s,t) denote the probability that, under this “flooding” scenario, *at least one copy* of a message originating at s will eventually reach t.

For example, consider the simple network shown below: a source s, a target t, and an intermediate node r, with edges s -> r (probability 0.5) and r -> t (probability 0.9) forming the upper route, and a direct edge s -> t (probability 0.7) forming the bottom route.

A message sent from s along the upper route through r has a 0.5 × 0.9 = 0.45 probability of arriving at t; a message sent along the bottom route has a 0.7 probability of arriving at t. One way to compute the overall probability is to compute the probability that it is *not* the case that the message *fails* to traverse *both* routes, that is, 1 − (1 − 0.45)(1 − 0.7) = 0.835. Alternatively, in general we can see that 1 − (1 − p)(1 − q) = p + q − pq, so 0.45 + 0.7 − 0.45 × 0.7 = 0.835 as well. Intuitively, since the two events are not mutually exclusive, if we add them we are double-counting the situation where both routes work, so we subtract the probability of both working.
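As a quick sanity check, the two equivalent ways of combining independent route probabilities can be sketched in Haskell (the function names here are mine, purely for illustration):

```haskell
-- A message gets through unless it fails on *both* routes.
parallel :: Double -> Double -> Double
parallel p q = 1 - (1 - p) * (1 - q)

-- The same combination in inclusion-exclusion form: add the two
-- probabilities, then subtract the double-counted "both work" case.
parallel' :: Double -> Double -> Double
parallel' p q = p + q - p * q
```

Evaluating either `parallel 0.45 0.7` or `parallel' 0.45 0.7` in GHCi gives 0.835, up to floating-point noise.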

The question is, given some graph G and some specified nodes s and t, how can we efficiently compute P(s,t)? For now I am calling this the “network reliability problem” (though I fully expect someone to point out that it already has a name). Note that it might make the problem a bit easier to restrict to directed *acyclic* graphs; but the problem is still well-defined even in the presence of cycles.
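Before any clever solution, there is an obvious (and obviously inefficient) baseline: sum over all 2^|E| ways the edges can succeed or fail. The sketch below is my own illustration, not the solution from the upcoming posts; it assumes each edge’s success is a single independent event, and the names (`Graph`, `reachable`, `reliability`) are made up for this example:

```haskell
import qualified Data.Set as S

-- Edge list: (source, target, success probability).  Multiple edges
-- between the same pair of vertices are allowed.
type Graph v = [(v, v, Double)]

-- Vertices reachable from s using only the edges that happened to work.
reachable :: Ord v => [(v, v)] -> v -> S.Set v
reachable live s = go (S.singleton s)
  where
    go seen =
      let new = S.fromList [v | (u, v) <- live, u `S.member` seen]
      in  if new `S.isSubsetOf` seen then seen else go (S.union seen new)

-- Exact but exponential: enumerate every possible "world" (subset of
-- working edges), weight it by the probability that exactly those edges
-- work, and add up the weights of the worlds in which t is reachable
-- from s.
reliability :: Ord v => Graph v -> v -> v -> Double
reliability g s t =
  sum [ w | (live, w) <- worlds g, t `S.member` reachable live s ]
  where
    worlds []               = [([], 1)]
    worlds ((u, v, p) : es) =
      [ world
      | (live, w) <- worlds es
      , world     <- [((u, v) : live, w * p), (live, w * (1 - p))] ]

-- A small example: two routes from s to t, one through r and one direct.
example :: Graph Char
example = [('s', 'r', 0.5), ('r', 't', 0.9), ('s', 't', 0.7)]
```

Here `reliability example 's' 't'` evaluates to 0.835, matching 1 − (1 − 0.5 × 0.9)(1 − 0.7) computed by hand; the exponential blowup in the number of edges is exactly what an efficient solution has to avoid.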

This problem turned out to be surprisingly more difficult and interesting than it first appeared. In a future post or two I will explain my solution, with a Haskell implementation. In the meantime, feel free to chime in with thoughts, questions, solutions, or pointers to the literature.


## About Brent

Associate Professor of Computer Science at Hendrix College. Functional programmer, mathematician, teacher, pianist, follower of Jesus.

At first I thought this was just straightforward matrix multiplication. But then I realized the algebraic structure we’re working in isn’t even a semi-ring so matrix multiplication doesn’t have the nice properties you expect. So I’m looking forward to your next installment!

Right, exactly!

http://r6.ca/blog/20110808T035622Z.html

In the context of the above link:

~~~~ { .haskell }
data BY = R | S | T deriving (Eq, Ord, Enum, Bounded, Ix)

newtype Prob = Prob Double

instance Show Prob where show (Prob x) = show x

instance Semiring Prob where
  zero = Prob 0
  one  = Prob 1
  Prob x <+> Prob y = Prob (x + y - x*y)
  Prob x <.> Prob y = Prob (x*y)

instance StarSemiring Prob where
  star (Prob x) = Prob (recip (1 - x))

exampleProbMatrix :: Matrix BY Double
exampleProbMatrix = matrix value
  where
    value (R :-> R) = 0
    value (R :-> S) = 0
    value (R :-> T) = 0.9
    value (S :-> R) = 0.5
    value (S :-> S) = 0
    value (S :-> T) = 0.7
    value (T :-> R) = 0
    value (T :-> S) = 0
    value (T :-> T) = 0

o1 = printMatrix . fmap Prob $ exampleProbMatrix
o2 = printMatrix . star . fmap Prob $ exampleProbMatrix
~~~~

That’s what I thought at first too. But that’s not actually a semiring.

https://en.wikipedia.org/wiki/Markov_chain should solve the problem

Wouldn’t you have to make a transition matrix, though? That is, the state space is actually the set of all subsets of graph vertices. Or did you have something more clever/efficient in mind?

I don’t have a concrete solution, but it matches up exactly with Markov chains and doing probability calculus on them; that’s where my search would start. E.g. https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo (nice problem! I’m curious :)

In which sense is this different from belief propagation?

Thanks for the pointer! I think I remember learning about belief propagation a long time ago in a Machine Learning class in grad school, but had forgotten about it. From looking into it briefly, it does seem related, but it doesn’t seem like exactly the same thing. If there is indeed a deeper connection I’d be happy to have it explained to me.

It seems to me that the problem you are posing is to compute a marginal distribution from a bayesian network. This is exactly what the sum-product algorithm does.

David Barber’s “Bayesian Reasoning and Machine Learning” would be a reference on this. Personally I worked on some code implementing a belief propagation algorithm in Python, but there is a much bigger field surrounding this problem.

If you are not yet convinced, please let me know what needs explaining.

OK, yes, I see now — I think you’re right. I had trouble seeing the connection before since Bayesian networks are so much more general (allowing arbitrary value types at nodes and allowing arbitrary functions expressing conditional probabilities).

Pingback: Network reliability | The Math Less Traveled

Let’s make the problem a bit more complicated: Imagine an additional node ‘u’ with a single incoming edge from ‘t’ with a probability of 1.0. Is the network defined such that ‘u’ will receive the message twice 31.5% of the time?

An interesting question to explore is in a complex network what is the expected number of times that a node will see a message.

If u has a single incoming edge from t with probability 1, then yes, it might see the message twice. The way I have defined the problem, that doesn’t matter, but I completely agree with you that it is also interesting to count the number of message copies a given node sees.

Have you had a look at (Probabilistic) NetKAT?

I am generally familiar with the work on NetKAT though not in great detail. Do you think it is connected somehow?

Pingback: The network reliability problem and star semirings | blog :: Brent -> [String]