 

Calculating the probability of system failure in a distributed network

I am trying to build a mathematical model of the availability of a file in a distributed file-system. I posted this question at MathOverflow but this might as well be classified as a CS-question so I give it a shot here as well.

The system works like this: a node stores a file (encoded using erasure codes) at r*b remote nodes, where r is the replication factor and b is an integer constant. Erasure-coded files have the property that the file can be restored iff at least b of the remote nodes are available and return their parts of the file.

The simplest approach is to assume that all remote nodes are independent of each other and have the same availability p. With these assumptions the number of available nodes follows a binomial distribution, so the availability of a file is P(X >= b) where X ~ Binomial(r*b, p).
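For reference, the binomial approximation works out to the following (a minimal sketch; `math.comb` needs Python 3.8+, and the example numbers are made up):

```python
from math import comb

def binomial_availability(p, r, b):
    """P(at least b of the r*b remote nodes are up), assuming
    independent nodes with identical availability p."""
    n = r * b
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(b, n + 1))

# Hypothetical numbers: replication factor 3, b = 4, node availability 0.9
print(binomial_availability(0.9, 3, 4))
```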

Unfortunately these two assumptions can introduce a non-negligible error, as shown by this paper: http://deim.urv.cat/~lluis.pamies/uploads/Main/icpp09-paper.pdf.

One way to overcome the assumption that all nodes have the same availability is to calculate the probability of each possible combination of available/non-available nodes and take the sum over all these outcomes (which is sort of what they suggest in the paper above, just more formally than what I just described). You can see this approach as a binary tree of depth r*b where each leaf is one possible combination of available/not-available nodes. The file's availability is the probability that you arrive at a leaf with >= b available nodes. This approach is more correct but has a computational cost of O(2^(r*b)). Also, it doesn't deal with the assumption of node independence.
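For illustration, the exhaustive enumeration described above might look like this (a hypothetical sketch; note the exponential cost, which makes it feasible only for small r*b):

```python
from itertools import product

def exact_availability(probs, b):
    """Exact file availability for independent nodes with individual
    availabilities probs[i]: enumerate all 2^len(probs) up/down patterns
    (the leaves of the binary tree) and sum the probability of every
    pattern with at least b nodes up."""
    total = 0.0
    for pattern in product([True, False], repeat=len(probs)):
        if sum(pattern) >= b:
            prob = 1.0
            for up, p in zip(pattern, probs):
                prob *= p if up else (1 - p)
            total += prob
    return total
```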

Do you guys have any ideas for a good approximation which introduces less error than the binomial-distribution approximation but with a better computational cost than O(2^(r*b))?

You can assume that the availability data of each node is a set of tuples consisting of (measurement date, node measuring, node being measured, success/failure bit). With this data you could, for example, calculate the correlation of the availability between the nodes and the availability variance.

asked Jun 22 '10 by Yrlec


2 Answers

For large r and b you can use a method called Monte-Carlo integration (see, e.g., Monte Carlo integration on Wikipedia and/or chapter 3.1.2 of SICP) to compute the sum. For small r and b, and significantly different node-failure probabilities p[i], the exact method is superior. The exact definitions of small and large depend on a couple of factors and are best determined experimentally.

Specific sample code: This is a very basic sample code (in Python) to demonstrate how such a procedure could work:

def montecarlo(p, rb, b, N):
    """Monte-Carlo estimate of P(at least b out of rb nodes are alive),
    each node independently alive with probability p."""
    import random
    succ = 0

    # Run N samples
    for i in range(N):
        # Generate a single test case
        alivenum = 0
        for j in range(rb):
            if random.random() < p:
                alivenum += 1
        # If the test case succeeds, increase succ
        if alivenum >= b:
            succ += 1
    # The final result is the number of successful cases / number of total cases
    # (i.e., a probability between 0 and 1)
    return succ / N

The function corresponds to the binomial test case and runs N trials, checking whether at least b out of r*b nodes are alive when each node is up with probability p. A few experiments will convince you that you need N in the range of thousands of samples before you get reasonable results, but in principle the complexity is O(N*r*b). The statistical error scales as 1/sqrt(N), i.e., to double the accuracy you need to quadruple N. For sufficiently large r*b this method will be clearly superior.

Extensions of the approximation: You obviously need to design the test case so that it respects all the properties of the system. You have suggested a couple of extensions, some of which can be easily implemented while others cannot. Let me give you a couple of suggestions:

1) In the case of distinct but uncorrelated p[i], the changes to the above code are minimal: in the function head you pass an array instead of a single float p, and you replace the line if random.random()<p: alivenum += 1 by

if random.random()<p[j]: alivenum += 1
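Put together, a hypothetical full variant for distinct but uncorrelated p[i] could look like this (a sketch, not the answerer's original code):

```python
import random

def montecarlo_distinct(p, b, N):
    """Monte-Carlo estimate for distinct but uncorrelated node
    availabilities: p is a list of length r*b, p[j] being the
    availability of node j."""
    succ = 0
    for _ in range(N):
        # Count how many nodes come up alive in this trial
        alivenum = sum(1 for pj in p if random.random() < pj)
        if alivenum >= b:
            succ += 1
    return succ / N
```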

2) In the case of correlated p[i] you need additional information about the system. The situation I was referring to in my comment could be a network like this:

A--B--C
   |  |
   D  E
   |
   F--G--H
      |
      J

In this case A might be the "root node" and a failure of node D could imply the automatic failure, with 100% probability, of nodes F, G, H and J; a failure of node F would automatically bring down G, H and J, etc. At least this was the case I was referring to in my comment (which is a plausible interpretation, since you talk about a tree structure of probabilities in the original question). In such a situation you would need to modify the code so that p refers to a tree structure and for j in ... traverses the tree, skipping the lower branches of the current node as soon as a test fails. The resulting test is still whether alivenum >= b, of course.
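A minimal sketch of this idea, assuming a hypothetical dict-of-children tree representation and per-node availabilities (names and structure are illustrative, not from the original answer):

```python
import random

def alive_count(tree, root, probs):
    """One random trial: if a node fails, its entire subtree is
    unreachable (the correlation model described above).
    tree maps node -> list of children; probs maps node -> availability."""
    if random.random() >= probs[root]:
        return 0  # node is down: skip the lower branches entirely
    return 1 + sum(alive_count(tree, child, probs)
                   for child in tree.get(root, []))

def montecarlo_tree(tree, root, probs, b, N):
    """Fraction of N trials in which at least b nodes are reachable."""
    succ = sum(1 for _ in range(N) if alive_count(tree, root, probs) >= b)
    return succ / N
```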

3) This approach will fail if the network is a cyclic graph that cannot be represented by a tree structure. In such a case you need to first create graph nodes that are either dead or alive and then run a routing algorithm on the graph to count the number of unique, reachable nodes. This won't increase the time-complexity of the algorithm, but obviously the code complexity.

4) Time dependence is a non-trivial, but possible modification if you know the m.t.b.f/r (mean-times-between-failures/repairs) since this can give you the probabilities p of either the tree-structure or the uncorrelated linear p[i] by a sum of exponentials. You will then have to run the MC-procedure at different times with the corresponding results for p.
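As an illustration of the time-dependent p: under the standard two-state Markov assumption (exponentially distributed failures with rate 1/MTBF and repairs with rate 1/MTTR), the point availability of a single node has a closed form. This model is my assumption for making the point concrete, not something stated in the original answer:

```python
from math import exp

def availability_at(t, mtbf, mttr, up_at_0=True):
    """P(node is up at time t) for the two-state Markov model with
    failure rate 1/mtbf and repair rate 1/mttr."""
    lam, mu = 1.0 / mtbf, 1.0 / mttr
    steady = mu / (lam + mu)              # = mtbf / (mtbf + mttr)
    transient = exp(-(lam + mu) * t)      # decays toward steady state
    if up_at_0:
        return steady + (1 - steady) * transient
    return steady * (1 - transient)
```

The resulting p (or p[i]) values can then be fed into the MC procedure at each time of interest, as the answer suggests.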

5) If you merely have the log files (as hinted in your last paragraph), it will require a substantial modification of the approach which is beyond what I can do on this board. The log files would need to be sufficiently thorough to allow reconstructing a model of the network graph (and thus the graph of p) as well as the individual values of p for all nodes. Otherwise, accuracy would be unreliable. These log files would also need to be substantially longer than the time scales of failures and repairs, an assumption which may not be realistic in real-life networks.

answered Sep 24 '22 by user8472


Assuming each node has a constant, known and independent availability rate, a divide-and-conquer approach comes to mind.

Say you have N nodes.

  1. Split them into two sets of N/2 nodes.
  2. For each side, compute the probability that any number of nodes ([0,N/2]) are down.
  3. Multiply and sum these as needed to find the probability that any number ([0,N]) of the full set is down.

Step 2 can be done recursively, and at the top level you can sum as needed to find how often more than some number are down.

I don't know the complexity of this, but if I had to guess, I'd say at or below O(n^2 log n).
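The three steps above can be sketched as a recursive convolution (a hypothetical implementation, not the answerer's code; the v[...] sums at the end of the answer are exactly this convolution):

```python
def updown_distribution(probs):
    """Divide and conquer: returns v where v[k] is the probability that
    exactly k of the nodes are up, for independent availabilities probs[i]."""
    if len(probs) == 1:
        p = probs[0]
        return [1 - p, p]          # [P(0 up), P(1 up)]
    mid = len(probs) // 2
    a = updown_distribution(probs[:mid])
    b = updown_distribution(probs[mid:])
    v = [0.0] * (len(probs) + 1)
    for i, ai in enumerate(a):      # convolve the distributions
        for j, bj in enumerate(b):  # of the two halves
            v[i + j] += ai * bj
    return v

def file_availability(probs, bmin):
    """P(at least bmin nodes are up), summed from the exact distribution."""
    return sum(updown_distribution(probs)[bmin:])
```

This gives the exact answer for distinct (but independent) node availabilities at polynomial rather than exponential cost.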


The mechanics of this can be illustrated on a terminal case. Say we have 5 nodes with up-time probabilities p1...p5. We can split this into segment A with p1, p2 and segment B with p3, p4, p5. We then process these to find, for each segment, the probability that exactly N of its nodes are up:

For A (probability that exactly 2, 1 or 0 of its nodes are up):

a_2, a_1, a_0

For B (probability that exactly 3, 2, 1 or 0 of its nodes are up):

b_3, b_2, b_1, b_0
The final results for this stage can be found by multiplying each of the a's with each of the b's and summing appropriately.

v[0] = a[0]*b[0]
v[1] = a[1]*b[0] + a[0]*b[1]
v[2] = a[2]*b[0] + a[1]*b[1] + a[0]*b[2]
v[3] =             a[2]*b[1] + a[1]*b[2] + a[0]*b[3]
v[4] =                         a[2]*b[2] + a[1]*b[3]
v[5] =                                     a[2]*b[3]
answered Sep 24 '22 by BCS