In an earlier thread, Science Vulnerability to Bugs, I mentioned a case of this kind.
I am not a computer scientist, but I had a computer-science-like follow-up thought on this subject.
We can define Risk = Probability × Exposure, say ##R = P \cdot E##. Applied to a software component, P is the probability of a flaw in the component and E is the number of places where the component is used.
Our confidence in a component increases with time and with diversity of use without reported flaws, so P is a function of time t and of E. E may also grow with time, so E is a function of t as well. The interesting question is which outraces the other; most quality-improvement programs focus exclusively on P.
Given those, we could compute R(t). We should then be able to use ##\frac{dR}{dt}## as an index of the acceptability of risk. ##\frac{dR}{dt}=0## is a natural choice for the boundary between acceptable and unacceptable risk. It also suggests that capping E is an alternative to lowering P as a remedy for excessive risk.
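To make the idea concrete, here is a minimal numerical sketch of the model. The particular functional forms are assumptions of mine, not from the thread: P(t) is taken to decay as 1/(1+t) (confidence growing with accumulated flaw-free use) and E(t) to grow exponentially (adoption). The sign of dR/dt then shows which effect is winning at any given time.

```python
import math

# Hypothetical model parameters -- illustrative assumptions only.
def P(t, p0=0.10):
    """Flaw probability: assumed to fall as confidence grows with use."""
    return p0 / (1.0 + t)

def E(t, e0=100.0, g=0.05):
    """Exposure: assumed exponential adoption growth at rate g."""
    return e0 * math.exp(g * t)

def R(t):
    """Risk = Probability * Exposure."""
    return P(t) * E(t)

def dR_dt(t, h=1e-6):
    """Central-difference estimate of dR/dt."""
    return (R(t + h) - R(t - h)) / (2 * h)

# With these assumed forms, dR/dt = R(t) * (g - 1/(1+t)), so risk falls
# while 1/(1+t) > g and rises once adoption outraces confidence growth;
# the crossover (dR/dt = 0) sits at t = 1/g - 1, here t = 19.
for t in (1, 10, 19, 25, 40):
    print(f"t={t:3d}  R={R(t):8.3f}  dR/dt={dR_dt(t):+8.4f}")
```

The point of the sketch is only that the boundary ##\frac{dR}{dt}=0## is computable once P(t) and E(t) are modeled, and that either slowing E's growth or speeding P's decay moves the crossover.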
This is the kind of question I would have expected to be explored when shared, reusable software components first became popular in the 1980s.
My question is, has this subject been explored in computer science? If yes, are there links?
Here is a similar case. From http://catless.ncl.ac.uk/Risks/29.60.html: "Faulty image analysis software may invalidate 40,000 fMRI studies"
In another recent case (can't find the link), an author decided to un-license his public-domain contribution and withdrew it from publicly shared libraries, which broke a great many dependent products. From http://catless.ncl.ac.uk/Risks/29/59#subj8: "Severe flaws in widely used open source library put many projects at risk"