1) So, on what basis did you decide that the probability of the event was “medium”, and more importantly, how did you evaluate the probabilities of the events? By using ORE we did not define classes; rather, we ranked risks by looking at their possible intolerable part for the specific case. Probabilities were estimated with methods suited to the available data sets, selecting the most appropriate methodology for each scenario. Inevitable uncertainties were given due consideration by working with ranges of probabilities, and the ranking was based on the maximum intolerable risk.
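As a hedged illustration of the idea above (ranking scenarios by the intolerable part of their risk, evaluated at the upper bound of the probability range), here is a minimal Python sketch. The tolerance curve, scenario names, and all numbers are assumptions invented for this example, not the actual ORE implementation:

```python
# Illustrative sketch only: the tolerance curve, scenario data, and numbers
# are assumptions for this example, not the actual ORE implementation.

def tolerance_threshold(consequence):
    """Assumed tolerance curve: tolerable annual probability decreases
    as consequence grows (illustrative shape, not a published criterion)."""
    return 1e-3 / consequence

def intolerable_part(p_min, p_max, consequence):
    """Probability in excess of the tolerance threshold, evaluated at the
    upper bound of the range, which keeps the ranking conservative
    under uncertainty."""
    return max(0.0, p_max - tolerance_threshold(consequence))

# (p_min, p_max, consequence in M$) -- hypothetical scenarios
scenarios = {
    "slope failure":   (1e-4, 5e-4, 20.0),
    "pipeline breach": (1e-5, 8e-5, 100.0),
    "pump outage":     (1e-3, 2e-3, 0.5),
}

ranked = sorted(scenarios.items(),
                key=lambda kv: intolerable_part(*kv[1]),
                reverse=True)
for name, (p_min, p_max, c) in ranked:
    print(f"{name}: intolerable part = {intolerable_part(p_min, p_max, c):.1e}")
```

Note that the ranking needs no arbitrary classes or cells: a scenario whose entire probability range sits below the tolerance curve (here, “pump outage”) simply scores zero.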
2) What is the basis for defining the consequence (loss) classes? How did you end up considering that a 20M$ loss was worse than 5 casualties, and that it had to be used as the driving parameter for the selection of the consequence class? By using ORE we did not need to define consequence classes. We did not need to arbitrarily select “the worst” among physical, human, or environmental losses. Rather, we used well-established methodologies to define multi-parameter consequence functions (or, as an alternative style of application, we kept physical/environmental losses separate from human losses).
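A minimal, hypothetical sketch of what keeping consequence dimensions separate (rather than forcing a single “worst” class) can look like; the dataclass, its fields, and the numbers are assumptions for the example, not ORE's actual model:

```python
# Hypothetical sketch: consequence dimensions kept explicit rather than
# collapsed into one class. Fields and numbers are assumptions, not ORE's model.
from dataclasses import dataclass

@dataclass
class Consequence:
    physical_musd: float       # direct physical losses, M$
    environmental_musd: float  # environmental losses, M$
    casualties: float          # expected human losses, kept in its own dimension

    def monetary(self) -> float:
        """Combine only the monetary dimensions; human losses are never
        traded off against dollars here."""
        return self.physical_musd + self.environmental_musd

c = Consequence(physical_musd=20.0, environmental_musd=3.5, casualties=0.1)
print(c.monetary(), c.casualties)  # 23.5 0.1
```

Because the dimensions stay separate, no arbitrary decision about whether a 20M$ loss is “worse” than a number of casualties is ever required.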
3) Which studies did you develop to define the limits of the various classes? On what basis did you select the limits? See above.
4) Why did you limit the highest class to -x- casualties and -y- millions? What about any scenario that would exceed those values? Are you implying it does not exist? We did none of the above; we avoided those pitfalls by using ORE, which does not set an arbitrary upper limit on losses.
5) Is the method you used “state of the art”? Is it compliant with risk management standards (ISO, COSO, ONR)? ORE takes into account the latest literature on the subject, avoids all the pitfalls of PIGs, and is compliant with international standards insofar as it uses a well-defined glossary and definitions and is a logical development based on sound mathematics and logic.
6) What led you to use ORE? We decided to use ORE because we understand the limitations and gross conceptual mistakes linked to using PIGs, and we refuse to do what everyone does, as we recognize that common practice is not an excuse for negligent approaches.
7) Commercial PIGs software generally bears a disclaimer along the lines of: “Beware, users: this software is just a way to display an information treatment that is in common use.” That is logical, and ORE applications are no different. However, the ORE designers know that such notes are not written to cover a fundamental pitfall of ORE, but rather to protect against abusive assumptions on the data made by the end user.
8) Which criteria did you use to select the colours of your cells, which correspond to various levels of criticality? If we understand correctly, your criticality criterion is used as a pseudo-tolerability criterion, whereby red means highest risk, to be dealt with and mitigated immediately, yellow means “pay attention”, and green means “they are okay”, right? We did none of that by using ORE. Our tolerability criteria were established using repeatable methods, specifically for the client's operation under consideration.
9) Numerous tolerability criteria have been published since the mid-’60s. How come your colour thresholds do not match any known tolerability criteria, and how come your cells exceed or straddle those criteria? There are no cells and no colours in our ORE, and our tolerability criteria either match well-known societal thresholds or use a specifically developed threshold (for physical losses) that suits the client organization's needs and requirements.
10) Using “credible scenarios” is a censoring decision. How come you felt entitled to censor your analysis? By using ORE we did not censor our scenarios.
11) Using “average p, C” is a biasing decision. How come you felt entitled to bias your analysis towards the centre for each single scenario? We did no such thing; it was not necessary with ORE.
12) In your opening statement you say that scenarios entering your PIG have to be credible scenarios. What threshold of credibility did you use? How does it match your PIGs' cell limits? By using ORE we considered a threshold of 10⁻⁵ to 10⁻⁶ (annual probability) as the credibility cut-off, which is compliant with best practices in highly regulated industries such as, for example, chemical processing. We did not run into the conundrum you describe, which arises when using PIGs.
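The credibility cut-off described above amounts to a simple screening step, sketched below. The scenario names, their probabilities, and the choice of the 10⁻⁵ end of the cited range are assumptions for the example:

```python
# Illustrative screening sketch: names, probabilities, and the choice of the
# lower end of the 1e-5 to 1e-6 range are assumptions for this example.

CREDIBILITY_THRESHOLD = 1e-5  # assumed annual probability cut-off

scenarios = {
    "tailings dam breach": 3e-4,
    "meteorite strike":    2e-8,   # below the cut-off: screened out
    "transformer fire":    7e-5,
}

# Keep only scenarios at or above the credibility threshold.
credible = {name: p for name, p in scenarios.items()
            if p >= CREDIBILITY_THRESHOLD}
print(sorted(credible))  # ['tailings dam breach', 'transformer fire']
```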
At the end of this drill, you will be in a strong position to argue the case further, because you have used state-of-the-art methodologies and referred to well-known published rules.
Should the rules change, it will be easy to assess the changes if you use ORE.
Using ORE is a winning strategy.
Do not wait any further. It’s most likely easier than you think to upgrade from PIGs to ORE!