Thesis: Fuzziness as a Measure of Uncertainty in Quantitative Security Metrics

Revision as of 16:54, 15 February 2015


As with any other model, security models take a set of input parameters, perform model-specific transformations and calculations, and output a result. Quantitative analysis models require quantitative inputs and produce quantitative results. For simple models with a modest number of input parameters, the parameters may be evaluated by domain experts, and the same team of experts then analyses the results.

Some input parameters used by security models, e.g. the attack scenario description in the form of an attack graph, contain thousands or even tens of thousands of nodes and are not meant for human processing in any way. We therefore strive to automate input data gathering by means of data mining, but full automation is not achievable. We may face situations in which we obtain several different estimations from different data sources. We cannot treat them equally, as these sources have different degrees of reliability, i.e. confidence in the precision of the estimations they provide. In certain cases automated data gathering may fail to provide values for some input parameters at all, and then we must fall back to human estimation.
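As a minimal illustration of why source reliability matters (the function, weights, and values below are illustrative, not from the thesis), estimates from several sources can be combined with weights proportional to each source's confidence, rather than by a plain average:

```python
def combine_estimates(estimates):
    """Combine several estimates of one parameter, weighting each by the
    reliability (confidence) of the data source that produced it.

    `estimates` is a list of (value, reliability) pairs, where reliability
    is a weight in (0, 1]; a plain average is the special case where all
    weights are equal."""
    total_weight = sum(r for _, r in estimates)
    if total_weight == 0:
        raise ValueError("at least one source must have non-zero reliability")
    return sum(v * r for v, r in estimates) / total_weight

# Three hypothetical data sources report the same cost parameter with
# different confidence in their own precision:
sources = [(120.0, 0.9), (150.0, 0.5), (300.0, 0.1)]
print(combine_estimates(sources))  # 142.0 (the outlier barely moves the result)
```

Note that the low-reliability outlier (300) shifts the combined value far less than it would under an unweighted mean (190).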

Humans have proven to have a really hard time estimating parameters in the quantitative domain. Experiments have shown that, as a rule, such results are not reliable and are often meaningless for analysis. Humans by nature think in categories, e.g. Low / Medium / High / Very High, and estimating values on such an ordinal scale comes more naturally to them. Even if we ask them to estimate a quantitative parameter, such as Cost, their estimations will contain some degree of uncertainty.

For example:

  • the cost is approximately (somewhere around) 100.
  • the cost might be something between 100 and 300.
  • the cost is somewhere around 100-300.
  • I am pretty sure the cost will be close to 2000.
  • most likely the cost will be 50, but anyway it will not exceed the value 70.
  • the cost most likely will be something between 200 and 250, but in any case it will be not less than 100 and not greater than 300.
  • the damage will be not less than 50000.
  • the expected damage may range from 100 thousand up to one million, but it is reasonable to expect damage ranging from 300 to 500 thousand monetary units.
  • the damage will not exceed 200000.
  • the damage will be from Low to Medium.
  • I am almost sure the damage will be Medium.
  • I am absolutely sure it will cost us 6000.
  • the cost will be usually greater than 5000 but definitely not less than 3800 (lower bound).
  • it is reasonable to expect costs up to 8000, but definitely not greater than 8500 (upper bound).
  • etc.
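Several of the statements above can be captured by fuzzy numbers. As a sketch (the class name, parametrisation, and example bounds are illustrative assumptions, not the thesis's actual formalism), a trapezoidal fuzzy number encodes "most likely between b and c, but in any case not below a and not above d":

```python
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    """Fuzzy number whose membership rises linearly from a to b,
    equals 1 on [b, c], and falls linearly from c to d."""
    a: float  # absolute lower bound (membership 0)
    b: float  # start of the most plausible range (membership 1)
    c: float  # end of the most plausible range (membership 1)
    d: float  # absolute upper bound (membership 0)

    def membership(self, x: float) -> float:
        """Degree, in [0, 1], to which x is compatible with the estimate."""
        if x < self.a or x > self.d:
            return 0.0
        if self.b <= x <= self.c:
            return 1.0
        if x < self.b:                       # rising shoulder
            return (x - self.a) / (self.b - self.a)
        return (self.d - x) / (self.d - self.c)  # falling shoulder

# "the cost most likely will be something between 200 and 250, but in any
#  case it will be not less than 100 and not greater than 300":
cost = TrapezoidalFuzzyNumber(100, 200, 250, 300)
print(cost.membership(225))  # 1.0
print(cost.membership(150))  # 0.5
print(cost.membership(90))   # 0.0
```

With b = c the same class degenerates to a triangular fuzzy number, which would fit statements like "approximately 100" once illustrative spread bounds are assumed.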


Currently this sort of uncertainty in human-estimated values is not taken into account by existing implementations of the analysis models, and we need to fix this!