
in the model design reduces both confusion and inaccuracy. The representation of partial knowledge highlights gaps by stimulating questions such as 'How does A connect to B?'. New sources of incompleteness may also be identified as pieces fail to connect, or as questions reveal missing information in perspectives that have not previously been thought through completely. These analyses manage unknowns by making them explicit and potentially eliminating some.
Implementing a model also introduces additional sources of unknowns, which can be classified as errors in model structure and calibration (Hodges, 1987). Structure errors occur where the relationships (or equations) in the model design do not accurately describe the relationships that occur in the target system. This may be due to a lack of knowledge about the true relationship, or to constraints imposed by the choice of modeling technique (see the discussion about accuracy). Calibration relies on datasets that are often adapted for use rather than specifically relevant to the model. For example, the data may be incomplete, measure different features than the model requires, or refer to a different time or location.
Verification and validation techniques are applied during the Test phase to reduce unknowns, but some will remain. For a diagram, exposing the model to outside comment is an effective way to identify areas of the model that may require further consideration. For a mathematical or computer model, verification deals with straightforward and obvious problems such as programming errors, but also ensures the model can handle extreme or unusual potential simulations. Validation may expose inaccuracies in the model design, which can then be reduced through redesign. For example, data analysis may reveal an influence between two system components even though the design does not contain a corresponding relationship.
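As an illustration only (the article does not tie verification to any particular code), a minimal Python sketch of checking a hypothetical growth function against extreme or unusual inputs might look like the following; the function and the bounds tested are assumptions introduced here for clarity.

    def grow(population, rate):
        # Guard against impossible negative populations.
        return max(population * (1 + rate), 0.0)

    # Verification checks on extreme inputs, of the kind that would
    # typically sit in a test suite alongside the model code.
    assert grow(0.0, 0.5) == 0.0       # an empty population stays empty
    assert grow(1e9, 0.0) == 1e9       # zero growth leaves the value unchanged
    assert grow(10.0, -2.0) == 0.0     # a decline cannot produce a negative population
    print("extreme-input checks passed")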
Mathematical and computer models can also explore the effects of unknowns using sensitivity analysis. This is typically conducted by examining the impact of changes in inputs on the overall model output, but it can also be performed on parts of the model to focus on specific areas of instability (Brugnach, 2005). Such analysis is often conducted by running the model with many different sets of inputs and parameters that span the plausible range of their values. This systematic variation allows the impact of the uncertainty in those values on the output to be measured directly. Large uncertainties may justify targeted research or data collection to make the input information more complete or certain.
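A minimal sketch of this kind of one-at-a-time sensitivity analysis is shown below in Python. The model function, parameter names, baseline values, and plausible ranges are all hypothetical stand-ins introduced for illustration, not taken from the article.

    import numpy as np

    def model(growth_rate, capacity):
        # Stand-in for a real simulation: simple logistic growth over 20 steps.
        x = 1.0
        for _ in range(20):
            x += growth_rate * x * (1 - x / capacity)
        return x

    baseline = {"growth_rate": 0.3, "capacity": 100.0}
    ranges = {"growth_rate": (0.1, 0.5), "capacity": (50.0, 150.0)}

    # Vary one parameter at a time across its plausible range and record
    # how much the output moves; a large spread flags an unstable area.
    for name, (low, high) in ranges.items():
        outputs = []
        for value in np.linspace(low, high, 11):
            params = dict(baseline)
            params[name] = value
            outputs.append(model(**params))
        spread = max(outputs) - min(outputs)
        print(f"{name}: output spread {spread:.2f} over its plausible range")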
Sensitivity analysis can also counter false confidence in the model. Modeling introduces distortions through layers of abstraction, and models are necessarily based on imperfect data. The precision of model outputs can create false confidence if this uncertainty is not discussed as part of the results. Simply documenting data limitations is one way of limiting the effect of the unknowns, and sensitivity analysis emphasizes the effect of those limitations.
Finally, in the Use phase, mathematical or statistical models can propagate uncertainties and estimate their effect. This is appropriate where information is absent and unobtainable but can be estimated, such as future economic growth, or where there is inherent variability, such as rainfall patterns. If the distribution of possible values for a parameter is known, the distribution of possible values for the output can be calculated. However, this approach can quickly become intractable if there are many parameters whose distributions must be combined.
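When an analytical combination of distributions is intractable, Monte Carlo simulation is a common way to propagate uncertainty numerically. The sketch below, again using the illustrative model from the earlier example, assumes hypothetical parameter distributions (they are not drawn from the article) and summarizes the resulting distribution of outputs.

    import numpy as np

    def model(growth_rate, capacity):
        # Stand-in for a real simulation (same illustrative form as above).
        x = 1.0
        for _ in range(20):
            x += growth_rate * x * (1 - x / capacity)
        return x

    rng = np.random.default_rng(0)
    n = 10_000

    # Assumed input distributions: one reflecting inherent variability,
    # one reflecting an absent but estimable quantity.
    growth_rate = rng.normal(0.3, 0.05, n)
    capacity = rng.uniform(80.0, 120.0, n)

    outputs = np.array([model(g, c) for g, c in zip(growth_rate, capacity)])
    print(f"output mean {outputs.mean():.1f}, 90% interval "
          f"[{np.percentile(outputs, 5):.1f}, {np.percentile(outputs, 95):.1f}]")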