Alberto Savoldelli, Gianluca Misuraca and Cristiano Codagnone
combination of input/output X (the policy treatment) and not to any other cause. Measurement, by contrast, is the process by which the attributes or dimensions of some phenomenon (in this case any variable among the blocks of inputs, outputs, outcomes, and impacts) are determined and counted, as amply documented in the OECD work on public sector measurement (OECD, 2006a, 2006b, 2009). The whole body of work on "Performance Measurement" or "Impact Assessment", as well as many other similarly labelled exercises, can be seen as belonging, together with evaluation, to the 'extended' family of what we can generically refer to as "assessment", but they are clearly different from evaluation stricto sensu. Hence, neither the original eGEP framework nor the similar exercises reviewed in the next section can or should be presented as evaluation frameworks, for none of them can be used to demonstrate that the changes in a given variable of interest are causally attributable to a given e‐Government service, unless they also add an experimental or quasi-experimental component. It is important to make this clear so as to avoid making claims that are not supported scientifically and empirically. The eGEP‐2.0 framework we propose is, rigorously speaking, only an e‐Government measurement framework that as such raises no claim to demonstrating causally (i.e. evaluating) the effect that a given service or bundle of services (i.e. an e‐Government programme or policy) has on a given sought outcome for different constituencies. This, however, does not necessarily mean that a measurement framework cannot be linked to, and support, a counterfactual impact evaluation. If the measurement is built on a scientifically sound and empirically robust model of causal impact, and if data on the objects of measurement are gathered steadily and reliably, then they can eventually be used for a true impact evaluation.
This is the object of another forthcoming paper (Codagnone, Misuraca, & Savoldelli, 2014) and we will not enter into this subject here.
Figure 1: Stylised logic chain for evaluation and for measurement (authors' elaboration²)
2. Brief state of the art
We have reviewed in depth elsewhere the state of the art of e‐Government assessment and the barriers deriving from its lack (Codagnone & Undheim, 2008; Misuraca et al., 2013; Savoldelli et al., 2012, 2013). Below we extract a selective and compact summary strictly instrumental to our purpose in this paper. The first and most well‐known exercises in our domain of interest have been, and still are, large surveys based on scanning the websites of public agencies and scoring them in terms of either the availability and sophistication of service provision or the level of participation embedded in them (Capgemini, 2004, 2010; UN‐DESA, 2010). These approaches have been amply criticised, and most of the e‐Government measurement frameworks that emerged in the past decades, starting with the first version of eGEP, were launched to go beyond this supply‐side focus, to look at more tangible outcomes and impacts, and to be more granular (Misuraca et al., 2013). Table 1²
² Based on several sources: see, among others, Algemene Rekenkamer, 2006; Boyne et al., 2003; Codagnone, 2009; Codagnone & Undheim, 2008; Hatry, 1999; Heeks, 2006; Heeks & Molla, 2009; Irani et al., 2005; OECD, 2006a, 2006b, 2009.