Alberto Savoldelli, Gianluca Misuraca and Cristiano Codagnone
provides a synthetic overview of such frameworks, many of which also include user‐centric measures to track take‐up and satisfaction – two central parameters that allow governments to learn more about user needs and demands, as well as to provide a structured approach for assessing policy impacts and supporting the continuous improvement of eGovernment services. The frameworks and methodologies included in Table 1 have been selected on the criterion that they are among the most cited in the literature and the most used in practice (see Kunstelj & Vintar, 2004; Gil‐Garcia & Pardo, 2005; Foley, 2006; Esteves & Rhoda, 2008).3
Table 1: Comparison of selected e‐Government measurement frameworks (Savoldelli, Codagnone and Misuraca, 2013)
In Table 1 these methodologies are assessed against two criteria: a) the dimensions of public value (Carbo & Williams, 2004; Johansen, 2004; Ebrahim & Irani, 2005; Codagnone & Undheim, 2008; Heeks & Molla, 2009; OECD, 2009; Stanimirovic & Vintar, 2012) covered by the areas of impact and/or the indicators proposed; and b) the coverage of the various possible stages of the policy planning process. From this comparative analysis it emerges that current approaches are neither exhaustive nor comprehensive with respect to these two criteria. This confirms the claim that the lack of a structured and comprehensive measurement and assessment framework, especially for local governance (see also Anttiroiko, 2008; Belanger & Carter, 2008; Esteves & Rhoda, 2008; Kolsaker & Lee‐Kelley, 2008; Kunstelj & Vintar, 2009; Von Ryzin, 2009; UN‐DESA, 2010; Stanimirovic & Vintar, 2012), is one of the key barriers delaying the full adoption of e‐Government (Savoldelli, Codagnone & Misuraca, 2012). Furthermore, the majority of such frameworks are shaped by a technology‐driven approach (Dawes, 2008), underestimating the importance of outcome‐oriented approaches that strictly connect e‐Government with the policy‐making process (Titah & Barki, 2005; Perrin, 2006; Codagnone & Undheim, 2008). This myopic behavior often brings eGovernment initiatives into a “lock‐in/vendor‐driven” situation, with the consequent risk of losing most of the expected benefits (Foley, 2006). It is also important to stress that most frameworks fail to cover all the relevant stages at which a measurement and assessment framework would be needed – that is, ex‐ante, in‐itinere, and ex‐post – and that a well‐structured ex‐ante measurement still needs to be defined (see also Gil‐Garcia & Pardo, 2005; Foley, 2006; DEP, 2012).
While eGEP presents some of the limitations mentioned above, it is largely recognized as providing a more robust approach to assessing the outcomes of e‐Government initiatives (Misuraca and Rossel, 2011; Stanimirovic & Vintar, 2012). Our proposal for a new measurement framework has therefore been built starting from the eGEP framework, which has been improved in various respects, especially in the participation mechanisms for involving stakeholders and beneficiaries in the measurement of e‐Government services.
In this regard, as rendered in Figure 2, the proposed measurement framework aims at overcoming the previous approaches, helping to establish trust‐based relationships among citizens, policy makers, civil servants and other stakeholders, so as to balance precision in the measurement of the impacts of e‐
3 They are: eGovernment Signpoint (Danish Digital Task Force, 2004); MAREVA (ADAE, 2007); WeBe 4.0 (Rothig, 2004; 2010); eGEP (Codagnone et al., 2006); NOIE (Australian Government, 2005); GOL Performance Measurement (Treasury Board of Canada, 2004); eGovernment Satisfaction Index (Freed, 2009); VMM (Booz‐Allen‐Hamilton, 2004); DVAM (AGIMO, 2004); Gateway Process (DFP, 2012).