13th European Conference on eGovernment – ECEG 2013

Juan Carlos Barahona and Andrey Elizondo
prestación de servicios públicos por medios digitales 2006) (J. Esteves 2006) (Torres 2006) (Universidad de San Andrés 2006) (Esteves and Victor 2007) (Fitsilis, Anthopoulos and Gerogiannis 2009) (Middleton 2007). We believe this approach has greater potential to provide pertinent feedback to practitioners and policymakers. However, most of these studies would benefit from a more detailed explanation of their data-collection or appraisal methods. Another shortcoming is the absence of a periodic, systematic assessment of the same set of institutions, which would allow the proposed instruments and methods to be tested and enough data to be gathered to study organizational impact over time. The studies also lack a discussion of the resources and mechanisms needed to ensure scalability and consistency of the assessment process, comparability of results, and traceability of data over time and across entities.
Recently, an alternative approach has been proposed (Azab, Kamel and Dafoulas 2009) that also seeks to guide the development of e‐Government. Instead of measuring a government's development in terms of compliance with a stage‐based model, the authors propose a framework, similar to management models for project implementation, that examines existing conditions as determinants of the evolution of e‐readiness, or of the supply and demand of digital services. Although such measurements may inform decision‐makers on how to improve the environment for digital development, the complexity of the proposed methodology would make it difficult and expensive to scale to a level that allows any sort of data aggregation and results comparable at a national level, let alone across countries.
In the following section, we describe a framework that builds on the virtues and shortcomings of these different attempts to measure e‐Government. The framework has been enriched by implementation at a national level and by five consecutive years of trials and data gathering.
3. Proposed framework
Since the start of the past decade, there has been debate over models and ways to understand and monitor countries' progress in e‐Government; however, some models provide information of little use in decision‐making, or their designs make self‐evaluation and peer comparison impossible. In 2006, an alternative methodology was proposed with the following criteria and design restrictions, intended to balance relevance, replicability and trustworthiness (Barahona, Zuleta and Calderón 2006):
• Relevance:
  • A clear focus on the interaction between citizens and governments
  • Facilitates collaboration and knowledge‐sharing among assessed subjects
  • Scales to a national or supranational level
  • Provides policy‐makers, public officials and e‐Government project implementers with valuable information for decision‐making
  • Is easy to explain, facilitating media coverage that brings e‐Government advances to a broader audience
• Replicability and trustworthiness:
  • Data collection and evaluation should be easy to replicate at the institutional level
  • Provides accountability through adequate data granularity and traceability to the observed variables
  • Provides accountability through adequate data granularity and traceability to the assessed subjects
  • Completely avoids dependence on government agencies' willingness to provide information
  • Avoids the perceived opacity of the measurements available at the time
  • Keeps the approach simple, based on a "do‐it‐yourself" philosophy
The framework was created after an extensive review of the methodologies and instruments available in the literature. This work produced the critiques above and led to a reinterpretation of the exchanges between citizens and their governments, identifying the quality of information as central to the interaction between the two groups.