13th European Conference on eGovernment – ECEG 2013

Juan Carlos Barahona and Andrey Elizondo
The granularity of the proposed tool contributes to technical knowledge sharing among institutions with different strengths and weaknesses. There is anecdotal evidence of collaboration between institutions as a result of this process. In addition, there have been clear cases in several organizations of successful changes and transformations that have evolved out of this process, as acknowledged by the head of the national e‐Government authority and other public officials in different media outlets; see, for example, the video interviews of high‐level officials at (Fallas 2012). Even though data collection and assessment comply with the abovementioned criteria for trustworthiness, there is room for improvement in how the informational attributes are captured by the observed variables. Three years of gathering data with the same methods allows us to test for reliability. Table 3 shows that most factors and variables are reliable, but there is room for improvement; special attention should be paid to the factors describing efficiency and media infrastructure.
Table 3: Reliability of scales used to describe informational attributes as components of the e‐Government index

| Item | Item-test correlation | Item-rest correlation | Inter-item correlation | Alpha⁶ |
|------|----------------------|----------------------|------------------------|--------|
| Interaction | | | | |
| x1: Presentation | 0.7752 | 0.5704 | 0.3698 | 0.6378 |
| x2: Simple transaction | 0.7588 | 0.5439 | 0.3861 | 0.6536 |
| x3: Complex transaction | 0.7962 | 0.6049 | 0.3491 | 0.6167 |
| x4: Integration | 0.6404 | 0.3679 | 0.5033 | 0.7525 |
| F1: Interaction | | | 0.4021 | 0.7290 |
| Personalization | | | | |
| x5: Archetype organization | 0.7550 | 0.5295 | 0.3339 | 0.6007 |
| x6: Archetype integration | 0.6291 | 0.3427 | 0.4553 | 0.7149 |
| x7: Personalization | 0.7886 | 0.5845 | 0.3015 | 0.5643 |
| x8: Smart personalization | 0.7209 | 0.4762 | 0.3668 | 0.6347 |
| F2: Personalization | | | 0.3644 | 0.6963 |
| Relevance | | | | |
| x9: Comprehension | 0.9157 | 0.8370 | 0.4910 | 0.7432 |
| x10: Precision | 0.9047 | 0.8171 | 0.5032 | 0.7524 |
| x11: Clarity | 0.6153 | 0.3702 | 0.8236 | 0.9334 |
| x12: Applicability | 0.8851 | 0.7820 | 0.5249 | 0.7683 |
| F3: Relevance | | | 0.5857 | 0.8497 |
| Soundness | | | | |
| x13: Soundness | 0.7166 | 0.4461 | 0.2438 | 0.4917 |
| x14: Consistency | 0.7085 | 0.4337 | 0.2511 | 0.5015 |
| x15: Correctness | 0.6138 | 0.2962 | 0.3367 | 0.6037 |
| x16: Current | 0.6718 | 0.3785 | 0.2843 | 0.5437 |
| F4: Soundness | | | 0.2790 | 0.6075 |
| Efficiency | | | | |
| x17: Navigability | 0.6910 | 0.3557 | 0.0761 | 0.1982 |
| x18: Flexibility | 0.5276 | 0.1303 | 0.2071 | 0.4393 |
| x19: Verifiability | 0.6659 | 0.3177 | 0.0962 | 0.2421 |
| x20: Usability | 0.5194 | 0.1202 | 0.2137 | 0.4491 |
| F5: Efficiency | | | 0.1483 | 0.4105 |
| Media | | | | |
| x21: Accessibility | 0.6729 | 0.2789 | −0.0285 | . |
| x22: Security | 0.5974 | 0.1713 | 0.0264 | 0.0753 |
| x23: Search engine visibility | 0.4411 | −0.0188 | 0.1402 | 0.3284 |
| x24: Speed | 0.4720 | 0.0159 | 0.1177 | 0.2858 |
| F6: Media | | | 0.0639 | 0.2146 |
⁶ Cronbach's alpha is an index of reliability associated with the variation accounted for by the true score of the "underlying construct", the hypothetical variable that is being measured (Hatcher 1994). Alpha > 0.7 is generally accepted as a good indicator of reliability.
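For readers who want to reproduce this kind of reliability check on their own data, the two main diagnostics reported in Table 3 can be computed directly from an observations-by-items score matrix. The sketch below, in Python with NumPy, uses simulated data and hypothetical function names (not the study's dataset or code) to illustrate Cronbach's alpha and the item-rest correlation.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_observations, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_rest_correlation(items, j):
    """Correlation of item j with the sum of the remaining items
    (the 'item-rest correlation' column of Table 3)."""
    items = np.asarray(items, dtype=float)
    rest = np.delete(items, j, axis=1).sum(axis=1)
    return np.corrcoef(items[:, j], rest)[0, 1]

# Toy data: four noisy indicators of a single latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 1))
scores = latent + 0.5 * rng.normal(size=(50, 4))

print(cronbach_alpha(scores))            # well above the 0.7 threshold
print(item_rest_correlation(scores, 0))
```

A low item-rest correlation, such as those observed for the efficiency and media items, flags a variable that moves largely independently of the rest of its scale, which is one reason those factors merit the attention noted above.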