13th European Conference on eGovernment – ECEG 2013

Svein Ølnes
tics is also in many cases difficult to check, because it requires an extensive amount of automated and (manual) expert evaluation. All in all, the authors seriously doubt whether the examined heuristics aid the experts in their work.
De Jong and van der Geest (2000) distinguish between these four foundations for heuristics:
• Standards‐based heuristics
• Theory‐based heuristics
• Research‐based heuristics
• Practitioners’ heuristics
Moving to the practical implementation of testing heuristics, Preece et al. (2002) distinguish between four major types of evaluation of websites:
• “Quick and dirty” – quick and fairly unsystematic feedback from users, colleagues and others
• Usability testing – user tests where users are placed in controlled environments and observed while using the service that is to be evaluated
• Field studies – studies where users are observed in their natural environment
• Predictive evaluations – evaluations based on heuristic principles to find usability problems. The main principle of this method is that the user need not be present – in fact, the user should not interfere! We also call these evaluations expert evaluations.
Heuristic models have their weaknesses and limitations, but for large-scale screening of website quality there are hardly any alternatives. For measuring usability aspects there are methods like user testing (Nielsen 1993). But user testing is a laborious task and would require far more resources than most governments are willing to spend, given the number of websites to evaluate. On the other hand, usability testing of a limited number of websites would probably give valuable feedback for the development of the heuristics.
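To illustrate why heuristic models scale where user testing does not, the simplest automated form of such screening can be sketched as below. The three checks are invented examples for illustration only – they are not the heuristics used in the evaluations reported in this paper:

```python
from html.parser import HTMLParser


class HeuristicChecker(HTMLParser):
    """Collects simple, automatable signals from an HTML page.

    The signals gathered here (title present, language declared,
    alt text on images) are illustrative stand-ins for a real
    heuristic criteria set, which would be far richer.
    """

    def __init__(self):
        super().__init__()
        self.has_title = False
        self.has_lang = False
        self.images = 0
        self.images_with_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.has_title = True
        elif tag == "html" and "lang" in attrs:
            self.has_lang = True
        elif tag == "img":
            self.images += 1
            if attrs.get("alt"):
                self.images_with_alt += 1


def screen(html: str) -> float:
    """Return the fraction of heuristic checks the page passes."""
    checker = HeuristicChecker()
    checker.feed(html)
    checks = [
        checker.has_title,  # page declares a title
        checker.has_lang,   # document language declared (accessibility)
        # every image carries alt text (vacuously true if no images)
        checker.images == checker.images_with_alt,
    ]
    return sum(checks) / len(checks)


page = ('<html lang="no"><head><title>Kommune</title></head>'
        '<body><img src="map.png" alt="map"></body></html>')
print(screen(page))  # 1.0
```

Because each check is cheap and mechanical, the same script can be run against hundreds of municipality websites at the cost of one user test – which is precisely the trade-off the paragraph above describes. The manual part of an expert evaluation covers everything such automation cannot.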
The results in this paper are derived mostly from the heuristic-based expert evaluations of municipality websites in the period 2007 – 2011. In addition, results from a survey targeting webmasters of municipality websites and a survey targeting citizens in general have been added to try to explain the differences observed in the expert evaluation results.
2.2 Usability
In his book Usability Engineering (1993), Jakob Nielsen discusses the usability of a system and refers to concepts like user friendliness, usability and usefulness, which can all be viewed as different dimensions of system acceptability. He chooses to use usability and associates it with these properties:
• Learnability: easy to learn
• Efficiency: easy to use
• Memorability: easy to remember
• Errors: low error rate
• Satisfaction: pleasant to use; users are subjectively satisfied
One of his main arguments is that different categories of users, different user situations and individually different preferences make usability testing difficult. He points to three main dimensions:
• experience with computers and relevant computer systems in general
• experience with the actual system (novice – expert)
• knowledge and competence in the actual domain where the system is used
Heuristic methods are especially suited to evaluating usability, and Nielsen (1993) has formulated 10 heuristic principles for usability. Nielsen also points to problems with user testing in general, where the results will differ because of the different user categories mentioned above. We agree with Nielsen and think it is necessary that user tests take all three dimensions into account.