Of special relevance for this paper is the question of satisfaction with digital municipality services grouped by municipality size. The results from 2009/2010 show no significant difference in satisfaction between citizens from small municipalities and citizens from larger ones. In services like planning and building permissions and care for the elderly, citizens from small municipalities give higher scores than citizens from larger municipalities. But in services like kindergarten and primary school the result is the opposite: citizens from larger municipalities are more satisfied than citizens from smaller ones. All in all, users’ satisfaction with digital services cannot help us explain or confirm the differences in quality observed in the expert evaluations of the websites. This could very well be an example of what Jakob Nielsen calls the first rule of usability: “Don’t listen to the users, watch them work”.
5. Better quality for the users?
The objective of evaluating public websites has been to stimulate their quality improvement. The results presented in chapter 3.1 show that there have been improvements in overall score on the quality indicators from the evaluation in 2007 to the last one undertaken in 2011. Analysis of the same benchmarking system for the first years, 2001–2003, also shows a significant improvement in quality (Jansen and Ølnes 2004). As such, the main objective of the benchmarking project has been met. When it comes to knowledge of the benchmarking system and its perceived usefulness, the survey shows that at least the larger municipalities know the system and find it useful. For the smaller municipalities there is clearly work to be done to build better knowledge and understanding of the benchmarking system.
We may ask, however, whether this really is an indication of better digital services for the users. The results from the user survey described in 3.3 do not confirm this, and in particular give no support for the observed difference in quality between small and large municipalities. So what have we measured in the evaluations of public websites? The problem with the heuristics on which the evaluation system builds is that they do not necessarily correspond with actual user needs and behaviour. An important aspect missing from the expert evaluations is context. The evaluations are all done in the same context, the context of testing, which is clearly different from the context of an actual user. Usability testing would be a natural answer to both the heuristics problem and the context problem. But given the number of websites and the enormous amount of information on them, regular usability testing would not be feasible. Expert evaluation remains a good second-best approach for saying something about the quality of public websites. Of the three main categories of indicators used in this benchmarking system, the accessibility category is the least difficult to assess, given the general and widely used heuristics derived from W3C’s WCAG work; a number of these checks can even be automated, as the sketch below illustrates. The closer we get to the usability and usefulness of the websites, the more difficult the assessment becomes, because our heuristics struggle to capture the needs and behaviour of real users.
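To illustrate why the accessibility indicators are the most tractable, the following Python sketch automates three simple, machine-checkable criteria inspired by WCAG: text alternatives for images (1.1.1), a page title (2.4.2) and a declared document language (3.1.1). This is a minimal illustration only, not the actual test battery of the Norwegian benchmarking system, and real accessibility evaluation covers far more than such mechanical checks.

```python
# Minimal sketch of automated WCAG-inspired checks (illustrative only;
# the paper's actual benchmarking indicators are not reproduced here).
from html.parser import HTMLParser

class AccessibilityChecker(HTMLParser):
    """Flags a handful of machine-checkable accessibility issues."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.has_title = False
        self.has_lang = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            # WCAG 1.1.1: non-text content needs a text alternative
            self.issues.append("img without alt attribute")
        elif tag == "html" and attrs.get("lang"):
            # WCAG 3.1.1: the page language should be declared
            self.has_lang = True
        elif tag == "title":
            # WCAG 2.4.2: every page needs a title
            self.has_title = True

    def report(self):
        if not self.has_title:
            self.issues.append("missing <title>")
        if not self.has_lang:
            self.issues.append("missing lang attribute on <html>")
        return self.issues

page = "<html><head></head><body><img src='logo.png'></body></html>"
checker = AccessibilityChecker()
checker.feed(page)
print(checker.report())
# ['img without alt attribute', 'missing <title>', 'missing lang attribute on <html>']
```

Checks of this kind scale to hundreds of websites precisely because they need no user context, which is also why the usability and usefulness indicators resist the same treatment.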
There is no perfect solution to the problem of evaluating the quality of a public website, at least not with the means and resources normally available. Compromises are needed to find a practical way of evaluating. As the results of the evaluations of public websites have shown, especially when combined with results from other studies, the quality issue needs to be approached with a combination of methods.
6. Evaluation of public websites – what answers do we really get?
In the above discussion we have included different approaches to quality assessment, arguing that no single method or approach can be applied for all purposes. We argue that many perspectives and dimensions have to be included in such work. This is illustrated by the multi-functional character of a municipal website, which must serve democratic ideals, support customer orientation in service provision and address the efficiency perspective. Furthermore, the evaluations have to include many criteria, such as technical characteristics, architecture, functionality, usability or user friendliness, and aesthetics. This requires different approaches, from formal measurements based on well-defined metrics to heuristic evaluations and user testing. An important part of this work is to design detailed user scenarios and the different user settings in which the website is to be evaluated. These different perspectives have important implications for how we define quality requirements and, not least, for how international benchmarking is conducted.
We do agree that the kind of “mild standardisation” in the benchmarking approach is an efficient way of improving the quality of public websites, and it can also be an efficient instrument for ensuring that public bodies follow standards, whether formally approved or recommended.