13th European Conference on eGovernment – ECEG 2013

Small is Beautiful or Bigger is Better? Size of Municipalities and Quality of Websites
Svein Ølnes, Western Norway Research Institute, Norway, sol@vestforsk.no
Abstract: This paper looks at the results of several years of benchmarking of public websites and asks how the size of municipalities, in terms of population, affects website quality. The paper is based on a quantitative study of data from the last five years’ benchmarking of public websites. In addition, data from a survey of web administrators are analysed to shed light on the benchmarking results. Finally, the results are compared with a user survey measuring citizens’ satisfaction with public services. The short story is that size does matter and bigger is better: the larger the municipality, the better the quality of its website according to the benchmarking results. The survey of web administrators of the evaluated websites offers some explanation for the differences in measured quality. It shows that the differences can be explained by resource allocation for the work with websites, by knowledge about the evaluation system and the heuristics behind it, and by the fact that larger municipalities collaborate more with other municipalities on the maintenance and development of public websites. The larger the municipality, the more likely it is to have formal cooperation with other municipalities in this area. The paper shows that heuristic-based quality evaluation is an effective method of assessing the quality of public websites. The results show that the benchmarking system does improve the quality of public websites and that a good understanding of the heuristic principles behind the system is one of the key factors behind good quality. Resource allocation to the work on and development of the websites is, of course, also an important factor. However, the survey of citizens does not confirm the differences in quality observed in the expert evaluations. We therefore have to ask what quality is measured in the benchmarking system, and quality for whom?
How can we measure quality that really affects the end users, the citizens? Heuristic-based evaluation systems for public websites are probably necessary to raise awareness of common design principles, but they are not sufficient to assure quality for the end users. The paper thus argues for a multi-dimensional approach to the evaluation of public websites and calls for more research in which different methods of quality assessment are used and combined, and the effects are ultimately measured on real users.
Keywords: heuristics, benchmarking, evaluation, websites, municipalities
1. Introduction: Benchmarking systems for evaluation of public website quality
The public sector plays a very important role in Europe’s social and economic model by supporting high levels of welfare for citizens, ensuring socio-economic cohesion and supporting the functioning of a competitive market environment. Within the public sector, public administrations face the challenge of improving the efficiency, productivity and quality of their services. Public service provision is expected to be digital where possible, and in most European countries “digital first” is now the rule. Internet-based services have come of age, and the phase of trial and error is mostly over. Digital service provision is serious business for the public sector as well.
Given the high priority of digital public service provision, the quality of public websites in general is important. Several benchmarking systems have been tried, and some of them are still in use and of great importance. Most notably, the EU’s benchmarking of public websites (Cap Gemini et al. 2010) has had quite an impact and is an important tool for evaluating the effect of both EU initiatives and the different ICT policies throughout the member countries. However, critics have argued that the EU benchmarking system has hit the ceiling in many ways and needs a thorough rework (Grönlund 2010).
Norway is among the countries with a long history of benchmarking public websites. The groundwork was laid in 2001, and evaluations of public websites have been carried out annually since then. The number of websites evaluated has risen from around 500 to more than 700, of which approximately 430 are municipality sites and the rest are various governmental agencies’ websites. The Agency for Public Management and eGovernment, Difi, is responsible for developing the benchmarking system and for carrying out the evaluations based on it.
The Norwegian benchmarking system is built on a set of general indicators common to all websites. This is in line with the EU benchmarking system, but differs from e.g. the Danish benchmarking system, where domain-specific indicators have been tried out (Videnskapsministeriet 2002). The Danish web evaluations have been heavily modified since their start in 2001, after beginning with the same expert evaluation approach as the Norwegian system. In fact, the Norwegian benchmarking system was modelled after the Danish one. After the start with a