Editorial: Methodological Concerns in Relation to SERVQUAL:
Investigations in Libraries
The SERVQUAL method is based on the assumption that it is possible to measure customer expectations in relation to a service as well as their perception of it. This is done with a questionnaire that contains two nearly identical parts. The first part offers statements about what customers would expect from an ideal library; the other part consists of statements relating to their experience with the actual service. Customer answers are given on a Likert scale with values from 1 to 7. The difference between the expectations and the experiences forms a gap. A negative gap indicates that customers receive service of a lower quality than they expect, and this will normally be seen as a problem in a service setting.
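The gap calculation described above can be sketched in a few lines of code. The statements and Likert scores below are invented purely for illustration; a real SERVQUAL study would use the full set of 22-24 paired statements.

```python
# Hypothetical SERVQUAL gap calculation for three illustrative statements.
# Scores are invented Likert values (1 = strongly disagree, 7 = strongly agree).
expectations = {"prompt service": 6, "modern equipment": 5, "individual attention": 7}
perceptions  = {"prompt service": 5, "modern equipment": 6, "individual attention": 4}

# Gap = perception minus expectation; a negative gap signals a quality problem.
gaps = {s: perceptions[s] - expectations[s] for s in expectations}

for statement, gap in gaps.items():
    flag = "below expectation" if gap < 0 else "meets or exceeds expectation"
    print(f"{statement}: gap = {gap:+d} ({flag})")
```

Here "individual attention" would show the largest negative gap (-3) and so would be the first candidate for attention in a service-improvement programme.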
One would normally formulate 22-24 statements concerning the service. These statements are distributed among the five dimensions that are incorporated in the SERVQUAL concept, often referred to as RATER after its five dimensions:
- reliability
- assurance
- tangibles
- empathy
- responsiveness
These five concepts illustrate clearly that service quality focuses on both the content and the context of the service. In this way it differs from the traditional content-oriented concept of quality. It is assumed that the five dimensions are not of equal importance to customers, so a third part of the SERVQUAL questionnaire often asks respondents to rank the five dimensions in relation to their importance or significance.
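The importance rankings from that third part of the questionnaire are typically used to weight the gap scores for the five dimensions. A minimal sketch, with invented mean gaps and invented importance weights (e.g. points out of 100 allocated by respondents), might look like this:

```python
# Hypothetical weighted SERVQUAL score across the five RATER dimensions.
# Mean gap per dimension (perception minus expectation) and importance
# weights are invented for illustration only.
mean_gaps = {"reliability": -1.2, "assurance": -0.4, "tangibles": 0.3,
             "empathy": -0.8, "responsiveness": -1.0}
importance = {"reliability": 30, "assurance": 15, "tangibles": 10,
              "empathy": 20, "responsiveness": 25}

# Normalise the weights and combine them with the dimension gaps.
total = sum(importance.values())
weighted_score = sum(mean_gaps[d] * importance[d] / total for d in mean_gaps)

print(f"Unweighted mean gap:  {sum(mean_gaps.values()) / len(mean_gaps):.2f}")
print(f"Weighted overall gap: {weighted_score:.2f}")
```

With these invented numbers the weighted gap (-0.80) is worse than the simple average (-0.62), because the dimensions customers rate as most important (reliability, responsiveness) also show the largest negative gaps.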
It is possible to argue that the SERVQUAL methodology gives insight into some of the factors that constitute customer satisfaction. It is less evident that it is more useful than traditional user surveys for the purpose of improving services. But it is becoming more evident that the theory and practice of service quality is in some ways an approach that deserves attention.
There are some methodological concerns with the distribution of questionnaires in SERVQUAL investigations. First, respondents have to state both their expectations and their experiences, and these can be difficult to distinguish. Second, data are often collected by postal questionnaires sent to library cardholders; this method misses people who are former cardholders or potential new customers. A third problem is the way answers to the statements about expectations direct or influence answers to the statements concerning experience. This is an extremely important point that could invalidate at least part of a SERVQUAL investigation.
In a forthcoming paper in the MCB journal The Bottom Line: Managing Library Finances, I discuss other, more cost-effective sampling frames. In the future, sampling will probably be done primarily by means of electronic questionnaires, and this also applies to SERVQUAL investigations. We have all experienced students sending e-mail questionnaires, and much of this seems to be done in a random or haphazard way. We will need to build new sampling procedures and control mechanisms to safeguard the validity of future electronic surveys.
A search in EMERALD reveals the interesting fact that none of the library or information science-oriented journals has published on the SERVQUAL methodology. Nevertheless, I think it is fair to say that the methodology is a fragile one, and this implies the need for caution when importing it into a library setting from the business and service sector. It is in these areas that the debate and discussion about the strengths and weaknesses of the SERVQUAL instrument and its implications should focus.
This editorial, therefore, introduces two papers that should serve as a very useful introduction to SERVQUAL. The first, by Stewart Robinson, gives a broad overview of the current discussion about many aspects of SERVQUAL and is a very enlightening piece of work; this editorial has tried to supplement it with a few methodological issues. The second paper, by Philip and Hazlett, focuses on the use of scaling in SERVQUAL studies and raises important reservations about the traditional Likert scale in these types of studies. Together the papers should form a sound basis on which to decide whether a given library would want to engage in a SERVQUAL study.
Niels Ole Pors
Regional Editor, Library Link
Letters
We would welcome your thoughts on the above Editorial and on ways in which Library and Information Management On-line can help you to keep abreast of your field.
Patricia Layzell Ward
[email protected]
Articles
Measuring Service Quality: Current Thinking and Future Requirements
Robinson, S.
The publication of the first results of the SERVQUAL instrument provoked a debate on how best to measure service quality. In the more than a decade since the publication of those results, many researchers have attempted to demonstrate the efficacy, or otherwise, of the SERVQUAL instrument, or to develop their own measurement methods. This paper reviews the debate in relation to six key aspects: the purpose of the measurement instrument; the definition of service quality; models for service quality measurement; the dimensionality of service quality; issues relating to expectations; and the format of the measurement instrument. The main areas of agreement and disagreement are identified. As a result, the continued use of the SERVQUAL instrument is called into question, and areas for further research are identified.
Originally published: Marketing Intelligence & Planning 1999, Vol. 17 No. 1
Access the full text article as an Adobe PDF

The Measurement of Service Quality: A New P-C-P Attributes Model
Philip, G., and Hazlett, S.A.
Focuses on one of the most widely used service quality measurement scales, SERVQUAL, and looks at some of the areas of concern which have recently been raised regarding its viability as a comprehensive measurement tool for the service industry as a whole. While acknowledging the significant contribution that this model has made, it is suggested that it does not go far enough - the dimensions of SERVQUAL do not adequately address some of the more critical issues associated with the assessment of individual services. Having carried out citation analyses of both the 1985 and 1988 versions of SERVQUAL, it can be shown that although there is a plethora of published work in the marketing and retail sectors about its applicability, relatively little empirical work has been carried out in other service sectors. Indeed, more than one-quarter of all published papers where SERVQUAL was a major theme appear to have severe reservations about this scale. In place of the SERVQUAL scale, a model which takes the form of a hierarchical structure - based on three main levels of attributes: pivotal, core, and peripheral (P-C-P) - is proposed. This P-C-P model has the ability to span any service sector, since what is proposed is a skeletal framework within which to consider respective services. The authors are currently in the process of using this model for the empirical analysis of the quality of information which is provided by government bodies to the business community. The results of their empirical study will form the subject matter of the next paper in this series. This paper is, therefore, largely theoretical in nature, with the emphasis on a critical appraisal of the existing models in the service quality arena; it also describes the authors' own model to encourage discussion and debate among researchers, perhaps allowing them to make further refinements to the proposed model.
Originally published: International Journal of Quality & Reliability Management Vol. 14 No.3
Access the full text article as an Adobe PDF
