Guide to Questionnaires for Service Evaluation
This guide has been designed to be used for reference if you are considering using any form of questionnaire in your service evaluation. If you are in the BNSSG area please register your survey activity with email@example.com.
This guide will cover:
- Choosing to use questionnaires
- Defining objectives
- What format should it take
- Sampling and involvement
- Questionnaire design
- Piloting your questionnaire
- Coding and scoring data
- Analysing data
- Presenting results
- Ethics, data protection and information governance
- Disseminating your findings
1. When should I use a questionnaire in my service evaluation?
A questionnaire may be useful to you if…
- Time and resources are limited
- You want to collect data from a large number of people
- You want a few standardised responses
- You want to find out about knowledge, beliefs, attitudes or behaviour
- Anonymity is important
- The Evaluator is unlikely to be present during data collection
- You want to collect data via post or online
- You want simple answers to basic questions
- You need mostly quantitative results
- You accept the limitations (see box 1)
Questionnaires are not the right choice if…
- You don’t yet know what the main issues or themes are
- You want detailed information
- You want to know about a complex or controversial issue
- Your respondents may need probing, or may not have a formed opinion
- Your population might struggle with reading or understanding the questions
- Your population cannot use or access a computer (for online surveys)
Box 1. Advantages and Disadvantages of Questionnaires

Advantages:
- Quick, cheap and easy to administer
- Can measure behaviour, attitudes, preferences, opinions and intentions
- Can be used quasi-experimentally, e.g. to measure change before and after an intervention
- Evaluator does not need to be present during data collection
- Variety of question formats, e.g. lists, grids, scales, ranking
- Can collect qualitative and quantitative data
- Can be analysed more ‘scientifically’ than many other forms of social research method
- Data can be collected face-to-face, over the phone, on paper, or electronically
- Software such as SurveyMonkey is available

Disadvantages:
- Lower response rate than some other methods; sample easily skewed
- Can lack validity and reliability
- Social desirability, poor recall, or misunderstandings can result in response bias
- Questions can be leading or misleading
- Long questionnaires can be off-putting; short ones may not gather enough information
- Data protection, information governance and ethics must be addressed
- Hard-to-reach groups can be excluded; accessibility issues; language can be a barrier
- Not sensitive instruments; they will not tell you about emotions, feelings, or relationships
- Errors can be made in coding, data entry and data analysis
- You cannot usually go back to respondents for clarification
2. Defining Questionnaire Objectives
To stay relevant and coherent, you need to be clear what you want your questionnaire to achieve. Every question must justify its place in the final cut.
It may help to reflect on what you are interested in and write an objective for each theme. It is also important to consider what the results will be used for, for example:
- Service design
- Service improvement
- Monitoring data
- Managing a message
- Assessing knowledge and understanding
- Outcome measures
- Staff/ Patients/ Public Experience measures
- Satisfaction measures
- Service usage data
3. What format should I use?
There are four main formats that you might consider when deciding how to administer your questionnaire. You may choose more than one approach. To an extent, the format of your questionnaire will depend on the type of questions you want to ask:
Open-ended questions (e.g. “How do you feel about using questionnaires?”)
- No set response
- No standardised responses
- Can generate discussion
- Mostly qualitative data
Closed-ended questions (e.g. “Do you like questionnaires?”)
- Not sensitive to nuance
- Can standardise responses
- Can compare pre- and post-intervention
- Mostly quantitative data
Postal questionnaires
This is when you post your questionnaire to your sample population and hope that as many of them as possible complete it and send it back to you.
Best suited to large (or very engaged and primed) populations, and to closed-ended questions that have either been validated or thoroughly piloted for sense and efficacy. Relatively cheap and easy to administer. Include a pre-paid reply envelope with mail-outs.
Data quality can suffer from a low response rate, response bias, recall errors, or people not understanding the questions. It is labour-intensive to code and enter the data. Layout, design, ease of use, and question phrasing are critical to an effective postal questionnaire.
Online questionnaires
The BNSSG ICB R&E team has enhanced access to online surveys and can help you formulate and set up your online survey, and give you direct access to the results. Please note this service is only available if you are providing a service commissioned by the ICB. Contact firstname.lastname@example.org if you would like to use this service.
Online questionnaires are very cheap and quick to administer and have the potential to reach vast numbers of people. Data collection and analysis is automated, which saves lots of time. They are clear and easy to use, and the software can help with question planning. You can develop your questionnaire and view results online or via an app from any mobile device.
However, you will only get responses from people who can read and understand the questions, as well as who have access to a computer. You are unable to follow up on respondents unless you have specifically requested their contact details, which they may choose to withhold. You cannot follow up on non-responses.
Telephone questionnaires
An alternative approach is phoning people and asking for feedback.
This format can be cheap and quick, although you will still need to code and enter the data, so it is somewhat labour-intensive. You are likely to get a good response rate, and can build a rapport with the respondent.
Best suited if you have a small number of people in your sample, and anonymity is not an issue. Useful for people for whom literacy is an issue, although relies on being able to communicate well on the phone.
Suits both open- and closed-ended questions, although caution should be used when recording or transcribing responses.
Face-to-face questionnaires
This is when you run through your questions directly with the respondent. The response rate will be good, although people may not answer as thoroughly as they would if their anonymity could be guaranteed.
In an NHS service evaluation context, this approach can be useful if you just have one or two questions that can be added into assessments or provider monitoring data. Face-to-face questionnaires can be used for both open- and closed-ended questions.
If you feel that face-to-face questionnaires are the best method for you, you would be advised to re-visit the criteria in section 1 of this guide as it may actually be that an interview or focus group is a more appropriate method for what you want to do.
When deciding which format will best suit your needs, consider which of the following matter most to you. It may be that you want to put your questions into more than one format to get the best and most thorough response.
- Detailed, open-ended questions
- Rapport with respondents
- Little staff time required
- High response rate
4. Sampling and Involvement
Sampling is a statistical approach to identifying how many respondents from each demographic are needed to make findings generalisable and valid.
Formal sampling techniques are less important in service evaluation than in research, because you are looking to assess what standard a service achieves, rather than produce new and generalisable knowledge. However, you will not know who has completed your questionnaires so, even in service evaluations, it is important to be mindful of response bias and that the respondents may not reflect the general population.
Furthermore, commitment to the Public and Patient Involvement (PPI) agenda means that you must make overt efforts to ensure that your questionnaire is equitable and accessible. As well as statutory drivers, meaningful PPI has been shown to improve the quality and impact of survey findings.
It is worth noting the difference between participation, where people take part, for example by answering a questionnaire, and involvement, where people are actively involved, for example in developing questions or commenting on patient information leaflets.
Some basic and pragmatic good practice principles that we recommend include:
- Collecting and reviewing equality data with your questionnaire.
- Putting all documentation aimed at patients and/or public through a Plain English review.
- Having Lay Representation on any steering groups or advisory boards.
- Providing your questionnaire in different formats and languages, or using a translator if appropriate
As well as contacting your organisation’s PPI and Equality Leads to ensure consistency with current work streams and local guidance for involvement, the following link offers tools to support PPI in questionnaire design: hisengage.scot/equipping-professionals/participation-toolkit
5. Questionnaire Design
Writing a concise introduction to your questionnaire not only ensures that you have identified your objectives clearly, but helps secure informed consent from respondents.
Your introduction should detail:
- What the questionnaire is for
- Who should answer it
- What arrangements, if any, have been made to ensure anonymity
- How the data will be stored
- How to access the questionnaire in other formats
- How to return it
- How long it is likely to take to complete
- How long the questionnaire is open for/ deadline for responses
- Whether respondents need to answer all questions, or just the ones they think are relevant to them.
- Thanks for their participation
- When and how respondents can get feedback
- How respondents can get further information
Identifiers are unnecessary if your questionnaire is completely anonymous, but you may consider using a Unique Confidential Identifier if you want to collect data from identifiable respondents to be submitted confidentially. For example, a service provider might collect the questionnaire data from the patient and add a unique confidential identifier before submitting the data to the ICB or CSU for analysis. This would enable follow-up or sense-checking with the respondent at a later date if there are queries, and would improve the quality of the data.
It would be impossible in the scope of this guide to synthesise the vast amount of theoretical and practical resources that are available on developing questions for surveys. So, we have included some generic principles and recommendations, as well as a selection of links that may help.
Don’t reinvent the wheel
Keep in mind that question development is much harder than it looks, and if you can possibly avoid writing your own questions, do so. Doing some scoping to try to identify tried and tested tools can save you a lot of time. Some you may need to pay to use. You can cherry-pick questions that suit your needs.
Online survey tools such as www.SurveyMonkey.com have a database of customisable questionnaire templates for various measures, supported by a question bank: https://www.surveymonkey.com/mp/certified-survey-questions/.
In addition, your Library Services or Public Health Evidence team can help you identify what is already available, and give advice on how to appraise it effectively.
Sources of validated survey instruments:
- Local surveys and tools
- The Picker Institute
- Measuring health status
- Adult Carer’s Quality of Life
- Patient Reported Experience Measures
- Social Research data and question bank
You need to be very clear what you want the respondent to do throughout the questionnaire. Specify whether you want them to tick a box, circle an option or write freehand. Do not assume that people understand your scales without explanation. Piloting your questionnaire (see section 6 below) is very important to ensure the instructions are clear and sufficient.
The questionnaire should have a logical structure and sections should flow coherently. Group questions together according to their focus.
Always number sections and questions clearly.
Clear, unambiguous question wording is absolutely critical to ensure that your questionnaire makes sense. Key things to remember are:
- Avoid jargon
- Keep sentences short and uncomplicated.
- Avoid double negatives
- Avoid double-barrelled questions. “Do you agree that questionnaires are the cheapest and best way to evaluate health services?” is impossible to answer with a yes/no.
- For paper or electronic questionnaires, use mostly closed-ended questions
- Be clear whether you want people to tick a box, circle an answer, or rate on a scale.
- Avoid leading questions that may fall victim to social desirability bias. For example, “Overuse of A&E services is costing the NHS millions of pounds a year. Did you seek advice from any other healthcare service before coming to A&E today?” may not elicit a truthful response.
- Use a mix of positive and negative statements/questions.
- Limit response options. Categorise if necessary.
- Don’t use acronyms or abbreviations
- Be specific about timeframes – ‘recent months’ is open to interpretation; ‘during June and July 2014’ is better.
The Plain English campaign offers a selection of guides that can help you write clearly for a wide audience: http://www.plainenglish.co.uk/free-guides.html.
It is critical that your questionnaire layout is easy-to-read, inviting and clear.
- Using coloured paper has been shown to increase response rate
- Follow your organisation’s style guide for logos
- Use Arial 12 as standard; response boxes should be 20 point.
- Use larger or bold font for headings or instructions
- Use shading to group sets of similar questions
- Leave lots of blank space
- Use comment boxes rather than dotted lines for freehand comments.
- Proofread thoroughly (ideally have someone else do it).
- Check that your formatting is perfect.
- Keep your questionnaire as short as possible.
- Be mindful of raising expectations
Using scales in questionnaires can be helpful as they can measure strength of attitudes, rather than simply ‘yes/no’. Scales are relative and conclusions should be drawn with caution.
Commonly used scales include:
- Likert Scales: respondents are asked to rate their strength of feeling towards a statement or question. Simple to use and easy to code; although you rely on people reading the questions properly, understanding the scale and not simply ticking the neutral option throughout. Use a variety of positive and negative statements throughout. There are techniques to test data quality, such as including two contradictory statements and comparing the response.
- Semantic Differential Scale: Allows respondents to rate statements on a number of different dimensions. Requires concepts to be mutually exclusive.
- Ranking: Asking respondents to rank items in order of preference can be a useful way of identifying local priorities that can feed into service design or re-design. Best delivered in face-to-face questionnaires with options on individual cards to be sorted. Can be done in other questionnaire formats by assigning numbers, although take care with the wording, as instructions are easy to misunderstand. Electronic questionnaires can let people drag web-based icons into a ranked order.
- Visual Analogue Scale: Measures a characteristic or attitude across a visual continuum. It is a very subjective measure and unlikely to be of use in service evaluation, as it is hard to interpret. Mostly used in psychometric testing to compare changes in an individual after an intervention.
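Where a Likert battery mixes positive and negative statements, negatively worded items are usually reverse-scored before responses are combined, so that high codes mean the same thing throughout. A minimal illustrative sketch in Python, assuming a 5-point scale and invented item names:

```python
def reverse_score(code: int, scale_max: int = 5) -> int:
    """Flip a Likert code on a 1..scale_max scale, so that agreeing with
    a negative statement lines up with disagreeing with a positive one
    (e.g. 2 becomes 4 on a 5-point scale)."""
    return scale_max + 1 - code

# Illustrative item names; 'q3' stands for a negatively worded statement.
NEGATIVE_ITEMS = {"q3"}

responses = {"q1": 4, "q2": 5, "q3": 2}
aligned = {
    item: reverse_score(code) if item in NEGATIVE_ITEMS else code
    for item, code in responses.items()
}
# aligned now holds {"q1": 4, "q2": 5, "q3": 4}
```

This alignment step also underpins the data-quality technique mentioned above: two contradictory statements should give roughly mirror-image codes once one of them is reverse-scored.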
6. Piloting your Questionnaire
Piloting your questionnaire, and building in time to make revisions and re-test, is an essential step in developing a useful tool.
Piloting will help you identify:
- Confusing questions or instructions
- If the questions mean the same thing to all people
- If the design logic works
- How long it takes to complete
- Which parts need refining
- If you can analyse the results effectively
You can pilot your questionnaire with:
- Lay representatives on your advisory board
- Patient Participation groups/ Patient Reference groups
- Friends and colleagues
- Field experts
- BNSSG R&E team staff
- A small sample of your target population.
7. Coding and Scoring Data
When entering the data for analysis, it is useful to assign each questionnaire an ID number so that you can cross-reference the dataset to the questionnaire in future if you want to check for errors.
You will want to consider how to code your data as you develop your questionnaire. To an extent, this will depend on how you will collate and analyse the results.
If you are using Excel, you will simply need to enter your question numbers into the spreadsheet and develop a consistent coding mechanism for all the possible answers, for example, Yes=1, No=2, Don’t know=3.
This is particularly important when using scales. For example, if 1= very satisfied and 5 = very dissatisfied for one question it should be the same for all questions about satisfaction.
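The same coding principle can be sketched in code. This Python example is purely illustrative and not part of the guide; the codebooks mirror the Yes=1, No=2, Don’t know=3 convention above, and unrecognised answers come back as None so they surface as missing data rather than being silently miscoded:

```python
# Illustrative codebooks; the numeric values follow the convention above.
RESPONSE_CODES = {"Yes": 1, "No": 2, "Don't know": 3}
SATISFACTION_CODES = {       # 1 = very satisfied ... 5 = very dissatisfied,
    "Very satisfied": 1,     # applied identically to every satisfaction
    "Satisfied": 2,          # question so results stay comparable
    "Neither": 3,
    "Dissatisfied": 4,
    "Very dissatisfied": 5,
}

def code_response(answer, codebook):
    """Return the numeric code for an answer, or None if it is
    missing or unrecognised."""
    if answer is None:
        return None
    return codebook.get(answer.strip())

coded = [code_response(a, RESPONSE_CODES) for a in ["Yes", "No ", "Unsure", None]]
# coded is [1, 2, None, None]
```

Keeping all codebooks in one place like this makes it easy to check that the same scale direction is used across every question.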
If you are using a statistical package such as SPSS, or a database such as Access, to enter your data, you will need to determine which pre-set code types you wish to use. These can include:
- Text, e.g. for recording open-ended questions verbatim.
If you are using an online tool like SurveyMonkey, the coding is done automatically.
Large-scale evaluations taking place over a number of sites may want to score responses in order to
- Simplify interpretation
- Summarise responses to a single figure within accepted confidence intervals
- Compare results
- Aggregate results to a single composite score
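As a purely illustrative sketch (real scoring schemes may involve weights and confidence intervals, and need specialist guidance), a composite can be as simple as the mean of a respondent’s answered items:

```python
import statistics

def composite_score(item_scores):
    """Mean of the answered items for one respondent; None answers are
    treated as missing and excluded, rather than scored as zero."""
    answered = [s for s in item_scores if s is not None]
    return statistics.mean(answered) if answered else None
```

For example, composite_score([4, 5, None, 3]) gives 4, the mean of the three answered items; a respondent who skipped everything scores None rather than a misleading zero.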
Scoring of questionnaires is a complex process, and most service evaluations done at ICB level are unlikely to require it. If you feel that you need to score your questionnaire, you would be advised to contact your organisation’s analytics team for targeted support beyond the scope of this guide.
8. Analysing Data
Exploratory data analysis
Eyeball the data to look for anomalies such as incomplete data sets, data entry errors, or responses that look so inconsistent that you may want to double-check or discard them.
Look at the data and attempt to ‘tell the story of the service’ in a few short paragraphs, for example, 80% of people aged 50 and over visited their GP during the week, with 20% accessing out of hours services.
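On larger datasets, the ‘eyeballing’ step can also be done programmatically. A minimal sketch, assuming coded responses with invented question names and valid code ranges:

```python
# Illustrative coded data: None marks a missing answer; 9 is a typo
# outside the question's valid scale.
rows = [
    {"id": 1, "q1": 1, "q2": 3},
    {"id": 2, "q1": None, "q2": 2},
    {"id": 3, "q1": 2, "q2": 9},
]

# Assumed valid code ranges: q1 takes 1-3, q2 takes 1-5.
VALID = {"q1": range(1, 4), "q2": range(1, 6)}

def find_anomalies(rows, valid):
    """List (respondent id, question) pairs with missing or out-of-range
    codes, for double-checking against the original questionnaires."""
    flagged = []
    for row in rows:
        for question, ok_codes in valid.items():
            if row[question] is None or row[question] not in ok_codes:
                flagged.append((row["id"], question))
    return flagged
# find_anomalies(rows, VALID) returns [(2, "q1"), (3, "q2")]
```

The respondent IDs recommended in section 7 are what make this kind of cross-referencing back to the paper forms possible.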
Deriving the main themes
Re-visit your objectives. Look at what data you have and make a judgement on standards within the frame of that objective. Be mindful that correlation does not mean causation. Do not overstate your results and be aware of the limitations of your study.
9. Presenting Results
In most service evaluations, the questionnaire data will just be part of the picture in assessing the standard that the service achieves.
Quantitative questionnaire data lends itself particularly well to visual representations of data such as graphs or pie charts. Logical and well-labelled tables are also useful to present the data while avoiding reams of text.
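Before charting, coded answers are usually summarised into counts and percentages. A small illustrative Python sketch (the codes and data are invented):

```python
from collections import Counter

answers = [1, 1, 2, 1, 3, 2, 1, 1]  # e.g. 1 = Yes, 2 = No, 3 = Don't know

def percentage_summary(codes):
    """Percentage of respondents giving each code: the figures a
    bar chart, pie chart or summary table would display."""
    counts = Counter(codes)
    total = len(codes)
    return {code: round(100 * n / total, 1) for code, n in sorted(counts.items())}
# percentage_summary(answers) returns {1: 62.5, 2: 25.0, 3: 12.5}
```

Presenting percentages alongside the raw counts (and the total number of respondents) avoids over-stating results from small samples.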
If you are using an online tool like SurveyMonkey, the results are presented graphically and tabulated automatically.
Key things to remember when choosing what data to present are who will be reading it and what it will be used for.
Refer to local style guides.
10. Ethics, Data Protection and Information Governance
Service Evaluations do not have to seek formal ethical approval from NHS Ethics Committees. However, it is best practice to assess the activity for ethical implications regardless.
- Equality Impact Assessment: refer to your local policies and/or Equalities Lead.
- Anonymity and Confidentiality: be clear on what you are offering.
- Ensure all data collection and storage complies with local Data Protection and Information Governance procedures e.g., http://www.protectinginfo.nhs.uk/
- Include information on where to get support or feedback.
- ‘Do No Harm’ principle
- Informed consent is more than telling people what the questionnaire is about. You should consider whether to contact potential respondents first to see if they are happy to receive the survey instrument, particularly if it is a sensitive or controversial topic.
- Consider the inclusion of equal opportunities monitoring datasheets
- BNSSG R&E team are working with a group of stakeholders to develop ethical guidelines for service evaluations. These are available on the Evaluation Toolkit website.
11. Disseminating your Findings
There is no point in carefully planning your questionnaire and evaluation as a whole if nobody is aware of the findings and your report is just filed away. Approaches to dissemination include:
- Sharing your report with steering groups and management committees
- Publishing summary findings and recommendations in staff newsletters, organisation intranets and website.
- Discussing a press release with the Communications team
- Using a Seminar platform such as the BNSSG ICB R&E team’s Lunchtime sessions.
- Writing up for a journal
- Poster presentations at conferences.
 Statutory drivers for PPI
- Section 242(1B) of the National Health Service Act (2006), as amended by the Local Government & Public Involvement in Health Act 2007, is commonly known as the ‘Duty to Involve’ clause and mandates involvement in decisions that will affect services from the patients’ perspectives. http://www.uhbristol.nhs.uk/for-clinicians/patient-surveys,-interviews-and-focus-groups/section-242-the-duty-to-involve/
- The Health and Social Care Act (2012) (13H) is clear that ICBs have a duty to promote involvement of each patient, carer and their representatives in decisions that relate to prevention or diagnosis of illness, and their care or treatment.
- The NHS Constitution (2013) explicitly states that individuals: a) have the right to expect NHS bodies to monitor, and make efforts to improve continuously, the quality of healthcare they commission or provide, including improvements to the safety, effectiveness and experience of services; and b) have the right to expect the NHS to assess the health requirements of their community. http://www.nhs.uk/choiceintheNHS/Rightsandpledges/NHSConstitution/Documents/2013/the-nhs-constitution-for-england-2013.pdf#page=3&zoom=auto,-18,842
There is some evidence that respondents are more likely to disagree with a negative comment than to agree with a positive one. Cohen, G., Forbes, J., and Garraway, M. Can different patient satisfaction survey methods yield consistent results? Comparison of three surveys. British Medical Journal. 1996; 313: 841–844.
Version 2, Jude Hancock, October 2020