Research Methods | Data Collection
For all types of research it is important to plan the method of data collection at an early stage, and you should always aim to pilot your data collection method before using it “live”. We have concentrated here on questionnaires and interviewing: two data collection techniques used widely in pharmacy practice research.
If your research project is going to involve the use of a questionnaire, then it’s essential to put time and effort into getting the format right. The first step is to determine precisely what information you need to know, while thinking carefully about your hypothesis. Although it can be important to include relevant background questions, you need to make sure that the questions reflect the aims of your project and that you don’t collect unnecessary data. Attempt to answer your own questions. Do they relate to your hypothesis? If they don’t, get rid of them.
You should be aware of the problems caused by questions that create an attitude where none existed before (known as ‘rarification’), and you must also remember that what people tell you in answer to a question does not always reflect their actual behaviour. These are two of the most common mistakes in questionnaire design. The Pharmacy Practice Research Resource Centre produced a very useful bulletin on designing and administering questionnaires. We are grateful to the Department of Health for allowing us to reproduce some of the content here.
Decide on the response format. Is it going to be open or closed? A closed question provides a number of alternative answers from which a choice has to be made; an open-ended question allows the respondent to formulate their own answer. There is no right or wrong approach. Your decision will depend on respondent motivation, the method of administering the questionnaire, the topic covered, and the expertise and time available to develop a good set of unbiased responses. Each format has advantages and disadvantages.
There are a variety of approaches in the use of closed questions. The following are examples of the ones most frequently employed:
- Two-way Question: Here there are only two alternatives: yes/no, good/bad, for/against and so on.
- Likert Scale: Here you provide your respondents with statements and ask them to agree or disagree. The scale usually comprises five points, but there may be fewer or more. Whether to offer a middle ‘no opinion’ type category is an issue of some controversy. Scales generate ordinal data, and this has implications for statistical analysis. If you label all the points on your scale, be wary of “silly” labels such as asking respondents to choose between “Very important – important – don’t know – unimportant – totally unimportant”: this is simply a three-point scale dressed up as a five-point one.
- Semantic Differential Scale: Here you provide the respondent with a scale featuring a pair of diametrically opposed adjectives at either end. The respondent puts a mark somewhere between the two extremes. The number of points on the scale can vary but no more than ten points are usually recommended. Be aware of the phenomenon known as ‘the error of central tendency’ - respondents are often afraid of using the extreme categories.
- Checklists: Here the respondent has a set of items and is asked to circle/tick each relevant one. All responses will have to be tabulated, so consideration of the analysis stage is again very important with this choice of format.
- Ranking: A set of items is provided and the respondent is asked to list them in order of preference, importance, merit, etc. No more than ten items should be ranked. Remember that ranking does not tell you anything about the distance between the ranks.
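The point that Likert scales generate ordinal data can be made concrete with a short sketch. The labels, responses and coding below are hypothetical examples, not taken from any real study: because the “distance” between adjacent points is not defined, ordinal scores are usually summarised with the median and frequencies rather than the mean.

```python
# Minimal sketch: coding 5-point Likert responses as ordinal data.
# Labels and responses are invented for illustration.
from statistics import median

LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "disagree",
             "agree", "neither agree nor disagree"]
scores = [LIKERT[r] for r in responses]

# Ordinal data: report the median (and a frequency table),
# not the arithmetic mean.
print(median(scores))
print(sorted(scores))
```

The same caution applies to ranking data: ranks say nothing about how far apart two items are, so averaging ranks should be treated with the same scepticism as averaging Likert points.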
Wording the Questions
You will need to make sure that questions are clear, unambiguous, and useful. The wording is fundamental to both the validity and reliability of any study. Always pilot and evaluate your questions first. Keep the following points in mind:
- Don’t use jargon or abbreviations.
- Keep questions simple and as short as possible.
- Don’t use vague terms. Avoid ambiguity. Be precise.
- Avoid ‘loaded’ or ‘leading’ questions that hint at the answer you want to hear.
- Avoid ‘double-barrelled’ questions that ask more than one thing at once.
- Avoid ‘double-negative’ questions.
- Use common concepts.
- Take care over questions that involve memory/recall.
- Hypothetical questions need to be worded especially carefully. Are they really relevant to what you’re researching? If so, why do you need a hypothetical scenario? Are you being “real world”? Can your question be misinterpreted?
- Take care when covering embarrassing or sensitive issues.
- Avoid using negative words or implicit negatives as this might bias your responses.
- Avoid ‘presumption’ questions: do not assume that everyone practises at the same level or has the same standards.
- Watch out for prestige bias in the question: even if the responses are anonymous, respondents may be wary about portraying themselves in a bad light.
Appearance and Layout
Good design can be helpful in increasing your response rate. Appearance, layout and length will depend on how you are going to administer the questionnaire. How much money you have available for your research will obviously have a bearing on this, but there are a variety of ways to improve the appearance of a questionnaire. Here are a few points to consider:
- Make the questionnaire attractive
- Use space generously; avoid a cramped, untidy appearance
- Make headings and instructions clear
- Use colours in your text, or coloured paper, to make it more appealing
- Make sure the method of answering is obvious
- Include code boxes if necessary, making sure they don’t interfere with readability
- Don’t split a question between two pages
- Number all questions
- Take care over question order. Generally start with broad, straightforward ones and include more complicated, specific or sensitive ones later
- The questions should proceed in a logical manner
- Vary the question format to add interest
- End the questionnaire with a “Thank you” and give a clear deadline for responses.
Piloting the Questionnaire
It is absolutely crucial to pilot your questionnaire. You will want to test how long it takes to complete, check that all questions and instructions are clear, and try to expose any items that will not generate usable data. Piloting will also develop your interviewing skills, so ideally respondents in a pilot should be as similar as possible to those in the main study. In practice it is not always possible to pilot on your real audience; if not, try the questionnaire out on friends or colleagues. You could ask them:
- How long did it take to complete?
- Were the instructions clear?
- Were any questions ambiguous?
- Were any questions objectionable?
- Was the layout clear and easy to follow?
- Were any topics omitted?
Reliability, validity and bias
Reliability in questionnaire studies relates to the ability of your tool to produce the same results if you tested it five times over. It is difficult to achieve in practice – consider testing the questionnaire on a small number of the study sample twice, several weeks apart (‘test–retest’). Threats to reliability in questionnaires include ambiguous questions and excessive length. Reliability is more likely if the respondent devotes a consistent degree of concentration and interest throughout.
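The test–retest idea above is often quantified by correlating the two sets of scores; the sketch below uses Pearson's correlation coefficient, one common choice (the text does not prescribe a particular statistic, and the scores are invented for illustration). Values close to 1 suggest the questionnaire gives stable results over time.

```python
# Minimal sketch of test-retest reliability: correlate total scores from
# two administrations of the same questionnaire, several weeks apart.
# The scores below are invented for illustration.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [12, 18, 9, 15, 20, 11]   # total scores, first administration
test2 = [13, 17, 10, 14, 19, 12]  # same respondents, weeks later

r = pearson_r(test1, test2)
print(round(r, 2))  # close to 1 = stable tool
```

Note that if your items are Likert-type (ordinal), a rank-based coefficient such as Spearman's may be a more defensible choice than Pearson's.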
Validity in questionnaire studies is the extent to which the questions provide a true measure of what they are designed to measure. The purists would argue that there are many different types of validity in such studies but the key things that you need to consider are that the questions are clear and likely to produce accurate information, and that the full scope of the area that you intend to measure is covered by your tool.
In addition to the design of your questionnaire, threats to reliability and validity can creep through the introduction of bias in your study protocol. If you use an interviewer, rather than have the questionnaire self-completed, there is a high potential for interviewer bias. Interviewers can influence the responses in many ways, even by their tone of voice. A ‘response effect’ can arise, for example, out of the eagerness of the respondent to please the interviewer or from a tendency by the interviewer to seek out answers that support preconceived notions. It is far easier to ‘lead’ in an interview than it is in a questionnaire. The advantage of using an interviewer is flexibility. Interviewers can probe deeper, build rapport, put respondents at ease and keep them interested. An interviewer will need to be trained, however.
If the interview is unstructured or semi-structured, reporting respondents’ answers verbatim is vitally important. The timing and venue of an interview can also lead to bias. With a self-completed questionnaire, interviewer bias is obviously eliminated, but other types of bias can creep in: for example, respondents can answer questions out of the intended order, or other people, for whom the questionnaire was not intended, can answer instead.
Response bias is more pronounced with self-completed postal questionnaires, since non-response is not a random process. Knowing who your non-respondents are is vital to any judgement about possible bias. Ideally your response rate should not be lower than 66%. Reminders and second questionnaires will increase the response rate.
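The two-thirds guideline above translates into a very simple check you can run as returns come in. The figures below are invented for illustration:

```python
# Minimal sketch: checking a postal survey's response rate against the
# rough 66% guideline mentioned above. Figures are invented.
sent = 300
returned = 172

rate = returned / sent
print(f"{rate:.0%}")

# Below roughly 66%, plan reminders and second questionnaires,
# and try to characterise your non-respondents.
needs_follow_up = rate < 0.66
print(needs_follow_up)
```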
Administering the Questionnaire
By post? By you? By another interviewer?
If you decide on a postal survey, include an SAE (stamped addressed envelope); it will improve the response rate. Include a covering letter explaining the purpose of the study. Give a guarantee of confidentiality and/or anonymity and tell the respondent how s/he was selected. If you have official ethical approval, say so. Usually it is advisable to give as much information in the letter as possible. Say when you would like the questionnaire returned. Decide what you will do about non-respondents before sending questionnaires. You will not be able to send reminders if responses are anonymous, or if that promise was made.
A very useful guide to questionnaire design from Leeds University can be found in pdf format at http://www.leeds.ac.uk/iss/documentation/top/top2. This has not been written specifically in the context of health research but it is clearly written and covers the most important do’s and don’ts of good questionnaire design.
There is also an excellent collection of more detailed web materials on the use of questionnaires in research collected by the University of British Columbia.
In addition, Oppenheim’s Questionnaire design, interviewing and attitude measurement (London: Continuum; 1992) is a very detailed text that covers all aspects of questionnaire design.
Interviewing

Interviews are the most commonly used data collection technique in qualitative health care research. Whether you choose to interview participants on an individual or group basis will depend upon your research question.
Interviews performed on a one-to-one basis may be loosely classified as structured, semi-structured and unstructured.
During structured or standardised interviews, the researcher administers a questionnaire, usually with fixed responses, to which the participant responds. For example, the researcher may ask ‘Would you describe your medication compliance during an average week as excellent, good, fair or poor?’ One of the main advantages of a researcher-administered questionnaire is that the response rate will be improved compared with its self-completion counterpart. In addition, this technique is useful for participants unable to read and/or write. The disadvantages of this approach compared with a self-completion questionnaire include the potential for ‘interviewer bias’ and the increased cost.
Semi-structured interviews can be used if the researcher knows enough about a particular topic to develop questions in advance of interviewing. The interview is normally based upon a schedule of mainly open questions to enable aspects of a particular topic to be explored in detail. Usually the researcher will ask the same questions of all participants, although not always in the same order, prompting as necessary.
The aim of an unstructured interview is to develop ideas and research hypotheses rather than to collect facts and statistics. Unstructured interviews may open with the researcher asking one general open question such as ‘Can you tell me about your experience of using insulin for your diabetes?’ Further questions asked by the researcher will depend upon the participant’s response, and should mainly seek clarification or probe for more detail. The unstructured interview is concerned with the way people think and feel about the research topic in question.
Preparing to interview
Many authors have written in depth about proper interview conduct (see Further Reading), so only the main points will be covered here.
In designing your interview schedule for semi- and unstructured interviews, generally the open questions should come before the closed questions; if you start with the closed questions this will channel your participants through one route, thereby missing the broader issues that matter to them.
Try to avoid leading questions where participants may be able to guess the response they believe you are seeking. This is important when asking the interview questions as well as when probing; it’s difficult to think of non-leading probes ‘on the spot’, so prepare a list beforehand.
Allow enough time for all the issues to be explored thoroughly; don’t rush your participants. Give yourself at least two hours for an unstructured or semi-structured interview and let the participant know that they have this time and won’t be rushed. Check that they are prepared to give you the time needed as well; don’t say “Can I pop in quickly to interview you?” and expect them to give you two hours!
Minimise distractions as far as possible. This is to ensure that the participant is able to share their experiences and beliefs with you uninterrupted. Background noise such as a radio may also adversely affect the quality of your audiotaping.
Think carefully about the setting of the interview. In pharmacy practice research this could be at the bedside, in a doctor’s surgery or in a participant’s home. Try to make your interviewee as comfortable and as at ease as possible.
Beware of the assumption that any good listener can interview – if you’re going to be interviewing a great deal then book yourself on an interviewing skills course or arrange to shadow an experienced colleague. Run a few pilot interviews beforehand to practise setting up the audiotaping equipment and your questioning skills.
If your interview topic is of a sensitive nature (e.g. terminal illness, sexual behaviour), consider the ethical implications such as consent and confidentiality. Plan a strategy for what to do if the participant divulges certain information (e.g. relating to a crime), and also for ending the interview if the participant becomes distressed. The research ethics committee will look to see that you have thought about this.
During the interview
Pace the interview carefully; don’t rush in during uncomfortable silences, as your participant needs time to think and formulate answers. You may miss valuable information if you speak too early.
Listen. During a semi-structured or unstructured interview a participant may offer several ideas around a topic; if they are not prompted on all of these, you may miss valuable information. Experienced interviewers are also able to listen to what has not been said.
Focus groups

Focus groups are a special type of group interview where, rather than the researcher asking each person a question in turn, the group is encouraged to interact with one another, with the researcher simply acting as the facilitator. This group interaction enables participants to share experiences and ideas and allows their knowledge, attitudes and behaviour to be explored; the resulting data may be more useful than data collected through one-to-one interviews. In addition, focus groups may suit participants who would be intimidated by a one-to-one interview or feel that they have nothing useful to say – the group dynamic may encourage these participants to engage in the discussion.
A word of caution
Facilitating a focus group requires a skilled researcher – it is common to use a second researcher to take notes, ensure the audiotape is working and arrange refreshments.
Be prepared for some vocal participants dominating the discussion, and the need to bring in quieter members of the group.
Be aware of participants’ desire to conform with what they perceive as normal; what they say and what they think, particularly in front of a group of peers, may be entirely different.
Ensure that the composition of your group is appropriate; you may aim for a relatively homogenous group so that participants can discuss their shared experience, or alternatively a diverse group to explore different perspectives of an issue.
Consider the setting of your group carefully; some participants may find a hospital environment intimidating – you may find that a community centre is more appropriate.
As with one-to-one interviews take care to consider the ethical implications of using this technique if your research question is of a sensitive nature.
Running the group
If you choose to adopt this technique, the number of participants per group is generally between 4 and 10; you can have as many individual groups as you wish, bearing in mind the time it will take you to analyse each 1–2 hour transcript.
After registering your participants and discussing the relevant housekeeping arrangements, establish the group ground rules. These may include not talking over one another (bearing in mind focus groups are usually audiotaped) and turning off mobile phones. Ask each participant to introduce themselves with the audiotape running, to facilitate transcription of the discussion later on.
Describe the outline of the study and the concept of a focus group, encouraging participants to speak to one another, rather than the researchers. Introduce the questions to the group and try to ensure that the conversation is balanced, stays focussed on the topic in hand, but does not spend too long on any particular point. Some focus group facilitators choose to initially take a ‘hands-off’ approach, and, as the discussion progresses, take a more active role through challenging what has been said and encouraging further debate.
Reliability with respect to interviews relates to the reproducibility of the answers given – that is, if you performed the same interview ten times, would you get the same answers? Some researchers would argue that this isn’t the point in the qualitative field. However, the interviewer should certainly aim to clarify any inconsistencies or conflicting responses with the participant or group.
The validity of an interview is the extent to which the respondent’s opinions are truly reflected. Threats to the validity of interviews include the use of leading or ambiguous questions, or the researcher influencing the results either through their own preconceived ideas or through insufficient analysis of the respondent’s transcript. Consider using more than one data collection technique (e.g. a mixture of one-to-one and group interviews) to improve validity, as well as checking your final results with your interviewees and having two independent researchers perform the data analysis.
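When two independent researchers analyse the same transcripts, their agreement can be quantified. Cohen's kappa is one common agreement statistic for this purpose, though it is not named in the text above; the codes assigned below are invented for illustration. Kappa corrects the raw agreement rate for the agreement expected by chance.

```python
# Minimal sketch: agreement between two independent researchers who have
# each coded the same interview excerpts. Codes are invented examples.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same items."""
    n = len(coder_a)
    observed = sum(x == y for x, y in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Chance agreement: probability both coders independently pick a label.
    expected = sum(
        (freq_a[label] / n) * (freq_b[label] / n)
        for label in set(coder_a) | set(coder_b)
    )
    return (observed - expected) / (1 - expected)

a = ["barrier", "barrier", "benefit", "neutral", "benefit", "barrier"]
b = ["barrier", "benefit", "benefit", "neutral", "benefit", "barrier"]

kappa = cohens_kappa(a, b)
print(round(kappa, 2))  # 1.0 = perfect agreement; 0 = chance level
```

Items where the coders disagree are then discussed and resolved, which is itself a useful check on the clarity of the coding scheme.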
As with all qualitative methods, analysis of the data produced by interviews and focus groups is time-consuming and complex; see the section on qualitative data analysis for more.
Further Reading

If you’re just starting out:
Smith FJ. Conducting your pharmacy practice research project. London: Pharmaceutical Press; 2005.
Kitzinger J. Focus groups with users and providers of health care and Britten N. Qualitative interviews in health care research. Both in Pope C, Mays N, editors. Qualitative research in health care. 2nd ed. London: BMJ Books; 2000. (The chapters in this book are on the BMJ website for free).
For the more advanced:
Bowling A. Research methods in health. 2nd ed. Maidenhead: Open University Press; 2002.
Oppenheim AN. Questionnaire design, interviewing and attitude measurement. London: Continuum; 1992.
Morse JM, Richards L. Read me first for a user’s guide to qualitative research. London: Sage Publications; 2002.