
Study Design

Types of Study Design

Sampling and Recruitment

Reliability and Validity

Further Reading

Types of Study Design

Before you begin to make decisions about your study design it is important that you have read 'Taking the first steps' and considered whether your research question will be best answered using a quantitative or qualitative approach. The nature of your research question will also dictate the design of your study; different types of study design are appropriate for answering different types of research question.

We have briefly reviewed the most common types of study design below, but it is strongly recommended that you refer to the relevant further reading before you embark on any one course.

Service Evaluation Studies

If you are evaluating any aspect of your service and assessing whether you are 'doing the right things at the right time with the right people' then you are probably under the umbrella of service evaluation. A precise definition of service evaluation research doesn't exist, despite many authors' best efforts. However, Bowling's definition (see Further Reading) is worthy of mention – 'the systematic collection of research data to assess effectiveness of organisations, services, and programmes (e.g. health service interventions) in achieving predefined objectives'. So service evaluation research includes assessing the effectiveness of new or existing services, for example.

Service evaluation research is commonly encountered within pharmacy practice. Example questions that could be answered by service evaluation methods include ‘What is the impact of the MI enquiry answering service on patient outcome?’ or ‘Does MI training of resident pharmacists improve the quality of answers given out-of-hours?’

Service evaluation research is a distinct approach to enquiry but often uses a mixture of quantitative and qualitative data collection techniques. For example, if you wanted to explore how effective a new specialist MI centre (e.g. substance misuse) was, you could use a questionnaire to survey all the MI staff throughout the UKMi network to produce quantitative data about the number of responders that thought it was good/poor etc., as well as running several focus groups with MI staff, thus producing qualitative data.

Evaluation research is normally divided into either ‘summative’ or ‘formative’ approaches. Summative evaluation usually looks back after a new initiative has been implemented to assess whether it has been effective, whereas formative evaluation is more of an ongoing process that assesses the new initiative as it is implemented – it may suggest that the new initiative needs to change during implementation. Many authors use the following analogy to clarify the difference between the two: if a chef cooks some soup and tastes it as he goes along, adjusting the flavour as necessary then this is formative, yet for the dinner guests who only taste the end product, their assessment is summative – they have no opportunity to influence the end result.

If you are performing service evaluation research, it is important to appreciate that service developments in the NHS take place in the real world and that you will be unable to control aspects of the research process as tightly as you would if you were performing a randomised controlled trial, for example. It may be possible to have an intervention group that receives your new intervention or initiative, and a control group that doesn't, but this is sometimes very complex to arrange. In addition, if you are assessing the effectiveness of an existing service, you may run into ethical problems if you start to withdraw the service from existing patients or stakeholders. For example, to investigate whether the MI enquiry answering service reduced prescribing error rates, it would obviously not be ethical to close your department for one week and compare the error rates with a week when you were open.

For new interventions or initiatives you could consider a 'before-and-after' design, where quite literally you take some baseline measurements before your intervention is implemented (e.g. waiting times, patient satisfaction), then implement your new initiative, and repeat the measurements after a specified time period. The use of a control group not subject to the new initiative will help to ensure that if you see a result in your intervention group, it is not due to other factors (e.g. weather, improved staffing).
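To make the idea concrete, here is a minimal sketch of how such a before-and-after comparison with a control group might be summarised. All of the waiting times are made up for illustration and do not come from any real service.

    # A minimal sketch of a before-and-after comparison with a control group.
    # All waiting times (in minutes) are hypothetical, for illustration only.
    from statistics import mean

    intervention_before = [42, 38, 45, 40, 44]  # baseline, ward receiving the initiative
    intervention_after = [30, 28, 33, 29, 31]   # same ward after the initiative
    control_before = [41, 39, 43, 40, 42]       # control ward, baseline
    control_after = [40, 38, 42, 41, 40]        # control ward, no initiative

    # Change within each group over the same period
    intervention_change = mean(intervention_after) - mean(intervention_before)
    control_change = mean(control_after) - mean(control_before)

    print(f"Intervention change: {intervention_change:+.1f} minutes")
    print(f"Control change:      {control_change:+.1f} minutes")

    # If the intervention group improves much more than the control group, the
    # improvement is less likely to be due to other factors (e.g. weather, staffing).

If the control group shows little or no change over the same period, you can be more confident that any improvement seen in the intervention group is related to the new initiative.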

As highlighted above, it is important to get more advice about using this approach before adopting it as your chosen technique. For more information see Further Reading below.

Cohort Studies

Cohort studies (sometimes called longitudinal studies) are usually concerned with establishing the causes of disease and as such are not a common pharmacy practice method – however clinical research often uses this methodology so you will come across it. Cohort studies answer questions such as ‘Do oral contraceptives cause breast cancer?’ or ‘Does smoking cause heart disease?’

Using the latter as an example, a typical prospective cohort study design would take a group of children who had not yet started to smoke. The study would follow them as they chose whether to start smoking or remain a non-smoker. It would not be ethical to randomise children to smoking or non-smoking groups, which is why they are expected to make the decision themselves. In choosing whether to smoke or not, they are effectively putting themselves in either the ‘study’ group or the ‘control’ group. Both groups would then be followed for 30-40 years to observe whether they develop heart disease or not (see below).

It is important that the participants you enrol into your study (in this case the non-smoking children) are free from the disease you are looking for before they expose themselves to the risk factor in question (that is, smoking). Although in this example it is unlikely to be a problem, it is an important consideration.

If there is a positive association between smoking and heart disease (which obviously there is), then you would expect the number of participants developing heart disease to be higher in the smoking group.
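As a rough illustration of how such a result is usually summarised, the sketch below calculates a relative risk from a hypothetical 2x2 table. The counts are invented for illustration and are not taken from any real study.

    # A minimal sketch of summarising a cohort study result as a relative risk.
    # All counts are hypothetical, for illustration only.

    smokers_with_disease = 90        # smokers who developed heart disease
    smokers_total = 1000             # all participants who chose to smoke
    nonsmokers_with_disease = 30     # non-smokers who developed heart disease
    nonsmokers_total = 1000          # all participants who remained non-smokers

    # Risk (incidence) of heart disease in each group over the follow-up period
    risk_exposed = smokers_with_disease / smokers_total
    risk_unexposed = nonsmokers_with_disease / nonsmokers_total

    # Relative risk: how many times more likely the exposed group was to develop the disease
    relative_risk = risk_exposed / risk_unexposed

    print(f"Relative risk = {relative_risk:.1f}")
    # A relative risk greater than 1 suggests a positive association between
    # the exposure (smoking) and the outcome (heart disease).

Because a cohort study starts with a disease-free population, the risk of disease in each group can be calculated directly – something that is not possible in the case-control design described below.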

However, the major disadvantage of using a cohort design is time. Participants often need to be followed for many years to observe whether they develop the disease or not. To address this problem, rather than following participants forward in time, some researchers try to use historical information to establish whether a link exists between a disease and exposure to a particular factor.

Using our example, a 'retrospective' cohort study would try to identify a defined group of non-smoking children, such as a class or year group from the 1980s, who may have been interviewed at a later date (e.g. during the 1990s) about their smoking behaviour. Researchers could then – in 2006 – assess whether they had developed heart disease.

Cohort studies are subject to bias, such as loss to follow-up, which is a particular problem given the long timeframe of these studies.

For more information see Further Reading below.

Case-control Studies

Case-control studies, like cohort studies, are also concerned with the causes of disease. However, case-control studies start with a group of patients with a certain disease ('cases'), find a similar group of people without the disease ('controls') and look backwards in time to see if exposure to a particular factor may be related to developing the disease in question.

For example a researcher who wanted to know whether NSAIDs were associated with myocardial infarction (MI), could take a group of patients with a recent MI (the cases) and a group of very similar people, but without an MI (the controls), and look backwards through their medical records to see if they had been exposed to NSAIDs. If there was an association between NSAID use and MI, then the researcher would expect to see that more of the cases had been exposed compared to the controls.
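A rough sketch of how such a result is usually summarised is shown below, as an odds ratio calculated from hypothetical counts. The numbers are invented for illustration only.

    # A minimal sketch of summarising a case-control result as an odds ratio.
    # All counts are hypothetical, for illustration only.

    cases_exposed = 60        # MI patients whose records show NSAID use
    cases_unexposed = 140     # MI patients with no recorded NSAID use
    controls_exposed = 40     # controls with recorded NSAID use
    controls_unexposed = 160  # controls with no recorded NSAID use

    # Because cases and controls are recruited separately, risk cannot be
    # calculated directly; instead the odds of exposure are compared.
    odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

    print(f"Odds ratio = {odds_ratio:.2f}")
    # An odds ratio greater than 1 suggests the cases were more likely to have
    # been exposed to NSAIDs than the controls.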

Many studies have used this approach to test an association between a risk factor and a disease such as cigarette smoking and lung cancer, and artificial sweeteners and bladder cancer.

Case-control design is particularly useful if the outcome of interest is rare. For example if a researcher wanted to establish whether living near an electricity pylon was associated with developing leukaemia, because this disease is rare, a cohort study would be impractically large – bearing in mind that cohort studies start with a disease-free population. A case-control study would take a group of children with leukaemia, and a group as similar as possible but without leukaemia and look backwards in time to assess whether the children in either group had ever lived near an electricity pylon. If there was a positive association between exposure and disease then more children with leukaemia would have lived near a pylon compared to those without leukaemia.

If you are reading a case-control study, take time to establish how the controls were selected. Many epidemiology texts debate whether the controls should be identical to the cases with the exception of the disease in question; or whether they should represent the whole population from where the cases came. If planning on carrying out a case-control study, ensure you refer to the Further Reading.

Other challenges in case-control studies include recall bias, where participants have to remember details about past exposure to the factor in question.

Participants that have developed a particular disease may have a more accurate recall about their past behaviour compared to those without the disease.

Randomised Controlled Studies

It is not intended to discuss randomised controlled studies in any detail, as this is the study design with which MI pharmacists will be most familiar and it has been widely written about (see Further Reading).

However, it is worth highlighting that the most important advantage of using a randomised controlled design is that confounding factors (i.e. characteristics of the participants that may affect the results, e.g. smoking, co-morbidity) will be evenly distributed between the intervention and control groups. This is in contrast to cohort and case-control studies, where confounding may influence the result. Use of a randomised design means that, if you have enough participants in your study, you can be fairly sure that if you see a difference between the intervention group and the control group, it was due to the intervention and not due to confounding.
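To illustrate the principle only, the sketch below shows one simple way of randomly allocating participants to two groups; the participant identifiers are hypothetical.

    # A minimal sketch of simple random allocation to intervention or control.
    # The participant identifiers are hypothetical, for illustration only.
    import random

    participants = [f"participant_{i:03d}" for i in range(1, 41)]

    random.seed(42)              # fixed seed so the allocation can be reproduced
    random.shuffle(participants)

    # First half of the shuffled list becomes the intervention group, the rest the control group
    half = len(participants) // 2
    intervention_group = participants[:half]
    control_group = participants[half:]

    # With enough participants, randomisation tends to distribute confounding
    # factors (smoking, co-morbidity, etc.) evenly between the two groups.
    print(len(intervention_group), len(control_group))

In practice, trials often use more elaborate allocation schemes (e.g. block or stratified randomisation), but the underlying idea is the same.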

In theory randomised controlled trials may be used in practice research, where for example one ward receives an intervention and an identical ward does not. However, as indicated in Service evaluation above it is common to run into feasibility and ethical barriers.

Cross-sectional Studies

Cross-sectional studies take a single snapshot view of what is happening to a particular phenomenon.

They may also be used to test whether there is an association between a particular factor and a disease, where both are measured at the same time – in contrast to case-control and cohort studies.

For example a questionnaire investigating patients’ perceptions of administering their own medicines whilst in hospital could be administered on a single occasion only and as such would be a cross-sectional view. The questionnaire may ask about events in the past (i.e. patients that have already had experience of such a scheme) or in the future (i.e. patients’ perceptions of trying such a scheme).

Alternatively an epidemiological study investigating whether there is an association between oral contraceptives and breast cancer would take a defined population of women and assess whether they have been exposed to oral contraceptives and whether they have breast cancer at the same time. As the cases that will be identified will be existing cases (i.e. women that already have the disease), cross-sectional studies are sometimes called prevalence studies. However the disadvantage of this design is that it is impossible to know what happened first – the exposure to oral contraceptives or the development of breast cancer.

Qualitative Study Design

Studies that employ qualitative methodology (also called exploratory studies) aim to explore how people interpret their experiences and the world around them; they ask 'what does it feel like?' or 'what is important to you?'

An important distinction between qualitative and quantitative studies is that qualitative research develops hypotheses in contrast to quantitative work, which is designed to test hypotheses.

A qualitative research study may reasonably precede a quantitative study and is often used in practice research. For example to establish how to better support healthcare professionals out-of-hours with information about medicines, a researcher could run several focus groups with a small number of staff to establish the barriers to accessing information during this time. This could feed into a quantitative study that surveyed all the staff within a Trust, asking them if they agreed or disagreed with the barriers identified by the focus groups.

There are several theoretical stances that pure qualitative researchers will adopt according to their background and the nature of the question to be answered. These methods differ in many ways from sampling of participants, to data collection and analysis. Depending upon the level you are working at, you may need to discuss and justify your theoretical approach. The three most common are phenomenology, ethnography and grounded theory.

Phenomenology is used to answer questions about meaning and the essences of an experience. Phenomenology does not generate a theory like some other types of qualitative research, instead it aims to provide insight into how people make sense of the world they live in.

Grounded theory, in contrast, does aim to generate or discover a theory about a particular experience, rather than describe the meaning of that experience. For example, in a study exploring why patients seek homeopathic care a phenomenological approach would reveal the individual’s motivation and experience of seeking homeopathic care, whereas a grounded theory model would enable a theory to be developed about what motivates patients to seek such therapy.

Ethnography studies explore the beliefs and practices of a particular cultural group. It involves direct observation of daily behaviour where the researcher may even participate in the actual process as a participant observer and is a research method based mainly on fieldwork. It may be used to answer questions such as ‘What are the barriers to seeking antenatal care for women from ethnic minority groups living in London?’

Qualitative studies use in-depth, unstructured or semi-structured interviews, informal conversations, observation and focus groups to generate data, according to the researcher’s theoretical standpoint.

See the Further Reading for more detail.

Sampling and Recruitment

A successful pharmacy practice research project requires a clear sampling strategy whether it falls under the heading of quantitative or qualitative research. The different sampling strategies have been loosely arranged under the quantitative and qualitative headings but note that there may be some techniques that could apply to both paradigms.

Quantitative Studies

For quantitative practice research, the most important principle is that the sampling strategy should be random – this means that every member of the population to which the results apply should have an equal chance of being in your study sample. Using a random approach means that the results from the study sample are more likely to be generalisable to the whole of the affected population. This concept is called 'external validity' (see below).

The simplest method of random sampling requires the researcher to have a list of the names of all the people within the population of interest (the 'sampling frame'); every person is then given a number, starting from the number one. The researcher then decides – for example – that every tenth person on the list will be invited to join the study sample.
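As an illustration only, the sketch below numbers a hypothetical sampling frame and selects every tenth person, alongside a fully random alternative in which every member has an equal chance of selection. The names are made up.

    # A minimal sketch of drawing a sample from a sampling frame.
    # The names are hypothetical, for illustration only.
    import random

    sampling_frame = [f"person_{i:03d}" for i in range(1, 201)]  # the full numbered list

    # The approach described above: invite every tenth person on the list
    every_tenth = sampling_frame[9::10]

    # A fully random alternative, giving every member an equal chance of selection
    random.seed(1)
    simple_random_sample = random.sample(sampling_frame, k=len(every_tenth))

    print(len(every_tenth), len(simple_random_sample))  # 20 names in each sample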

However, this approach is not without problems – the list is likely to be in some order, such as a cardiothoracic surgical list, where the complicated patients are likely to be first. In addition, the Data Protection Act may prevent access to data in this format.

Another relatively straightforward method of sampling that you may encounter in pharmacy practice research is stratified random sampling. Consider a practice research study where you wanted to send a questionnaire to an equal number of doctors and nurses. If you used a simple random sampling approach you may, by chance, obtain too many doctors' names and too few nurses' names. Therefore, if you divide or 'stratify' your population by separating the list of nurses' names and the list of doctors' names, and then perform your random sampling within each list, you should produce an equal number of names from each professional group.
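A minimal sketch of this idea is shown below, sampling separately within each professional group; the lists of names and the sample sizes are hypothetical.

    # A minimal sketch of stratified random sampling.
    # The lists of names and sample sizes are hypothetical, for illustration only.
    import random

    doctors = [f"doctor_{i:03d}" for i in range(1, 301)]
    nurses = [f"nurse_{i:03d}" for i in range(1, 121)]

    random.seed(7)

    # Sample separately within each stratum so both groups are equally represented
    sample_per_group = 50
    doctor_sample = random.sample(doctors, k=sample_per_group)
    nurse_sample = random.sample(nurses, k=sample_per_group)

    questionnaire_recipients = doctor_sample + nurse_sample
    print(len(questionnaire_recipients))  # 100 names, 50 from each professional group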

Qualitative Studies

As qualitative studies aim to understand the nature of complex phenomena and produce hypotheses, rather than test them, they tend to adopt non-random methods of sampling.

The most common is purposive sampling where the researcher targets participants that will be informative with respect to the study objectives. For example if a study aimed to explore the barriers to accessing community pharmacy services from the perspective of older patients, then the sampling strategy would be to select patients that were above a specified age, and who needed to access such services. Obviously there would be no point choosing a young healthy person, as they would be unable to inform the research objectives.

Snowball sampling is useful if a sampling frame does not exist – if there is no ‘list’ of people with a particular disease, for example. The researcher asks the initial participants to try to recruit other potential participants from their friends and/or families. If the potential participants are interested, they contact the researcher and following consent, are recruited into the study. These new participants are then asked to try to recruit from their friends and/or family – and the process continues until enough participants have been recruited.

These are the most common types of sampling - many others exist - see the Further Reading for more detail.

A note on recruitment – to get through the Research Ethics Committee process, if you don’t know the participants prior to the research project, then your study will generally need to be introduced by a third party known to the participants. For example, if you were studying the use of herbal and homeopathic products by patients with breast cancer using semi-structured interviews, it is not okay for you to approach patients in an out-patient clinic directly. You will need to be introduced by the clinic nurse, for example, who is known to the patient.

In addition, potential participants need to be given time to think about taking part in a study – you will not be able to introduce your study, and then conduct the interviews straightaway. The Research Ethics Committee will require that you give participants at least 24 hours to think about the study. There are exceptions such as clinical trials in emergency settings, but these are less likely to apply to practice research.

Reliability and Validity

Reliability and validity are two important concepts that need to be considered throughout the design and execution of any research project.

Reliability relates to the repeatability of the study – if the study were repeated one hundred times, would the same result be produced? For example, a study that involved measuring the average outpatient prescription waiting time using a watch that sometimes ran slow and sometimes ran fast would not produce a reliable result. Before a research instrument – such as a watch, questionnaire or quality of life scale – can be judged reliable, it should be subject to repeated testing. This may involve using the same instrument on the same sample several weeks apart, or asking two independent researchers to use the instrument to measure the outcome of interest to see if they obtain the same result.
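Purely as an illustration, the sketch below compares two hypothetical sets of waiting-time measurements taken several weeks apart; a high correlation between them would suggest the instrument is repeatable. (It assumes Python 3.10 or later for statistics.correlation.)

    # A minimal sketch of a test-retest reliability check.
    # The waiting times (in minutes) are hypothetical, for illustration only.
    from statistics import correlation  # requires Python 3.10+

    # The same eight prescriptions timed with the same instrument several weeks apart
    first_measurement = [12.0, 25.0, 18.0, 30.0, 22.0, 15.0, 27.0, 20.0]
    second_measurement = [12.5, 24.0, 18.5, 31.0, 21.5, 15.5, 26.0, 20.5]

    # A high correlation between the two sets of measurements suggests the
    # instrument produces repeatable (reliable) results.
    r = correlation(first_measurement, second_measurement)
    print(f"Test-retest correlation = {r:.2f}")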

Validity is concerned with whether the study is measuring what it set out to measure. For example, in a study to establish patients' degree of compliance with their medication, if you used a questionnaire to ask them about their behaviour, they would probably over-estimate their compliance, thus invalidating this data collection technique. Similarly, studies that use questionnaires to ask about smoking behaviour may be invalid, as smokers may tend to under-estimate their usage.

Both external and internal validity need to be considered. As explained above, external validity refers to the extent to which the results of a study are generalisable. Internal validity refers to the rigour with which the study was conducted, and may be further divided into face validity, content validity, and construct validity, for example.

If you are designing a research study for the purposes of an academic qualification, your tutor will require you to demonstrate an awareness of reliability and validity throughout your proposal.

Further Reading

For most aspects of health research design – both are straightforward to read; if you're starting out, try Smith's book first:

  • Bowling A. Research methods in health. 2nd ed. Maidenhead: Open University Press; 2002.

  • Smith FJ. Conducting your pharmacy practice research project. London: Pharmaceutical Press; 2005.

For detailed information on service evaluation:

  • St Leger AS, Schnieden H, Wadsworth-Bell JP. Evaluating health services’ effectiveness. Milton Keynes: Open University Press; 1992.

  • Rossi PH, Lipsey, MW, Freeman HE. Evaluation: A systematic approach. 7th ed. London: Sage Publications; 2004.

For qualitative techniques, in addition to Smith and Bowling above, try Pope and Mays first, then Morse and Richards, and then Creswell:

  • Pope C, Mays N, editors. Qualitative research in health care. 2nd ed. London: BMJ; 2000. (The chapters in this book are on the BMJ website for free).

  • Morse JM, Richards L. Read me first for a user’s guide to qualitative research. London: Sage Publications; 2002. (Good for explaining the different theoretical approaches).

  • Creswell JW. Qualitative inquiry and research design – Choosing among the 5 traditions. London: Sage Publications; 1998.

For all aspects of cohort studies, case-control studies and RCTs – both very straightforward reads:

  • Gordis L. Epidemiology. 3rd ed. Pennsylvania: Elsevier Saunders; 2004.

  • Haynes RB, Sackett DL, Guyatt GH, Tugwell P. Clinical epidemiology: How to do clinical practice research. 3rd ed. London: Lippincott, Williams & Wilkins; 2006.
