
Critical Appraisal for Health Students

Appraisal of a Quantitative paper: Top tips


Introduction

Critical appraisal of a quantitative paper (RCT)

This guide, aimed at health students, provides basic-level support for appraising quantitative research papers. It is designed for students who have already attended lectures on critical appraisal. One framework for appraising quantitative research (based on reliability, internal validity and external validity) is provided, and there is an opportunity to practise the technique on a sample article.

Please note that this framework is for appraising one particular type of quantitative research, a Randomised Controlled Trial (RCT), which is defined as:

a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention. (CASP, 2020)

Critical Appraisal of a quantitative paper (RCT): practical example

How to use this practical example 

Using the framework, you can have a go at appraising a quantitative paper. We are going to look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', American Journal of Public Health, 106(5), pp. 949-956.

Step 1. Take a quick look at the article.

Step 2. Click on the Internal Validity tab above. There are questions to help you appraise the article; read each question and look for the answer in the article.

Step 3. Click on each question and our answers will appear.

Step 4. Repeat with the External Validity and Reliability tabs.

Questioning the Internal Validity:

Randomisation: Participants were randomly assigned to either intervention or control in a one-to-one ratio, using a computer-generated randomisation list with a block size of 4 in REDCap (p.950).
Baseline characteristics: Table 1 (p.953) gives many of the baseline characteristics. The two groups were quite similar; the only notable differences were cholesterol and diastolic blood pressure (p value under 0.05) (p.952).
Blinding: There was no blinding of patients, but there could not have been (p.950). Those delivering the intervention could not be blinded either. There is no mention of whether the researchers or those who weighed the patients were blinded; they could have been.
Equal treatment of groups: The groups were treated equally at first, up until the intervention (pp.950-951). Then the interventions were very different (p.952): participants in the Weight Watchers arm received a 45-minute activation session, free weekly sessions and access to an app, while those in the Your Game Plan arm received a booklet and 15 minutes.
Drop-out rates: Intervention 16%, control 28% (p.954). This is a big difference, so it could have had an effect; it is discussed in the limitations section (p.954).

Questioning the External Validity:

Follow-up: No, not all participants were followed up; there were drop-outs (pp.952 and 954). The researchers must have attempted to contact them, as reasons for drop-out are given in Figure 1 (p.951), although the majority of drop-outs could not be contacted.
Recruitment and sampling: Participants were screened in eight community settings, but it is not stated how people were encouraged to attend these settings. Could these have been Weight Watchers locations, and would this introduce bias? (p.950). Probability or non-probability sampling is not mentioned, but it is likely to be non-probability, as only those screened were eligible. Convenience sampling? Unclear; more detail is needed (p.950). Overall, there is a lack of detail on how the potential participants were found.
Sample size: 225 participants (p.951). A sample size calculation is not mentioned explicitly, but there was some form of calculation (p.952, middle column), and the authors explain why their target was 100 participants in each group.
Eligibility criteria and measurement: Yes, the criteria are set out in detail (p.950), and appropriate tools were used to measure prediabetes (the ADA risk screener) (p.950).

 

Overall validity and reliability:

The Weight Watchers intervention can only be compared with the Your Game Plan sessions, not with the Diabetes Prevention Program (DPP) discussed at the beginning of the article.
NB: The study was funded by Weight Watchers and two of the authors work for the company. Has this introduced bias?