Objectives: This study aimed to describe how health researchers identify and counteract fraudulent responses when recruiting participants online.
Design: Scoping review.
Eligibility criteria: Peer-reviewed studies published in English; studies that report on the online recruitment of participants for health research; and studies that specifically describe methodologies or strategies to detect and address fraudulent responses during the online recruitment of research participants.
Sources of evidence: Nine databases, including Medline, Informit, AMED, CINAHL, Embase, Cochrane CENTRAL, IEEE Xplore, Scopus and Web of Science, were searched from inception to April 2024.
Charting methods: Two authors independently screened and selected each study and performed data extraction, following the Joanna Briggs Institute's methodological guidance for scoping reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines. A predefined framework guided the evaluation of fraud identification and mitigation strategies within the included studies. This framework, adapted from a participatory mapping study that identified indicators of fraudulent survey responses, allowed for systematic assessment and comparison of the effectiveness of various antifraud strategies across studies.
Results: 23 studies were included, of which 18 (78%) reported encountering fraudulent responses. The proportion of participants excluded for fraudulent or suspicious responses ranged from 3% to 94%. Six studies (26%) used survey completion time to identify fraud, flagging completion times under 5 min as suspicious. 12 studies (52%) focused on non-conforming responses, identifying implausible text patterns through specific questions, consistency checks and open-ended questions. Four studies (17%) examined temporal events, such as unusual survey completion times. Seven studies (30%) reported on geographical incongruity, using IP address verification and location screening. Incentives were reported in 17 studies (73%), with higher incentives often associated with increased fraudulent responses. Mitigation strategies included in-built survey features such as the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA; 34%), manual verification (21%) and video checks (8%). Most studies recommended combining multiple detection methods to maintain data integrity.
Conclusion: Strategies to mitigate fraud in online health research remain insufficiently evaluated, which hinders the provision of evidence-based guidance to researchers on their effectiveness. Researchers should employ a combination of strategies to counteract fraudulent responses when recruiting online, thereby optimising data integrity.
Keywords: Evidence-Based Practice; Methods; Public Health.
© Author(s) (or their employer(s)) 2024. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ Group.