MORRISVILLE, NC, October 25, 2016 – Worldwide Clinical Trials (www.worldwide.com) has announced findings from a recent global survey that evaluated preferences of investigators who participate in central nervous system (CNS) studies that employ rater training and surveillance methodologies. Douglas Lytle Ph.D., Worldwide’s Executive Director of Clinical Assessment Technologies, will present the findings at 5 p.m. E.T. on Saturday, Oct. 29, at the CNS Summit, being held in Boca Raton, Florida, Oct. 27-30, 2016.
Created and conducted by Worldwide, this unique survey sampled the opinions of more than 1,400 primary investigators, sub-investigators and raters responsible for patient assessments. The survey was designed to understand their preferences and frustrations, and the benefits they perceive, related to rater training and rater surveillance for CNS clinical studies, as well as the potential impact of various surveillance methods on rater engagement and data quality.
The findings demonstrate clear preferences for certain methods of rater training, with video demonstration paired with a practice quiz and certification videos being the most preferred. The mock interview, in which site raters demonstrate their ability to conduct the assessment in front of an expert rater, was the least preferred method, owing to perceptions that the process is intimidating and imposing. While source document review was the favored method of rater surveillance, respondents noted that submitting paper source documents was also one of the more frustrating aspects of surveillance, along with setting up audio/visual equipment and logging into multiple systems to enter data.
Commenting on the results, Lytle said: “Understanding site rater preferences is essential to enrich the current rater training and data monitoring methods that are known to directly impact study outcomes, and our study revealed some interesting insights on this topic that the industry can use to its advantage.
“Importantly, the preferences observed should not impact the rigorous training and surveillance procedures that are proven to increase the reliability of outcomes. For example, raters reported that they do not like audio/video recording of the assessments that they perform. Rather than concluding that this method should not be performed, the frustrations identified should be used to refine procedures and educate investigators on why these methods are critical to the detection of efficacy signals from the study,” Lytle further explained.
Lytle will further explore the findings of the survey during his presentation at the CNS Summit. For more information about the CNS Summit, visit www.cnssummit.org.