One question we are frequently asked is whether clinical teams should require a specific type of physician for interpreting medical images when used to support clinical study endpoints. Should the radiologist (or appropriately trained expert) be a generalist or subspecialist with additional fellowship training and experience?
Our past blogs have addressed the operational and methodological aspects of the blinded independent central review (BICR). Here, we provide examples from the clinical literature and our perspective based on experience running hundreds of BICR clinical trials.
“Medicine is a science of uncertainty and an art of probability”1
Research on medical imaging errors has demonstrated that errors by radiologists (formerly known as roentgenologists) arise even when contradictory clinical information is provided2, 3. Diagnostic or perception errors can be caused by poor image quality or acquisition technique, but they are primarily influenced by the reader’s:
- Lack of knowledge
- Inexperience in the indication
- Incomplete searches
- Misjudgment of the treatment response1,4,5
Imaging interpretation performed via blinded review involves decision-making under conditions that can be controlled: the reading environment, reader fatigue, physical distractions, and clinical or patient information that can confound or bias interpretations against the protocol criteria. Within medicine, it is widely held that clinician performance varies considerably irrespective of specialty area, and radiology is no exception. This supports our belief and practice in recommending specialists to interpret imaging studies.
The average error rate among radiologists is reported to be around 30%, with errors in decision-making by inexperienced radiologists usually resulting from incomplete or premature diagnostic conclusions. A recent paper examining Computed Tomography (CT) reports with substantial errors found that 43% of the errors were due to an error of observation, 9% due to a fault in imaging technique, and 48% due to an error in interpretation.1,6
As reported in past blogs, our collective BICR experience shows similar error rates. The types of reader errors include:
- Scanning/perception errors
- Recognition errors
- Decision-making errors
- Satisfaction of search errors (e.g., when an abnormality is detected and the reader stops searching for more lesions).6
Generalist versus Specialist Readers?
Focusing on the question of specialist versus generalist readers for primary image assessments, Rozenberg reported a 22% discrepancy between initial and subspecialty interpretations, considered to represent a clinically significant change, with subspecialty reads often changing the primary diagnosis. Chang Sen and Carter, in separate papers, reported that subspecialty reads changed clinical interpretations and patient management decisions in 27% and 33% of patients, respectively. The authors highlighted that specialists have higher levels of training and more clinical experience, resulting in higher tumor detection rates with less read variability.7, 8, 9
In summary, our experience supports the use of subspecialist readers: they bring a higher level of knowledge in the organ or disease of interest, and their deeper expertise and experience translate into better interpretations (i.e., differentiating disease from normal findings) together with higher lesion detection rates and accuracy. Reviewers should have all of the relevant information necessary to make accurate interpretations, and readers should be selected with these principles in mind, including past reader performance metrics when available.
- Wood B. Decision Making in Radiology. Radiology 1999; 201: 601-603.
- Degnan AJ, Ghobadi EH, Hardy P, et al. Perceptual and Interpretive Error in Diagnostic Radiology—Causes and Potential Solutions. Academic Radiology 2019; 26: 833-845.
- Manning DJ, Gale A, Krupinski EA. Perception research in medical imaging. The British Journal of Radiology 2005; 78: 683-685.
- Fitzgerald R. Error in Radiology. Clinical Radiology 2001; 56: 938-946.
- McCreadie G, Oliver T. Eight CT lessons that we learned the hard way: an analysis of current patterns of radiological error and discrepancy with particular emphasis on CT. Clinical Radiology 2009; 64: 491-499.
- Pinto A, Acampora C, et al. Learning from diagnostic errors: A good way to improve education in radiology. European Journal of Radiology 2011; 78: 372-376.
- Rozenberg A, Kenneally B, et al. Clinical Impact of Second-Opinion Musculoskeletal Subspecialty Interpretations During a Multidisciplinary Orthopedic Oncology Conference. J Am Coll Radiol 2017; 14: 931-936.
- Chang Sen LQ, Mayo RC, et al. Impact of Second-Opinion Interpretation of Breast Imaging Studies in Patients Not Currently Diagnosed With Breast Cancer. J Am Coll Radiol 2018; 15: 980-987.
- Carter B, Erasmus J, et al. Quality and Value of Subspecialty Reinterpretation of Thoracic CT Scans of Patients Referred to a Tertiary Cancer Center. J Am Coll Radiol 2017; 14: 1109-1118.