Talk:Computer-aided diagnosis


The

The mentioned study (Fenton JJ, Taplin SH, Carney PA, Abraham L, Sickles EA, D'Orsi C et al. Influence of computer-aided detection on performance of screening mammography. N Engl J Med 2007 April 5;356(14)) is currently under heavy discussion. Many experts are complaining about methodological weaknesses and inappropriate conclusions. I suggest NOT mentioning this study until it has been accepted (or at least its conclusions qualified) by the community.

Secondly, the low specificity of all CAD products is a known issue, but it is not considered a big problem, because CAD is not intended to replace the radiologist. Its main job is to detect suspicious structures and highlight them to the human reader.

I will rework this section if no additional suggestions are discussed within the next week.

80.171.126.180 14:17, 12 September 2007 (UTC)

Fenton et al. has been published in the New England Journal of Medicine and is clearly a very important paper in this field. I know there has been ongoing correspondence about the details of the paper, but no-one, as far as I know, has suggested that the paper be withdrawn. I don't think the appropriate reaction to the debate is for Wikipedia to not mention Fenton et al.! If you feel the debate warrants coverage, by all means put in citations to that debate. It is, of course, also not the only paper with negative results about CAD. I've also referenced Taylor et al. (on which I'm a co-author, to declare an interest). There's also: Gur D, Sumkin JH, Rockette HE, Ganott M, Hakim C, Hardesty L et al. Changes in breast cancer detection and mammography recall rates after the introduction of a computer-aided detection system. J Natl Cancer Inst 2004 February 4;96(3):185-90.
The low specificity of CAD is, indeed, a known issue, which means it is an issue that should be discussed in the Wikipedia article. The low specificity of CAD is considered to be an important factor in how people use CAD as the high number of false prompts may lead readers to discount prompts. This is discussed in Taylor et al. and, I think, in Fenton et al. Bondegezou 16:17, 12 September 2007 (UTC)
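
As a purely illustrative aside on the point about readers discounting prompts: with assumed round numbers (a prompt rate and screening prevalence that are not taken from Fenton et al. or Taylor et al.), the chance that any single CAD prompt actually marks a cancer works out to well under 1%, which is the arithmetic behind prompt fatigue. A minimal Python sketch:

    # Rough, illustrative arithmetic only: the prompt rate and prevalence below
    # are assumed round numbers, not figures from Fenton et al. or Taylor et al.
    cancers_per_1000_screens = 5    # assumed screening prevalence (~0.5%)
    false_prompts_per_case = 2.0    # assumed CAD false marks per (4-view) case
    cad_sensitivity = 0.9           # assumed fraction of cancers that get prompted

    cases = 1000
    true_prompts = cad_sensitivity * cancers_per_1000_screens
    false_prompts = false_prompts_per_case * cases
    ppv_of_a_prompt = true_prompts / (true_prompts + false_prompts)

    print(f"True prompts per 1000 screens:  {true_prompts:.1f}")
    print(f"False prompts per 1000 screens: {false_prompts:.0f}")
    print(f"Chance a given prompt marks a cancer: {ppv_of_a_prompt:.2%}")  # ~0.2%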


Most existing CAD systems indeed exhibit low specificity. However, this is not true for at least one sub-class of CAD: CAST (computer-aided simple triage) systems. CAST systems are aimed at performing an initial triage of studies into positive and negative categories. As such, they must exhibit relatively high specificity (fewer than one false positive per study) to be able to report negative studies as negative. For example, a CAST system for the detection of significant coronary artery stenosis in coronary CT angiography achieves a per-patient specificity of 60-70% while keeping sensitivity above 90% (Goldenberg R, Eilot D, Begelman G, Walach E, Ben-Ishai E, Peled N. Computer-aided simple triage (CAST) for coronary CT angiography (CCTA). Int J Comput Assist Radiol Surg. Apr 2012. PMID 22484719). Gnamor (talk) 12:09, 13 October 2012 (UTC)
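
To make the triage arithmetic in the comment above concrete, here is a rough sketch. The sensitivity and specificity are taken from the ranges quoted above; the 20% prevalence of significant stenosis is an assumption made purely for illustration, not a figure from the cited paper.

    # Illustrative CAST triage arithmetic (Python). Prevalence is an assumption.
    sensitivity = 0.90   # per-patient, lower bound of the range quoted above
    specificity = 0.65   # per-patient, midpoint of the 60-70% range quoted above
    prevalence = 0.20    # assumed fraction of CCTA studies with significant stenosis

    positives = 1000 * prevalence          # diseased studies per 1000
    negatives = 1000 - positives           # disease-free studies per 1000

    true_negatives = specificity * negatives         # correctly triaged as negative
    false_negatives = (1 - sensitivity) * positives  # disease missed in the "negative" pile

    triaged_negative = true_negatives + false_negatives
    npv = true_negatives / triaged_negative

    print(f"Studies triaged negative per 1000: {triaged_negative:.0f}")   # ~540
    print(f"Negative predictive value: {npv:.1%}")                        # ~96%

Under these assumptions, roughly half of all studies could be reported negative at triage, which is why the specificity requirement for CAST is much stricter than for prompting-style CAD.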

Screening mammography

I'm co-author on a recently published systematic review on computer-aided detection in screening mammography that seems of relevance here. Given my obvious conflict of interest, instead of adding it, I thought I'd mention it here should some other editor wish to add it: Taylor P, Potts HWW (2008). Computer aids and human second reading as interventions in screening mammography: Two systematic reviews to compare effects on cancer detection and recall rate. European Journal of Cancer. doi:10.1016/j.ejca.2008.02.016 A free copy is available here. I repeat the abstract below:

Background: There are two competing methods for improving the accuracy of a radiologist interpreting screening mammograms: computer aids (CAD) or independent second reading.

Methods: Bibliographic databases were searched for clinical trials. Meta-analyses estimated impacts of CAD and double reading on odds ratios for cancer detection and recall rates. Sub-group analyses considered double reading with arbitration.

Results: Ten studies compared single reading with CAD to single reading. Seventeen compared double to single reading. Double reading increases cancer detection and recall rates. Double reading with arbitration increases detection rate (confidence interval (CI): 1.02, 1.15) and decreases recall rate (CI: 0.92, 0.96). CAD does not have a significant effect on cancer detection rate (CI: 0.96, 1.13) and increases recall rate (95% CI: 1.09, 1.12). However, there is considerable heterogeneity in the impact on recall rate in both sets of studies.

Conclusion: The evidence that double reading with arbitration enhances screening is stronger than that for single reading with CAD.
Bondegezou (talk) 14:57, 21 April 2008 (UTC)
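
For readers unfamiliar with how pooled odds ratios and confidence intervals of the kind quoted in the abstract are produced, here is a minimal fixed-effect (inverse-variance) sketch in Python. The 2x2 counts are made-up placeholders, not data from the review, and the review itself may well have used a different model (e.g. random effects).

    import math

    # Fixed-effect (inverse-variance) pooling of study odds ratios.
    # The counts below are placeholders, not data from Taylor & Potts (2008).
    studies = [
        # (cancers_detected_CAD, total_CAD, cancers_detected_single, total_single)
        (45, 10000, 40, 10000),
        (30,  8000, 28,  8000),
    ]

    weights, log_ors = [], []
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c                 # non-events in each arm
        log_or = math.log((a * d) / (b * c))  # log odds ratio for one study
        var = 1/a + 1/b + 1/c + 1/d           # approximate variance of the log OR
        log_ors.append(log_or)
        weights.append(1 / var)

    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci_low, ci_high = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

    print(f"Pooled OR {math.exp(pooled):.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")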

i am you

digital pathology — Preceding unsigned comment added by 50.30.92.13 (talk) 20:25, 4 December 2017 (UTC)