User:DanaAlmutawaaa/sandbox

Common problems and solutions associated with research practices in neuroimaging

What is fMRI?

Functional magnetic resonance imaging (fMRI) is a method for measuring brain activity by monitoring and recording changes in blood flow (Faro & Mohamed, 2006). It is based on the principle that blood flow to a brain region increases when that region is in use (Gueorguieva, 2017).

[Image: fMRI of a human head with benign familial macrocephaly, shown from the top to the base of the skull]

fMRI is a non-invasive method (Jenkinson & Chappell, 2018) that can assess brain function safely and effectively (Filippi, 2015). It produces images with high spatial resolution (De Beeck & Nakatani, 2019), and it is an objective method (Linden, 2016).

Challenges associated with neuroimaging research

Low statistical power

The use of fMRI is, however, associated with several challenges. Low statistical power is one of the key limitations of fMRI research (Poldrack et al., 2016). Button et al. (2013) stressed that low statistical power reduces the likelihood of detecting a true effect, inflates the effect sizes of those effects that are detected, and increases the risk that a statistically significant result is false (Yarkoni, 2009). In fMRI, the combination of a relatively small number of subjects, a large number of dependent variables, and the need to correct for multiple comparisons can substantially reduce statistical power (Bigler, 2013). Cremers et al. (2017) sought to clarify the issue of statistical power by contrasting two scenarios of brain-behaviour correlation: strong localised effects and weak diffuse effects. Their findings showed that common sample sizes (typically 20-30 participants) yielded very low statistical power, varied substantially across replications, and poorly represented the actual effects in the full sample, especially in the weak diffuse scenario (Cremers et al., 2017). Data from the Human Connectome Project (HCP) resembled the weak diffuse scenario more closely than the strong localised scenario (Seung, 2012), suggesting that many fMRI studies have low statistical power (Cremers et al., 2017). The HCP was a five-year project that produced a data set of unprecedented size and quality for mapping the human macroscale connectome (Bijsterbosch & Beckmann, 2017); a connectome describes the long-distance connections between the different regions of the brain (Behrens & Sporns, 2012; Jbabdi et al., 2015).
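
The scale of the problem can be illustrated with a minimal sketch (a hypothetical example, not taken from the cited studies) that approximates the power of a two-sided test for a brain-behaviour correlation using the Fisher z-transform; with a weak effect of r = 0.2, a 25-participant sample detects the effect less than 20% of the time:

```python
# Approximate power of a two-sided test of a correlation, via the Fisher
# z-transform. r = 0.2 stands in for a "weak diffuse" effect; n = 25
# reflects the 20-30 participant samples discussed above (both assumptions).
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0."""
    z_effect = np.arctanh(r) * np.sqrt(n - 3)  # non-centrality under H1
    z_crit = norm.ppf(1 - alpha / 2)           # critical value under H0
    return norm.sf(z_crit - z_effect) + norm.cdf(-z_crit - z_effect)

for n in (25, 50, 100, 200):
    print(f"n = {n:3d}: power ~ {correlation_power(0.2, n):.2f}")
    # n =  25: power ~ 0.16 ... n = 200: power ~ 0.81
```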

[Image: A researcher examining fMRI images]

Low statistical power has several consequences. First, estimates of effect size may be biased (Ulmer & Jansen, 2013); in particular, effect sizes tend to be overestimated in the weak diffuse scenario (Mulert & Lemieux, 2009). In studies with small samples, statistically significant effect sizes appear larger than the actual effect sizes (Bijsterbosch & Beckmann, 2017). The key problem stems from the combination of low statistical power and selection bias (only significant findings are reported), which means that the results of underpowered fMRI studies are dominated by sampling error (Yarkoni, 2009; Poldrack et al., 2017; Vul et al., 2009; Reddan et al., 2017). A second problem associated with low statistical power in fMRI is misleading inference about the organisation of the brain (Faro & Mohamed, 2015): observing an extremely strong, highly localised brain-behaviour correlation in a small sample reveals little about the spatial extent and strength of the true effects in the general population (Yarkoni, 2009).
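
The inflation produced by combining low power with selection bias can be demonstrated with a small simulation (a hypothetical sketch, not drawn from the cited papers): when only significant correlations are "published", the average reported effect greatly exceeds the true effect.

```python
# Hypothetical "winner's curse" simulation: with a true correlation of
# r = 0.2 and n = 25, only large sample estimates reach significance, so
# the average "published" effect is inflated well above the true value.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
true_r, n, n_studies = 0.2, 25, 10_000
cov = [[1.0, true_r], [true_r, 1.0]]

published = []
for _ in range(n_studies):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    r, p = pearsonr(x, y)
    if p < 0.05:              # selection bias: report only significant results
        published.append(r)

print(f"true r = {true_r}")
print(f"mean reported r = {np.mean(published):.2f}")  # well above the true 0.2
```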

Solutions to the issue of low statistical power

One of the solutions to the power problem is to increase the sample size (Cremers et al., 2017). Although recruiting a large number of participants can be expensive, and can be difficult when studying patient populations, researchers should nonetheless try to increase sample sizes (Papageorgiou et al., 2014). When designing a new protocol, researchers often prefer to add more tasks per participant, since this can increase the number of resulting research papers (Cremers et al., 2017). However, it has been argued that, for a given amount of scan time, researchers should instead prioritise scanning more participants (Mumford, 2012). A rough sample-size calculation is sketched below.
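
As a hypothetical illustration (again using the Fisher z-transform approximation rather than any calculation from the cited papers), one can solve for the sample size needed to reach the conventional 80% power for a weak correlation of r = 0.2:

```python
# Smallest n giving the requested power for a two-sided correlation test,
# by inverting the Fisher z approximation used above.
import numpy as np
from scipy.stats import norm

def required_n(r, power=0.8, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    # Invert atanh(r) * sqrt(n - 3) = z_crit + z_power for n.
    return int(np.ceil(((z_crit + z_power) / np.arctanh(r)) ** 2 + 3))

print(required_n(0.2))  # about 194 participants for 80% power
```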

Another way to address the statistical power problem is to adjust the significance threshold (Faro et al., 2011). For example, researchers can employ a less strict significance threshold (Stroman, 2016). When correcting for multiple comparisons, researchers usually concentrate on controlling the false positive (Type I) error rate (Cremers et al., 2017); as a consequence, Type II error rates are often elevated (Cremers et al., 2017). To balance Type I and Type II errors, some researchers have argued that it is more appropriate to employ conventional uncorrected thresholds (e.g., p < 0.005 or p < 0.001) (Lieberman & Cunningham, 2009). However, using uncorrected thresholds can be seen as an unprincipled approach because it ignores the parameters of each analysis (e.g., data smoothness), which strongly affect both statistical power and the false positive rate (Bennett et al., 2009). It has therefore been recommended to balance Type I and Type II errors explicitly, that is, to consider the ratio between the two error rates (Cremers et al., 2017), as illustrated below.
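
The trade-off can be made concrete with a short sketch (the effect size r = 0.3 and sample size n = 30 are illustrative assumptions, computed with the same Fisher z approximation as above): tightening the threshold cuts the Type I error rate but raises the Type II error rate.

```python
# Type I / Type II trade-off at different per-voxel thresholds.
import numpy as np
from scipy.stats import norm

def correlation_power(r, n, alpha):
    z_effect = np.arctanh(r) * np.sqrt(n - 3)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - z_effect) + norm.cdf(-z_crit - z_effect)

for alpha in (0.05, 0.005, 0.001):
    beta = 1 - correlation_power(0.3, 30, alpha)  # Type II error rate
    print(f"alpha = {alpha:<5}: Type I rate = {alpha}, Type II rate ~ {beta:.2f}")
```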

[Image: An fMRI machine]

Software errors

Software errors are another problem associated with neuroimaging research (Poldrack et al., 2011). The majority of fMRI researchers employ open-source analysis packages for pre-processing and statistical analysis, but many additional analyses require custom programs (Faro & Mohamed, 2006). Because most fMRI researchers lack training in software engineering, little attention is paid to the good software-development practices that help catch and reduce errors (Poldrack et al., 2016). For example, a bug found in the AFNI 3dClustSim program inflated Type I error rates, and its effect was significant (Eklund et al., 2016). Several solutions are available. fMRI researchers are encouraged to learn, develop, and implement defensive programming practices, including software validation and testing (Poldrack et al., 2016), and validation methodologies need to be defined (Poldrack et al., 2016). Rather than writing custom code, researchers should, whenever possible, use software tools from well-established projects (Mulert & Lemieux, 2009): errors are more likely to be discovered when tools are used by a larger community (Poldrack et al., 2016), and larger projects usually follow better development practices (Ashby, 2019).
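
A minimal sketch of the kind of validation recommended here (percent_signal_change is a hypothetical helper written for illustration, not a function from any named package): test analysis code against inputs with known answers, so that a bug like the 3dClustSim error is caught before it affects results.

```python
# Defensive programming: validate an analysis helper with a unit test.
import numpy as np

def percent_signal_change(timeseries, baseline):
    """Convert a BOLD time series to percent signal change from baseline."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    return 100.0 * (np.asarray(timeseries, dtype=float) - baseline) / baseline

def test_percent_signal_change():
    # Known input/output pair: 110 is a 10% increase over a baseline of 100.
    result = percent_signal_change([100.0, 110.0, 90.0], baseline=100.0)
    np.testing.assert_allclose(result, [0.0, 10.0, -10.0])

test_percent_signal_change()
print("validation test passed")
```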

Multiple comparisons

The issue of multiple comparisons is another potential problem in neuroimaging research. One of the common approaches to analysing neuroimaging data is "mass univariate" testing, in which a hypothesis is tested separately for every voxel (Bijsterbosch & Beckmann, 2017). With so many simultaneous tests, the false positive rate is inflated if the multiple tests are not corrected for (Uludag et al., 2015), yet correction for multiple comparisons is often neglected (Stroman, 2016). Bennett et al. (2009) examined the magnitude of this issue with a real experiment illustrating the consequences of failing to correct for chance appropriately: the uncorrected analysis reported activation in the brain of a dead salmon, and the activation disappeared once the analysis was appropriately corrected for multiple comparisons (Bennett et al., 2009). The multiplicity issue was recognised early, and the last twenty years have witnessed the development of well-validated approaches for controlling the family-wise error and false discovery rates in neuroimaging data (Eklund et al., 2016). Even so, it has been argued that even well-established techniques for inference based on the spatial extent of activations can yield inflated error rates (Poldrack et al., 2016).
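
A hypothetical simulation in the spirit of the dead-salmon experiment (the voxel and subject counts are illustrative assumptions) shows why correction matters: mass univariate t-tests on pure noise still declare thousands of voxels "active" when left uncorrected.

```python
# Mass univariate testing of pure noise: no voxel is truly active.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(42)
n_subjects, n_voxels = 20, 50_000

data = rng.standard_normal((n_subjects, n_voxels))  # no true activation anywhere
t_stats, p_values = ttest_1samp(data, popmean=0.0, axis=0)

print(f"uncorrected (p < 0.05): {np.sum(p_values < 0.05)} 'active' voxels")  # ~2,500
print(f"Bonferroni-corrected:   {np.sum(p_values < 0.05 / n_voxels)} 'active' voxels")
```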

[Image: Brain scanning]

Solutions to the issue of multiple comparisons

There are several solutions to the problem of multiple comparisons. To balance the rates of Type I and Type II errors in a principled way, it is recommended to report FWE-corrected whole-brain results and to share the unthresholded statistical map through a database that allows readers to view and download it, such as NeuroVault.org (Gorgolewski et al., 2015). An example of such maps is available at http://neurovault.org/collections/122/.
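
A minimal sketch of correcting voxelwise p-values before reporting, using statsmodels (the uniformly distributed p-values stand in for a real analysis and are an assumption): the "bonferroni" method controls the family-wise error (FWE) rate, while "fdr_bh" controls the false discovery rate.

```python
# Multiple-comparisons correction of a vector of voxelwise p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
p_values = rng.uniform(size=50_000)  # stand-in for voxelwise p-values

fwe_reject, _, _, _ = multipletests(p_values, alpha=0.05, method="bonferroni")
fdr_reject, _, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(f"FWE-significant voxels: {fwe_reject.sum()}")
print(f"FDR-significant voxels: {fdr_reject.sum()}")
```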

It is also important to justify explicitly any use of non-standard approaches for correcting multiple comparisons (for example, using the Analysis of Functional NeuroImages (AFNI) open-source environment for the correction when the rest of the analysis was conducted using Statistical Parametric Mapping) (Poldrack et al., 2016). Reviewers, in turn, are encouraged to look for and request such justification (Poldrack et al., 2016).

Conclusion

To sum up, although fMRI has many advantages (e.g., it is non-invasive, safe, objective, and offers high spatial resolution), it also has limitations. Neuroimaging research often has low statistical power, which decreases the likelihood of detecting a true effect; this can be addressed by increasing the sample size and adjusting the significance threshold. Software errors are another problem in neuroimaging research. They can be addressed by preferring well-established open-source analysis packages over custom code, and by learning, developing, and using defensive programming practices (e.g., software validation and testing). Multiple comparisons are a further issue, which can be addressed by reporting FWE-corrected whole-brain results and by sharing unthresholded statistical maps via repositories that enable readers to view and download them.

References

  • Ashby, F. G. (2019). Statistical analysis of fMRI data. MIT press.
  • Behrens, T. E., & Sporns, O. (2012). Human connectomics. Current opinion in neurobiology, 22(1), 144-153.
  • Bennett, C. M., Miller, M. B., & Wolford, G. L. (2009). Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction. Neuroimage, 47(Suppl 1), S125.
  • Bennett, C. M., Wolford, G. L., & Miller, M. B. (2009). The principled control of false positives in neuroimaging. Social cognitive and affective neuroscience, 4(4), 417-422.
  • Bigler, E. D. (Ed.). (2013). Neuroimaging I: basic science. Springer Science & Business Media.
  • Bijsterbosch, J., & Beckmann, C. (2017). An introduction to resting state fMRI functional connectivity. Oxford University Press.
  • Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature reviews neuroscience, 14(5), 365-376.
  • Cremers, H. R., Wager, T. D., & Yarkoni, T. (2017). The relation between statistical power and inference in fMRI. PloS one, 12(11), e0184923.
  • de Beeck, H. O., & Nakatani, C. (2019). Introduction to human neuroimaging. Cambridge University Press.
  • Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the national academy of sciences, 113(28), 7900-7905.
  • Faro, S. H., & Mohamed, F. B. (2015). Functional BOLD MRI: Principles and Applications. Springer.
  • Faro, S. H., & Mohamed, F. B. (Eds.). (2006). Functional MRI: basic principles and clinical applications. Springer Science & Business Media.
  • Faro, S. H., Mohamed, F. B., Law, M., & Ulmer, J. T. (Eds.). (2011). Functional neuroradiology: principles and clinical applications. Springer Science & Business Media.
  • Filippi, M. (Ed.). (2015). Oxford textbook of neuroimaging. Oxford University Press.
  • Gelman, A., & Loken, E. (2014). The statistical crisis in science: data-dependent analysis--a" garden of forking paths"--explains why many statistically significant comparisons don't hold up. American scientist, 102(6), 460-466.
  • Gorgolewski, K. J., Varoquaux, G., Rivera, G., Schwarz, Y., Ghosh, S. S., Maumet, C., ... & Margulies, D. S. (2015). NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Frontiers in neuroinformatics, 9, 8.
  • Jbabdi, S., Sotiropoulos, S. N., Haber, S. N., Van Essen, D. C., & Behrens, T. E. (2015). Measuring macroscopic brain connections in vivo. Nature neuroscience, 18(11), 1546-1555.
  • Jenkinson, M., & Chappell, M. (2018). Introduction to neuroimaging analysis. Oxford University Press.
  • Lieberman, M. D., & Cunningham, W. A. (2009). Type I and Type II error concerns in fMRI research: re-balancing the scale. Social cognitive and affective neuroscience, 4(4), 423-428.
  • Linden, D. (2016). Neuroimaging and Neurophysiology in Psychiatry. Oxford University Press.
  • Mulert, C., & Lemieux, L. (Eds.). (2009). EEG-fMRI: physiological basis, technique, and applications. Springer Science & Business Media.
  • Mumford, J. A. (2012). A power calculation guide for fMRI studies. Social cognitive and affective neuroscience, 7(6), 738-742.
  • Papageorgiou, D., Christopoulos, G., & Smirnakis, S. (Eds.). (2014). Advanced Brain Neuroimaging Topics in Health and Disease: Methods and Applications. BoD–Books on Demand.
  • Poldrack, R. A., Baker, C. I., Durnez, J., Gorgolewski, K. J., Matthews, P. M., Munafò, M. R., ... & Yarkoni, T. (2017). Scanning the horizon: towards transparent and reproducible neuroimaging research. Nature reviews neuroscience, 18(2), 115-126.
  • Poldrack, R. A., Baker, C. I., Durnez, J., Gorgolewski, K. J., Matthews, P. M., Munafò, M., ... & Yarkoni, T. (2016). Scanning the Horizon: Future challenges for neuroimaging research. bioRxiv, 059188.
  • Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of functional MRI data analysis. Cambridge University Press.
  • Gueorguieva, R. (2017). Statistical methods in psychiatry and related fields: Longitudinal, clustered, and other repeated measures data. Chapman and Hall/CRC.
  • Reddan, M. C., Lindquist, M. A., & Wager, T. D. (2017). Effect size estimation in neuroimaging. JAMA psychiatry, 74(3), 207-208.
  • Seung, S. (2012). Connectome: How the brain's wiring makes us who we are. HMH.
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological science, 22(11), 1359-1366.
  • Stroman, P. W. (2016). Essentials of functional MRI. CRC Press.
  • Ulmer, S., & Jansen, O. (2013). fMRI: Basics and Clinical Applications. Springer Science & Business Media.
  • Uludag, K., Ugurbil, K., & Berliner, L. (Eds.). (2015). fMRI: from nuclear spins to brain functions (Vol. 30). Springer.
  • Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Reply to comments on “puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition”. Perspectives on Psychological Science, 4(3), 319-324.
  • Yarkoni, T. (2009). Big correlations in little studies: Inflated fMRI correlations reflect low statistical power—Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4(3), 294-298.