18 August 2011

Help, my brain won't normalise!



If you want to make strong, causal inferences about the brain networks supporting a cognitive function, you will probably need to recruit patients with brain lesions. Although fMRI indicates the networks correlated with a particular cognitive factor, fMRI in healthy controls cannot identify the regions that are essential. In contrast, if damage to a region impairs performance, this suggests a direct causal link between this region and cognition, and implies that the region is critical for preserved performance.

Nothing in life is easy, though, and analysing data from patients has its specific challenges. Patients are hard to recruit, and the size and location of their damage is unlikely to match conveniently to the brain areas you study. Once you have the data you need, your troubles are not over: this article deals with the challenge of normalising brain images that include large lesions.

Spatial normalisation is essential for group analysis of brain images. To detect effects in a given region, that region must be in the same place in every individual. In SPM, normalisation works by fitting each image to a template, first using a rough 12-parameter affine transformation (translation, rotation, stretching and skewing in three dimensions) and then using a more detailed non-linear warp (Friston et al. 2007). Warping applies local stretching and shrinking that minimises the differences between the image and the template. This is great for healthy controls, but when the warping algorithm encounters a large, dark area in the brain it may try to shrink this area to make the image look "normal". Three main strategies have been proposed to deal with this:
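To give an intuition for the affine stage, the 12 parameters can be composed into a single 4x4 matrix from translation, rotation, zoom and shear components. This is a minimal NumPy sketch of one plausible parameterisation; SPM's exact conventions (parameter order, matrix composition, optimisation) differ:

```python
import numpy as np

def affine_from_params(tx, ty, tz, rx, ry, rz, zx, zy, zz, sx, sy, sz):
    """Build a 4x4 affine from 12 parameters: 3 translations,
    3 rotations (radians), 3 zooms and 3 shears. Illustrative
    parameterisation only, not SPM's exact convention."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]                      # translation
    Rx = np.array([[1, 0, 0, 0],
                   [0, np.cos(rx), -np.sin(rx), 0],
                   [0, np.sin(rx),  np.cos(rx), 0],
                   [0, 0, 0, 1]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry), 0],
                   [0, 1, 0, 0],
                   [-np.sin(ry), 0, np.cos(ry), 0],
                   [0, 0, 0, 1]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0, 0],
                   [np.sin(rz),  np.cos(rz), 0, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1]])
    Z = np.diag([zx, zy, zz, 1.0])               # stretching (zooms)
    S = np.eye(4)                                # skewing (shears)
    S[0, 1], S[0, 2], S[1, 2] = sx, sy, sz
    return T @ Rx @ Ry @ Rz @ Z @ S
```

Applying this matrix to homogeneous voxel coordinates moves the whole image rigidly plus global stretch and skew; it cannot produce the local deformations that the subsequent non-linear warp provides.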

Cost function masking
  • Define a mask that covers voxels with damage
  • Carry out warping ignoring voxels in the mask
  • The warp calculated from the intact voxels is then extrapolated over the masked region
  • Brett et al. (2001)
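The idea behind cost function masking can be sketched as a mismatch cost that simply ignores the masked voxels. This is a toy sum-of-squared-differences cost in NumPy, not SPM's actual objective function:

```python
import numpy as np

def ssd_cost(image, template, lesion_mask=None):
    """Sum-of-squared-differences mismatch between image and template.
    Voxels inside lesion_mask (True = lesioned) are excluded, so the
    lesion cannot drive the warp -- the principle of cost function
    masking (illustrative; SPM's cost function is more elaborate)."""
    diff2 = (image - template) ** 2
    if lesion_mask is not None:
        diff2 = diff2[~lesion_mask]   # keep only intact voxels
    return diff2.sum()

# Toy example: a dark "lesion" in an otherwise template-matching image
template = np.ones((4, 4))
image = template.copy()
image[1:3, 1:3] = 0.0            # simulated lesion (dark region)
mask = image == 0.0              # hand-drawn lesion mask
```

With the mask applied, the dark patch contributes nothing to the cost, so the optimiser has no incentive to shrink it to improve the fit.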

Unified normalisation
  • Unified normalisation includes a segmentation step
  • Warping is based on the segmented image: only considers tissue of interest
  • This may effectively remove the lesion and make cost function masking redundant
  • Ashburner & Friston (2005)
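One intuition for why the segmentation step helps: if registration is driven by tissue-class evidence rather than raw intensities, a lesion that resembles no tissue class carries little weight in the fit. A toy illustration with Gaussian intensity models for grey and white matter (hypothetical class means and widths; nothing like the full generative model of Ashburner & Friston, 2005):

```python
import numpy as np

def gm_wm_weight(image, gm_mean=0.5, wm_mean=0.8, sigma=0.05):
    """Weight each voxel by its likelihood of being grey or white
    matter under a toy Gaussian intensity model. A dark lesion
    resembles neither class, so its weight is near zero and it barely
    influences a weighted registration cost -- one intuition for why
    unified normalisation can make cost function masking redundant."""
    gm = np.exp(-(image - gm_mean) ** 2 / (2 * sigma ** 2))
    wm = np.exp(-(image - wm_mean) ** 2 / (2 * sigma ** 2))
    return np.maximum(gm, wm)

def weighted_ssd(image, template, weights):
    """Mismatch cost where low-weight (non-tissue) voxels count less."""
    return (weights * (image - template) ** 2).sum()
```

A lesion voxel with intensity near 0 receives an almost-zero weight, achieving much the same effect as an explicit mask without one being drawn.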

Warping regularisation
  • Stricter regularisation penalises unlikely warps
  • Makes shrinking of the lesion less likely
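Regularisation can be sketched as a roughness penalty added to the mismatch cost, so that an abrupt local warp, such as one collapsing a lesion, becomes expensive. This is a 1-D membrane-energy toy in NumPy; SPM penalises smoothness of full 3-D deformation fields:

```python
import numpy as np

def membrane_energy(disp):
    """Membrane (first-derivative) energy of a 1-D displacement field:
    the sum of squared finite differences. Smooth warps are cheap;
    abrupt local shrinking is expensive."""
    return (np.diff(disp) ** 2).sum()

def regularised_cost(data_term, disp, lam):
    """Total cost = image mismatch + lam * warp roughness. A larger
    lam (stricter regularisation) makes a warp that collapses the
    lesion more costly, so the optimiser avoids it."""
    return data_term + lam * membrane_energy(disp)

# A warp that abruptly shrinks a local region vs. a smooth warp
shrinking = np.array([0.0, 0.0, 3.0, -3.0, 0.0, 0.0])
smooth    = np.array([0.0, 0.5, 1.0, 1.0, 0.5, 0.0])
```

Even if the shrinking warp reduced the data term slightly, a sufficiently large lam makes its roughness penalty dominate, which is why stricter regularisation protects the lesion.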

Crinion et al. (2007) made a rigorous comparison of unified and standard normalisation, at several levels of warping regularisation, with or without cost function masking. Unified normalisation performed better than standard normalisation overall, reducing the variability in anatomical positioning and slightly improving fMRI results in a group analysis. Within unified normalisation, medium regularisation performed better than either low or high. Cost function masking had an effect only when regularisation was low, suggesting that medium or high regularisation is sufficient to prevent inappropriate warping of the image around the lesion.

The figure below shows images normalised with low (left), medium (middle) and high (right) warping regularisation. Shrinking of the lesion is obvious with low regularisation.


Crinion et al. focussed on fMRI results as their end point, but warping may also influence VBM using structural images. Stricter warping regularisation may go too far, preventing normal anatomical variation from being fitted to the template. The figure below shows unified normalisation with strict (left) and medium (right) warping regularisation. The images were skull-stripped with the canonical SPM brain mask as part of preprocessing for VBM. This makes it obvious that strict regularisation prevented a good fit, such that the dura was included in the skull-stripped image. [Images modified with Photoshop to reduce identifiability.]
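Skull-stripping with a binary mask amounts to elementwise multiplication; the sketch below assumes the mask has already been resliced into register with the image (hypothetical arrays, not SPM code):

```python
import numpy as np

def skull_strip(image, brain_mask):
    """Zero out non-brain voxels using a binary brain mask (e.g. a
    thresholded copy of SPM's canonical brain mask). If normalisation
    fitted poorly, dura lying inside the mask survives the stripping
    and can contaminate intensity-based analyses such as VBM."""
    return image * brain_mask

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
mask = np.array([[1, 0],
                 [0, 1]])        # toy binary brain mask
```

The mask only removes what falls outside it, which is why a poor normalisation fit leaves dura inside the stripped image rather than being caught at this step.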





[Figure: strict (left) and medium (right) regularisation.]

Unlike fMRI data, where there is no signal of interest arising from the dura, VBM is intensity-based, and inclusion of dura may substantially alter results. The take-home message here is that you should check the normalisation of each individual image carefully, and consider the suitability of the normalisation method for the type of analysis to be run.

References

Friston KJ, Ashburner JT, Kiebel SJ, Nichols TE, Penny WD (2007) Statistical Parametric Mapping, First Edition. London, UK: Academic Press.
See also: chapter 3 in Human Brain Function, available online http://www.fil.ion.ucl.ac.uk/spm/doc/books/hbf2/

Brett M, Leff AP, Rorden C, Ashburner J (2001) Spatial normalization of brain images with focal lesions using cost function masking. Neuroimage 14:486-500.

Ashburner J, Friston KJ (2005) Unified segmentation. Neuroimage 26:839-851.

Crinion J, Ashburner J, Leff A, Brett M, Price C, Friston K (2007) Spatial normalization of lesioned brains: Performance evaluation and impact on fMRI analyses. Neuroimage 37:866-875.

10 August 2011

Perception and semantics in the ventral stream

One of the goals of cognitive neuroscience is to understand how we process visual objects, and what functional contributions different neural regions make. Our ability to perceive and interact with the world critically relies on the ventral processing stream through occipital and temporal cortices, with increasingly anterior portions of the ventral stream responding to increasingly complex stimuli (Taylor, Moss & Tyler, 2007; Felleman & Van Essen, 1991; Tanaka, 1996; Tsunoda et al., 2001). As such, there is a relatively detailed account of how we process objects in a visual sense. However, an aspect of object recognition that is largely avoided concerns what an object means - its associated semantic knowledge. Instead, the dominant research strategy is either to focus on object recognition as a purely visual phenomenon, or to study semantics without recourse to visual effects.

Recognising what a visual object is not only requires that objects are processed visually, but also that the semantic knowledge associated with the object is evoked. As such, a comprehensive account of how we recognise what an object is requires bringing together theories of visual object recognition, and cognitive models of semantic knowledge within the same neurobiological framework. This is the approach we’ve been developing (see Taylor, Moss & Tyler, 2007 for review) - understanding not only the cognitive contributions of different brain regions, but also how meaning emerges across time (e.g. Clarke, Taylor & Tyler, 2011). Uncovering how we understand what we see requires the development of a comprehensive systems-level account of how we get from perceiving an object, to understanding what it is, and requires the synthesis of cognitive theories and neurobiological models - a fundamental component of cognitive neuroscience.

Clarke, A., Taylor, K.I., & Tyler, L.K. (2011). The evolution of meaning: Spatiotemporal dynamics of visual object recognition. Journal of Cognitive Neuroscience, 23(8), 1887-1899.

Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1, 1-47.

Tanaka, K. (1996). Inferotemporal cortex and object vision. Annual Review of Neuroscience, 19, 109-140.

Taylor, K. I., Moss, H. E., & Tyler, L. K. (2007). The conceptual structure account: a cognitive model of semantic memory and its neural instantiation. In J. Hart & M. Kraut (Eds.), The neural basis of semantic memory (pp. 265-301). Cambridge: Cambridge University Press.

Tsunoda, K., Yamane, Y., Nishizaki, M., & Tanifuji, M. (2001). Complex objects are represented in macaque inferotemporal cortex by the combination of feature columns. Nature Neuroscience, 4(8), 832-838.