This really, really depends on what sort of scans you have and what sort of analysis you want to do.

If you just want an anatomical scan, you don’t need to do preprocessing. The output from the scanner will probably be a DICOM file, and you can view the raw scan in any one of a number of free programs (the one I primarily use is also useful for batch-anonymising data).

If you want to analyse fMRI data, then it’s a lot more complicated: which steps you actually need, and how exactly you carry them out, will depend on what sort of analysis you want to do.

The primary steps in single-subject preprocessing are as follows (as I said, the order, and which steps you need, depend on the analysis you’re doing, but also on what software you’re using):

1. Removing any unwanted initial TRs.
2. Slice timing alignment on volumes: Each slice of a volume is collected at a slightly different time. Most programs will shift the timing so that all the slices of each TR are treated as if they were acquired at the same time.
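A minimal sketch of these first two steps, assuming the data is already loaded as a NumPy array of shape (x, y, z, t); the TR, the ascending slice order, and the dummy-scan count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
TR = 2.0                                # repetition time in seconds (assumed)
n_dummy = 4                             # number of initial TRs to discard (assumed)
data = rng.normal(size=(4, 4, 3, 40))   # toy (x, y, z, t) dataset

# Step 1: drop the unwanted initial TRs.
data = data[..., n_dummy:]
n_t = data.shape[-1]

# Step 2: slice-timing correction by linear interpolation.
# Assume ascending acquisition: slice k is sampled at an offset of
# k * TR / n_slices after the start of each TR.
n_slices = data.shape[2]
vol_times = np.arange(n_t) * TR         # nominal volume onsets
corrected = np.empty_like(data)
for k in range(n_slices):
    acq_times = vol_times + k * TR / n_slices   # when slice k was really sampled
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            # Shift each slice's timeseries so all slices share vol_times.
            corrected[i, j, k, :] = np.interp(vol_times, acq_times, data[i, j, k, :])

print(corrected.shape)   # (4, 4, 3, 36) after dropping 4 dummy TRs
```

Real packages use fancier (e.g. sinc) interpolation, but the idea is the same: resample every slice’s timeseries onto a common time grid.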
3. Volume registration: You need to align all the different TRs to the same spatial coordinates. Because your analysis is going to be of activation over time, you have to know which voxel (3D pixel) at TR $n$ corresponds to which voxel at TR $n+x$, for every $x$ in your timeseries. The functional data is also usually aligned with the anatomical data, in order to make it easier to tell what activation is happening where. Each subject may also be aligned to some atlas space, such as Talairach or MNI, to make it easier both to do group analyses and to generalise results across studies. Alignment with an atlas is usually done anatomy to anatomy, as the anatomical scans are the highest resolution and have the highest contrast between different brain structures. If we do all of these, we end up with a transformation matrix from each TR to the main functional space, from functional space to anatomical space, and from anatomical space to atlas space. These transformation matrices allow us to sync up all the data spatially across time. Each of these three steps is independent, and aligning with anatomy and/or with an atlas space is often done later, after smoothing.
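Since each alignment step yields a transformation matrix, chaining them is just matrix multiplication. A toy sketch using pure translations for simplicity (real pipelines use rotations too, and the atlas step is often a nonlinear warp rather than a single matrix):

```python
import numpy as np

# Toy 4x4 affine transforms in homogeneous coordinates; the values are invented.
def translation(dx, dy, dz):
    m = np.eye(4)
    m[:3, 3] = [dx, dy, dz]
    return m

func_from_tr    = translation(0.5, -0.2, 0.1)    # motion correction for one TR
anat_from_func  = translation(2.0, 1.0, -3.0)    # functional-to-anatomical alignment
atlas_from_anat = translation(-10.0, 5.0, 8.0)   # anatomical-to-atlas (e.g. MNI)

# Because each step is just a matrix, the full chain is a single product:
atlas_from_tr = atlas_from_anat @ anat_from_func @ func_from_tr

voxel = np.array([10.0, 20.0, 30.0, 1.0])        # homogeneous voxel coordinate
print(atlas_from_tr @ voxel)                      # that voxel's position in atlas space
```

This is why the three alignments can be computed independently and applied in one resampling step: you compose the matrices first and interpolate the data only once.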

Motion correction itself is usually done with six parameters: translation along the three spatial axes, and rotation around those axes. (These six estimates are often later fed into the analysis as nuisance regressors.)
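Those six parameters define a rigid-body transform. A sketch of how they combine into a single matrix (the composition order here is one common convention; packages differ):

```python
import numpy as np

def rigid_body(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 rigid-body transform from the six motion parameters:
    translations along x, y, z, and rotations (radians) about those axes."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    m = np.eye(4)
    m[:3, :3] = Rz @ Ry @ Rx      # rotation part (order is a convention)
    m[:3, 3] = [tx, ty, tz]       # translation part
    return m

# Zero motion should give the identity transform.
print(np.allclose(rigid_body(0, 0, 0, 0, 0, 0), np.eye(4)))   # True
```

The "rigid body" assumption is exactly what step 3 relies on: the brain is treated as a solid object that only shifts and rotates between TRs.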

4. Skull stripping and brain masking: To make the analysis cleaner and much, much quicker, we usually want to run the group-level analyses just on the brain, or even just on the specific part of the brain that interests us. There’s certainly not much reason to put the skull, neck, or surrounding “empty” space into a group analysis (although, that being said, you might well want to do so as a sanity check, or as a way of estimating the probability of type I errors; usually this is done on the single-subject data, not the group analysis, so you’ll still want a mask). There are many different ways of doing this, but basically you want to end up with a spatial mask of the areas of interest.
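A crude illustration of what a mask buys you, using a simple intensity threshold on a toy dataset (real tools such as FSL’s BET or AFNI’s 3dAutomask are far more sophisticated):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=100, scale=10, size=(8, 8, 8, 50))   # toy (x, y, z, t) dataset

# Threshold the mean image to build a boolean brain mask; the 60th-percentile
# cutoff is an arbitrary illustrative choice.
mean_img = data.mean(axis=-1)
mask = mean_img > np.percentile(mean_img, 60)

# Analysing only in-mask voxels shrinks the problem considerably:
in_mask = data[mask]             # shape: (n_voxels_in_mask, n_TRs)
print(int(mask.sum()), "of", mask.size, "voxels kept")
```

The payoff is both speed and statistics: every out-of-brain voxel you exclude is one fewer test to correct for in the multiple-comparisons stage.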
5. Smoothing (or, I would say, blurring): Not every analysis needs this; indeed some (such as network analyses) could be badly biased or changed by smoothing, but in general some spatial smoothing is usually applied to the data: each voxel is transformed into a weighted average of itself and the voxels surrounding it, with a Gaussian weighting curve. This is a theoretically debatable step, but basically it amplifies strong effects and effects with a broad spatial extent, and reduces small or spatially constrained ones. It also, hopefully, reduces unwanted noise, but in a completely unmotivated manner. That can be good or bad, depending on what you want to do and how much of a purist you happen to be. Some people smooth just within the grey matter (often on an “inflated” brain, i.e. one in which all the gyri and sulci have been spatially flattened) so as not to introduce grey-matter activation into white matter. One problem with smoothing is that it makes it much harder to find effects in small brain structures, such as parts of the basal ganglia. As our imaging techniques get better and more fine-grained, we increase our ability to find these small areas of activation despite the high noise of the BOLD measure.
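Smoothing kernels are conventionally quoted as a full width at half maximum (FWHM) in millimetres, which relates to the Gaussian sigma by FWHM = 2 * sqrt(2 * ln 2) * sigma. A one-dimensional sketch (a full 3-D smooth is just this kernel applied separably along each spatial axis; the FWHM and voxel size are example values):

```python
import numpy as np

fwhm_mm = 6.0                  # kernel width, as usually reported (assumed)
voxel_mm = 3.0                 # voxel size (assumed)
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

# Build a normalised 1-D Gaussian kernel.
radius = int(np.ceil(3 * sigma_vox))
x = np.arange(-radius, radius + 1)
kernel = np.exp(-0.5 * (x / sigma_vox) ** 2)
kernel /= kernel.sum()

signal = np.zeros(21)
signal[10] = 1.0                                   # a single "active" voxel
smoothed = np.convolve(signal, kernel, mode="same")

print(round(smoothed.sum(), 6))   # 1.0 -- smoothing redistributes, not creates, signal
```

This makes the trade-off concrete: the spike’s peak shrinks and its neighbours gain signal, which is exactly why small structures get washed out.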
6. Scaling: Many people scale the activation across time, that is, they divide each voxel’s timeseries by its own mean (usually after subtracting that mean, so the values centre on zero) and multiply by 100. This changes the measure from raw BOLD signal to scaled BOLD, often termed “percent signal change”, because a value of 20, say, means the signal at that timepoint is 20% above that voxel’s mean. Again, this is a potentially problematic step, made in order to aid interpretation. It’s good in that it allows you to see effects in regions of the brain that have weaker BOLD responses, and to deal more easily with inter-subject variance in overall activation levels. However, phenomena such as drift in the magnetic field can introduce a real bias here, as they can make it look as if the response magnitude is increasing or decreasing over time, even when it is actually staying more or less constant.
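The steps above, for one voxel’s timeseries, amount to a one-liner (the baseline level and noise here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
ts = 500.0 + rng.normal(scale=5.0, size=200)   # toy BOLD timeseries for one voxel

# Percent signal change: express each timepoint relative to the voxel's mean.
psc = 100.0 * (ts - ts.mean()) / ts.mean()

# The mean is removed by construction, so psc is centred on zero;
# a value of 2.0 would mean "2% above this voxel's mean signal".
print(round(float(abs(psc).max()), 1), "% peak deviation")
```

Note that the units are now comparable across voxels and subjects, which is the whole point, but any slow scanner drift is baked into the mean and therefore into every scaled value.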

Other, more optional steps:

1. Despiking: Removing spikes in the time series, i.e. short-lived (one or two TRs, usually) order-of-magnitude changes in the BOLD signal. These can really throw off a general linear model, or a correlation analysis of some sort, if that’s what you’re going to be doing, because they give you massive outliers. They’re often the result of movement (in which case they’ll be seen across the whole brain), but can also just be random noise in the magnetic field.
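One simple, robust way to do this is to flag points that are many median-absolute-deviations from the median and clip them back (dedicated tools such as AFNI’s 3dDespike are more sophisticated; the threshold here is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(3)
ts = 100.0 + rng.normal(size=120)   # toy voxel timeseries
ts[40] = 160.0                      # inject a short-lived spike (e.g. head movement)

# Robust z-scores: median and MAD are barely affected by the spike itself.
med = np.median(ts)
mad = np.median(np.abs(ts - med))
z = (ts - med) / (1.4826 * mad)     # 1.4826 makes MAD comparable to a std dev
threshold = 5.0                     # assumed cutoff
spikes = np.abs(z) > threshold

# Clip flagged points back to the threshold rather than deleting them.
ts_clean = ts.copy()
ts_clean[spikes] = med + np.sign(ts[spikes] - med) * threshold * 1.4826 * mad

print(int(spikes.sum()), "spike(s) flagged")   # expect the injected one at index 40
```

Using the median and MAD (rather than mean and standard deviation) matters: the spike would otherwise inflate the very statistics used to detect it.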
2. Removal of cardiac and respiratory regressors, if you have measures of those. Removing these before your main regression model can give you a much cleaner final result than putting them into the main model. We know, for example, that much of the lower half of the brain “pulses” with each heartbeat, and that breathing causes the face and neck to move quite significantly. Because these are not whole-head movements, they probably won’t be picked up by the volume registration, which assumes that the brain is a rigid body moving in space.
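Removing such regressors amounts to an ordinary least-squares projection. A sketch with synthetic physiological signals (the frequencies, amplitudes, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n_t = 200
t = np.arange(n_t)
cardiac = np.sin(2 * np.pi * t / 15.0)       # toy cardiac regressor
resp = np.sin(2 * np.pi * t / 60.0)          # toy respiratory regressor
noise = rng.normal(scale=0.1, size=n_t)
voxel = 3.0 * cardiac + 1.5 * resp + noise   # voxel contaminated by both

# Fit the nuisance regressors (plus an intercept) and keep the residuals.
X = np.column_stack([np.ones(n_t), cardiac, resp])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
cleaned = voxel - X @ beta

print(round(float(np.std(voxel)), 2), "->", round(float(np.std(cleaned)), 2))
```

The residual variance drops to roughly the noise level, which is the cleaner starting point you then hand to the main task regression.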

But what about other analyses? If you want to do DTI analysis, your processing pipeline is going to be different again. If you want to do resting-state correlation analysis, you’re going to have to make a whole load of decisions about which analyses are justified and which aren’t.

If you want to do some analyses on by-subject anatomical ROIs, you’ll optimally go through each subject and draw their own anatomical ROI, but you might just do it at the group level after you’ve done the warping. Which anatomical structure you’re interested in will affect how important it is to do it at the single-subject level (which usually means doing it by hand, can be very, very laborious, and leaves you at risk of biasing the data, either on purpose or by mistake, especially if you know what your hypothesis is). If you’re doing it by hand, it should be blinded as much as possible: preferably give it to someone who doesn’t know what you’re looking for and doesn’t know anything about the subject they’re doing it for, and, if at all possible, verify the ROIs with at least a few doubled attempts, either within or between researchers, to lower the risk of bias.
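A common way to quantify how well two such doubled attempts agree is the Dice coefficient. A sketch with two hypothetical hand-drawn masks of the same structure:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary ROI masks (1.0 = perfect agreement)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical attempts at the same structure, drawn by different raters:
rater1 = np.zeros((10, 10, 10), dtype=bool)
rater2 = np.zeros((10, 10, 10), dtype=bool)
rater1[3:7, 3:7, 3:7] = True          # a 4x4x4 cube (64 voxels)
rater2[4:8, 3:7, 3:7] = True          # the same cube shifted one voxel in x

print(round(dice(rater1, rater2), 2))   # 0.75: 48 shared voxels out of 64 + 64
```

A pre-registered agreement threshold (say, redraw any ROI pair below some Dice value) gives the blinding procedure some teeth.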

All of this is only a general outline. If you’re doing this sort of analysis, you should really understand both what you’re doing and what you want to (and should) be doing.