In other words, there is no indication of how well the whole-brain data would allow us to be confident that the pattern of activation was characteristic of a patient rather than a healthy subject. Instead of being able to make a prediction, on a network-wide basis, about whether an individual is healthy, we would be confined to making statements about the overall separation of groups at a particular voxel or in a particular region of interest, on the basis (typically) of a t statistic.

Figure 1. Data flow in a simple 2-task functional magnetic resonance imaging (fMRI) experiment (alternating blocks of each task) through traditional univariate analysis with general linear modeling (GLM) and support vector machine (SVM) analysis. The univariate …

New developments in analyzing brain imaging data: machine learning methods

These observations on the mainstream fMRI analysis status quo have been made by a number of statisticians and neuroscientists in recent years.8,9 In response to the issues described above, growing interest has come to be focused on a group of analysis techniques that have been described as "brain-reading" or "brain-decoding" methods10 that belong to a broad group of techniques known collectively as machine learning.11 The basic idea of these methods is that, instead of analyzing the brain voxel by voxel, data from groups of voxels (ROI) or indeed
from the whole brain, are used to train a computer program. In one set of classification methods, the most common variant of which is called the support vector machine (SVM), the program will typically find a boundary (referred
to as a hyperplane in the relevant literature because it exists in high-dimensional space) between different classes of data (eg, data from patients and data from controls, either from structural images or from the same fMRI experiment). Once this boundary has been located, predictions can be made for data not in the training data set. For example, having trained the program to distinguish controls from depressed patients and to define the optimal hyperplane that achieves this distinction, a new subject could be classified as belonging to the "patient" or the "control" class based on the relationship of their data to the hyperplane. The specificity and sensitivity of these predictions can be examined using standardized statistical approaches. In the most common of these, the so-called "leave-one-out" methods, the computer program is trained on all the subjects but one and tested on the remaining individual. This is repeated until all the subjects have been the "one left out." By averaging the results across all the tests it is possible to compute the sensitivity and specificity, where sensitivity here refers to the probability of correctly classifying a patient as a patient, and specificity to the probability of correctly classifying a control as a control.
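To make the hyperplane idea concrete, the following is a minimal sketch, not the method of any study discussed here: a linear SVM fitted by Pegasos-style stochastic sub-gradient descent on the L2-regularized hinge loss, applied to entirely synthetic "voxel" data in which patients and controls are hypothetical groups differing in the mean activation of a few voxels. All variable names and numbers are illustrative assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Fit a linear SVM by stochastic sub-gradient descent on the
    L2-regularized hinge loss (Pegasos-style). Labels y must be +/-1."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            if y[i] * (X[i] @ w) < 1:      # inside the margin: hinge loss active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                          # outside the margin: shrink only
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    """Classify each row of X by which side of the hyperplane it falls on."""
    return np.where(X @ w >= 0, 1, -1)

# Hypothetical synthetic data: 20 controls and 20 patients, with the
# patients' mean activation shifted in the first 5 of 50 "voxels".
rng = np.random.default_rng(42)
n_per, d = 20, 50
controls = rng.normal(size=(n_per, d))
patients = rng.normal(size=(n_per, d))
patients[:, :5] += 2.0                       # group difference in a few voxels
X = np.vstack([controls, patients])
X = np.hstack([X, np.ones((2 * n_per, 1))])  # constant feature acts as the bias
y = np.array([-1] * n_per + [1] * n_per)     # -1 = control, +1 = patient

w = train_linear_svm(X, y)
train_acc = float((predict(w, X) == y).mean())
print(f"training accuracy: {train_acc:.2f}")
```

The learned weight vector w defines the hyperplane; a new subject's pattern would be classified by the sign of its projection, exactly the "which side of the boundary" decision described above. In practice one would use an established implementation (eg, scikit-learn) rather than hand-rolled training.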
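The leave-one-out procedure can likewise be sketched in a few lines. To keep the example self-contained, a simple nearest-mean (centroid) classifier stands in for the SVM; the cross-validation loop and the sensitivity/specificity definitions are the same regardless of the classifier. The data are again synthetic and illustrative.

```python
import numpy as np

def nearest_mean(X_train, y_train, x_new):
    """Stand-in classifier: assign the held-out subject to the class whose
    mean pattern (centroid) is nearer. Returns 1 (patient) or 0 (control)."""
    mu_pat = X_train[y_train == 1].mean(axis=0)
    mu_con = X_train[y_train == 0].mean(axis=0)
    return int(np.linalg.norm(x_new - mu_pat) < np.linalg.norm(x_new - mu_con))

def loo_sensitivity_specificity(X, y, classify):
    """Leave-one-out: train on all subjects but one, test on the one left
    out, and repeat until every subject has been the holdout."""
    tp = tn = fp = fn = 0
    n = len(y)
    for i in range(n):
        mask = np.arange(n) != i
        pred = classify(X[mask], y[mask], X[i])
        if y[i] == 1 and pred == 1:
            tp += 1                        # patient correctly called a patient
        elif y[i] == 1:
            fn += 1
        elif pred == 0:
            tn += 1                        # control correctly called a control
        else:
            fp += 1
    sensitivity = tp / (tp + fn)           # P(classified patient | patient)
    specificity = tn / (tn + fp)           # P(classified control | control)
    return sensitivity, specificity

# Hypothetical synthetic data: group difference in a few "voxels".
rng = np.random.default_rng(0)
n_per, d = 20, 50
controls = rng.normal(size=(n_per, d))
patients = rng.normal(size=(n_per, d))
patients[:, :5] += 2.0
X = np.vstack([controls, patients])
y = np.array([0] * n_per + [1] * n_per)    # 0 = control, 1 = patient

sens, spec = loo_sensitivity_specificity(X, y, nearest_mean)
print(f"sensitivity: {sens:.2f}, specificity: {spec:.2f}")
```

Because the held-out subject never contributes to training, the averaged results estimate how the classifier would perform on a genuinely new individual, which is the clinically relevant question.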