Feature extraction in medical image processing remains a challenge, especially for high-dimensional datasets, where the number of available
samples is considerably lower than the dimension of the feature space. This is a
common problem in real-world data and, specifically, in medical image processing: while images are composed of hundreds of thousands of voxels, only a
small number of patients is available. Extracting descriptive and discriminative features allows each sample to be represented by a small number of features,
which is particularly important in classification tasks owing to the curse of dimensionality. In this paper we address this recognition problem by means of
sparse representations of the data, which also provide a framework for multimodal
image (PET and MRI) classification by combining specialized classifiers.
Thus, a novel method to effectively combine support vector classifiers (SVCs) is presented here,
which uses the distance to the separating hyperplane computed for each class in each classifier to select the most discriminative image modality in each case. The
discriminative power of each modality also provides information about the evolution
of the illness: while functional changes are clearly observed in patients
diagnosed with Alzheimer's disease (AD) when compared to control subjects (CN), structural changes appear
more relevant at the early stages of the illness, affecting Mild Cognitive Impairment (MCI) patients. Finally, classification experiments using 68 CN, 70 AD
and 111 MCI images, assessed by cross-validation, demonstrate the effectiveness of
the proposed method, achieving accuracies of up to 92% for CN/AD and 79% for
CN/MCI classification.
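The per-sample modality-selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes one linear SVC per modality (MRI and PET), uses synthetic Gaussian features as stand-ins for the sparse representations, and picks, for each test sample, the prediction of whichever classifier lies farther (in normalized distance) from its hyperplane, i.e. is more confident.

```python
# Hypothetical sketch: combine two per-modality SVCs by hyperplane distance.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for sparse feature vectors of each modality.
X_mri = rng.normal(size=(100, 20))
X_pet = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)
X_mri[y == 1] += 1.5  # make the two classes separable in each modality
X_pet[y == 1] += 0.8

# Train one specialized classifier per image modality.
clf_mri = SVC(kernel="linear").fit(X_mri[:80], y[:80])
clf_pet = SVC(kernel="linear").fit(X_pet[:80], y[:80])

# Signed distances to each hyperplane (decision_function gives w.x + b;
# dividing by ||w|| turns it into a geometric distance, comparable across
# the two classifiers).
d_mri = clf_mri.decision_function(X_mri[80:]) / np.linalg.norm(clf_mri.coef_)
d_pet = clf_pet.decision_function(X_pet[80:]) / np.linalg.norm(clf_pet.coef_)

# For each test sample, trust the modality whose classifier is more confident.
use_mri = np.abs(d_mri) >= np.abs(d_pet)
combined = np.where(use_mri, d_mri > 0, d_pet > 0).astype(int)
```

The same selection rule also yields, per sample, which modality drove the decision (`use_mri`), which is the kind of information the abstract relates to disease stage.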