
Feature Contribution Analysis

David López-García edited this page Oct 15, 2021 · 2 revisions

Overview

Classification algorithms are usually treated as black boxes. However, highly useful information can be extracted from them under specific circumstances. For example, the value of a feature weight, obtained after training an SVM model, can sometimes be correctly interpreted as a measure of that feature's contribution to the model's decision boundary; in other words, as a measure of its importance.


Figure 1. Visual representation of SVM and DA classifiers.


As shown in Figure 1, the feature weight vector contains the coefficients of ω, a vector orthogonal to the separating hyperplane. However, as mentioned above, this interpretation is only valid under certain scenarios (e.g. linear classifiers, features on the same scale, no data transformations such as PCA, etc.).
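As a language-agnostic illustration (plain NumPy, not MVPAlab code), the weight vector of a simple linear discriminant classifier can be computed directly; under the conditions above, the informative feature receives the largest absolute weight. The data and class structure below are purely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two classes that differ only along feature 0
X0 = rng.normal(0.0, 1.0, size=(200, 3)) + np.array([2.0, 0.0, 0.0])
X1 = rng.normal(0.0, 1.0, size=(200, 3)) - np.array([2.0, 0.0, 0.0])

# Linear discriminant weight vector w = Sigma^-1 (mu0 - mu1),
# which is orthogonal to the separating hyperplane
pooled_cov = 0.5 * (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False))
w = np.linalg.solve(pooled_cov, X0.mean(axis=0) - X1.mean(axis=0))

# Only feature 0 carries class information, so it gets the largest |weight|
print(np.argmax(np.abs(w)))  # -> 0
```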

Even when all these requirements are met, interpreting raw feature weights can lead to wrong conclusions about the origin of the neural signals of interest. A widespread misconception about feature weights is that channels with large weights must be related to the experimental condition of interest, which is not always justified. In fact, large weight amplitudes can be observed for channels that do not contain the signal of interest, and vice versa. To solve this problem, Haufe et al. [1] proposed a procedure to transform feature weights so they can be interpreted as the origin of neural processes in space, which leads to more accurate interpretations in neuroscience studies.
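The core of this correction can be sketched in a few lines. For a linear model with weight vector w, the corresponding activation pattern is a = Cov(X) · w. The toy example below (plain NumPy, not MVPAlab code; the two-channel setup is an illustrative assumption) builds one channel carrying signal plus noise and one channel carrying only the shared noise: the raw weights are equally large on both channels, while the corrected pattern is essentially zero on the noise-only channel:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
signal = rng.normal(size=n)  # neural signal of interest
noise = rng.normal(size=n)   # shared distractor

# Channel 0 records signal + noise; channel 1 records only the noise
X = np.column_stack([signal + noise, noise])

# The optimal extraction filter subtracts the noise channel (w . x = signal),
# so its raw weights are equally large on both channels
w = np.array([1.0, -1.0])

# Haufe correction: activation pattern a = Cov(X) @ w
a = np.cov(X, rowvar=False) @ w
print(np.round(a, 2))  # approximately [1. 0.]
```

Reading the raw weights, channel 1 would appear just as "important" as channel 0, even though it contains no signal of interest; the corrected pattern removes this ambiguity.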

This useful procedure is implemented in the MVPAlab Toolbox. During any decoding analysis, MVPAlab extracts and saves the raw weight vectors and their Haufe-corrected versions in a time-resolved way. Thus, the contribution (or importance) of each electrode to the classification performance can be evaluated at any given timepoint. Additionally, and only if channel location information is available, MVPAlab can create animated plots representing the evolution of the distribution of weights over a scalp template. This analysis can be computed at the group level or for a specific participant.
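A time-resolved version of the correction simply applies the transform at every timepoint. The following sketch (plain NumPy; the array names and shapes are illustrative assumptions, not the MVPAlab internals) corrects a channels × timepoints weight matrix using the channel covariance computed at each timepoint:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels, n_timepoints = 200, 8, 50

X = rng.normal(size=(n_trials, n_channels, n_timepoints))  # epoched data
W = rng.normal(size=(n_channels, n_timepoints))            # raw weights per timepoint

A = np.empty_like(W)  # Haufe-corrected patterns, same shape as W
for t in range(n_timepoints):
    cov_t = np.cov(X[:, :, t], rowvar=False)  # channel covariance at time t
    A[:, t] = cov_t @ W[:, t]

print(A.shape)  # (8, 50)
```

Each column of A can then be mapped onto a scalp template to visualize how the weight distribution evolves over time.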


Figure 2. Time-resolved MVPA results. (a) Decoding performance (f1-score) for different classification models at the group level: support vector machine vs. linear discriminant analysis. Single-subject plots are represented with dashed and dotted lines. Significant clusters are highlighted using horizontal colored bars. Shaded areas represent the standard error of the mean. (b) Group-level weight distribution (corrected) for three different time windows: T1: 50-150 ms, T2: 350-450 ms and T3: 850-950 ms. (c) Weight amplitude for each channel, sorted by importance.


Configuration

Feature contribution analysis is disabled by default but can be enabled in the configuration file as follows:

cfg.classmodel.wvector = true;

Result representation

Once the decoding analysis is complete, the graphical user interface can be used to generate static or animated topographical representations of each feature's contribution to the overall decoding performance.

  1. Open the plot utility.
  2. Load the time-resolved decoding result by clicking on the Select file button.
  3. Under the Weight analysis tab, load the weight vector by clicking on the Select button.
  4. Configure the parameters for the weight analysis:
    • Users can select a specific time window to be represented.
    • The representation can be static (averaged across timepoints) or animated.
    • The animation speed can also be modified.
    • The feature contribution analysis can be represented at the group level (averaged across subjects) or for individual subjects.
    • Corrected feature weights are preferred for representation, but raw weights can also be represented.
  5. Click on the Plot button to generate the graphical representation.


Supplementary material, such as temporal animations of feature contributions, is available to download from an Open Science Framework project: MVPAlab Temporal animations. The Supplementary Material folder includes several video files (.mov) recording the temporal distribution of the channels contributing to the decoding accuracy. Animations of raw and corrected feature weights, both for individual participants and group-averaged, are included.
