Tokyo Research
Decoding musical pitch from human brain activity with automatic voxel-wise whole-brain fMRI feature selection
Authors
Vincent K. M. Cheung, Yueh-Po Peng, Jing-Hua Lin, and Li Su
Abstract
Decoding models seek to infer stimulus or task information from neural activity and play a central role in brain-computer interfaces. However, the high spatial resolution of fMRI means that the number of available features far exceeds the number of trials in a typical experiment. Although a common approach is to restrict features to a priori-defined regions of interest, relevant information present in other brain regions is consequently omitted. Here, we propose a two-stage thresholding approach that automatically pools relevant voxels from the whole brain to enhance decoding performance. Testing on an fMRI dataset of 20 subjects, we show that our approach significantly improves regression performance in decoding musical pitch, achieving a two-fold gain over restricting voxels to the auditory cortex. We further examine properties of the selected voxels, and compare performance between random forest and convolutional neural network decoders.
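The abstract does not specify the thresholding criteria, so the following is only a minimal sketch of the general idea on synthetic data: a lenient univariate screen over all voxels, a stricter second cut keeping the strongest candidates, and a random forest regressor trained on the pooled voxels. All thresholds, dimensions, and the embedded signal are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: trials x voxels, plus a continuous
# target (e.g. pitch in Hz). Real data would come from preprocessed scans.
n_trials, n_voxels = 100, 5000
y = rng.uniform(200.0, 800.0, n_trials)       # hypothetical pitch values
X = rng.standard_normal((n_trials, n_voxels))
X[:, :50] += 0.01 * y[:, None]                # embed signal in 50 "voxels"

# Per-voxel Pearson correlation with the target, computed in one pass.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = (Xc * yc[:, None]).sum(axis=0) / (
    np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
)

# Stage 1: lenient whole-brain screen (threshold is an assumption).
stage1 = np.flatnonzero(np.abs(r) > 0.1)

# Stage 2: stricter cut -- keep only the top candidates by |r|.
stage2 = stage1[np.argsort(np.abs(r[stage1]))[::-1][:100]]

# Decode the target from the pooled voxels with a random forest.
model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X[:, stage2], y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

In practice the first-stage statistics would be computed on training folds only, to avoid leaking test-set information into the voxel selection.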