Object-centered Control of Brain-computer Interface Systems in Three-dimensional Spaces Using an Intuitive Motor Imagery Paradigm
Yunshan HUANG, Keitaro TOYOKAWA, Tianyi ZHENG, Kenta SHIMBA, Kiyoshi KOTANI, Yasuhiko JIMBO
Vol. 14 (2025) p. 173-184
Motor imagery (MI)-based brain-computer interfaces (BCIs) have received increasing attention in recent years. While most existing electroencephalogram (EEG)-based MI-BCIs have demonstrated satisfactory performance in binary and four-class classification, intuitive systems with higher dimensionality are essential for object-centered control in daily living. However, increasing the dimensionality of MI-BCIs has been a major challenge, in large part because of the difficulty of selecting appropriate MI tasks. To address this problem, we proposed a novel MI paradigm comprising six real-life movements whose underlying intents align with object-centered control of external devices along three axes: front-back, left-right, and up-down. Perceived intuitiveness was evaluated with a questionnaire; the results from twelve participants indicated that the proposed MI tasks achieved closer cognitive matches than traditional tasks. Information-preserved data conversion was implemented to retain temporal, spatial, and band-power features in video-like data, and a spatiotemporal convolutional neural network (CNN) was then applied for six-class classification. The highest accuracy achieved by the proposed system in six-class within-participant classification was 46.88% on the hold-out validation dataset and 39.66% on the test dataset. Across seven healthy participants, the average validation accuracy was 33.99% and the average test accuracy was 26.26%, both significantly higher than the chance level of 16.67%. Furthermore, key metrics obtained from the confusion matrices suggested that both the similarity of somatotopic representations and the symmetry involved in the MI signals significantly affected classification performance.
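The abstract does not specify the exact form of the information-preserved data conversion. As a hedged illustration only, one common way to obtain video-like EEG data is to map each channel to a cell of a 2D scalp grid and stack short-time power frames in temporal order; the montage, grid layout, and window parameters below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of a "video-like" EEG conversion: the paper's actual
# method is not specified here. Each channel maps to a 2D grid cell, and
# windowed signal power is stacked over time into a (frames, H, W) tensor,
# preserving temporal order, spatial layout, and a band-power-like feature.

# Assumed (row, col) grid positions for a toy 4-channel montage.
CHANNEL_GRID = {"C3": (1, 0), "Cz": (1, 1), "C4": (1, 2), "Fz": (0, 1)}
GRID_SHAPE = (2, 3)

def to_video(eeg, fs, win=0.5, step=0.25):
    """Convert EEG (channels x samples) into a (frames, H, W) power tensor."""
    n_ch, n_samp = eeg.shape
    w, s = int(win * fs), int(step * fs)
    names = list(CHANNEL_GRID)
    frames = []
    for start in range(0, n_samp - w + 1, s):
        seg = eeg[:, start:start + w]
        power = np.mean(seg ** 2, axis=1)        # per-channel power proxy
        frame = np.zeros(GRID_SHAPE)
        for ch, p in zip(names, power):
            frame[CHANNEL_GRID[ch]] = p
        frames.append(frame)
    return np.stack(frames)                      # shape: (T, 2, 3)

rng = np.random.default_rng(0)
video = to_video(rng.standard_normal((4, 1000)), fs=250)
print(video.shape)
```

A spatiotemporal CNN would then treat the first axis as time and the remaining two as spatial dimensions, which is what allows temporal, spatial, and power information to be learned jointly.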
In this study, we introduced intuitive movements into MI-BCI paradigms and demonstrated their suitability for controlling external objects in 3D space. Our results indicate that compound MI signals involving multiple joints are separable by deep neural networks, which would greatly expand the tasks available to multiclass MI-BCIs. Although large individual differences were observed, the feasibility of adopting spatiotemporal CNN classifiers to decode complex MI signals was demonstrated. Based on our findings, future research with larger sample sizes should consider task-specific modifications to the architecture of the classification model.
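The claim that the reported accuracies exceed the six-class chance level of 16.67% can be illustrated with an exact one-sided binomial test. The per-participant trial count below is a hypothetical placeholder (the abstract does not report it); only the accuracy and the 1/6 chance level come from the text.

```python
from math import comb

def binom_sf(k, n, p):
    """Exact one-sided tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_trials = 240                  # assumed trial count, not from the paper
accuracy = 0.2626               # reported average test accuracy
k_correct = round(accuracy * n_trials)
p_value = binom_sf(k_correct, n_trials, 1 / 6)
print(p_value < 0.05)           # accuracy significantly above chance
```

With these assumed numbers the tail probability is far below 0.05, consistent with the abstract's statement; in practice the test would use each participant's actual trial counts.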