In a collaborative study, researchers at the Tokyo Institute of Technology have developed a new technique to decode human motor intention from electroencephalography (EEG). The technique builds on the well-documented ability of the brain to predict the sensory outcomes of self-generated and imagined actions using so-called forward models. For the first time, the method achieved nearly 90% single-trial decoding accuracy across all tested subjects, within 96 ms of stimulation, with zero user training and no additional cognitive load on the users.
The ultimate goal of brain-computer interface (BCI) research is to develop an efficient connection between machines and the human brain, such that the machines can be used at will; for example, enabling an amputee to use an attached robotic arm just by thinking, as if it were their own arm. One significant challenge for such a task is deciphering a human user's movement intention from their brain activity while minimizing the user's effort. Although various methods have been proposed over the last two decades, they all demand considerable effort on the part of the human user: they either require extensive user training, work well for only a subset of users, or rely on a conspicuous stimulus that induces additional attentional and cognitive load. In this study, researchers from Tokyo Institute of Technology (Tokyo Tech), the Centre national de la recherche scientifique (CNRS, France), AIST, and Osaka University propose a new movement intention decoding philosophy and technique that overcomes all of these issues while also providing much better decoding performance.
The fundamental difference between the previous methods and the new proposal lies in what is decoded. All previous methods decode what movement a user intends or imagines, either directly (as in so-called active BCI systems) or indirectly, by decoding what the user is attending to (as in reactive BCI systems). Here, the researchers instead apply a subliminal sensory stimulation and decode from the electroencephalography (EEG) not what movement the user intends or imagines, but whether or not the intended movement matches the sensory feedback delivered by the stimulator. Their proposal is motivated by the multitude of studies on so-called forward models in the brain: the neural circuitry implicated in predicting the sensory outcomes of self-generated movements. Sensory prediction errors, the discrepancies between forward-model predictions and the actual sensory signals, are known to be fundamental to our sensorimotor abilities, including haptic perception, motor control, motor learning, and even interpersonal interaction and the cognition of self. The researchers therefore hypothesized that prediction errors leave a strong signature in the EEG, and that perturbing them with an external sensory stimulator would be a promising way to decode movement intention.
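As a rough conceptual illustration only (not the authors' implementation; the function names and the binary left/right encoding are assumptions), the following Python sketch shows the idea of a forward-model prediction error: the brain predicts the sensory consequence of an intended movement, and an error arises whenever the feedback actually received contradicts that prediction.

```python
# Conceptual sketch of a forward-model prediction error (illustrative only;
# names and the left/right encoding are assumptions, not the authors' code).

def forward_model_prediction(intended_turn: str) -> str:
    """Predict the vestibular feedback expected from an intended turn."""
    # For an intended left turn the brain expects left-directed vestibular
    # feedback, and likewise for a right turn.
    return intended_turn  # 'left' or 'right'

def prediction_error(intended_turn: str, actual_feedback: str) -> bool:
    """True when the received feedback contradicts the forward-model prediction."""
    return forward_model_prediction(intended_turn) != actual_feedback

# A stimulus that matches the intention produces no prediction error,
# whereas a mismatching stimulus does.
print(prediction_error("left", "left"))   # False: feedback matches intention
print(prediction_error("left", "right"))  # True:  feedback contradicts intention
```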
This proposal was tested in a binary simulated-wheelchair task, in which users imagined turning their wheelchair either left or right. The researchers subliminally stimulated each user's vestibular system (the dominant sensory feedback during turning) toward either the left or the right using a galvanic vestibular stimulator. They then decoded the presence or absence of prediction errors (i.e., whether the stimulation direction matched the direction the user imagined) and, because the stimulation direction was known, inferred the direction the user imagined. This procedure yielded excellent single-trial decoding accuracy (87.2% median) in all tested subjects, within 96 ms of stimulation. These results were obtained with zero user training and no additional cognitive load on the users, as the stimulation was subliminal.
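The inference step can be pictured with the minimal Python sketch below. It assumes a trained binary classifier (not shown, and not the authors' actual pipeline) that labels a post-stimulation EEG epoch as "match" or "mismatch"; since the stimulation direction is known, the imagined direction then follows directly.

```python
# Minimal sketch of the inference logic (assumed names; the match/mismatch
# classifier is a placeholder, not the authors' code).

def decode_intended_direction(eeg_epoch, stimulation_direction: str,
                              classify_match) -> str:
    """Infer the imagined turn direction from a single post-stimulation EEG epoch.

    classify_match: a trained binary classifier returning True when the epoch
    shows no prediction-error signature (stimulation matched the intention).
    """
    if classify_match(eeg_epoch):
        # No prediction error: the subliminal stimulation agreed with the intention.
        return stimulation_direction
    # Prediction error detected: the intention was opposite to the stimulation.
    return "right" if stimulation_direction == "left" else "left"

# Toy usage with a stand-in classifier (a real system would use a classifier
# trained on labelled EEG epochs):
dummy_classifier = lambda epoch: epoch == "no-error-signature"
print(decode_intended_direction("no-error-signature", "left", dummy_classifier))  # left
print(decode_intended_direction("error-signature", "left", dummy_classifier))     # right
```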