Robust Fusion of c-VEP and Gaze

Berkan Kadioglu, Ilkay Yildiz, Pau Closas, Melanie B. Fried-Oken, Deniz Erdogmus

Research output: Contribution to journal › Article › peer-review
Abstract

Brain-computer interfaces (BCIs) are an emerging technology that serves as a communication interface for people with neuromuscular disorders. Electroencephalography (EEG) and gaze signals are among the inputs commonly used for the user intent classification problem arising in BCIs. Fusing different input modalities, i.e., EEG and gaze, is a straightforward yet effective way to achieve high performance on this problem. Although simplistic approaches exist for fusing these two sources of evidence, a more effective method is required to reach classification accuracies and speeds suitable for real-life scenarios. One of the main problems that remains unaddressed is highly noisy real-life data. In the BCI framework used in this article, noisy data stem from user error in the form of tracking a nontarget stimulus, which in turn yields misleading EEG and gaze signals. We propose a method for fusing the aforementioned evidence in a probabilistic manner that is highly robust against noisy data. We evaluate the proposed method on real EEG and gaze data for different configurations of noise control variables. Compared to the regular fusion method, the robust method achieves up to 15% higher classification accuracy.
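To make the fusion idea concrete, the following is a minimal sketch of probabilistic (Bayesian) evidence fusion with a simple bounded-influence robustification, in the spirit of the abstract's keywords (Bayesian fusion, M-estimation). The function names, data shapes, and the clipping-based weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def fuse_log_likelihoods(eeg_loglik, gaze_loglik, prior, clip=None):
    """Fuse per-class log-likelihoods from two modalities.

    eeg_loglik, gaze_loglik: arrays of shape (n_classes,), log p(evidence | class).
    prior: array of shape (n_classes,), prior class probabilities.
    clip: optional bound on each modality's (centered) log-likelihood contribution;
          bounding the influence of any single, possibly corrupted, evidence term
          is one simple M-estimation-style robustification (an assumption here).
    """
    if clip is not None:
        # Center each modality's log-likelihoods, then bound their influence.
        eeg_loglik = np.clip(eeg_loglik - eeg_loglik.max(), -clip, 0.0)
        gaze_loglik = np.clip(gaze_loglik - gaze_loglik.max(), -clip, 0.0)
    log_post = np.log(prior) + eeg_loglik + gaze_loglik
    log_post -= np.logaddexp.reduce(log_post)  # normalize to a posterior
    return np.exp(log_post)

# Toy example: 4 candidate stimuli, gaze evidence corrupted by user error
# (the user tracked a nontarget stimulus).
prior = np.full(4, 0.25)
eeg_ll  = np.array([-1.0, -3.0, -3.5, -4.0])   # EEG favors class 0
gaze_ll = np.array([-9.0, -0.5, -8.0, -8.5])   # corrupted gaze favors class 1

print(fuse_log_likelihoods(eeg_ll, gaze_ll, prior))            # naive fusion
print(fuse_log_likelihoods(eeg_ll, gaze_ll, prior, clip=2.0))  # bounded influence
```

In this toy run, naive fusion lets the corrupted gaze evidence dominate the posterior, whereas the clipped variant keeps the EEG-favored class competitive; this only illustrates why bounding the influence of noisy evidence helps, not how the paper's estimator is derived.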

Original language: English (US)
Article number: 8515115
Journal: IEEE Sensors Letters
Volume: 3
Issue number: 1
DOIs
State: Published - Jan 2019

Keywords

  • Bayesian fusion
  • M-estimation
  • brain-computer interfaces (BCIs)
  • code-based VEP (c-VEP)
  • eye tracking
  • multimodal fusion

ASJC Scopus subject areas

  • Instrumentation
  • Electrical and Electronic Engineering
