A time–frequency convolutional neural network for the offline classification of steady-state visual evoked potential responses

Hubert Cecotti

    Research output: Contribution to journal › Article › peer-review

    64 Citations (Scopus)

    Abstract

    A new convolutional neural network architecture is presented. It includes the fast Fourier transform between two hidden layers, switching the signal analysis from the time domain to the frequency domain inside the network. This technique allows signal classification without any special pre-processing and embeds knowledge of the problem in the network topology. The first step creates different spatial and temporal filters. The second step transforms the signal into the frequency domain. The last step is the classification. The system is tested offline on the classification of EEG signals that contain steady-state visual evoked potential (SSVEP) responses. The mean recognition rate over five different types of SSVEP response is 95.61% on a time segment length of 1 s. The proposed strategy outperforms other classical neural network architectures.
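    The three-step pipeline described in the abstract (spatial/temporal filtering, an FFT inside the network, then classification) can be sketched as a forward pass in NumPy. This is a minimal illustration, not the paper's exact topology: the channel count, sampling rate, filter sizes, and random weights are all assumptions chosen for a 1 s segment.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions (assumed, not from the paper):
    # 8 EEG channels, 1 s segment at 256 Hz, 5 SSVEP classes.
    n_channels, n_samples, n_classes = 8, 256, 5
    n_spatial, kernel = 4, 16          # spatial filters, temporal kernel length

    # Step 1 weights: spatial filters mix channels; temporal filters follow.
    W_spatial = rng.standard_normal((n_spatial, n_channels)) * 0.1
    W_time = rng.standard_normal((n_spatial, kernel)) * 0.1
    # Step 3 weights: linear classifier over frequency-domain features.
    n_freq = n_samples // 2 + 1
    W_out = rng.standard_normal((n_classes, n_spatial * n_freq)) * 0.01

    def forward(x):
        """x: (n_channels, n_samples) EEG segment -> class probabilities."""
        s = W_spatial @ x                                # spatial filtering
        t = np.stack([np.convolve(s[i], W_time[i], mode="same")
                      for i in range(n_spatial)])        # temporal filtering
        t = np.tanh(t)                                   # hidden non-linearity
        # Step 2: switch to the frequency domain inside the network.
        f = np.abs(np.fft.rfft(t, axis=1))               # (n_spatial, n_freq)
        logits = W_out @ f.ravel()                       # classification
        p = np.exp(logits - logits.max())
        return p / p.sum()

    probs = forward(rng.standard_normal((n_channels, n_samples)))
    print(probs.shape)  # one probability per SSVEP class
    ```

    Because the FFT sits between the hidden layers, the classifier sees spectral magnitudes of the learned filtered signals, which is well suited to SSVEP responses that are locked to stimulation frequencies.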
    Original language: English
    Pages (from-to): 1145-1153
    Journal: Pattern Recognition Letters
    Volume: 32
    DOIs
    Publication status: Published (in print/issue) - 11 Mar 2011

