Verification of Adaptive Collection for Brain Computer Interface
To provide speech prostheses for individuals with severe communication impairments, brain–computer interfaces (BCIs) based on silent speech have been studied. I previously proposed adaptive collection, a method that divides brainwaves into smaller elements and verifies each of them, for silent-speech BCIs. This paper verifies the effect of adaptive collection in comparison with the conventional method. Brainwaves were recorded while four subjects imagined vocalization. In adaptive collection, shortening the time length of the brainwave segments used for common spatial patterns was effective, because the state of the brainwaves changes quickly when a subject imagines vocalization. As a result, using adaptive collection with a time length of 12 ms and 20 elements for classification, the classification accuracies improved to 87–99% and the average classification accuracy improved to 93% for the pairwise classification /a/ vs. /u/ with 63 EEG channels.
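The pipeline described above (short EEG segments, common spatial patterns, then a classifier) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the 12 ms segment length, the 20 elements, and the SVM are not reproduced (a nearest-class-mean rule stands in for the SVM to keep the sketch dependency-light), and all variable names and the data generation are invented for illustration.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def csp_filters(trials_a, trials_b, n_pairs=1):
    """Compute CSP spatial filters from two classes of trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns (2*n_pairs, n_channels) filters corresponding to the
    extreme (most discriminative) generalized eigenvalues.
    """
    def avg_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: ca w = lam (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, idx].T

def logvar_features(filters, trial):
    """Normalized log-variance of the spatially filtered trial."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

# Synthetic 4-channel "EEG": each class has one latent source with
# larger variance, mixed into the channels by a fixed random matrix.
def make_trials(strong_src, n_trials, mix, n_src=4, n_samp=128):
    std = np.ones(n_src)
    std[strong_src] = 3.0
    return np.array([mix @ (rng.normal(size=(n_src, n_samp)) * std[:, None])
                     for _ in range(n_trials)])

mix = rng.normal(size=(4, 4))
a = make_trials(0, 40, mix)   # stand-in for imagined /a/ segments
u = make_trials(1, 40, mix)   # stand-in for imagined /u/ segments

W = csp_filters(a[:30], u[:30])
fa = np.array([logvar_features(W, x) for x in a])
fu = np.array([logvar_features(W, x) for x in u])

# Nearest-class-mean classifier on held-out trials (the paper uses an
# SVM; this simpler rule suffices to demonstrate the feature space).
mu_a, mu_u = fa[:30].mean(axis=0), fu[:30].mean(axis=0)
def predict(f):
    return 0 if np.linalg.norm(f - mu_a) < np.linalg.norm(f - mu_u) else 1

preds = [predict(f) for f in np.vstack([fa[30:], fu[30:]])]
truth = [0] * 10 + [1] * 10
acc = float(np.mean(np.array(preds) == truth))
print(f"held-out accuracy: {acc:.2f}")
```

Because the two classes differ mainly in the variance of one latent source, the CSP projections yield well-separated log-variance features, and even the nearest-mean rule classifies the held-out segments well above chance.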
Keywords: Brain–computer interfaces, brain–machine interfaces, EEG, common spatial patterns, support vector machine.
 E. Donchin, K.M. Spencer, and R. Wijesinghe, “The mental prosthesis: assessing the speed of a P300-based brain–computer interface,” IEEE Trans. Rehabil. Eng., vol. 8, no. 2, pp. 174–179, 2000.
 M. Cheng, X. Gao, S. Gao, and D. Xu, “Design and implementation of a brain–computer interface with high transfer rates,” IEEE Trans. Bio-Medical Eng., vol. 49, no. 10, pp. 1181–1186, 2002.
 L.J. Trejo, R. Rosipal, and B. Matthews, “Brain–computer interfaces for 1-D and 2-D cursor control: designs using volitional control of the EEG spectrum or steady-state visual evoked potentials,” IEEE Trans. Neural Systems Rehabil. Eng., vol. 14, no. 2, pp. 225–229, 2006.
 M. Naito, Y. Michioka, K. Ozawa, Y. Ito, M. Kiguchi, and T. Kanazawa, “A communication means for totally locked-in ALS patients based on changes in cerebral blood volume measured with near-infrared light,” IEICE Trans. Inf. Syst., vol. E90-D, no. 7, pp. 1028–1037, 2007.
 C.S. DaSalla, H. Kambara, M. Sato, and Y. Koike, “Single-trial classification of vowel speech imagery using common spatial patterns,” Neural Networks, vol. 22, no. 9, pp. 1334–1339, 2009.
 M. Matsumoto, “Silent speech decoder using adaptive collection,” in Proc. 19th Int. Conf. on Intelligent User Interfaces (IUI), pp. 73–76, 2014.
 J. Müller-Gerking, G. Pfurtscheller, and H. Flyvbjerg, “Designing optimal spatial filters for single-trial EEG classification in a movement task,” Clinical Neurophysiology, vol. 110, no. 5, pp. 787–798, 1999.
 H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement,” IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441– 446, 2000.
 A. Rakotomamonjy, “Variable selection using SVM-based criteria,” Journal of Machine Learning Research, vol. 3, pp. 1357–1370, 2003.
 B.E. Boser, I.M. Guyon, and V.N. Vapnik, “A training algorithm for optimal margin classifiers,” in Proc. Fifth Annual ACM Workshop on Computational Learning Theory (COLT), pp. 144–152, Pittsburgh, PA: ACM Press, 1992.
 F. Asano, M. Kimura, T. Sekiguchi, and Y. Kamitani, “Classification of movement-related single-trial MEG data using adaptive spatial filter,” in Proc. IEEE ICASSP, pp. 357–360, 2009.