Silent speech of vowels in persons of different cognitive styles

Omar López-Vargas, Luis Sarmiento-Vela, Jan Bacca-Rodríguez, Sergio Villamizar Delgado, Jhon Sarmiento Vela

Suma Psicológica (2022), 29(1), pp. 20-29.

Received 29 August 2021
Accepted 19 January 2022


Introduction: This study measures differences in the silent speech of the Spanish vowels /a/–/u/ among students with different cognitive styles in the Field Dependence–Independence (FDI) dimension. Method: Fifty-one (51) adults participated in the study. Electroencephalographic (EEG) signals were recorded from 14 electrodes placed on the scalp over the language region of the left hemisphere. Beforehand, the Embedded Figures Test (EFT) was administered in order to classify participants as field-dependent, intermediate, or field-independent. To analyse the EEG data, the signals were decomposed into intrinsic mode functions (IMFs) and a mixed repeated-measures analysis was performed. Results: The power spectral density (PSD) of the vowels was found to be independent of cognitive style, while its magnitude depended on electrode position. Conclusions: The results suggest that there are no significant differences in the PSDs of the silent speech of the vowels /a/–/u/ across cognitive styles. Significant differences were found in the PSDs according to the position of the 14 electrodes used. In our configuration, the silent speech of vowels can be studied using electrodes placed over the premotor, motor, and Wernicke areas.
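The Method above compares power spectral densities of silently spoken vowels across electrode positions. As a minimal sketch of the PSD step only (the paper's actual pipeline first decomposes the EEG into intrinsic mode functions via empirical mode decomposition; here a plain windowed periodogram and a synthetic signal stand in, with the 128 Hz sampling rate and the 10 Hz test tone being illustrative assumptions, not values from the study):

```python
import numpy as np

def periodogram_psd(x, fs):
    """Estimate power spectral density (power per Hz) via a Hann-windowed periodogram."""
    n = len(x)
    window = np.hanning(n)
    spectrum = np.fft.rfft(x * window)
    # Normalize by the window energy so the result is a density
    psd = (np.abs(spectrum) ** 2) / (fs * np.sum(window ** 2))
    psd[1:-1] *= 2  # fold in the negative-frequency half
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

fs = 128  # assumed sampling rate, typical of consumer EEG headsets
t = np.arange(0, 4, 1 / fs)
# Synthetic "EEG": a 10 Hz alpha-band sine plus low-amplitude noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

freqs, psd = periodogram_psd(x, fs)
peak_hz = freqs[np.argmax(psd)]  # dominant frequency of the synthetic signal
```

Comparing such PSD estimates per electrode (rather than per cognitive-style group) mirrors the study's finding that PSD magnitude varies with electrode position but not with cognitive style.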

Keywords:

Voluntary signals, silent speech, cognitive style, EEG, vowels


