Silent speech of vowels in persons of different cognitive styles

Omar López-Vargas, Luis Sarmiento-Vela, Jan Bacca-Rodríguez, Sergio Villamizar-Delgado, Jhon Sarmiento-Vela

Suma Psicológica, (2022), 29(1), pp. 20-29.

Received 29 August 2021
Accepted 19 January 2022

https://doi.org/10.14349/sumapsi.2022.v29.n1.3

Abstract

Introduction: This research measures differences in the silent speech of the Spanish vowels /a/-/u/ in students with different cognitive styles in the Field Dependence-Independence (FDI) dimension. Method: Fifty-one (51) adults participated in the study. Electroencephalographic (EEG) signals were recorded from 14 electrodes placed on the scalp over the language region of the left hemisphere. Beforehand, the Embedded Figures Test (EFT) was applied in order to classify participants as field-dependent, intermediate, or field-independent. To analyse the EEG data, the signals were decomposed into intrinsic mode functions (IMFs) and a mixed repeated-measures analysis was performed. Results: The Power Spectral Density (PSD) of the vowels was found to be independent of cognitive style, while its magnitude depends on electrode position. Conclusions: The results suggest that there are no significant differences in the PSDs of the silent speech of the vowels /a/-/u/ between persons of different cognitive styles. Significant differences were found in the PSDs according to the position of the 14 electrodes used. In our configuration, the silent speech of vowels can be studied using electrodes placed over the premotor, motor, and Wernicke areas.
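The Method compares power spectral densities of EEG channels after decomposing the signals into intrinsic mode functions. As a minimal, hypothetical sketch of the PSD step only (this is not the authors' pipeline; the function name, segment length, and the toy alpha-band signal are assumptions for illustration), a Welch-style estimate for a single channel can be written with NumPy alone:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch-style PSD estimate: average the modified periodograms
    of half-overlapping, Hann-windowed segments of x."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = 1.0 / (fs * np.sum(win ** 2))  # window power normalisation
    segments = [x[i:i + nperseg] * win
                for i in range(0, len(x) - nperseg + 1, step)]
    spectra = [scale * np.abs(np.fft.rfft(s)) ** 2 for s in segments]
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.mean(spectra, axis=0)

# Toy signal: a 10 Hz sinusoid sampled at 128 Hz stands in for one
# EEG channel (or one IMF extracted from it).
fs = 128
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t)
freqs, psd = welch_psd(x, fs)
peak = freqs[np.argmax(psd)]  # spectral peak of the toy signal
```

In a pipeline like the one described, such a PSD would be computed per electrode and per IMF, and the resulting magnitudes fed into the repeated-measures comparison across electrode positions.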


Keywords:
Voluntary signals, silent speech, cognitive style, EEG, vowels


References

Callan, D. E., Callan, A. M., Honda, K., & Masaki, S. (2000). Single-sweep EEG analysis of neural processes underlying perception and production of vowels. Cognitive Brain Research, 10(1-2), 173-176. https://doi.org/10.1016/s0926-6410(00)00025-2

Chi, X., Hagedorn, J. B., Schoonover, D., & Zmura, M. D. (2011). EEG-Based discrimination of imagined speech phonemes. International Journal of Bioelectromagnetism, 13(4), 201-206. https://pdfs.semanticscholar.org/b74f/c325556d1a7b5eb05fe90cde1f0c891357a3.pdf

Cooney, C., Folli, R., & Coyle, D. (2018). Neurolinguistics research advancing development of a direct-speech brain-computer interface. IScience, 8, 103-125. https://doi.org/10.1016/j.isci.2018.09.016

Cooney, C., Korik, A., Folli, R., & Coyle, D. (2020). Evaluation of hyperparameter optimization in machine and deep learning methods for decoding imagined speech EEG. Sensors (Switzerland), 20(16), 1-22. https://doi.org/10.3390/s20164629

D’Zmura, M., Deng, S., Lappas, T., Thorpe, S., & Srinivasan, R. (2009). Toward EEG sensing of imagined speech. Lecture Notes in Computer Science, 5610, 40-48. https://doi.org/10.1007/978-3-642-02574-7_5

DaSalla, C. S., Kambara, H., Sato, M., & Koike, Y. (2009). Single-trial classification of vowel speech imagery using common spatial patterns. Neural Networks, 22(9), 1334-1339. https://doi.org/10.1016/j.neunet.2009.05.008

Evans, C., Richardson, J. T. E., & Waring, M. (2013). Field independence: Reviewing the evidence. British Journal of Educational Psychology, 83(2), 210-224. https://doi.org/10.1111/bjep.12015

Fujimaki, N., Takeuchi, F., Kobayashi, T., Kuriki, S., & Hasuo, S. (1994). Event-related potentials in silent speech. Brain Topography, 6(4), 259-267. https://doi.org/10.1007/BF01211171

Geschwind, N. (1965). Disconnection syndromes in animals and man. Brain, 88(2), 237-294.

Ghosh, R., Sinha, N., Biswas, S. K., & Phadikar, S. (2019). A modified grey wolf optimization based feature selection method from EEG for silent speech classification. Journal of Information and Optimization Sciences, 40(8), 1639-1652. https://doi.org/10.1080/02522667.2019.1703262

González-Castañeda, E. F., Torres-García, A. A., Reyes-García, C. A., & Villaseñor-Pineda, L. (2017). Sonification and textification: Proposing methods for classifying unspoken words from EEG signals. Biomedical Signal Processing and Control, 37, 82-91. https://doi.org/10.1016/j.bspc.2016.10.012

Graimann, B., Allison, B. Z., & Pfurtscheller, G. (2010). Brain-Computer interfaces: Revolutionizing human-computer interaction (1st ed.). Springer Verlag.

Hansen, S. T., Hemakom, A., Gylling Safeldt, M., Krohne, L. K., Madsen, K. H., Siebner, H. R., Mandic, D. P., & Hansen, L. (2019). Unmixing oscillatory brain activity by EEG source localization and empirical mode decomposition. Computational Intelligence and Neuroscience, 2019. https://doi.org/10.1155/2019/5618303

Hederich-Martínez, C., López-Vargas, O., & Camargo-Uribe, A. (2016). Effects of the use of a flexible metacognitive scaffolding on self-regulated learning during virtual education. International Journal of Technology Enhanced Learning, 8(3/4), 1. https://doi.org/10.1504/ijtel.2016.10002201

Hemakom, A., Goverdovsky, V., Looney, D., & Mandic, D. P. (2016). Adaptive-projection intrinsically transformed multivariate empirical mode decomposition in cooperative brain-computer interface applications. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065). https://doi.org/10.1098/rsta.2015.0199

Ikeda, S., Shibata, T., Nakano, N., Okada, R., Tsuyuguchi, N., Ikeda, K., & Kato, A. (2014). Neural decoding of single vowels during covert articulation using electrocorticography. Frontiers in Human Neuroscience, 8, 1-8. https://doi.org/10.3389/fnhum.2014.00125

Imanaka, M., Kakigi, R., & Nakata, H. (2017). The relationship between cognitive style and event-related potentials during auditory and somatosensory Go/No-go paradigms. NeuroReport, 28(13), 822-827. https://doi.org/10.1097/WNR.0000000000000833

Iqbal, S., Khan, Y. U., & Farooq, O. (2015). EEG based classification of imagined vowel sounds. 2015 International Conference on Computing for Sustainable Global Development, INDIACom 2015, 1591-1594.

Jahangiri, A., & Sepulveda, F. (2019). Correction to: The Relative Contribution of High-Gamma Linguistic Processing Stages of Word Production, and Motor Imagery of Articulation in Class Separability of Covert Speech Tasks in EEG Data. Journal of Medical Systems, 43(8), 237. https://doi.org/10.1007/s10916-019-1379-1

Jia, S., Zhang, Q., & Li, S. (2014). Field dependence-independence modulates the efficiency of filtering out irrelevant information in a visual working memory task. Neuroscience, 278, 136-143. https://doi.org/10.1016/j.neuroscience.2014.07.075

Lee, W., Seong, J. J., Ozlu, B., Shim, B. S., Marakhimov, A., & Lee, S. (2021). Biosignal sensors and deep learning-based speech recognition: A review. Sensors (Switzerland), 21(4), 1-22. https://doi.org/10.3390/s21041399

Li, R., Johansen, J. S., Ahmed, H., Ilyevsky, T. V., Wilbur, R. B., Bharadwaj, H. M., & Siskind, J. M. (2018). Training on the test set? An analysis of Spampinato et al. [31]. arXiv preprint, 1-18. http://arxiv.org/abs/1812.07697

Li, Y., & Wong, K. M. (2013). Riemannian distances for signal classification by power spectral density. IEEE Journal on Selected Topics in Signal Processing, 7(4), 655-669. https://doi.org/10.1109/JSTSP.2013.2260320

López-Vargas, O., Ortiz-Vásquez, J., & Ibáñez-Ibáñez, J. (2020). Autoeficacia y logro de aprendizaje en estudiantes con diferente estilo cognitivo en un ambiente m-learning. Pensamiento Psicológico, 18(1), 71–85. https://doi.org/10.11144/Javerianacali.PPSI18-1.alae

López, O., Hederich, C., & Camargo, A. (2012). Logro de aprendizaje en ambientes hipermediales: Andamiaje autorregulador y estilo cognitivo. Revista Latinoamericana de Psicología, 44(2), 13-25.

Martin, S., Brunner, P., Holdgraf, C., Heinze, H. J., Crone, N. E., Rieger, J., Schalk, G., Knight, R. T., & Pasley, B. N. (2014). Decoding spectrotemporal features of overt and covert speech from the human cortex. Frontiers in Neuroengineering, 7, 1-15. https://doi.org/10.3389/fneng.2014.00014

Matsumoto, M., & Hori, J. (2013). Classification of silent speech using adaptive collection. Proceedings of the 2013 IEEE Symposium on Computational Intelligence in Rehabilitation and Assistive Technologies, CIRAT 2013 – 2013 IEEE Symposium Series on Computational Intelligence, SSCI 2013, 5-12. https://doi.org/10.1109/CIRAT.2013.6613816

Matsumoto, M., & Hori, J. (2014). Classification of silent speech using support vector machine and relevance vector machine. Applied Soft Computing Journal, 20, 95-102. https://doi.org/10.1016/j.asoc.2013.10.023

Min, B., Kim, J., Park, H. J., & Lee, B. (2016). Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram. BioMed Research International, 2016. https://doi.org/10.1155/2016/2618265

Morooka, T., Ishizuka, K., & Kobayashi, N. (2018). Electroencephalographic analysis of auditory imagination to realize silent speech BCI. 2018 IEEE 7th Global Conference on Consumer Electronics, GCCE 2018, 683-686.

Nguyen, C. H., Karavas, G. K., & Artemiadis, P. (2018). Inferring imagined speech using EEG signals: a new approach using Riemannian manifold features. Journal of Neural Engineering, 15(1), 016002. https://doi.org/10.1088/1741-2552/aa8235

Oltman, P. K., Semple, C., & Goldstein, L. (1979). Cognitive style and interhemispheric differentiation in the EEG. Neuropsychologia, 17(6), 699-702.

Poeppel, D., & Hickok, G. (2004). Towards a new functional anatomy of language. Cognition, 92(1-2), 1-12. https://doi.org/10.1016/j.cognition.2003.11.001

Pressel Coretto, G. A., Gareis, I. E., & Rufiner, H. L. (2017). Open access database of EEG signals recorded during imagined speech. 12th International Symposium on Medical Information Processing and Analysis, 10160, 1016002. https://doi.org/10.1117/12.2255697

Proakis, J. G., & Manolakis, D. G. (2007). Digital signal processing: Principles, algorithms, and applications (4th ed.). Pearson Education.

Qureshi, M. N. I., Min, B., Park, H. J., Cho, D., Choi, W., & Lee, B. (2018). Multiclass classification of word imagination speech with hybrid connectivity features. IEEE Transactions on Biomedical Engineering, 65(10), 2168-2177. https://doi.org/10.1109/TBME.2017.2786251

Rashid, M., Sulaiman, N., P. P. Abdul Majeed, A., Musa, R. M., Ahmad, A. F., Bari, B. S., & Khatun, S. (2020). Current status, challenges, and possible solutions of EEG-based brain-computer interface: A comprehensive review. Frontiers in Neurorobotics, 14, 1-35. https://doi.org/10.3389/fnbot.2020.00025

Riaz, A., Akhtar, S., Iftikhar, S., Khan, A. A., & Salman, A. (2015). Inter comparison of classification techniques for vowel speech imagery using EEG sensors. 2014 2nd International Conference on Systems and Informatics, ICSAI 2014, 712-717. https://doi.org/10.1109/ICSAI.2014.7009378

Sarmiento, L. C., Lorenzana, P., Cortés, C. J., Arcos, W. J., Bacca, J. A., & Tovar, A. (2014). Brain computer interface (BCI) with EEG signals for automatic vowel recognition based on articulation mode. ISSNIP Biosignals and Biorobotics Conference, BRC. https://doi.org/10.1109/BRC.2014.6880997

Sarmiento, L. C., Villamizar, S., López, O., Collazos, A. C., Sarmiento, J., & Rodríguez, J. B. (2021). Recognition of EEG signals from imagined vowels using deep learning methods. Sensors, 21(19), 6503. https://doi.org/10.3390/s21196503

Solórzano-Restrepo, J., & López-Vargas, O. (2019). Efecto diferencial de un andamiaje metacognitivo en un ambiente e-learning sobre la carga cognitiva, el logro de aprendizaje y la habilidad metacognitiva. Suma Psicológica, 26(1), 33-50. https://doi.org/10.14349/sumapsi.2019.v26.n1.5

Valencia-Vallejo, N., López-Vargas, O., & Sanabria-Rodríguez, L. (2019). Effect of a metacognitive scaffolding on self-efficacy, metacognition, and achievement in e-learning environments. Knowledge Management and E-Learning, 11(1), 1-19. https://doi.org/10.34105/j.kmel.2019.11.001

Villamizar, S. I., Sarmiento, L. C., López, O., Caballero, J., & Bacca, J. (2021). EEG vowel silent speech signal discrimination based on APIT-EMD and SVD. Lecture Notes in Electrical Engineering, 685, 74-83. https://doi.org/10.1007/978-3-030-53021-1_8

Wang, L., Zhang, X., Zhong, X., & Zhang, Y. (2013). Analysis and classification of speech imagery EEG for BCI. Biomedical Signal Processing and Control, 8(6), 901-908. https://doi.org/10.1016/j.bspc.2013.07.011

Witkin, H. A., Moore, C. A., Goodenough, D. R., & Cox, P. W. (1977). Field-dependent and field-independent cognitive styles and their educational implications. Review of Educational Research, 47(1), 1-64. https://bit.ly/3q6bncG

Yoshimura, N., Nishimoto, A., Belkacem, A. N., Shin, D., Kambara, H., Hanakawa, T., & Koike, Y. (2016). Decoding of covert vowel articulation using electroencephalography cortical currents. Frontiers in Neuroscience, 10, 1-15. https://doi.org/10.3389/fnins.2016.00175

Yuan, R., Lv, Y., & Song, G. (2018). Multi-fault diagnosis of rolling bearings via adaptive projection intrinsically transformed multivariate empirical mode decomposition and high order singular value decomposition. Sensors (Switzerland), 18(4), 1210. https://doi.org/10.3390/s18041210

Zoccolotti, P. (1982). Field dependence, laterality and the EEG: A reanalysis of O’Connor and Shaw (1982). Biological Psychology, 15, 203-207.
