Spoken Utterance Classification Task of Arabic Numerals and Selected Isolated Words
Detailed description

| Field | Value |
|---|---|
| Author: | dabbabi, Karim [author] |
| Format: | E-Article |
| Language: | English |
| Published: | 2022 |
| Keywords: | CNN; LSTM; FC; CNN-LSTM-FC; DenseNet; Spoken Arabic numerals and words database; English voice command database |
| Note: | © King Fahd University of Petroleum & Minerals 2022 |
| Parent work: | Contained in: The Arabian journal for science and engineering - Berlin : Springer, 2011, 47(2022), 8, 31 March, pages 10731-10750 |
| Parent work: | volume:47 ; year:2022 ; number:8 ; day:31 ; month:03 ; pages:10731-10750 |
| Links: | https://dx.doi.org/10.1007/s13369-022-06649-0 |
| DOI / URN: | 10.1007/s13369-022-06649-0 |
| Catalog ID: | SPR047811757 |
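The abstract in field 520 below describes classifying uniform-length sequences of MFCC features. As a minimal sketch of that preprocessing step (not the authors' code — the target of 100 frames and 13 coefficients per frame are illustrative assumptions), padding or trimming each utterance to a fixed number of frames could look like:

```python
def to_uniform_length(frames, target_len, n_mfcc=13):
    """Pad with zero frames or trim so every utterance has target_len frames.

    `frames` is a list of per-frame MFCC vectors (each a list of n_mfcc floats).
    """
    frames = list(frames)[:target_len]      # trim long utterances
    while len(frames) < target_len:         # zero-pad short ones
        frames.append([0.0] * n_mfcc)
    return frames

short = [[1.0] * 13 for _ in range(40)]    # a 40-frame utterance
long_ = [[1.0] * 13 for _ in range(160)]   # a 160-frame utterance
print(len(to_uniform_length(short, 100)), len(to_uniform_length(long_, 100)))  # 100 100
```

Fixing the sequence length this way is what lets a CNN front-end consume every utterance with the same input shape.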
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | SPR047811757 | ||
003 | DE-627 | ||
005 | 20230509105122.0 | ||
007 | cr uuu---uuuuu | ||
008 | 220810s2022 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1007/s13369-022-06649-0 |2 doi | |
035 | |a (DE-627)SPR047811757 | ||
035 | |a (SPR)s13369-022-06649-0-e | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a dabbabi, Karim |e verfasserin |4 aut | |
245 | 1 | 0 | |a Spoken Utterance Classification Task of Arabic Numerals and Selected Isolated Words |
264 | 1 | |c 2022 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a © King Fahd University of Petroleum & Minerals 2022 | ||
520 | |a Abstract Nowadays, sensory organs are becoming essential means for controlling modern machines that require human intervention. Among these, voice can be used to control and monitor modern interfaces. In this regard, Automatic Speech Recognition (ASR) is widely explored to accomplish tasks such as transcribing natural speech into computer text and performing actions based on human commands. In this paper, a system for recognizing spoken Arabic numerals and words based on two classification methods is proposed. The first approach combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) and a Fully Connected (FC) network (CNN-LSTM-FC), while the second is based on the conventional Dense Network (DenseNet). These classifiers are integrated into the proposed Arabic speech recognition system, which performs the classification task on uniform-length sequences of Mel-Frequency Cepstral Coefficients (MFCCs) extracted from the speech utterances. The CNN-LSTM-FC approach is designed to learn high-level features that capture long-term contextual dependencies as well as local information. These features carry less information than the raw data, which helps reduce training time. The CNN-LSTM-FC method also captures global contextual information and local correlations from the MFCC coefficients. The DenseNet model is explored to benefit from the $\frac{L(L+1)}{2}$ direct connections between its $L$ layers, its ability to alleviate the vanishing gradient problem, and its reduced number of parameters, which likewise shortens training time. Our models were evaluated on two databases: the first of English voice commands, the second of spoken Arabic numerals and words. Experimental tests showed that the CNN-LSTM-FC model with MFCC features performed best on the spoken Arabic numerals and words database (accuracy = 88.04%, precision = 88.56%, recall = 87.78%, F1 = 88.17, and error = 1.10%), compared to the results obtained with the DenseNet model. On the English voice commands database, the best precision (87.15%), F1 (85.66), and error (0.58%) were obtained by the CNN-LSTM-FC model, while the best accuracy (85.40%) and recall (85.40%) were achieved by the DenseNet model. Both proposed models thus yield acceptable results on both databases while requiring little computation. | ||
650 | 4 | |a CNN |7 (dpeaa)DE-He213 | |
650 | 4 | |a LSTM |7 (dpeaa)DE-He213 | |
650 | 4 | |a FC |7 (dpeaa)DE-He213 | |
650 | 4 | |a CNN-LSTM-FC |7 (dpeaa)DE-He213 | |
650 | 4 | |a DenseNet |7 (dpeaa)DE-He213 | |
650 | 4 | |a Spoken Arabic numerals and words database |7 (dpeaa)DE-He213 | |
650 | 4 | |a English voice command database |7 (dpeaa)DE-He213 | |
700 | 1 | |a Mars, Abdelkarim |4 aut | |
773 | 0 | 8 | |i Enthalten in |t The Arabian journal for science and engineering |d Berlin : Springer, 2011 |g 47(2022), 8 vom: 31. März, Seite 10731-10750 |w (DE-627)588780731 |w (DE-600)2471504-9 |x 2191-4281 |7 nnns |
773 | 1 | 8 | |g volume:47 |g year:2022 |g number:8 |g day:31 |g month:03 |g pages:10731-10750 |
856 | 4 | 0 | |u https://dx.doi.org/10.1007/s13369-022-06649-0 |z lizenzpflichtig |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_SPRINGER | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_120 | ||
912 | |a GBV_ILN_138 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_152 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_171 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_250 | ||
912 | |a GBV_ILN_281 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_636 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2006 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2031 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2037 | ||
912 | |a GBV_ILN_2038 | ||
912 | |a GBV_ILN_2039 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2057 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2065 | ||
912 | |a GBV_ILN_2068 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2093 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2107 | ||
912 | |a GBV_ILN_2108 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2113 | ||
912 | |a GBV_ILN_2118 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2144 | ||
912 | |a GBV_ILN_2147 | ||
912 | |a GBV_ILN_2148 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2188 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2446 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2472 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_2522 | ||
912 | |a GBV_ILN_2548 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4046 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4246 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4328 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4336 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 47 |j 2022 |e 8 |b 31 |c 03 |h 10731-10750 |
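The abstract cites $\frac{L(L+1)}{2}$ direct connections in an $L$-layer DenseNet: layer $l$ receives the feature maps of the input and all $l-1$ preceding layers, so the per-layer counts $1 + 2 + \dots + L$ sum to that closed form. A quick sanity check of the arithmetic:

```python
def dense_connections(num_layers):
    # Layer l has l incoming direct connections (input + l-1 earlier layers),
    # so an L-layer dense block has 1 + 2 + ... + L of them in total.
    return sum(range(1, num_layers + 1))

for L in (1, 4, 12):
    assert dense_connections(L) == L * (L + 1) // 2
print(dense_connections(4))  # 10
```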
language |
English |
source |
Enthalten in The Arabian journal for science and engineering 47(2022), 8 vom: 31. März, Seite 10731-10750 volume:47 year:2022 number:8 day:31 month:03 pages:10731-10750 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
CNN LSTM FC CNN-LSTM-FC DenseNet Spoken Arabic numerals and words database English voice command database |
isfreeaccess_bool |
false |
container_title |
The Arabian journal for science and engineering |
authorswithroles_txt_mv |
dabbabi, Karim @@aut@@ Mars, Abdelkarim @@aut@@ |
publishDateDaySort_date |
2022-03-31T00:00:00Z |
hierarchy_top_id |
588780731 |
id |
SPR047811757 |
|
author |
dabbabi, Karim |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)588780731 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
2191-4281 |
topic |
misc CNN misc LSTM misc FC misc CNN-LSTM-FC misc DenseNet misc Spoken Arabic numerals and words database misc English voice command database |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
familylinks_str_mv |
(DE-627)588780731 (DE-600)2471504-9 |
title |
Spoken Utterance Classification Task of Arabic Numerals and Selected Isolated Words |
ctrlnum |
(DE-627)SPR047811757 (SPR)s13369-022-06649-0-e |
journal |
The Arabian journal for science and engineering |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
10731 |
author_browse |
dabbabi, Karim Mars, Abdelkarim |
container_volume |
47 |
format_se |
Elektronische Aufsätze |
doi_str_mv |
10.1007/s13369-022-06649-0 |
abstract |
Abstract: Nowadays, sensory organs are becoming essential means for controlling modern machines that require human intervention. Among these means, we can cite the voice, which can be used to control and monitor modern interfaces. In this regard, Automatic Speech Recognition (ASR) is widely explored to accomplish tasks such as translating natural speech into computer text and performing actions based on human commands. In this paper, a system for recognizing spoken Arabic numerals and words based on two classification methods is proposed. The first approach combines a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) and a Fully Connected (FC) network (CNN-LSTM-FC), while the second is based on the conventional Dense Network (DenseNet). These classification approaches are integrated into the proposed Arabic speech recognition system to perform the classification task on uniform-length sequences of speech utterances extracted from the Mel-Frequency Cepstral Coefficients (MFCCs). The CNN-LSTM-FC approach is proposed with the objective of learning high-level features that capture long-term contextual dependencies and local information. These features carry less information than the raw data, which helps reduce the training time. The CNN-LSTM-FC method also captures global contextual information and local correlations from the MFCC coefficients. The DenseNet model, in turn, benefits from the $\frac{L(L+1)}{2}$ direct connections between its layers, from its ability to alleviate the vanishing-gradient problem, and from its reduced number of parameters, which likewise shortens the training time. Our models were evaluated on two databases: the first is a database of English voice commands, while the second contains spoken Arabic numerals and words.
Experimental tests showed that the CNN-LSTM-FC model with MFCC coefficients performed best on the database of spoken Arabic numerals and words (accuracy = 88.04%, precision = 88.56%, recall = 87.78%, F1 = 88.17, and error = 1.10%) compared to the DenseNet model. On the English voice command database, the best precision (87.15%), F1 (85.66), and error (0.58%) were obtained by the CNN-LSTM-FC model, while the best accuracy (85.40%) and recall (85.40%) were achieved by the DenseNet model. Both proposed models led to acceptable results on the two databases while requiring little computation to achieve these performances. © King Fahd University of Petroleum & Minerals 2022
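The "uniform-length sequences" step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes each utterance has already been converted to an MFCC matrix of shape (n_mfcc, frames), and the target length of 100 frames is a hypothetical choice.

```python
import numpy as np

def to_uniform_length(mfcc, target_frames=100):
    """Pad with zeros or truncate an MFCC matrix along the time axis,
    so that every utterance yields a fixed-size input for the classifier."""
    n_mfcc, frames = mfcc.shape
    if frames >= target_frames:
        return mfcc[:, :target_frames]          # truncate long utterances
    pad = np.zeros((n_mfcc, target_frames - frames))
    return np.concatenate([mfcc, pad], axis=1)  # zero-pad short ones

# Two utterances of different durations map to the same input shape.
short = np.random.rand(13, 60)   # 13 MFCCs, 60 frames
long_ = np.random.rand(13, 180)  # 13 MFCCs, 180 frames
print(to_uniform_length(short).shape)  # (13, 100)
print(to_uniform_length(long_).shape)  # (13, 100)
```

A fixed shape is what lets the CNN front end and the FC output layer operate on batches of utterances regardless of their original duration.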
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_SPRINGER GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_105 GBV_ILN_110 GBV_ILN_120 GBV_ILN_138 GBV_ILN_150 GBV_ILN_151 GBV_ILN_152 GBV_ILN_161 GBV_ILN_170 GBV_ILN_171 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_250 GBV_ILN_281 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_636 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2006 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2031 GBV_ILN_2034 GBV_ILN_2037 GBV_ILN_2038 GBV_ILN_2039 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2057 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2065 GBV_ILN_2068 GBV_ILN_2088 GBV_ILN_2093 GBV_ILN_2106 GBV_ILN_2107 GBV_ILN_2108 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2113 GBV_ILN_2118 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2144 GBV_ILN_2147 GBV_ILN_2148 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2188 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2446 GBV_ILN_2470 GBV_ILN_2472 GBV_ILN_2507 GBV_ILN_2522 GBV_ILN_2548 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4046 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4242 GBV_ILN_4246 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4328 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4335 GBV_ILN_4336 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
container_issue |
8 |
title_short |
Spoken Utterance Classification Task of Arabic Numerals and Selected Isolated Words |
url |
https://dx.doi.org/10.1007/s13369-022-06649-0 |
remote_bool |
true |
author2 |
Mars, Abdelkarim |
author2Str |
Mars, Abdelkarim |
ppnlink |
588780731 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s13369-022-06649-0 |
up_date |
2024-07-03T15:08:09.374Z |
_version_ |
1803570946374631424 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">SPR047811757</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230509105122.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220810s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s13369-022-06649-0</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)SPR047811757</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(SPR)s13369-022-06649-0-e</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">dabbabi, Karim</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Spoken Utterance Classification Task of Arabic Numerals and Selected Isolated Words</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield 
code="a">© King Fahd University of Petroleum & Minerals 2022</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Nowadays, sensory organs are becoming essential means for controlling modern machines which require human intervention. Among these means, we can cite the sense of voice which can be used to control and monitor modern interfaces. In this regard, Automatic Speech Recognition (ASR) is mainly explored to accomplish many tasks, such as translating natural voice into computer text and performing actions based on human commands. In this paper, a system for recognizing spoken Arabic numerals and words based on two classification methods is proposed. The first classification approach is a combination of Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) and Fully Connected (FC) network (CNN-LSTM-FC), while the second is based on the conventional Dense Network (DenseNet). These classification approaches are integrated into the proposed Arabic speech recognition system to perform the classification task by exploring uniform length sequences of speech utterances extracted from the Mel-frequency Cepstral Coefficients (MFCCs). Regarding the CNN-LSTM-FC approach, it is offered with the objective of learning high-level features that contain long-term contextual dependencies and local information. These features include less information than raw data, which helps to reduce the training time. Also, the CNN-LSTM-FC method allows capturing global contextual information and local correlation results from MFCC coefficients. With respect to the DenseNet model, it is explored to benefit from the direct connections %$\frac{{L\left( {L + 1} \right)}}{2}%$ between its layers in addition to its ability to alleviate the problem of the vanishing of gradient and the reduction in the number of its explored parameters. The training time is therefore reduced. 
Our models were evaluated on two databases: The first is a database of English voice commands, while the second is that of spoken Arabic numerals and words. Experimental tests showed that the CNN-LSTM-FC model with MFCC coefficients performed best on the database of spoken Arabic numerals and words in terms of evaluated performances (accuracy = 88.04%, precision = 88.56%, recall = 87.78%, F1 = 88.17, and error = 1.10%) compared to those obtained with the DenseNet model. Additionally, the best results on the database of English voice command for precision (87.15%), F1 (85.66), and error (0.58%) were obtained by the CNN-LSTM-FC model, while those for accuracy (85.40%) and recall (85.40%) were achieved using the DenseNet model. Even the two proposed models led to acceptable results on both databases; however, they require less computation to achieve higher performance.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">CNN</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">LSTM</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">FC</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">CNN-LSTM-FC</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">DenseNet</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Spoken Arabic numerals and words database</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">English voice command database</subfield><subfield code="7">(dpeaa)DE-He213</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Mars, Abdelkarim</subfield><subfield 
code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">The Arabian journal for science and engineering</subfield><subfield code="d">Berlin : Springer, 2011</subfield><subfield code="g">47(2022), 8 vom: 31. März, Seite 10731-10750</subfield><subfield code="w">(DE-627)588780731</subfield><subfield code="w">(DE-600)2471504-9</subfield><subfield code="x">2191-4281</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:47</subfield><subfield code="g">year:2022</subfield><subfield code="g">number:8</subfield><subfield code="g">day:31</subfield><subfield code="g">month:03</subfield><subfield code="g">pages:10731-10750</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://dx.doi.org/10.1007/s13369-022-06649-0</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_SPRINGER</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_120</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_138</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" 
"><subfield code="a">GBV_ILN_171</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_250</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_281</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_636</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2006</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2031</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2038</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2039</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2057</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2065</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2068</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2093</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2107</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2108</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2113</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2118</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2144</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2147</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2148</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2188</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2446</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2472</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2522</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2548</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4046</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4246</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4328</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">47</subfield><subfield code="j">2022</subfield><subfield code="e">8</subfield><subfield code="b">31</subfield><subfield code="c">03</subfield><subfield code="h">10731-10750</subfield></datafield></record></collection>
|
score |
7.3994417 |