A Deep Convolutional Spiking Neural Network for embedded applications
Abstract: Deep neural networks (DNNs) have received a great deal of interest in solving everyday tasks in recent years. However, their computational and energy costs limit their use on mobile and edge devices. The neuromorphic computing approach called spiking neural networks (SNNs) represents a potential solution for bridging the gap between performance and computational expense. Despite their potential energy-efficiency benefits, current SNNs are mostly used with datasets such as MNIST, Fashion-MNIST, and CIFAR10, limiting their applications compared to DNNs. Therefore, the applicability of SNNs to real-world applications, such as scene classification and forecasting epileptic seizures, has yet to be demonstrated. This paper develops a deep convolutional spiking neural network (DCSNN) for embedded applications. We explore a convolutional architecture, Visual Geometry Group (VGG16), to implement deeper SNNs. To train a spiking model, we convert the pre-trained VGG16 into a corresponding spiking equivalent with nearly comparable performance to the original. The trained weights of VGG16 were then transferred to the equivalent SNN architecture while performing a proper weight–threshold balancing. The model is evaluated in two case studies: land use and land cover classification, and epileptic seizure detection. Experimental results show a classification accuracy of 94.88%, a seizure detection specificity of 99.45%, and a sensitivity of 95.06%. It is confirmed that conversion-based training of SNNs is promising, and the benefits of DNNs, such as solving complex and real-world problems, become available to SNNs.

Detailed description

Author(s): Javanshir, Amirhossein [author]; Nguyen, Thanh Thi [author]; Parvez Mahmud, M. A. [author]; Kouzani, Abbas Z. [author]
Format: E-article
Language: English
Published: 2024
Subject headings: Spiking neural network; Satellite image classification; Epileptic seizure detection
Note: © The Author(s) 2024
Host item: Contained in: Progress in artificial intelligence - Springer Berlin Heidelberg, 2012, 13(2024), issue 1, March, pages 1-15
Host item: volume:13 ; year:2024 ; number:1 ; month:03 ; pages:1-15
Links: https://dx.doi.org/10.1007/s13748-024-00313-4 (open access full text)
DOI / URN: 10.1007/s13748-024-00313-4
Catalog ID: SPR055434231
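The DOI above can usually be resolved to machine-readable citation metadata through DOI content negotiation; Crossref, which registers Springer DOIs, serves CSL JSON this way. A minimal sketch, assuming network access (the metadata keys shown are standard CSL JSON fields, not guaranteed for every DOI):

```python
# Sketch: resolve the article's DOI to CSL JSON citation metadata via
# content negotiation on doi.org. Requires network access.
import json
import urllib.request

req = urllib.request.Request(
    "https://doi.org/10.1007/s13748-024-00313-4",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
)
with urllib.request.urlopen(req) as resp:
    meta = json.load(resp)

print(meta["title"])            # article title
print(meta["container-title"])  # journal title
print(meta["DOI"], meta.get("issued"))
```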
LEADER 01000naa a22002652 4500
001 SPR055434231
003 DE-627
005 20240406064744.0
007 cr uuu---uuuuu
008 240406s2024 xx |||||o 00| ||eng c
024 7_ |a 10.1007/s13748-024-00313-4 |2 doi
035 __ |a (DE-627)SPR055434231
035 __ |a (SPR)s13748-024-00313-4-e
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
082 04 |a 004 |a 600 |q VZ
100 1_ |a Javanshir, Amirhossein |e verfasserin |4 aut
245 10 |a A Deep Convolutional Spiking Neural Network for embedded applications
264 _1 |c 2024
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
500 __ |a © The Author(s) 2024
520 __ |a Abstract: Deep neural networks (DNNs) have received a great deal of interest in solving everyday tasks in recent years. However, their computational and energy costs limit their use on mobile and edge devices. The neuromorphic computing approach called spiking neural networks (SNNs) represents a potential solution for bridging the gap between performance and computational expense. Despite their potential energy-efficiency benefits, current SNNs are mostly used with datasets such as MNIST, Fashion-MNIST, and CIFAR10, limiting their applications compared to DNNs. Therefore, the applicability of SNNs to real-world applications, such as scene classification and forecasting epileptic seizures, has yet to be demonstrated. This paper develops a deep convolutional spiking neural network (DCSNN) for embedded applications. We explore a convolutional architecture, Visual Geometry Group (VGG16), to implement deeper SNNs. To train a spiking model, we convert the pre-trained VGG16 into a corresponding spiking equivalent with nearly comparable performance to the original. The trained weights of VGG16 were then transferred to the equivalent SNN architecture while performing a proper weight–threshold balancing. The model is evaluated in two case studies: land use and land cover classification, and epileptic seizure detection. Experimental results show a classification accuracy of 94.88%, a seizure detection specificity of 99.45%, and a sensitivity of 95.06%. It is confirmed that conversion-based training of SNNs is promising, and the benefits of DNNs, such as solving complex and real-world problems, become available to SNNs.
650 _4 |a Spiking neural network |7 (dpeaa)DE-He213
650 _4 |a Satellite image classification |7 (dpeaa)DE-He213
650 _4 |a Epileptic seizure detection |7 (dpeaa)DE-He213
700 1_ |a Nguyen, Thanh Thi |e verfasserin |4 aut
700 1_ |a Parvez Mahmud, M. A. |e verfasserin |4 aut
700 1_ |a Kouzani, Abbas Z. |e verfasserin |0 (orcid)0000-0002-6292-1214 |4 aut
773 08 |i Enthalten in |t Progress in artificial intelligence |d Springer Berlin Heidelberg, 2012 |g 13(2024), 1 vom: März, Seite 1-15 |w (DE-627)718730933 |w (DE-600)2668413-5 |x 2192-6360 |7 nnns
773 18 |g volume:13 |g year:2024 |g number:1 |g month:03 |g pages:1-15
856 40 |u https://dx.doi.org/10.1007/s13748-024-00313-4 |m X:VERLAG |x 0 |z kostenfrei |3 Volltext
912 __ |a SYSFLAG_0
912 __ |a GBV_SPRINGER
912 __ |a GBV_ILN_11
912 __ |a GBV_ILN_20
912 __ |a GBV_ILN_22
912 __ |a GBV_ILN_23
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_31
912 __ |a GBV_ILN_32
912 __ |a GBV_ILN_39
912 __ |a GBV_ILN_40
912 __ |a GBV_ILN_60
912 __ |a GBV_ILN_62
912 __ |a GBV_ILN_63
912 __ |a GBV_ILN_65
912 __ |a GBV_ILN_69
912 __ |a GBV_ILN_70
912 __ |a GBV_ILN_73
912 __ |a GBV_ILN_74
912 __ |a GBV_ILN_90
912 __ |a GBV_ILN_95
912 __ |a GBV_ILN_100
912 __ |a GBV_ILN_101
912 __ |a GBV_ILN_105
912 __ |a GBV_ILN_110
912 __ |a GBV_ILN_120
912 __ |a GBV_ILN_138
912 __ |a GBV_ILN_150
912 __ |a GBV_ILN_151
912 __ |a GBV_ILN_152
912 __ |a GBV_ILN_161
912 __ |a GBV_ILN_170
912 __ |a GBV_ILN_171
912 __ |a GBV_ILN_187
912 __ |a GBV_ILN_213
912 __ |a GBV_ILN_224
912 __ |a GBV_ILN_230
912 __ |a GBV_ILN_250
912 __ |a GBV_ILN_281
912 __ |a GBV_ILN_285
912 __ |a GBV_ILN_293
912 __ |a GBV_ILN_370
912 __ |a GBV_ILN_602
912 __ |a GBV_ILN_636
912 __ |a GBV_ILN_702
912 __ |a GBV_ILN_2001
912 __ |a GBV_ILN_2003
912 __ |a GBV_ILN_2004
912 __ |a GBV_ILN_2005
912 __ |a GBV_ILN_2006
912 __ |a GBV_ILN_2007
912 __ |a GBV_ILN_2009
912 __ |a GBV_ILN_2010
912 __ |a GBV_ILN_2011
912 __ |a GBV_ILN_2014
912 __ |a GBV_ILN_2015
912 __ |a GBV_ILN_2020
912 __ |a GBV_ILN_2021
912 __ |a GBV_ILN_2025
912 __ |a GBV_ILN_2026
912 __ |a GBV_ILN_2027
912 __ |a GBV_ILN_2031
912 __ |a GBV_ILN_2034
912 __ |a GBV_ILN_2037
912 __ |a GBV_ILN_2038
912 __ |a GBV_ILN_2039
912 __ |a GBV_ILN_2044
912 __ |a GBV_ILN_2048
912 __ |a GBV_ILN_2049
912 __ |a GBV_ILN_2050
912 __ |a GBV_ILN_2055
912 __ |a GBV_ILN_2056
912 __ |a GBV_ILN_2057
912 __ |a GBV_ILN_2059
912 __ |a GBV_ILN_2061
912 __ |a GBV_ILN_2064
912 __ |a GBV_ILN_2065
912 __ |a GBV_ILN_2068
912 __ |a GBV_ILN_2088
912 __ |a GBV_ILN_2093
912 __ |a GBV_ILN_2106
912 __ |a GBV_ILN_2107
912 __ |a GBV_ILN_2108
912 __ |a GBV_ILN_2110
912 __ |a GBV_ILN_2111
912 __ |a GBV_ILN_2112
912 __ |a GBV_ILN_2113
912 __ |a GBV_ILN_2118
912 __ |a GBV_ILN_2122
912 __ |a GBV_ILN_2129
912 __ |a GBV_ILN_2143
912 __ |a GBV_ILN_2144
912 __ |a GBV_ILN_2147
912 __ |a GBV_ILN_2148
912 __ |a GBV_ILN_2152
912 __ |a GBV_ILN_2153
912 __ |a GBV_ILN_2188
912 __ |a GBV_ILN_2232
912 __ |a GBV_ILN_2244
912 __ |a GBV_ILN_2336
912 __ |a GBV_ILN_2446
912 __ |a GBV_ILN_2470
912 __ |a GBV_ILN_2472
912 __ |a GBV_ILN_2507
912 __ |a GBV_ILN_2522
912 __ |a GBV_ILN_2548
912 __ |a GBV_ILN_4035
912 __ |a GBV_ILN_4037
912 __ |a GBV_ILN_4046
912 __ |a GBV_ILN_4112
912 __ |a GBV_ILN_4125
912 __ |a GBV_ILN_4126
912 __ |a GBV_ILN_4242
912 __ |a GBV_ILN_4246
912 __ |a GBV_ILN_4249
912 __ |a GBV_ILN_4251
912 __ |a GBV_ILN_4305
912 __ |a GBV_ILN_4306
912 __ |a GBV_ILN_4307
912 __ |a GBV_ILN_4313
912 __ |a GBV_ILN_4322
912 __ |a GBV_ILN_4323
912 __ |a GBV_ILN_4324
912 __ |a GBV_ILN_4325
912 __ |a GBV_ILN_4326
912 __ |a GBV_ILN_4328
912 __ |a GBV_ILN_4333
912 __ |a GBV_ILN_4334
912 __ |a GBV_ILN_4335
912 __ |a GBV_ILN_4336
912 __ |a GBV_ILN_4338
912 __ |a GBV_ILN_4393
912 __ |a GBV_ILN_4700
951 __ |a AR
952 __ |d 13 |j 2024 |e 1 |c 03 |h 1-15
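The record above is plain MARC21. As a hedged sketch, it could be read programmatically with pymarc, a widely used Python MARC library; the filename `record.xml` is hypothetical and stands for a saved MARCXML export of this record:

```python
# Sketch: extract title, abstract, and host-item fields from a MARCXML
# export of the record shown above. Assumes `pip install pymarc` and a
# local file record.xml (hypothetical name).
from pymarc import parse_xml_to_array

for record in parse_xml_to_array("record.xml"):
    for field in record.get_fields("245"):  # title statement
        print("Title:", " ".join(field.get_subfields("a")))
    for field in record.get_fields("520"):  # abstract/summary
        print("Abstract:", field.get_subfields("a")[0][:80], "...")
    for field in record.get_fields("773"):  # host item (journal, volume, pages)
        print("In:", " ".join(field.get_subfields("t", "g")))
```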
author_variant |
a j aj t t n tt ttn m m a p mma mmap a z k az azk |
matchkey_str |
article:21926360:2024----::decnouinlpknnuantokoeb |
hierarchy_sort_str |
2024 |
publishDate |
2024 |
language |
English |
source |
Enthalten in Progress in artificial intelligence 13(2024), 1 vom: März, Seite 1-15 volume:13 year:2024 number:1 month:03 pages:1-15 |
sourceStr |
Enthalten in Progress in artificial intelligence 13(2024), 1 vom: März, Seite 1-15 volume:13 year:2024 number:1 month:03 pages:1-15 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Spiking neural network Satellite image classification Epileptic seizure detection |
dewey-raw |
004 |
isfreeaccess_bool |
true |
container_title |
Progress in artificial intelligence |
authorswithroles_txt_mv |
Javanshir, Amirhossein @@aut@@ Nguyen, Thanh Thi @@aut@@ Parvez Mahmud, M. A. @@aut@@ Kouzani, Abbas Z. @@aut@@ |
publishDateDaySort_date |
2024-03-01T00:00:00Z |
hierarchy_top_id |
718730933 |
dewey-sort |
14 |
id |
SPR055434231 |
language_de |
englisch |
author |
Javanshir, Amirhossein |
spellingShingle |
Javanshir, Amirhossein ddc 004 misc Spiking neural network misc Satellite image classification misc Epileptic seizure detection A Deep Convolutional Spiking Neural Network for embedded applications |
authorStr |
Javanshir, Amirhossein |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)718730933 |
format |
electronic Article |
dewey-ones |
004 - Data processing & computer science 600 - Technology |
delete_txt_mv |
keep |
author_role |
aut aut aut aut |
collection |
springer |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
2192-6360 |
topic_title |
004 600 VZ A Deep Convolutional Spiking Neural Network for embedded applications Spiking neural network (dpeaa)DE-He213 Satellite image classification (dpeaa)DE-He213 Epileptic seizure detection (dpeaa)DE-He213 |
topic |
ddc 004 misc Spiking neural network misc Satellite image classification misc Epileptic seizure detection |
topic_unstemmed |
ddc 004 misc Spiking neural network misc Satellite image classification misc Epileptic seizure detection |
topic_browse |
ddc 004 misc Spiking neural network misc Satellite image classification misc Epileptic seizure detection |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Progress in artificial intelligence |
hierarchy_parent_id |
718730933 |
dewey-tens |
000 - Computer science, knowledge & systems 600 - Technology |
hierarchy_top_title |
Progress in artificial intelligence |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)718730933 (DE-600)2668413-5 |
title |
A Deep Convolutional Spiking Neural Network for embedded applications |
ctrlnum |
(DE-627)SPR055434231 (SPR)s13748-024-00313-4-e |
title_full |
A Deep Convolutional Spiking Neural Network for embedded applications |
author_sort |
Javanshir, Amirhossein |
journal |
Progress in artificial intelligence |
journalStr |
Progress in artificial intelligence |
lang_code |
eng |
isOA_bool |
true |
dewey-hundreds |
000 - Computer science, information & general works 600 - Technology |
recordtype |
marc |
publishDateSort |
2024 |
contenttype_str_mv |
txt |
container_start_page |
1 |
author_browse |
Javanshir, Amirhossein Nguyen, Thanh Thi Parvez Mahmud, M. A. Kouzani, Abbas Z. |
container_volume |
13 |
class |
004 600 VZ |
format_se |
Elektronische Aufsätze |
author-letter |
Javanshir, Amirhossein |
doi_str_mv |
10.1007/s13748-024-00313-4 |
normlink |
(ORCID)0000-0002-6292-1214 |
normlink_prefix_str_mv |
(orcid)0000-0002-6292-1214 |
dewey-full |
004 600 |
author2-role |
verfasserin |
title_sort |
a deep convolutional spiking neural network for embedded applications |
title_auth |
A Deep Convolutional Spiking Neural Network for embedded applications |
abstract |
Abstract: Deep neural networks (DNNs) have received a great deal of interest in solving everyday tasks in recent years. However, their computational and energy costs limit their use on mobile and edge devices. The neuromorphic computing approach called spiking neural networks (SNNs) represents a potential solution for bridging the gap between performance and computational expense. Despite their potential energy-efficiency benefits, current SNNs are mostly used with datasets such as MNIST, Fashion-MNIST, and CIFAR10, limiting their applications compared to DNNs. Therefore, the applicability of SNNs to real-world applications, such as scene classification and forecasting epileptic seizures, has yet to be demonstrated. This paper develops a deep convolutional spiking neural network (DCSNN) for embedded applications. We explore a convolutional architecture, Visual Geometry Group (VGG16), to implement deeper SNNs. To train a spiking model, we convert the pre-trained VGG16 into a corresponding spiking equivalent with nearly comparable performance to the original. The trained weights of VGG16 were then transferred to the equivalent SNN architecture while performing a proper weight–threshold balancing. The model is evaluated in two case studies: land use and land cover classification, and epileptic seizure detection. Experimental results show a classification accuracy of 94.88%, a seizure detection specificity of 99.45%, and a sensitivity of 95.06%. It is confirmed that conversion-based training of SNNs is promising, and the benefits of DNNs, such as solving complex and real-world problems, become available to SNNs. © The Author(s) 2024
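The abstract describes conversion-based SNN training: copy a trained network's weights into integrate-and-fire (IF) neurons and balance each layer's firing threshold against its activations. A minimal sketch of that idea, under stated assumptions: dense layers stand in for VGG16's convolutions, max-activation balancing is one common rule rather than necessarily the paper's exact procedure, and all names (`weights`, `calibration_data`, `T`) are hypothetical.

```python
# Illustrative sketch only: conversion-based SNN inference with
# weight-threshold balancing. Not the paper's implementation.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def balance_thresholds(weights, calibration_data):
    """Set each layer's firing threshold to the maximum ReLU activation
    it produces on held-out calibration data (one common balancing rule)."""
    thresholds, a = [], calibration_data
    for W in weights:
        a = relu(a @ W)
        thresholds.append(float(a.max()))
    return thresholds

def snn_forward(weights, thresholds, x, T=100):
    """Rate-coded integrate-and-fire simulation: drive the network with a
    constant input current for T steps; a neuron spikes when its membrane
    potential crosses the balanced threshold, then resets by subtraction."""
    mem = [np.zeros(W.shape[1]) for W in weights]
    counts = np.zeros(weights[-1].shape[1])
    for _ in range(T):
        s = x  # constant-current encoding of the input
        for i, (W, th) in enumerate(zip(weights, thresholds)):
            mem[i] += s @ W
            spikes = (mem[i] >= th).astype(float)
            mem[i] -= spikes * th  # soft reset preserves residual charge
            s = spikes
        counts += s
    return counts / T  # output firing rates approximate ANN activations

# Toy usage: random weights play the role of a pre-trained ANN.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 4))]
thresholds = balance_thresholds(weights, rng.random((32, 8)))
print(snn_forward(weights, thresholds, rng.random(8)))
```

The longer the simulation window T, the more closely the output firing rates track the source network's activations, which is the sense in which a converted SNN can have "nearly comparable performance" to its ANN source.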
container_issue |
1 |
title_short |
A Deep Convolutional Spiking Neural Network for embedded applications |
url |
https://dx.doi.org/10.1007/s13748-024-00313-4 |
remote_bool |
true |
author2 |
Nguyen, Thanh Thi Parvez Mahmud, M. A. Kouzani, Abbas Z. |
author2Str |
Nguyen, Thanh Thi Parvez Mahmud, M. A. Kouzani, Abbas Z. |
ppnlink |
718730933 |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1007/s13748-024-00313-4 |
up_date |
2024-07-03T15:38:24.687Z |
_version_ |
1803572849870372864 |