Versatile Architectures of Artificial Neural Network with Variable Capacity
Abstract Artificial neural networks (ANNs) are widely used in modern engineering applications. Deciding on the number of layers and the number of nodes per layer, i.e., the capacity of the ANN, is always non-trivial. A wrong choice of capacity causes underfitting or overfitting....
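The paper's contribution is hardware, but the "variable capacity" idea in the abstract, treating depth and width as runtime parameters rather than fixed design-time constants, can be sketched in plain Python. This is an illustrative sketch only; the function names are hypothetical and not taken from the paper:

```python
import math
import random

def make_mlp(n_inputs, n_layers, nodes_per_layer, seed=0):
    """Build random weight matrices for an MLP whose depth (n_layers) and
    width (nodes_per_layer) are ordinary parameters: the 'variable capacity'."""
    rng = random.Random(seed)
    sizes = [n_inputs] + [nodes_per_layer] * n_layers
    # weights[i] has shape (sizes[i+1], sizes[i]): one row per output node
    return [[[rng.uniform(-1.0, 1.0) for _ in range(sizes[i])]
             for _ in range(sizes[i + 1])]
            for i in range(n_layers)]

def forward(weights, x):
    """Forward pass: each layer is a matrix-vector product plus a sigmoid."""
    for layer in weights:
        x = [1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(row, x))))
             for row in layer]
    return x

# Depth and width are chosen per application, not baked into the structure.
mlp = make_mlp(n_inputs=4, n_layers=3, nodes_per_layer=8)
out = forward(mlp, [0.5, -0.2, 0.1, 0.9])
print(len(out))  # 8: the width of the final layer
```

Changing `n_layers` or `nodes_per_layer` reconfigures the network without touching the forward-pass code, which is the software analogue of the reconfigurable hardware datapath the abstract describes.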
Detailed description

Author: Basiri, M. Mohamed Asan [author]
Format: Article
Language: English
Published: 2022
Subject headings: Artificial neural network; Convolutional neural network; Multilayer perceptron; Recurrent neural network
Note: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
Contained in: Circuits, Systems and Signal Processing - Springer US, 1982, 41(2022), 11, 01 July, pages 6333-6353
Parent work: volume:41 ; year:2022 ; number:11 ; day:01 ; month:07 ; pages:6333-6353
DOI: 10.1007/s00034-022-02087-3
Catalog ID: OLC2079570382
LEADER | 01000caa a22002652 4500
001 | OLC2079570382
003 | DE-627
005 | 20230506065557.0
007 | tu
008 | 221220s2022 xx ||||| 00| ||eng c
024 | 7 | |a 10.1007/s00034-022-02087-3 |2 doi
035 | |a (DE-627)OLC2079570382
035 | |a (DE-He213)s00034-022-02087-3-p
040 | |a DE-627 |b ger |c DE-627 |e rakwb
041 | |a eng
082 | 0 | 4 | |a 600 |q VZ
100 | 1 | |a Basiri, M. Mohamed Asan |e verfasserin |0 (orcid)0000-0002-1898-1690 |4 aut
245 | 1 | 0 | |a Versatile Architectures of Artificial Neural Network with Variable Capacity
264 | 1 | |c 2022
336 | |a Text |b txt |2 rdacontent
337 | |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 | |a Band |b nc |2 rdacarrier
500 | |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
520 | |a Abstract Artificial neural networks (ANNs) are widely used in modern engineering applications. Deciding on the number of layers and the number of nodes per layer, i.e., the capacity of the ANN, is always non-trivial. A wrong choice of capacity causes underfitting or overfitting. This paper proposes various versatile, flexible hardware architectures of multilayer perceptron (MLP)-based neural networks, in which the number of layers and the number of nodes per layer can be changed to suit the requirements of the application, avoiding underfitting or overfitting. Also, the weights of each node of the MLP can be fixed by the training phase. Once the network has been trained, the architecture of the MLP can be changed without affecting accuracy. All the proposed and existing hardware designs of ANNs are implemented in 45 nm CMOS technology. The proposed high-throughput design with 3 layers and 512 nodes per layer achieves a 53.8% improvement in throughput compared with the existing technique.
650 | 4 | |a Artificial neural network
650 | 4 | |a Convolutional neural network
650 | 4 | |a Multilayer perceptron
650 | 4 | |a Recurrent neural network
773 | 0 | 8 | |i Enthalten in |t Circuits, systems and signal processing |d Springer US, 1982 |g 41(2022), 11 vom: 01. Juli, Seite 6333-6353 |w (DE-627)130312134 |w (DE-600)588684-3 |w (DE-576)015889939 |x 0278-081X |7 nnns
773 | 1 | 8 | |g volume:41 |g year:2022 |g number:11 |g day:01 |g month:07 |g pages:6333-6353
856 | 4 | 1 | |u https://doi.org/10.1007/s00034-022-02087-3 |z lizenzpflichtig |3 Volltext
912 | |a GBV_USEFLAG_A
912 | |a SYSFLAG_A
912 | |a GBV_OLC
912 | |a SSG-OLC-TEC
912 | |a GBV_ILN_2244
951 | |a AR
952 | |d 41 |j 2022 |e 11 |b 01 |c 07 |h 6333-6353