Tree-based special-purpose Array architectures for neural computing
Abstract: A massively parallel architecture called the mesh-of-appendixed-trees (MAT) is shown to be suitable for processing artificial neural networks (ANNs). Both the recall and the learning phases of the multilayer feedforward ANN model with backpropagation are considered. The MAT structure is refined to produce two special-purpose array processors, FMAT1 and FMAT2, for efficient ANN computation. This refinement tends to reduce circuit area and increase hardware utilization. FMAT1 is a simple structure suitable for the recall phase; FMAT2 requires little extra hardware but supports learning as well. A major characteristic of the proposed neurocomputers is high performance: it takes O(log N) time to process a neural network with N neurons in its largest layer. The proposed architecture is shown to provide the best number of connections per unit time when compared to several major techniques in the literature. Another important feature of the approach is its ability to pipeline more than one input pattern, which further improves performance.
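The FMAT1/FMAT2 designs themselves are not reproduced in this record, but the O(log N) claim rests on a standard idea: each neuron's weighted sum is accumulated over a binary tree, so the number of parallel steps grows with the logarithm of the layer width rather than with the width itself. The sketch below simulates that log-depth recall step in Python; all function and variable names are illustrative, not taken from the paper.

import math

def tree_reduce_sum(values):
    # Pairwise (binary-tree) summation; the tree depth is ceil(log2(n))
    # parallel steps, mirroring the O(log N) claim for tree-based arrays.
    steps = 0
    while len(values) > 1:
        # One "parallel step": in hardware, all pairs combine simultaneously.
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

def recall_layer(weights, inputs):
    # Recall phase for one fully connected layer: each neuron's activation
    # is a tree-reduced weighted sum followed by a sigmoid.
    outputs, max_steps = [], 0
    for row in weights:  # one tree per output neuron
        s, steps = tree_reduce_sum([w * x for w, x in zip(row, inputs)])
        outputs.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid activation
        max_steps = max(max_steps, steps)
    return outputs, max_steps

# Toy 4-input, 2-neuron layer: the reduction finishes in ceil(log2(4)) = 2 steps.
w = [[0.5, -0.2, 0.1, 0.3],
     [0.4, 0.9, -0.7, 0.2]]
x = [1.0, 0.5, -1.0, 2.0]
y, depth = recall_layer(w, x)
print(y, depth)  # depth == 2 == log2(4)

In the hardware, the per-neuron trees operate concurrently, so the whole layer finishes in the depth of a single tree; the sequential Python loop over neurons only stands in for that parallelism.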
Detailed description

Author(s): Malluhi, Q. M. [author]; Bayoumi, M. A. [author]; Rao, T. R. N. [author]
Format: E-Article
Language: English
Published: 1995
Keywords: Learning Phase; Input Pattern; Local Memory; Synaptic Weight; Systolic Array
Contained in: Journal of VLSI signal processing systems for signal, image and video technology - Springer Netherlands, 1989, 11(1995), no. 3, 01 Dec. 1995, pages 245-262
Contained in (structured): volume:11 ; year:1995 ; number:3 ; day:01 ; month:12 ; pages:245-262
Full text (license required): https://dx.doi.org/10.1007/BF02107056
DOI / URN: 10.1007/BF02107056
Catalog ID: SPR018310788
MARC21 record

LEADER  01000caa a22002652 4500
001     SPR018310788
003     DE-627
005     20201124222346.0
007     cr uuu---uuuuu
008     201006s1995 xx |||||o 00| ||eng c
024 7_  |a 10.1007/BF02107056 |2 doi
035 __  |a (DE-627)SPR018310788
035 __  |a (SPR)BF02107056-e
040 __  |a DE-627 |b ger |c DE-627 |e rakwb
041 __  |a eng
100 1_  |a Malluhi, Q. M. |e verfasserin |4 aut
245 10  |a Tree-based special-purpose Array architectures for neural computing
264 _1  |c 1995
336 __  |a Text |b txt |2 rdacontent
337 __  |a Computermedien |b c |2 rdamedia
338 __  |a Online-Ressource |b cr |2 rdacarrier
520 __  |a Abstract A massively parallel architecture called the mesh-of-appendixed-trees (MAT) is shown to be suitable for processing artificial neural networks (ANNs). Both the recall and the learning phases of the multilayer feedforward ANN model with backpropagation are considered. The MAT structure is refined to produce two special-purpose array processors, FMAT1 and FMAT2, for efficient ANN computation. This refinement tends to reduce circuit area and increase hardware utilization. FMAT1 is a simple structure suitable for the recall phase. FMAT2 requires little extra hardware but supports learning as well. A major characteristic of the proposed neurocomputers is high performance. It takes O(log N) time to process a neural network with N neurons in its largest layer. Our proposed architecture is shown to provide the best number of connections per unit time when compared to several major techniques in the literature. Another important feature of our approach is its ability to pipeline more than one input pattern, which further improves the performance.
650 _4  |a Learning Phase |7 (dpeaa)DE-He213
650 _4  |a Input Pattern |7 (dpeaa)DE-He213
650 _4  |a Local Memory |7 (dpeaa)DE-He213
650 _4  |a Synaptic Weight |7 (dpeaa)DE-He213
650 _4  |a Systolic Array |7 (dpeaa)DE-He213
700 1_  |a Bayoumi, M. A. |e verfasserin |4 aut
700 1_  |a Rao, T. R. N. |e verfasserin |4 aut
773 08  |i Enthalten in |t Journal of VLSI signal processing systems for signal, image and video technology |d Springer Netherlands, 1989 |g 11(1995), 3 vom: 01. Dez., Seite 245-262 |w (DE-627)SPR018308090 |7 nnns
773 18  |g volume:11 |g year:1995 |g number:3 |g day:01 |g month:12 |g pages:245-262
856 40  |u https://dx.doi.org/10.1007/BF02107056 |z lizenzpflichtig |3 Volltext
912 __  |a GBV_USEFLAG_A
912 __  |a SYSFLAG_A
912 __  |a GBV_SPRINGER
912 __  |a GBV_ILN_40
912 __  |a GBV_ILN_2006
912 __  |a GBV_ILN_2027
951 __  |a AR
952 __  |d 11 |j 1995 |e 3 |b 01 |c 12 |h 245-262
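The index also distributes this record as MARCXML (namespace http://www.loc.gov/MARC21/slim), which can be read with Python's standard library alone. A minimal sketch, assuming the MARCXML has been saved to a file named record.xml (the file name and helper functions are illustrative):

import xml.etree.ElementTree as ET

MARC_NS = "{http://www.loc.gov/MARC21/slim}"

def datafields(record, tag):
    # Yield datafield elements with the given MARC tag.
    for df in record.iter(MARC_NS + "datafield"):
        if df.get("tag") == tag:
            yield df

def subfield(df, code):
    # Return the first subfield value with the given code, or None.
    for sf in df.findall(MARC_NS + "subfield"):
        if sf.get("code") == code:
            return sf.text
    return None

# The MARCXML root is a <collection> wrapping one <record>.
record = ET.parse("record.xml").getroot().find(MARC_NS + "record")

title = subfield(next(datafields(record, "245")), "a")
authors = [subfield(df, "a") for df in datafields(record, "100")]
authors += [subfield(df, "a") for df in datafields(record, "700")]
doi = subfield(next(datafields(record, "024")), "a")

print(title)                             # Tree-based special-purpose Array ...
print("; ".join(authors))                # Malluhi; Bayoumi; Rao
print("https://dx.doi.org/" + doi)       # resolver URL built from field 024

The 024 and 856 fields carry the DOI twice, once as an identifier and once as the license-restricted full-text link, so either field can be used to build the resolver URL.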