Tree-based special-purpose Array architectures for neural computing
Abstract. A massively parallel architecture called the mesh-of-appendixed-trees (MAT) is shown to be suitable for processing artificial neural networks (ANNs). Both the recall and the learning phases of the multilayer feedforward ANN model with backpropagation are considered. The MAT structure is refined to produce two special-purpose array processors, FMAT1 and FMAT2, for efficient ANN computation. This refinement reduces circuit area and increases hardware utilization. FMAT1 is a simple structure suitable for the recall phase. FMAT2 requires little extra hardware but supports learning as well. A major characteristic of the proposed neurocomputers is high performance: it takes O(log N) time to process a neural network with N neurons in its largest layer. Our proposed architecture is shown to provide the best number of connections per unit time when compared to several major techniques in the literature. Another important feature of our approach is its ability to pipeline more than one input pattern, which further improves performance.
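The O(log N) bound comes from the tree portion of the architecture: a neuron's N weighted inputs are summed pairwise level by level, so the summation depth is ceil(log2 N) rather than N-1 serial additions. The sketch below models that reduction in plain Python; the function name and the sequential simulation of the parallel levels are illustrative assumptions, not part of the paper's design.

```python
import math

def tree_reduce_dot(weights, inputs):
    """Compute sum(w*x) by pairwise (tree) reduction.

    Each while-loop iteration simulates one parallel level of a
    summation tree: pairs are combined simultaneously in hardware,
    so N products are reduced in ceil(log2(N)) levels instead of
    N-1 serial steps. Returns (sum, number_of_levels).
    Hypothetical helper for illustration only.
    """
    level = [w * x for w, x in zip(weights, inputs)]
    depth = 0
    while len(level) > 1:
        # Combine adjacent pairs; an odd element passes through.
        level = [sum(level[i:i + 2]) for i in range(0, len(level), 2)]
        depth += 1
    return level[0], depth

total, levels = tree_reduce_dot([1, 2, 3, 4], [1, 1, 1, 1])
# total == 10, levels == 2, i.e. log2(4) parallel levels
assert levels == math.ceil(math.log2(4))
```

In the actual MAT/FMAT hardware all pairs at a level are combined in the same clock step, which is what yields the O(log N) recall time per layer quoted in the abstract.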