Fpar: filter pruning via attention and rank enhancement for deep convolutional neural networks acceleration

Abstract:

Pruning deep neural networks is crucial for enabling their deployment on resource-constrained edge devices, where the vast number of parameters and computational requirements pose significant challenges. However, many of these methods consider only the importance of a single filter to the n...

Authors:

Chen, Yanming [author]

Wu, Gang [author]

Shuai, Mingrui [author]

Lou, Shubin [author]

Zhang, Yiwen [author]

An, Zhulin [author]

Format:

E-Article

Language:

English

Published:

2024

Keywords:

Neural network

Model compression

Filter pruning

Attention

Rank enhancement

CNNs

Note:

© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2024. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Parent work:

Contained in: International journal of machine learning and cybernetics - Springer Berlin Heidelberg, 2010, 15(2024), 7, published 29 Jan., pages 2973-2985

Links:

Full text

DOI / URN:

10.1007/s13042-023-02076-1

Catalog ID:

SPR056247826
