Pruning during training by network efficacy modeling

Abstract: Deep neural networks (DNNs) are costly to train. Pruning, an approach to alleviate model complexity by zeroing out or pruning DNN elements, has shown promise in reducing training costs for DNNs with little to no loss of efficacy at a given task. This paper presents a novel method to perform early p...
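
As context for the abstract's description of pruning as "zeroing out or pruning DNN elements", below is a minimal, generic Python sketch of magnitude-based pruning. It is illustrative only and does not reproduce the network-efficacy-modeling criterion proposed in the article; the function name magnitude_prune and its parameters are hypothetical.

    import numpy as np

    def magnitude_prune(weights, sparsity):
        """Zero out the smallest-magnitude entries of a weight matrix.

        Generic illustration of pruning as "zeroing out DNN elements";
        this is plain magnitude pruning, not the saliency/efficacy-modeling
        criterion proposed in the article.
        """
        k = int(sparsity * weights.size)               # number of entries to zero out
        if k == 0:
            return weights.copy()
        flat = np.abs(weights).ravel()
        threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
        mask = np.abs(weights) > threshold             # keep only larger magnitudes
        return weights * mask

    # Example: zero out roughly 90% of a random layer's weights.
    rng = np.random.default_rng(0)
    w = rng.normal(size=(64, 64))
    w_pruned = magnitude_prune(w, sparsity=0.9)
    print(f"nonzero fraction after pruning: {np.count_nonzero(w_pruned) / w.size:.2f}")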
Full description

Author:

Rajpal, Mohit [author]

Zhang, Yehong

Low, Bryan Kian Hsiang

Format:

Article

Language:

English

Published:

2023

Keywords:

Early pruning

Network efficacy modeling

Network saliency

Multi-output Gaussian process

Foresight pruning

Note:

© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Parent work:

Contained in: Machine learning - Springer US, 1986, 112(2023), 7, published 14 March, pages 2653-2684


Links:

Full text

DOI / URN:

10.1007/s10994-023-06304-1

Catalog ID:

OLC2144469385
