CCPrune: Collaborative channel pruning for learning compact convolutional networks
Deep convolutional neural networks (CNNs) are difficult to deploy on resource-constrained devices because of their heavy computational cost. Channel pruning is an effective way to reduce computation and accelerate network inference. Most channel pruning methods use statistics from a...
Detailed description

Author: Chen, Yanming [author]
Format: E-Article
Language: English
Published: 2021 (transfer abstract)
Keywords: Channel pruning; Deep convolutional neural network; Model compression
Extent: 11 pages
Parent work: Contained in: The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast - Liu, Yang ELSEVIER, 2018, an international journal, Amsterdam
Parent work: volume:451 ; year:2021 ; day:3 ; month:09 ; pages:35-45 ; extent:11
Links:
DOI / URN: 10.1016/j.neucom.2021.04.063
Catalog ID: ELV054328179
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV054328179 | ||
003 | DE-627 | ||
005 | 20230626035948.0 | ||
007 | cr uuu---uuuuu | ||
008 | 210910s2021 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.neucom.2021.04.063 |2 doi | |
028 | 5 | 2 | |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001417.pica |
035 | |a (DE-627)ELV054328179 | ||
035 | |a (ELSEVIER)S0925-2312(21)00605-6 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
082 | 0 | 4 | |a 570 |q VZ |
084 | |a BIODIV |q DE-30 |2 fid | ||
084 | |a 35.70 |2 bkl | ||
084 | |a 42.12 |2 bkl | ||
100 | 1 | |a Chen, Yanming |e verfasserin |4 aut | |
245 | 1 | 0 | |a CCPrune: Collaborative channel pruning for learning compact convolutional networks |
264 | 1 | |c 2021transfer abstract | |
300 | |a 11 | ||
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a nicht spezifiziert |b z |2 rdamedia | ||
338 | |a nicht spezifiziert |b zu |2 rdacarrier | ||
520 | |a Deep convolutional neural networks (CNNs) are difficult to deploy on resource-constrained devices because of their heavy computational cost. Channel pruning is an effective way to reduce computation and accelerate network inference. Most channel pruning methods use statistics from a single structure (the convolutional layer or the batch normalization layer) of the sparse network to evaluate the importance of channels, and as a result they often mistakenly delete important channels. In view of this, we propose a novel method, Collaborative Channel Pruning (CCPrune), which evaluates channel importance by combining the convolutional layer weights and the BN layer scaling factors. The proposed method first introduces regularization on the convolutional layer weights and the BN layer scaling factors, respectively. It then combines the weights of the convolutional layer and the scaling factors of the BN layer to evaluate the importance of each channel. Finally, it deletes unimportant channels without reducing the performance of the model. The experimental results demonstrate the effectiveness of our method: on CIFAR-10, it reduces the FLOPs of VGG-19 by 85.50% with only a slight drop in accuracy, and reduces the FLOPs of ResNet-50 by 78.31% with no loss of accuracy. | ||
650 | 7 | |a Channel pruning |2 Elsevier | |
650 | 7 | |a Deep convolutional neural network |2 Elsevier | |
650 | 7 | |a Model compression |2 Elsevier | |
700 | 1 | |a Wen, Xiang |4 oth | |
700 | 1 | |a Zhang, Yiwen |4 oth | |
700 | 1 | |a Shi, Weisong |4 oth | |
773 | 0 | 8 | |i Enthalten in |n Elsevier |a Liu, Yang ELSEVIER |t The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |d 2018 |d an international journal |g Amsterdam |w (DE-627)ELV002603926 |
773 | 1 | 8 | |g volume:451 |g year:2021 |g day:3 |g month:09 |g pages:35-45 |g extent:11 |
856 | 4 | 0 | |u https://doi.org/10.1016/j.neucom.2021.04.063 |3 Volltext |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
912 | |a FID-BIODIV | ||
912 | |a SSG-OLC-PHA | ||
936 | b | k | |a 35.70 |j Biochemie: Allgemeines |q VZ |
936 | b | k | |a 42.12 |j Biophysik |q VZ |
951 | |a AR | ||
952 | |d 451 |j 2021 |b 3 |c 0903 |h 35-45 |g 11 |
author_variant |
y c yc |
matchkey_str |
chenyanmingwenxiangzhangyiwenshiweisong:2021----:crnclaoaiehnepuigolanncmato |
hierarchy_sort_str |
2021transfer abstract |
bklnumber |
35.70 42.12 |
publishDate |
2021 |
language |
English |
source |
Enthalten in The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast Amsterdam volume:451 year:2021 day:3 month:09 pages:35-45 extent:11 |
sourceStr |
Enthalten in The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast Amsterdam volume:451 year:2021 day:3 month:09 pages:35-45 extent:11 |
format_phy_str_mv |
Article |
bklname |
Biochemie: Allgemeines Biophysik |
institution |
findex.gbv.de |
topic_facet |
Channel pruning Deep convolutional neural network Model compression |
dewey-raw |
570 |
isfreeaccess_bool |
false |
container_title |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
authorswithroles_txt_mv |
Chen, Yanming @@aut@@ Wen, Xiang @@oth@@ Zhang, Yiwen @@oth@@ Shi, Weisong @@oth@@ |
publishDateDaySort_date |
2021-01-03T00:00:00Z |
hierarchy_top_id |
ELV002603926 |
dewey-sort |
3570 |
id |
ELV054328179 |
language_de |
englisch |
author |
Chen, Yanming |
spellingShingle |
Chen, Yanming ddc 570 fid BIODIV bkl 35.70 bkl 42.12 Elsevier Channel pruning Elsevier Deep convolutional neural network Elsevier Model compression CCPrune: Collaborative channel pruning for learning compact convolutional networks |
authorStr |
Chen, Yanming |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV002603926 |
format |
electronic Article |
dewey-ones |
570 - Life sciences; biology |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl CCPrune: Collaborative channel pruning for learning compact convolutional networks Channel pruning Elsevier Deep convolutional neural network Elsevier Model compression Elsevier |
topic |
ddc 570 fid BIODIV bkl 35.70 bkl 42.12 Elsevier Channel pruning Elsevier Deep convolutional neural network Elsevier Model compression |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
x w xw y z yz w s ws |
hierarchy_parent_title |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
hierarchy_parent_id |
ELV002603926 |
dewey-tens |
570 - Life sciences; biology |
hierarchy_top_title |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV002603926 |
title |
CCPrune: Collaborative channel pruning for learning compact convolutional networks |
ctrlnum |
(DE-627)ELV054328179 (ELSEVIER)S0925-2312(21)00605-6 |
title_full |
CCPrune: Collaborative channel pruning for learning compact convolutional networks |
author_sort |
Chen, Yanming |
journal |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
journalStr |
The TORC1 signaling pathway regulates respiration-induced mitophagy in yeast |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
500 - Science |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
zzz |
container_start_page |
35 |
author_browse |
Chen, Yanming |
container_volume |
451 |
physical |
11 |
class |
570 VZ BIODIV DE-30 fid 35.70 bkl 42.12 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Chen, Yanming |
doi_str_mv |
10.1016/j.neucom.2021.04.063 |
dewey-full |
570 |
title_sort |
ccprune: collaborative channel pruning for learning compact convolutional networks |
title_auth |
CCPrune: Collaborative channel pruning for learning compact convolutional networks |
abstract |
Deep convolutional neural networks (CNNs) are difficult to deploy on resource-constrained devices because of their heavy computational cost. Channel pruning is an effective way to reduce computation and accelerate network inference. Most channel pruning methods use statistics from a single structure (the convolutional layer or the batch normalization layer) of the sparse network to evaluate the importance of channels, and as a result they often mistakenly delete important channels. In view of this, we propose a novel method, Collaborative Channel Pruning (CCPrune), which evaluates channel importance by combining the convolutional layer weights and the BN layer scaling factors. The proposed method first introduces regularization on the convolutional layer weights and the BN layer scaling factors, respectively. It then combines the weights of the convolutional layer and the scaling factors of the BN layer to evaluate the importance of each channel. Finally, it deletes unimportant channels without reducing the performance of the model. The experimental results demonstrate the effectiveness of our method: on CIFAR-10, it reduces the FLOPs of VGG-19 by 85.50% with only a slight drop in accuracy, and reduces the FLOPs of ResNet-50 by 78.31% with no loss of accuracy.
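As a reading aid for the abstract above, the following is a minimal, hedged sketch (not the authors' reference implementation) of how a collaborative channel-importance score combining convolutional-layer weight magnitudes with BN scaling factors might look in PyTorch. The pairing of each Conv2d with its following BatchNorm2d, the L1 norm, the multiplicative combination, and the `keep_ratio` threshold are illustrative assumptions, not details taken from the paper.

```python
# Hedged illustration only: a plausible sketch of the channel-importance idea
# described in the abstract (combine conv-weight magnitude with the BN scaling
# factor gamma). The product form, L1 norm, and keep_ratio threshold are
# assumptions made for this example, not the paper's exact formulation.
import torch
import torch.nn as nn


def channel_importance(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> torch.Tensor:
    """Score each output channel of `conv` using both its kernel and its BN gamma."""
    # L1 norm of each output channel's kernel: shape (out_channels,)
    w_norm = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    # Absolute BN scaling factor for the same channels: shape (out_channels,)
    gamma = bn.weight.detach().abs()
    # Collaborative score: a channel looks prunable only if both signals are small.
    return w_norm * gamma


def prune_mask(conv: nn.Conv2d, bn: nn.BatchNorm2d, keep_ratio: float = 0.5) -> torch.Tensor:
    """Boolean mask over output channels that keeps the top `keep_ratio` fraction."""
    scores = channel_importance(conv, bn)
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return scores >= threshold


if __name__ == "__main__":
    conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
    bn = nn.BatchNorm2d(32)
    mask = prune_mask(conv, bn, keep_ratio=0.75)
    print(f"keeping {int(mask.sum())} of {mask.numel()} channels")
```

Building a pruned network from such a mask (copying the surviving filters into a narrower layer and fine-tuning) would be the usual follow-up step and is not shown here.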
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U FID-BIODIV SSG-OLC-PHA |
title_short |
CCPrune: Collaborative channel pruning for learning compact convolutional networks |
url |
https://doi.org/10.1016/j.neucom.2021.04.063 |
remote_bool |
true |
author2 |
Wen, Xiang Zhang, Yiwen Shi, Weisong |
author2Str |
Wen, Xiang Zhang, Yiwen Shi, Weisong |
ppnlink |
ELV002603926 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth |
doi_str |
10.1016/j.neucom.2021.04.063 |
up_date |
2024-07-06T21:25:22.422Z |
_version_ |
1803866469745819648 |
score |
7.400592 |