Weighted Channel-Wise Decomposed Convolutional Neural Networks
Abstract: Currently, block term decomposition is widely used to factorize regular convolutional kernels into several groups to reduce parameters. However, networks designed with this method lack adequate information interaction among the groups. Therefore, this paper proposes Weighted Channel-wise Decomposed Convolutions (WCDC); the resulting networks are called WCDC-Nets. The WCDC convolutional kernel employs channel-wise decomposition to cut parameters and computational complexity to a minimum. Furthermore, a tiny learnable weighting module is used to capture connections among the outputs of the channel-wise convolutions in the WCDC kernel. The WCDC filter can easily be applied in many popular networks and trained end to end, significantly improving model flexibility. Experimental results on benchmark datasets show that WCDC-Nets achieve better performance with far fewer parameters and floating-point computations.
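The idea sketched in the abstract — a channel-wise (depthwise) decomposition of a convolution, followed by a tiny learnable weighting module that lets information flow across the per-channel outputs — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `channelwise_conv`, `weighted_module`, and all shapes are hypothetical stand-ins for the general technique.

```python
import numpy as np

def channelwise_conv(x, kernels):
    """Channel-wise (depthwise) convolution: one k x k kernel per input
    channel, 'valid' padding, stride 1. x: (C, H, W), kernels: (C, k, k).
    Each output channel sees only its own input channel."""
    C, H, W = x.shape
    _, k, _ = kernels.shape
    out = np.empty((C, H - k + 1, W - k + 1))
    for c in range(C):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * kernels[c])
    return out

def weighted_module(y, weight):
    """Hypothetical 'tiny learnable weighting module': a small mixing
    matrix (equivalent to a 1x1 convolution) that recombines the
    per-channel outputs, restoring cross-channel interaction.
    y: (C, H, W), weight: (C_out, C) -> (C_out, H, W)."""
    C, H, W = y.shape
    return (weight @ y.reshape(C, H * W)).reshape(-1, H, W)
```

Under these assumptions the parameter saving is easy to see: a regular convolution needs C_out * C * k * k weights, while the decomposed pair needs only C * k * k (channel-wise kernels) + C_out * C (mixing matrix).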
Detailed description
Author: Lu, Yao [author]
Format: Article
Language: English
Published: 2019
Keywords:
Note: © Springer Science+Business Media, LLC, part of Springer Nature 2019
Contained in: Neural processing letters - Springer US, 1994, 50(2019), 1, 04 Apr., pages 531-548
Contained in: volume:50 ; year:2019 ; number:1 ; day:04 ; month:04 ; pages:531-548
Links:
DOI / URN: 10.1007/s11063-019-10032-w
Catalog ID: OLC204471387X
LEADER 01000caa a22002652 4500
001 OLC204471387X
003 DE-627
005 20230503210452.0
007 tu
008 200820s2019 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11063-019-10032-w |2 doi
035    |a (DE-627)OLC204471387X
035    |a (DE-He213)s11063-019-10032-w-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 000 |q VZ
100 1  |a Lu, Yao |e verfasserin |4 aut
245 10 |a Weighted Channel-Wise Decomposed Convolutional Neural Networks
264  1 |c 2019
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer Science+Business Media, LLC, part of Springer Nature 2019
520    |a Abstract: Currently, block term decomposition is widely used to factorize regular convolutional kernels into several groups to reduce parameters. However, networks designed with this method lack adequate information interaction among the groups. Therefore, this paper proposes Weighted Channel-wise Decomposed Convolutions (WCDC); the resulting networks are called WCDC-Nets. The WCDC convolutional kernel employs channel-wise decomposition to cut parameters and computational complexity to a minimum. Furthermore, a tiny learnable weighting module is used to capture connections among the outputs of the channel-wise convolutions in the WCDC kernel. The WCDC filter can easily be applied in many popular networks and trained end to end, significantly improving model flexibility. Experimental results on benchmark datasets show that WCDC-Nets achieve better performance with far fewer parameters and floating-point computations.
650  4 |a Block term decomposition
650  4 |a Group convolutions
650  4 |a Channel-wise convolutions
650  4 |a Weighted channel-wise decomposed convolutions
700 1  |a Lu, Guangming |4 aut
700 1  |a Xu, Yuanrong |4 aut
773 08 |i Enthalten in |t Neural processing letters |d Springer US, 1994 |g 50(2019), 1 vom: 04. Apr., Seite 531-548 |w (DE-627)198692617 |w (DE-600)1316823-X |w (DE-576)052842762 |x 1370-4621 |7 nnns
773 18 |g volume:50 |g year:2019 |g number:1 |g day:04 |g month:04 |g pages:531-548
856 41 |u https://doi.org/10.1007/s11063-019-10032-w |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-PSY
912    |a SSG-OLC-MAT
912    |a GBV_ILN_70
951    |a AR
952    |d 50 |j 2019 |e 1 |b 04 |c 04 |h 531-548
author_variant |
y l yl g l gl y x yx |
matchkey_str |
article:13704621:2019----::egtdhnewsdcmoecnouinl |
hierarchy_sort_str |
2019 |
publishDate |
2019 |
language |
English |
source |
Enthalten in Neural processing letters 50(2019), 1 vom: 04. Apr., Seite 531-548 volume:50 year:2019 number:1 day:04 month:04 pages:531-548 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Block term decomposition Group convolutions Channel-wise convolutions Weighted channel-wise decomposed convolutions |
dewey-raw |
000 |
isfreeaccess_bool |
false |
container_title |
Neural processing letters |
authorswithroles_txt_mv |
Lu, Yao @@aut@@ Lu, Guangming @@aut@@ Xu, Yuanrong @@aut@@ |
publishDateDaySort_date |
2019-04-04T00:00:00Z |
hierarchy_top_id |
198692617 |
dewey-sort |
0 |
id |
OLC204471387X |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC204471387X</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230503210452.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">200820s2019 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11063-019-10032-w</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC204471387X</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11063-019-10032-w-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">000</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Lu, Yao</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Weighted Channel-Wise Decomposed Convolutional Neural Networks</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2019</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield 
code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC, part of Springer Nature 2019</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Currently, block term decomposition is widely utilized to factorize regular convolutional kernels with several groups to decrease parameters. However, networks designed based on this method lack adequate information interactions from every group. Therefore, the Weighted Channel-wise Decomposed Convolutions (WCDC) are proposed in this paper, and the relevant networks can be called WCDC-Nets. The WCDC convolutional kernel employ the channel-wise decomposition to reduce the parameters and computational complexity to the bone. Furthermore, a tiny learnable weighted module is also utilized to dig up connections of the outputs from channel-wise convolutions in the WCDC kernel. The WCDC filter can be easily applied in many popular networks and can be trained end to end, resulting in a significant improvement of model’s flexibility. 
Experimental results on the benchmark datasets showed that WCDC-Nets can achieve better performances with much fewer parameters and flop pointing computations.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Block term decomposition</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Group convolutions</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Channel-wise convolutions</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Weighted channel-wise decomposed convolutions</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lu, Guangming</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xu, Yuanrong</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Neural processing letters</subfield><subfield code="d">Springer US, 1994</subfield><subfield code="g">50(2019), 1 vom: 04. 
Apr., Seite 531-548</subfield><subfield code="w">(DE-627)198692617</subfield><subfield code="w">(DE-600)1316823-X</subfield><subfield code="w">(DE-576)052842762</subfield><subfield code="x">1370-4621</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:50</subfield><subfield code="g">year:2019</subfield><subfield code="g">number:1</subfield><subfield code="g">day:04</subfield><subfield code="g">month:04</subfield><subfield code="g">pages:531-548</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11063-019-10032-w</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PSY</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">50</subfield><subfield code="j">2019</subfield><subfield code="e">1</subfield><subfield code="b">04</subfield><subfield code="c">04</subfield><subfield code="h">531-548</subfield></datafield></record></collection>
|
author |
Lu, Yao |
spellingShingle |
Lu, Yao ddc 000 misc Block term decomposition misc Group convolutions misc Channel-wise convolutions misc Weighted channel-wise decomposed convolutions Weighted Channel-Wise Decomposed Convolutional Neural Networks |
authorStr |
Lu, Yao |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)198692617 |
format |
Article |
dewey-ones |
000 - Computer science, information & general works |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
1370-4621 |
topic_title |
000 VZ Weighted Channel-Wise Decomposed Convolutional Neural Networks Block term decomposition Group convolutions Channel-wise convolutions Weighted channel-wise decomposed convolutions |
topic |
ddc 000 misc Block term decomposition misc Group convolutions misc Channel-wise convolutions misc Weighted channel-wise decomposed convolutions |
topic_unstemmed |
ddc 000 misc Block term decomposition misc Group convolutions misc Channel-wise convolutions misc Weighted channel-wise decomposed convolutions |
topic_browse |
ddc 000 misc Block term decomposition misc Group convolutions misc Channel-wise convolutions misc Weighted channel-wise decomposed convolutions |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Neural processing letters |
hierarchy_parent_id |
198692617 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Neural processing letters |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)198692617 (DE-600)1316823-X (DE-576)052842762 |
title |
Weighted Channel-Wise Decomposed Convolutional Neural Networks |
ctrlnum |
(DE-627)OLC204471387X (DE-He213)s11063-019-10032-w-p |
title_full |
Weighted Channel-Wise Decomposed Convolutional Neural Networks |
author_sort |
Lu, Yao |
journal |
Neural processing letters |
journalStr |
Neural processing letters |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2019 |
contenttype_str_mv |
txt |
container_start_page |
531 |
author_browse |
Lu, Yao Lu, Guangming Xu, Yuanrong |
container_volume |
50 |
class |
000 VZ |
format_se |
Aufsätze |
author-letter |
Lu, Yao |
doi_str_mv |
10.1007/s11063-019-10032-w |
dewey-full |
000 |
title_sort |
weighted channel-wise decomposed convolutional neural networks |
title_auth |
Weighted Channel-Wise Decomposed Convolutional Neural Networks |
abstract |
Abstract: Currently, block term decomposition is widely used to factorize regular convolutional kernels into several groups to reduce parameters. However, networks designed with this method lack adequate information interaction among the groups. Therefore, this paper proposes Weighted Channel-wise Decomposed Convolutions (WCDC); the resulting networks are called WCDC-Nets. The WCDC convolutional kernel employs channel-wise decomposition to cut parameters and computational complexity to a minimum. Furthermore, a tiny learnable weighting module is used to capture connections among the outputs of the channel-wise convolutions in the WCDC kernel. The WCDC filter can easily be applied in many popular networks and trained end to end, significantly improving model flexibility. Experimental results on benchmark datasets show that WCDC-Nets achieve better performance with far fewer parameters and floating-point computations. © Springer Science+Business Media, LLC, part of Springer Nature 2019
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-PSY SSG-OLC-MAT GBV_ILN_70 |
container_issue |
1 |
title_short |
Weighted Channel-Wise Decomposed Convolutional Neural Networks |
url |
https://doi.org/10.1007/s11063-019-10032-w |
remote_bool |
false |
author2 |
Lu, Guangming Xu, Yuanrong |
author2Str |
Lu, Guangming Xu, Yuanrong |
ppnlink |
198692617 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11063-019-10032-w |
up_date |
2024-07-04T00:31:43.073Z |
_version_ |
1803606402611019776 |