Multi-match: mutual information maximization and CutEdge for semi-supervised learning
Abstract: Deep supervised learning has achieved great successes in tackling complex computer vision tasks. However, it typically requires a large amount of labeled data and is expensive in practical applications. Semi-supervised learning, which leverages the hidden structures learned from unlabeled data, has attracted much attention. In this work, a semi-supervised classification model named Multi-Match is proposed. It includes two augmentation branches and encourages the output of the complex augmentation branch to be close to the predictions of the simple augmentation branch. A mutual information (MI) loss is introduced to maximize MI not only between the input and the output representation, but also between the class assignments inside the simple augmentation branch. A novel information-dropping method named CutEdge is proposed; it removes multiple regions near the input edges to further improve robustness. Experimental results on CIFAR-10, CIFAR-100 and SVHN with different label sizes demonstrate that the proposed model outperforms the compared semi-supervised learning methods. The gains come from the MI loss, the combination of affine transformation and CutEdge, and the use of multiple branches.
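The abstract names two concrete mechanisms: CutEdge, an information-dropping augmentation that removes several regions near the image borders, and an MI objective over the class assignments of the simple augmentation branch. The record contains no code, so the sketches below are illustrative only; the number of removed regions, their sizes, and the exact MI estimator are assumptions, not the authors' settings.

```python
import numpy as np

def cut_edge(image, num_regions=4, max_frac=0.25, rng=None):
    """Hypothetical sketch of CutEdge: zero out several rectangular strips
    that touch the image borders, leaving the centre intact.
    `num_regions` and `max_frac` are illustrative, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    h, w = image.shape[:2]
    for _ in range(num_regions):
        side = rng.integers(4)  # 0 = top, 1 = bottom, 2 = left, 3 = right
        extent = h if side < 2 else w          # how deep the strip reaches inward
        span = w if side < 2 else h            # along which axis the strip runs
        depth = max(1, int(rng.uniform(0.05, max_frac) * extent))
        lo = rng.integers(0, span // 2)
        hi = rng.integers(lo + 1, span)
        if side == 0:      # strip along the top edge
            out[:depth, lo:hi] = 0
        elif side == 1:    # strip along the bottom edge
            out[h - depth:, lo:hi] = 0
        elif side == 2:    # strip along the left edge
            out[lo:hi, :depth] = 0
        else:              # strip along the right edge
            out[lo:hi, w - depth:] = 0
    return out
```

The MI term between class assignments inside the simple branch could, under the same caveat, resemble a standard IIC-style estimator over the soft predictions of two weakly augmented views; the paper's exact formulation may differ.

```python
import torch

def class_assignment_mi_loss(p1, p2, eps=1e-8):
    """Hypothetical IIC-style mutual-information loss between soft class
    assignments p1, p2 of shape [batch, classes]; returns the negated MI."""
    joint = p1.t() @ p2 / p1.size(0)     # empirical joint over class pairs
    joint = (joint + joint.t()) / 2      # symmetrise
    joint = joint / joint.sum()          # normalise to a distribution
    pi = joint.sum(dim=1, keepdim=True)  # marginal of view 1
    pj = joint.sum(dim=0, keepdim=True)  # marginal of view 2
    mi = (joint * (torch.log(joint + eps)
                   - torch.log(pi + eps)
                   - torch.log(pj + eps))).sum()
    return -mi                           # minimise the negative MI
```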
Detailed description

Author: Wu, Yulin [author]
Format: Article
Language: English
Published: 2022
Subjects: Semi-supervised learning; Multi-Match; Mutual information; CutEdge; Multiple branches
Note: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
Contained in: Multimedia tools and applications - Springer US, 1995, 82(2022), 1, 07 June, pages 479-496
Parent work (citation fields): volume:82; year:2022; number:1; day:07; month:06; pages:479-496
Link: https://doi.org/10.1007/s11042-022-13126-1 (full text, licensed access)
DOI / URN: 10.1007/s11042-022-13126-1
Catalogue ID: OLC2080198467
LEADER 01000caa a22002652 4500
001    OLC2080198467
003    DE-627
005    20230506100512.0
007    tu
008    230131s2022 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11042-022-13126-1 |2 doi
035    |a (DE-627)OLC2080198467
035    |a (DE-He213)s11042-022-13126-1-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 070 |a 004 |q VZ
100 1  |a Wu, Yulin |e verfasserin |0 (orcid)0000-0001-8116-715X |4 aut
245 10 |a Multi-match: mutual information maximization and CutEdge for semi-supervised learning
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022
520    |a Abstract Deep supervised learning has achieved great successes in tackling complex computer vision tasks. However, it typically requires a large amount of data with labels and is expensive in practical applications. Semi-supervised learning, which leverages the hidden structures learned from unlabeled data, has attracted much attention. In this work, a semi-supervised classification model named Multi-Match is proposed, which includes two augmentation branches and encourages the output of the complex augmentation branch to be close to the predictions of the simple augmentation branch. A mutual information (MI) loss is introduced to maximize MI not only between the input and output representation, but also between the class assignments inside the simple augmentation branch. A novel information dropping method named CutEdge is proposed by removing multiple regions near the input edges to further improve the robustness. The experimental results on CIFAR-10, CIFAR-100 and SVHN with different label sizes demonstrate that the proposed model outperforms the compared semi-supervised learning methods. The gains come from the MI loss, the combination of affine transformation and CutEdge, and the use of multiple branches.
650  4 |a Semi-supervised learning
650  4 |a Multi-Match
650  4 |a Mutual information
650  4 |a CutEdge
650  4 |a Multiple branches
700 1  |a Chen, Lei |4 aut
700 1  |a Zhao, Dong |4 aut
700 1  |a Zhou, Hongchao |4 aut
700 1  |a Zheng, Qinghe |4 aut
773 08 |i Enthalten in |t Multimedia tools and applications |d Springer US, 1995 |g 82(2022), 1 vom: 07. Juni, Seite 479-496 |w (DE-627)189064145 |w (DE-600)1287642-2 |w (DE-576)052842126 |x 1380-7501 |7 nnns
773 18 |g volume:82 |g year:2022 |g number:1 |g day:07 |g month:06 |g pages:479-496
856 41 |u https://doi.org/10.1007/s11042-022-13126-1 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
912    |a SSG-OLC-BUB
912    |a SSG-OLC-MKW
951    |a AR
952    |d 82 |j 2022 |e 1 |b 07 |c 06 |h 479-496