Interpolation consistency training for semi-supervised learning
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training deep neural networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points, which reduces overfitting to labeled points under high confidence values.
Detailed description

Author: Verma, Vikas [author]
Format: E-article
Language: English
Published: 2022
Subjects: Consistency regularization; Semi-supervised learning; Mixup; Deep Neural Networks
Extent: 17 pages
Contained in: Neural Networks, the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society (Amsterdam), volume 145 (2022), pages 90-106
DOI: 10.1016/j.neunet.2021.10.008
Catalog ID: ELV056112351
LEADER    01000caa a22002652 4500
001       ELV056112351
003       DE-627
005       20230626042735.0
007       cr uuu---uuuuu
008       220105s2022 xx |||||o 00| ||eng c
024 7     |a 10.1016/j.neunet.2021.10.008 |2 doi
028 5 2   |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica
035       |a (DE-627)ELV056112351
035       |a (ELSEVIER)S0893-6080(21)00399-3
040       |a DE-627 |b ger |c DE-627 |e rakwb
041       |a eng
082 0 4   |a 620 |q VZ
082 0 4   |a 610 |q VZ
084       |a 77.50 |2 bkl
100 1     |a Verma, Vikas |e verfasserin |4 aut
245 1 0   |a Interpolation consistency training for semi-supervised learning
264   1   |c 2022
300       |a 17
336       |a nicht spezifiziert |b zzz |2 rdacontent
337       |a nicht spezifiziert |b z |2 rdamedia
338       |a nicht spezifiziert |b zu |2 rdacarrier
520       |a We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training deep neural networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points, which reduces overfitting to labeled points under high confidence values.
650   7   |a Consistency regularization |2 Elsevier
650   7   |a Semi-supervised learning |2 Elsevier
650   7   |a Mixup |2 Elsevier
650   7   |a Deep Neural Networks |2 Elsevier
700 1     |a Kawaguchi, Kenji |4 oth
700 1     |a Lamb, Alex |4 oth
700 1     |a Kannala, Juho |4 oth
700 1     |a Solin, Arno |4 oth
700 1     |a Bengio, Yoshua |4 oth
700 1     |a Lopez-Paz, David |4 oth
773 0 8   |i Enthalten in |n Elsevier |t Neural Networks |d 2012 |d the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society |g Amsterdam |w (DE-627)ELV016218965
773 1 8   |g volume:145 |g year:2022 |g pages:90-106 |g extent:17
856 4 0   |u https://doi.org/10.1016/j.neunet.2021.10.008 |3 Volltext
912       |a GBV_USEFLAG_U
912       |a GBV_ELV
912       |a SYSFLAG_U
912       |a SSG-OLC-PHA
936 b k   |a 77.50 |j Psychophysiologie |q VZ
951       |a AR
952       |d 145 |j 2022 |h 90-106 |g 17
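The 520 abstract above states the core idea of ICT: the model's prediction at a Mixup interpolation of two unlabeled inputs should match the same interpolation of its predictions at those inputs. Below is a minimal NumPy sketch of that consistency term only. The `model` function is a hypothetical stand-in classifier chosen for illustration (the paper trains deep networks, and the full training objective also includes a supervised loss on labeled data), so this is a sketch of the regularizer's shape, not the authors' implementation.

```python
import numpy as np

def model(x):
    # Hypothetical stand-in classifier: softmax over a fixed linear map.
    # In ICT this would be a deep neural network's predictive distribution.
    W = np.array([[1.0, -1.0], [-1.0, 1.0]])
    z = x @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ict_consistency_loss(u1, u2, lam):
    """Interpolation consistency term from the abstract: the prediction at
    the interpolated input should agree with the interpolation of the
    predictions at the two unlabeled endpoints."""
    mixed_input = lam * u1 + (1 - lam) * u2                 # Mixup of inputs
    mixed_pred = lam * model(u1) + (1 - lam) * model(u2)    # Mixup of predictions
    return float(np.mean((model(mixed_input) - mixed_pred) ** 2))

rng = np.random.default_rng(0)
u1 = rng.normal(size=(4, 2))  # a batch of unlabeled points
u2 = rng.normal(size=(4, 2))  # a second, shuffled unlabeled batch
loss = ict_consistency_loss(u1, u2, lam=0.3)
print(loss)
```

Because `model` is nonlinear (softmax), the loss is generally positive; it vanishes exactly when the two endpoints coincide, and in practice λ would be drawn from a Beta distribution as in Mixup.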
author_variant |
v v vv |
---|---|
matchkey_str |
vermavikaskawaguchikenjilambalexkannalaj:2022----:nepltocnitnyriigosmsp |
hierarchy_sort_str |
2022transfer abstract |
bklnumber |
77.50 |
publishDate |
2022 |
allfields |
10.1016/j.neunet.2021.10.008 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica (DE-627)ELV056112351 (ELSEVIER)S0893-6080(21)00399-3 DE-627 ger DE-627 rakwb eng 620 VZ 610 VZ 77.50 bkl Verma, Vikas verfasserin aut Interpolation consistency training for semi-supervised learning 2022transfer abstract 17 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. 
Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Elsevier Kawaguchi, Kenji oth Lamb, Alex oth Kannala, Juho oth Solin, Arno oth Bengio, Yoshua oth Lopez-Paz, David oth Enthalten in Elsevier Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing 2012 the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society Amsterdam (DE-627)ELV016218965 volume:145 year:2022 pages:90-106 extent:17 https://doi.org/10.1016/j.neunet.2021.10.008 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA 77.50 Psychophysiologie VZ AR 145 2022 90-106 17 |
spelling |
10.1016/j.neunet.2021.10.008 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica (DE-627)ELV056112351 (ELSEVIER)S0893-6080(21)00399-3 DE-627 ger DE-627 rakwb eng 620 VZ 610 VZ 77.50 bkl Verma, Vikas verfasserin aut Interpolation consistency training for semi-supervised learning 2022transfer abstract 17 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. 
Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Elsevier Kawaguchi, Kenji oth Lamb, Alex oth Kannala, Juho oth Solin, Arno oth Bengio, Yoshua oth Lopez-Paz, David oth Enthalten in Elsevier Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing 2012 the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society Amsterdam (DE-627)ELV016218965 volume:145 year:2022 pages:90-106 extent:17 https://doi.org/10.1016/j.neunet.2021.10.008 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA 77.50 Psychophysiologie VZ AR 145 2022 90-106 17 |
allfields_unstemmed |
10.1016/j.neunet.2021.10.008 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica (DE-627)ELV056112351 (ELSEVIER)S0893-6080(21)00399-3 DE-627 ger DE-627 rakwb eng 620 VZ 610 VZ 77.50 bkl Verma, Vikas verfasserin aut Interpolation consistency training for semi-supervised learning 2022transfer abstract 17 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. 
Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Elsevier Kawaguchi, Kenji oth Lamb, Alex oth Kannala, Juho oth Solin, Arno oth Bengio, Yoshua oth Lopez-Paz, David oth Enthalten in Elsevier Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing 2012 the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society Amsterdam (DE-627)ELV016218965 volume:145 year:2022 pages:90-106 extent:17 https://doi.org/10.1016/j.neunet.2021.10.008 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA 77.50 Psychophysiologie VZ AR 145 2022 90-106 17 |
allfieldsGer |
10.1016/j.neunet.2021.10.008 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica (DE-627)ELV056112351 (ELSEVIER)S0893-6080(21)00399-3 DE-627 ger DE-627 rakwb eng 620 VZ 610 VZ 77.50 bkl Verma, Vikas verfasserin aut Interpolation consistency training for semi-supervised learning 2022transfer abstract 17 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. 
Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Elsevier Kawaguchi, Kenji oth Lamb, Alex oth Kannala, Juho oth Solin, Arno oth Bengio, Yoshua oth Lopez-Paz, David oth Enthalten in Elsevier Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing 2012 the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society Amsterdam (DE-627)ELV016218965 volume:145 year:2022 pages:90-106 extent:17 https://doi.org/10.1016/j.neunet.2021.10.008 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA 77.50 Psychophysiologie VZ AR 145 2022 90-106 17 |
allfieldsSound |
10.1016/j.neunet.2021.10.008 doi /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica (DE-627)ELV056112351 (ELSEVIER)S0893-6080(21)00399-3 DE-627 ger DE-627 rakwb eng 620 VZ 610 VZ 77.50 bkl Verma, Vikas verfasserin aut Interpolation consistency training for semi-supervised learning 2022transfer abstract 17 nicht spezifiziert zzz rdacontent nicht spezifiziert z rdamedia nicht spezifiziert zu rdacarrier We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. 
Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Elsevier Kawaguchi, Kenji oth Lamb, Alex oth Kannala, Juho oth Solin, Arno oth Bengio, Yoshua oth Lopez-Paz, David oth Enthalten in Elsevier Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing 2012 the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society Amsterdam (DE-627)ELV016218965 volume:145 year:2022 pages:90-106 extent:17 https://doi.org/10.1016/j.neunet.2021.10.008 Volltext GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA 77.50 Psychophysiologie VZ AR 145 2022 90-106 17 |
language |
English |
source |
Enthalten in Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing Amsterdam volume:145 year:2022 pages:90-106 extent:17 |
sourceStr |
Enthalten in Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing Amsterdam volume:145 year:2022 pages:90-106 extent:17 |
format_phy_str_mv |
Article |
bklname |
Psychophysiologie |
institution |
findex.gbv.de |
topic_facet |
Consistency regularization Semi-supervised learning Mixup Deep Neural Networks |
dewey-raw |
620 |
isfreeaccess_bool |
false |
container_title |
Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing |
authorswithroles_txt_mv |
Verma, Vikas @@aut@@ Kawaguchi, Kenji @@oth@@ Lamb, Alex @@oth@@ Kannala, Juho @@oth@@ Solin, Arno @@oth@@ Bengio, Yoshua @@oth@@ Lopez-Paz, David @@oth@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
ELV016218965 |
dewey-sort |
3620 |
id |
ELV056112351 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV056112351</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626042735.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220105s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.neunet.2021.10.008</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV056112351</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0893-6080(21)00399-3</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">620</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">77.50</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Verma, Vikas</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Interpolation consistency training for semi-supervised learning</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="300" ind1=" " 
ind2=" "><subfield code="a">17</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. 
Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Consistency regularization</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Semi-supervised learning</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Mixup</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Deep Neural Networks</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kawaguchi, Kenji</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lamb, Alex</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kannala, Juho</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Solin, Arno</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Bengio, Yoshua</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lopez-Paz, David</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier</subfield><subfield code="t">Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing</subfield><subfield code="d">2012</subfield><subfield 
code="d">the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society</subfield><subfield code="g">Amsterdam</subfield><subfield code="w">(DE-627)ELV016218965</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:145</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:90-106</subfield><subfield code="g">extent:17</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.neunet.2021.10.008</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">77.50</subfield><subfield code="j">Psychophysiologie</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">145</subfield><subfield code="j">2022</subfield><subfield code="h">90-106</subfield><subfield code="g">17</subfield></datafield></record></collection>
|
author |
Verma, Vikas |
spellingShingle |
Verma, Vikas ddc 620 ddc 610 bkl 77.50 Elsevier Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Interpolation consistency training for semi-supervised learning |
authorStr |
Verma, Vikas |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV016218965 |
format |
electronic Article |
dewey-ones |
620 - Engineering & allied operations 610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic_title |
620 VZ 610 VZ 77.50 bkl Interpolation consistency training for semi-supervised learning Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks Elsevier |
topic |
ddc 620 ddc 610 bkl 77.50 Elsevier Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks |
topic_unstemmed |
ddc 620 ddc 610 bkl 77.50 Elsevier Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks |
topic_browse |
ddc 620 ddc 610 bkl 77.50 Elsevier Consistency regularization Elsevier Semi-supervised learning Elsevier Mixup Elsevier Deep Neural Networks |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
k k kk a l al j k jk a s as y b yb d l p dlp |
hierarchy_parent_title |
Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing |
hierarchy_parent_id |
ELV016218965 |
dewey-tens |
620 - Engineering 610 - Medicine & health |
hierarchy_top_title |
Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV016218965 |
title |
Interpolation consistency training for semi-supervised learning |
ctrlnum |
(DE-627)ELV056112351 (ELSEVIER)S0893-6080(21)00399-3 |
title_full |
Interpolation consistency training for semi-supervised learning |
author_sort |
Verma, Vikas |
journal |
Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing |
journalStr |
Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
90 |
author_browse |
Verma, Vikas |
container_volume |
145 |
physical |
17 |
class |
620 VZ 610 VZ 77.50 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Verma, Vikas |
doi_str_mv |
10.1016/j.neunet.2021.10.008 |
dewey-full |
620 610 |
title_sort |
interpolation consistency training for semi-supervised learning |
title_auth |
Interpolation consistency training for semi-supervised learning |
abstract |
We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. |
abstractGer |
We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. |
abstract_unstemmed |
We introduce Interpolation Consistency Training (ICT), a simple and computation efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points which reduces overfitting to labeled points under high confidence values. |
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA |
title_short |
Interpolation consistency training for semi-supervised learning |
url |
https://doi.org/10.1016/j.neunet.2021.10.008 |
remote_bool |
true |
author2 |
Kawaguchi, Kenji; Lamb, Alex; Kannala, Juho; Solin, Arno; Bengio, Yoshua; Lopez-Paz, David
author2Str |
Kawaguchi, Kenji; Lamb, Alex; Kannala, Juho; Solin, Arno; Bengio, Yoshua; Lopez-Paz, David
ppnlink |
ELV016218965 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth oth oth oth oth |
doi_str |
10.1016/j.neunet.2021.10.008 |
up_date |
2024-07-06T19:27:54.807Z |
_version_ |
1803859079784824832 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV056112351</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230626042735.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">220105s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.neunet.2021.10.008</subfield><subfield code="2">doi</subfield></datafield><datafield tag="028" ind1="5" ind2="2"><subfield code="a">/cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001702.pica</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV056112351</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0893-6080(21)00399-3</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">620</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">77.50</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Verma, Vikas</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Interpolation consistency training for semi-supervised learning</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022transfer abstract</subfield></datafield><datafield tag="300" ind1=" " 
ind2=" "><subfield code="a">17</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">z</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zu</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points, which reduces overfitting to labeled points under high confidence values.</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training Deep Neural Networks in the semi-supervised learning paradigm. ICT encourages the prediction at an interpolation of unlabeled points to be consistent with the interpolation of the predictions at those points. In classification problems, ICT moves the decision boundary to low-density regions of the data distribution. 
Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. Our theoretical analysis shows that ICT corresponds to a certain type of data-adaptive regularization with unlabeled points, which reduces overfitting to labeled points under high confidence values.</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Consistency regularization</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Semi-supervised learning</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Mixup</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Deep Neural Networks</subfield><subfield code="2">Elsevier</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kawaguchi, Kenji</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lamb, Alex</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Kannala, Juho</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Solin, Arno</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Bengio, Yoshua</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lopez-Paz, David</subfield><subfield code="4">oth</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="n">Elsevier</subfield><subfield code="t">Neural Networks</subfield><subfield code="d">2012</subfield><subfield 
code="d">the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society</subfield><subfield code="g">Amsterdam</subfield><subfield code="w">(DE-627)ELV016218965</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:145</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:90-106</subfield><subfield code="g">extent:17</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1016/j.neunet.2021.10.008</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">77.50</subfield><subfield code="j">Psychophysiologie</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">145</subfield><subfield code="j">2022</subfield><subfield code="h">90-106</subfield><subfield code="g">17</subfield></datafield></record></collection>
|
score |
7.401063 |