A noise injection strategy for graph autoencoder training
Abstract: A graph autoencoder maps graph data into a low-dimensional space. It is a powerful graph embedding method used in graph analytics to lower computational cost. Researchers have developed different graph autoencoders to address different needs. This paper proposes a noise-injection strategy for graph autoencoder training. It is a general strategy that fits flexibly with most existing training algorithms. The experimental results verify that this strategy significantly reduces overfitting, and they identify a noise-rate setting that consistently improves training performance.
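The abstract describes injecting noise at a chosen noise rate while training a graph autoencoder. As a generic, hypothetical illustration only (not the paper's specific algorithm), edge-level noise injection on a binary adjacency matrix might look like this; the function name and the denoising-style training comment are assumptions for the sketch:

```python
import numpy as np

def inject_edge_noise(adj: np.ndarray, noise_rate: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Flip each adjacency entry with probability `noise_rate`.

    Hypothetical helper for illustration; the paper's actual
    noise-injection scheme may differ.
    """
    flip = rng.random(adj.shape) < noise_rate
    noisy = np.where(flip, 1 - adj, adj)
    np.fill_diagonal(noisy, 0)  # keep the graph free of self-loops
    return noisy

# In a denoising-style setup, the autoencoder would encode the noisy
# adjacency matrix and be trained to reconstruct the clean original.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
noisy = inject_edge_noise(adj, noise_rate=0.1, rng=rng)
```

Tuning `noise_rate` is the knob the abstract refers to: too little noise gives no regularization benefit, while too much destroys the graph structure the autoencoder must learn.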
Detailed description

Author: Wang, Yingfeng [author]
Format: Article
Language: English
Published: 2020
Keywords: Graph autoencoder; Noise injection; Training algorithm; Overfitting
Note: © Springer-Verlag London Ltd., part of Springer Nature 2020
Contained in: Neural computing & applications - Springer London, 1993, 33(2020), no. 10, 11 Aug., pages 4807-4814
Parent work: volume:33 ; year:2020 ; number:10 ; day:11 ; month:08 ; pages:4807-4814
DOI: 10.1007/s00521-020-05283-x
Catalog ID: OLC2125152150
LEADER 01000naa a22002652 4500
001 OLC2125152150
003 DE-627
005 20230505095830.0
007 tu
008 230505s2020 xx ||||| 00| ||eng c
024 7 |a 10.1007/s00521-020-05283-x |2 doi
035 |a (DE-627)OLC2125152150
035 |a (DE-He213)s00521-020-05283-x-p
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
082 0 4 |a 004 |q VZ
100 1 |a Wang, Yingfeng |e verfasserin |0 (orcid)0000-0002-3715-5124 |4 aut
245 1 0 |a A noise injection strategy for graph autoencoder training
264 1 |c 2020
336 |a Text |b txt |2 rdacontent
337 |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338 |a Band |b nc |2 rdacarrier
500 |a © Springer-Verlag London Ltd., part of Springer Nature 2020
520 |a Abstract Graph autoencoder can map graph data into a low-dimensional space. It is a powerful graph embedding method applied in graph analytics to lower the computational cost. Researchers have developed different graph autoencoders for addressing different needs. This paper proposes a strategy based on noise injection for graph autoencoder training. This is a general training strategy that can flexibly fit most existing training algorithms. The experimental results verify this general strategy can significantly reduce overfitting and identify the noise rate setting for consistent training performance improvement.
650 4 |a Graph autoencoder
650 4 |a Noise injection
650 4 |a Training algorithm
650 4 |a Overfitting
700 1 |a Xu, Biyun |4 aut
700 1 |a Kwak, Myungjae |4 aut
700 1 |a Zeng, Xiaoqin |4 aut
773 0 8 |i Enthalten in |t Neural computing & applications |d Springer London, 1993 |g 33(2020), 10 vom: 11. Aug., Seite 4807-4814 |w (DE-627)165669608 |w (DE-600)1136944-9 |w (DE-576)032873050 |x 0941-0643 |7 nnns
773 1 8 |g volume:33 |g year:2020 |g number:10 |g day:11 |g month:08 |g pages:4807-4814
856 4 1 |u https://doi.org/10.1007/s00521-020-05283-x |z lizenzpflichtig |3 Volltext
912 |a GBV_USEFLAG_A
912 |a SYSFLAG_A
912 |a GBV_OLC
912 |a SSG-OLC-MAT
912 |a GBV_ILN_2018
912 |a GBV_ILN_4277
951 |a AR
952 |d 33 |j 2020 |e 10 |b 11 |c 08 |h 4807-4814