Graph over-parameterization: Why the graph helps the training of deep graph convolutional network
Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data, but there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees for the high-probability convergence of gradient descent when training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the "graph disparity coefficient", and show that GCN training converges faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and offers useful guidance for training GCNs in practice.
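The graph convolution operation discussed in the abstract can be illustrated with a minimal sketch. This is a hypothetical example of a standard single GCN layer with symmetric adjacency normalization (as in the widely used Kipf–Welling formulation), not the exact architecture analyzed in the paper; the function and variable names are illustrative only.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: ReLU(A_hat @ X @ W), where
    A_hat = D^{-1/2} (A + I) D^{-1/2} is the self-loop-augmented,
    symmetrically normalized adjacency matrix."""
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)                  # add self-loops
    deg = a_tilde.sum(axis=1)                  # node degrees
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))   # D^{-1/2}
    a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt  # normalized adjacency
    return np.maximum(a_hat @ features @ weight, 0.0)  # ReLU

# Tiny example: a 3-node path graph, 2 input features, 4 hidden units.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 2))   # node feature matrix
W = rng.normal(size=(2, 4))   # layer weights
H = gcn_layer(adj, X, W)
print(H.shape)  # (3, 4): one 4-dimensional embedding per node
```

The multiplication by the normalized adjacency is what mixes each node's features with those of its neighbors; intuitively, when the graph aligns well with the data (a small graph disparity coefficient, in the paper's terminology), this mixing aids rather than hinders optimization.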
Detailed description

Author(s): | Lin, Yucong [author]; Li, Silu [author]; Xu, Jiaxing [author]; Xu, Jiawei [author]; Huang, Dong [author]; Zheng, Wendi [author]; Cao, Yuan [author]; Lu, Junwei [author]
Format: | E-article
Language: | English
Published: | 2023
Subjects: | Graph convolutional neural network; Over-parameterization
Contained in: | Neurocomputing. Amsterdam: Elsevier, 1989. Volume 534, pages 77-85
DOI: | 10.1016/j.neucom.2023.02.054
Catalog ID: | ELV064866963
LEADER | 01000caa a22002652 4500 | ||
---|---|---|---|
001 | ELV064866963 | ||
003 | DE-627 | ||
005 | 20231003073046.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230928s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1016/j.neucom.2023.02.054 |2 doi | |
035 | |a (DE-627)ELV064866963 | ||
035 | |a (ELSEVIER)S0925-2312(23)00204-7 | ||
040 | |a DE-627 |b ger |c DE-627 |e rda | ||
041 | |a eng | ||
082 | 0 | 4 | |a 610 |q VZ |
084 | |a 54.72 |2 bkl | ||
100 | 1 | |a Lin, Yucong |e verfasserin |4 aut | |
245 | 1 | 0 | |a Graph over-parameterization: Why the graph helps the training of deep graph convolutional network |
264 | 1 | |c 2023 | |
336 | |a nicht spezifiziert |b zzz |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data, but there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees for the high-probability convergence of gradient descent when training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that GCN training converges faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and offers useful guidance for training GCNs in practice. | ||
650 | 4 | |a Graph convolutional neural network | |
650 | 4 | |a Over-parameterization | |
700 | 1 | |a Li, Silu |e verfasserin |4 aut | |
700 | 1 | |a Xu, Jiaxing |e verfasserin |4 aut | |
700 | 1 | |a Xu, Jiawei |e verfasserin |4 aut | |
700 | 1 | |a Huang, Dong |e verfasserin |4 aut | |
700 | 1 | |a Zheng, Wendi |e verfasserin |4 aut | |
700 | 1 | |a Cao, Yuan |e verfasserin |4 aut | |
700 | 1 | |a Lu, Junwei |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Neurocomputing |d Amsterdam : Elsevier, 1989 |g 534, Seite 77-85 |h Online-Ressource |w (DE-627)271176008 |w (DE-600)1479006-3 |w (DE-576)078412358 |x 1872-8286 |7 nnns |
773 | 1 | 8 | |g volume:534 |g pages:77-85 |
912 | |a GBV_USEFLAG_U | ||
912 | |a GBV_ELV | ||
912 | |a SYSFLAG_U | ||
912 | |a SSG-OLC-PHA | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_32 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_74 | ||
912 | |a GBV_ILN_90 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_100 | ||
912 | |a GBV_ILN_101 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_150 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_187 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_224 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_702 | ||
912 | |a GBV_ILN_2001 | ||
912 | |a GBV_ILN_2003 | ||
912 | |a GBV_ILN_2004 | ||
912 | |a GBV_ILN_2005 | ||
912 | |a GBV_ILN_2007 | ||
912 | |a GBV_ILN_2008 | ||
912 | |a GBV_ILN_2009 | ||
912 | |a GBV_ILN_2010 | ||
912 | |a GBV_ILN_2011 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_2015 | ||
912 | |a GBV_ILN_2020 | ||
912 | |a GBV_ILN_2021 | ||
912 | |a GBV_ILN_2025 | ||
912 | |a GBV_ILN_2026 | ||
912 | |a GBV_ILN_2027 | ||
912 | |a GBV_ILN_2034 | ||
912 | |a GBV_ILN_2044 | ||
912 | |a GBV_ILN_2048 | ||
912 | |a GBV_ILN_2049 | ||
912 | |a GBV_ILN_2050 | ||
912 | |a GBV_ILN_2055 | ||
912 | |a GBV_ILN_2056 | ||
912 | |a GBV_ILN_2059 | ||
912 | |a GBV_ILN_2061 | ||
912 | |a GBV_ILN_2064 | ||
912 | |a GBV_ILN_2088 | ||
912 | |a GBV_ILN_2106 | ||
912 | |a GBV_ILN_2110 | ||
912 | |a GBV_ILN_2111 | ||
912 | |a GBV_ILN_2112 | ||
912 | |a GBV_ILN_2122 | ||
912 | |a GBV_ILN_2129 | ||
912 | |a GBV_ILN_2143 | ||
912 | |a GBV_ILN_2152 | ||
912 | |a GBV_ILN_2153 | ||
912 | |a GBV_ILN_2190 | ||
912 | |a GBV_ILN_2232 | ||
912 | |a GBV_ILN_2336 | ||
912 | |a GBV_ILN_2470 | ||
912 | |a GBV_ILN_2507 | ||
912 | |a GBV_ILN_4035 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4242 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4251 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4326 | ||
912 | |a GBV_ILN_4333 | ||
912 | |a GBV_ILN_4334 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4393 | ||
912 | |a GBV_ILN_4700 | ||
936 | b | k | |a 54.72 |j Künstliche Intelligenz |q VZ |
951 | |a AR | ||
952 | |d 534 |h 77-85 |
code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">534</subfield><subfield code="h">77-85</subfield></datafield></record></collection>
|
author |
Lin, Yucong |
spellingShingle |
Lin, Yucong ddc 610 bkl 54.72 misc Graph convolutional neural network misc Over-parameterization Graph over-parameterization: Why the graph helps the training of deep graph convolutional network |
authorStr |
Lin, Yucong |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)271176008 |
format |
electronic Article |
dewey-ones |
610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
issn |
1872-8286 |
topic_title |
610 VZ 54.72 bkl Graph over-parameterization: Why the graph helps the training of deep graph convolutional network Graph convolutional neural network Over-parameterization |
topic |
ddc 610 bkl 54.72 misc Graph convolutional neural network misc Over-parameterization |
topic_unstemmed |
ddc 610 bkl 54.72 misc Graph convolutional neural network misc Over-parameterization |
topic_browse |
ddc 610 bkl 54.72 misc Graph convolutional neural network misc Over-parameterization |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
Neurocomputing |
hierarchy_parent_id |
271176008 |
dewey-tens |
610 - Medicine & health |
hierarchy_top_title |
Neurocomputing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)271176008 (DE-600)1479006-3 (DE-576)078412358 |
title |
Graph over-parameterization: Why the graph helps the training of deep graph convolutional network |
ctrlnum |
(DE-627)ELV064866963 (ELSEVIER)S0925-2312(23)00204-7 |
title_full |
Graph over-parameterization: Why the graph helps the training of deep graph convolutional network |
author_sort |
Lin, Yucong |
journal |
Neurocomputing |
journalStr |
Neurocomputing |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2023 |
contenttype_str_mv |
zzz |
container_start_page |
77 |
author_browse |
Lin, Yucong Li, Silu Xu, Jiaxing Xu, Jiawei Huang, Dong Zheng, Wendi Cao, Yuan Lu, Junwei |
container_volume |
534 |
class |
610 VZ 54.72 bkl |
format_se |
Elektronische Aufsätze |
author-letter |
Lin, Yucong |
doi_str_mv |
10.1016/j.neucom.2023.02.054 |
dewey-full |
610 |
author2-role |
verfasserin |
title_sort |
graph over-parameterization: why the graph helps the training of deep graph convolutional network |
title_auth |
Graph over-parameterization: Why the graph helps the training of deep graph convolutional network |
abstract |
Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected neural networks and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data. However, there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees of the high-probability convergence of gradient descent for training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that the convergence of GCNs is faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and provides useful guidance for GCN training in practice.
abstractGer |
Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected neural networks and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data. However, there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees of the high-probability convergence of gradient descent for training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that the convergence of GCNs is faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and provides useful guidance for GCN training in practice.
abstract_unstemmed |
Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected neural networks and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data. However, there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees of the high-probability convergence of gradient descent for training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that the convergence of GCNs is faster when the graph disparity coefficient is smaller. Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and provides useful guidance for GCN training in practice.
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_32 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_74 GBV_ILN_90 GBV_ILN_95 GBV_ILN_100 GBV_ILN_101 GBV_ILN_105 GBV_ILN_110 GBV_ILN_150 GBV_ILN_151 GBV_ILN_187 GBV_ILN_213 GBV_ILN_224 GBV_ILN_230 GBV_ILN_370 GBV_ILN_602 GBV_ILN_702 GBV_ILN_2001 GBV_ILN_2003 GBV_ILN_2004 GBV_ILN_2005 GBV_ILN_2007 GBV_ILN_2008 GBV_ILN_2009 GBV_ILN_2010 GBV_ILN_2011 GBV_ILN_2014 GBV_ILN_2015 GBV_ILN_2020 GBV_ILN_2021 GBV_ILN_2025 GBV_ILN_2026 GBV_ILN_2027 GBV_ILN_2034 GBV_ILN_2044 GBV_ILN_2048 GBV_ILN_2049 GBV_ILN_2050 GBV_ILN_2055 GBV_ILN_2056 GBV_ILN_2059 GBV_ILN_2061 GBV_ILN_2064 GBV_ILN_2088 GBV_ILN_2106 GBV_ILN_2110 GBV_ILN_2111 GBV_ILN_2112 GBV_ILN_2122 GBV_ILN_2129 GBV_ILN_2143 GBV_ILN_2152 GBV_ILN_2153 GBV_ILN_2190 GBV_ILN_2232 GBV_ILN_2336 GBV_ILN_2470 GBV_ILN_2507 GBV_ILN_4035 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4242 GBV_ILN_4249 GBV_ILN_4251 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4333 GBV_ILN_4334 GBV_ILN_4338 GBV_ILN_4393 GBV_ILN_4700 |
title_short |
Graph over-parameterization: Why the graph helps the training of deep graph convolutional network |
remote_bool |
true |
author2 |
Li, Silu Xu, Jiaxing Xu, Jiawei Huang, Dong Zheng, Wendi Cao, Yuan Lu, Junwei |
author2Str |
Li, Silu Xu, Jiaxing Xu, Jiawei Huang, Dong Zheng, Wendi Cao, Yuan Lu, Junwei |
ppnlink |
271176008 |
mediatype_str_mv |
c |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1016/j.neucom.2023.02.054 |
up_date |
2024-07-06T21:01:15.282Z |
_version_ |
1803864952309547008 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">ELV064866963</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20231003073046.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230928s2023 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1016/j.neucom.2023.02.054</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)ELV064866963</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(ELSEVIER)S0925-2312(23)00204-7</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rda</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">610</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">54.72</subfield><subfield code="2">bkl</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Lin, Yucong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Graph over-parameterization: Why the graph helps the training of deep graph convolutional network</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2023</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">nicht spezifiziert</subfield><subfield code="b">zzz</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield 
code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Recent studies show that gradient descent can train a deep neural network (DNN) to achieve small training and test errors when the DNN is sufficiently wide. This result applies to various over-parameterized neural network models, including fully-connected neural networks and convolutional neural networks. However, existing theory does not apply to graph convolutional networks (GCNs), as GCNs are built according to the topological structure of the data. It has been empirically observed that GCNs can outperform vanilla neural networks when the underlying graph captures geometric information of the data. However, there is little theoretical justification for this observation. In this paper, we establish theoretical guarantees of the high-probability convergence of gradient descent for training over-parameterized GCNs. Specifically, we introduce a novel measure of the relation between the graph and the data, called the “graph disparity coefficient”, and show that the convergence of GCNs is faster when the graph disparity coefficient is smaller. 
Our analysis provides novel insights into how the graph convolution operation in a GCN helps training, and provides useful guidance for GCN training in practice.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Graph convolutional neural network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Over-parameterization</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Li, Silu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xu, Jiaxing</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Xu, Jiawei</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Huang, Dong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zheng, Wendi</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Cao, Yuan</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Lu, Junwei</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Neurocomputing</subfield><subfield code="d">Amsterdam : Elsevier, 1989</subfield><subfield code="g">534, Seite 77-85</subfield><subfield code="h">Online-Ressource</subfield><subfield code="w">(DE-627)271176008</subfield><subfield code="w">(DE-600)1479006-3</subfield><subfield code="w">(DE-576)078412358</subfield><subfield 
code="x">1872-8286</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:534</subfield><subfield code="g">pages:77-85</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ELV</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_U</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-PHA</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_32</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_74</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_90</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_100</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_101</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_150</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_187</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_224</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_702</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2001</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2003</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2004</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2005</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2007</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2008</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2009</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2010</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2011</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2015</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2020</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2021</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2025</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2026</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2027</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2034</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2044</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2048</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2049</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2050</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2055</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2056</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2059</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2061</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2064</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2088</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2106</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2111</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_2112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2122</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2129</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2143</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2152</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2153</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2190</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2232</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2336</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2470</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2507</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4035</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4242</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4251</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4333</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4334</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4393</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="936" ind1="b" ind2="k"><subfield code="a">54.72</subfield><subfield code="j">Künstliche Intelligenz</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">534</subfield><subfield code="h">77-85</subfield></datafield></record></collection>
|
score |
7.399846 |