Augmented Graph Neural Network with hierarchical global-based residual connections
Graph Neural Networks (GNNs) are powerful architectures for learning on graphs. They are effective for predicting node, link and graph properties. Standard GNN variants follow a message-passing scheme that iteratively updates node representations using information from higher-order neighborhoods. …
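The message-passing update mentioned in the abstract can be illustrated with a minimal sketch (plain PyTorch with a dense adjacency matrix; illustrative only, not the authors' implementation): each layer aggregates neighbor features and transforms them, so stacking layers widens the receptive field, while very deep stacks tend toward the over-smoothing the abstract warns about.

```python
import torch

class MessagePassingLayer(torch.nn.Module):
    """One message-passing step: aggregate neighbor features, then transform."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) adjacency with self-loops.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = adj @ x / deg               # mean aggregation over neighborhoods
        return torch.relu(self.lin(msg))  # linear transform + nonlinearity

# Toy usage: stacking k such layers mixes information from k-hop neighborhoods,
# which is also why very deep stacks tend to over-smooth node representations.
x = torch.randn(5, 8)                              # 5 nodes, 8 input features
adj = (torch.rand(5, 5) < 0.4).float()
adj = ((adj + adj.T + torch.eye(5)) > 0).float()   # symmetrize, add self-loops
h = MessagePassingLayer(8, 16)(x, adj)             # (5, 16) updated features
```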
Detailed description

Author: Rassil, Asmaa [author]
Format: E-Article
Language: English
Published: 2022 (transfer abstract)
Subject terms: Graph representation learning; Residual connections; Reversible networks; Graph Neural Networks
Extent: 18 pages
Contained in: Neural Networks : the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society, Amsterdam
Contained in: volume:150 ; year:2022 ; pages:149-166 ; extent:18
Links: https://doi.org/10.1016/j.neunet.2022.03.008 (full text)
DOI / URN: 10.1016/j.neunet.2022.03.008
Catalog ID: ELV057342938
LEADER  01000caa a22002652 4500
001  ELV057342938
003  DE-627
005  20230626044926.0
007  cr uuu---uuuuu
008  220808s2022 xx |||||o 00| ||eng c
024 7_  |a 10.1016/j.neunet.2022.03.008 |2 doi
028 52  |a /cbs_pica/cbs_olc/import_discovery/elsevier/einzuspielen/GBV00000000001813.pica
035 __  |a (DE-627)ELV057342938
035 __  |a (ELSEVIER)S0893-6080(22)00078-8
040 __  |a DE-627 |b ger |c DE-627 |e rakwb
041 __  |a eng
082 04  |a 620 |q VZ
082 04  |a 610 |q VZ
084 __  |a 77.50 |2 bkl
100 1_  |a Rassil, Asmaa |e verfasserin |4 aut
245 10  |a Augmented Graph Neural Network with hierarchical global-based residual connections
264 _1  |c 2022transfer abstract
300 __  |a 18
336 __  |a nicht spezifiziert |b zzz |2 rdacontent
337 __  |a nicht spezifiziert |b z |2 rdamedia
338 __  |a nicht spezifiziert |b zu |2 rdacarrier
520 __  |a Graph Neural Networks (GNNs) are powerful architectures for learning on graphs. They are effective for predicting node, link and graph properties. Standard GNN variants follow a message-passing scheme that iteratively updates node representations using information from higher-order neighborhoods. Consequently, deeper GNNs make it possible to define high-level node representations based on local as well as distant neighborhoods. However, deeper networks are prone to over-smoothing. To build deeper GNN architectures while preserving the dependency between lower layers (those closer to the input) and higher layers (those closer to the output), networks can integrate residual connections between intermediate layers. We propose the Augmented Graph Neural Network (AGNN) model with hierarchical global-based residual connections. Using the proposed residual connections, the model generates high-level node representations without the need for a deeper architecture. We show that the node representations generated by the proposed AGNN model define an expressive, all-encompassing representation of the entire graph. As such, the graph predictions generated by the AGNN model considerably surpass state-of-the-art results. Moreover, we carry out extensive experiments to identify the best global pooling strategy and attention weights for defining adequate hierarchical global-based residual connections for different graph property prediction tasks. Furthermore, we propose a reversible variant of the AGNN model to address the excessive memory consumption that typically arises when training networks on large and dense graph datasets. The proposed Reversible Augmented Graph Neural Network (R-AGNN) stores only the node representations produced by the output layer, rather than saving the representations of all intermediate layers as is conventionally done when optimizing the parameters of other GNNs. We further refine the definition of the backpropagation algorithm to fit the R-AGNN model. We evaluate the proposed AGNN and R-AGNN models on benchmark molecular, bioinformatics and social network datasets for graph classification and achieve state-of-the-art results. For instance, the AGNN model achieves improvements of +39% on IMDB-MULTI, reaching 91.7% accuracy, and +16% on COLLAB, reaching 96.8% accuracy, compared to other GNN variants.
650 _7  |a Graph representation learning |2 Elsevier
650 _7  |a Residual connections |2 Elsevier
650 _7  |a Reversible networks |2 Elsevier
650 _7  |a Graph Neural Networks |2 Elsevier
700 1_  |a Chougrad, Hiba |4 oth
700 1_  |a Zouaki, Hamid |4 oth
773 08  |i Enthalten in |n Elsevier |t Regulatory design for RES-E support mechanisms: Learning curves, market structure, and burden-sharing |d 2012 |d the official journal of the International Neural Network Society, European Neural Network Society and Japanese Neural Network Society |g Amsterdam |w (DE-627)ELV016218965
773 18  |g volume:150 |g year:2022 |g pages:149-166 |g extent:18
856 40  |u https://doi.org/10.1016/j.neunet.2022.03.008 |3 Volltext
912 __  |a GBV_USEFLAG_U
912 __  |a GBV_ELV
912 __  |a SYSFLAG_U
912 __  |a SSG-OLC-PHA
936 bk  |a 77.50 |j Psychophysiologie |q VZ
951 __  |a AR
952 __  |d 150 |j 2022 |h 149-166 |g 18
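The abstract in field 520 above describes hierarchical global-based residual connections: each layer's node representations are globally pooled, and attention weights decide how those per-layer summaries feed back into later layers and into the final graph embedding. The sketch below is one plausible reading under those assumptions (mean pooling, learned softmax layer weights); all names are illustrative and not taken from the paper.

```python
import torch

class GlobalResidualGNN(torch.nn.Module):
    """Sketch: each layer receives a residual built from attention-weighted,
    globally pooled summaries of the preceding layers' node representations."""
    def __init__(self, dim: int, n_layers: int):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim) for _ in range(n_layers)])
        self.att = torch.nn.Parameter(torch.zeros(n_layers))  # layer weights

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        summaries = []                                 # one global vector per layer
        h = x
        for lin in self.layers:
            h = torch.relu(lin(adj @ h / deg))         # message-passing update
            if summaries:                              # global-based residual
                w = torch.softmax(self.att[:len(summaries)], dim=0)
                ctx = sum(wi * s for wi, s in zip(w, summaries))
                h = h + ctx                            # broadcast onto all nodes
            summaries.append(h.mean(dim=0))            # global mean pooling
        w = torch.softmax(self.att, dim=0)             # attention over layers
        return sum(wi * s for wi, s in zip(w, summaries))  # graph embedding

graph_emb = GlobalResidualGNN(dim=8, n_layers=4)(
    torch.randn(6, 8), torch.eye(6))                   # (8,) toy graph vector
```

Swapping `h.mean(dim=0)` for sum, max, or an attention-based readout reproduces the kind of global-pooling comparison the abstract says the authors carried out experimentally.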
author_variant |
a r ar |
matchkey_str |
rassilasmaachougradhibazouakihamid:2022----:umnegaherlewrwtheaciagoabsd |
hierarchy_sort_str |
2022transfer abstract |
bklnumber |
77.50 |
publishDate |
2022 |
language |
English |
source |
Enthalten in Neural Networks Amsterdam volume:150 year:2022 pages:149-166 extent:18 |
format_phy_str_mv |
Article |
bklname |
Psychophysiologie |
institution |
findex.gbv.de |
topic_facet |
Graph representation learning Residual connections Reversible networks Graph Neural Networks |
dewey-raw |
620 |
isfreeaccess_bool |
false |
container_title |
Neural Networks |
authorswithroles_txt_mv |
Rassil, Asmaa @@aut@@ Chougrad, Hiba @@oth@@ Zouaki, Hamid @@oth@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
ELV016218965 |
dewey-sort |
3620 |
id |
ELV057342938 |
language_de |
englisch |
author |
Rassil, Asmaa |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)ELV016218965 |
format |
electronic Article |
dewey-ones |
620 - Engineering & allied operations 610 - Medicine & health |
delete_txt_mv |
keep |
author_role |
aut |
collection |
elsevier |
remote_str |
true |
illustrated |
Not Illustrated |
topic |
ddc 620 ddc 610 bkl 77.50 Elsevier Graph representation learning Elsevier Residual connections Elsevier Reversible networks Elsevier Graph Neural Networks |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
zu |
author2_variant |
h c hc h z hz |
hierarchy_parent_title |
Neural Networks |
hierarchy_parent_id |
ELV016218965 |
dewey-tens |
620 - Engineering 610 - Medicine & health |
hierarchy_top_title |
Neural Networks |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)ELV016218965 |
title |
Augmented Graph Neural Network with hierarchical global-based residual connections |
ctrlnum |
(DE-627)ELV057342938 (ELSEVIER)S0893-6080(22)00078-8 |
journal |
Neural Networks |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
600 - Technology |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
zzz |
container_start_page |
149 |
container_volume |
150 |
physical |
18 |
class |
620 VZ 610 VZ 77.50 bkl |
format_se |
Elektronische Aufsätze |
doi_str_mv |
10.1016/j.neunet.2022.03.008 |
dewey-full |
620 610 |
title_sort |
augmented graph neural network with hierarchical global-based residual connections |
abstract |
Graph Neural Networks (GNNs) are powerful architectures for learning on graphs. They are effective for predicting node, link and graph properties. Standard GNN variants follow a message-passing scheme that iteratively updates node representations using information from higher-order neighborhoods. Consequently, deeper GNNs make it possible to define high-level node representations based on local as well as distant neighborhoods. However, deeper networks are prone to over-smoothing. To build deeper GNN architectures while preserving the dependency between lower layers (those closer to the input) and higher layers (those closer to the output), networks can integrate residual connections between intermediate layers. We propose the Augmented Graph Neural Network (AGNN) model with hierarchical global-based residual connections. Using the proposed residual connections, the model generates high-level node representations without the need for a deeper architecture. We show that the node representations generated by the proposed AGNN model define an expressive, all-encompassing representation of the entire graph. As such, the graph predictions generated by the AGNN model considerably surpass state-of-the-art results. Moreover, we carry out extensive experiments to identify the best global pooling strategy and attention weights for defining adequate hierarchical global-based residual connections for different graph property prediction tasks. Furthermore, we propose a reversible variant of the AGNN model to address the excessive memory consumption that typically arises when training networks on large and dense graph datasets. The proposed Reversible Augmented Graph Neural Network (R-AGNN) stores only the node representations produced by the output layer, rather than saving the representations of all intermediate layers as is conventionally done when optimizing the parameters of other GNNs. We further refine the definition of the backpropagation algorithm to fit the R-AGNN model. We evaluate the proposed AGNN and R-AGNN models on benchmark molecular, bioinformatics and social network datasets for graph classification and achieve state-of-the-art results. For instance, the AGNN model achieves improvements of +39% on IMDB-MULTI, reaching 91.7% accuracy, and +16% on COLLAB, reaching 96.8% accuracy, compared to other GNN variants. |
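The abstract above attributes R-AGNN's memory savings to reversibility: intermediate node representations are not stored but reconstructed during backpropagation. A minimal sketch of the additive-coupling idea behind reversible blocks (in the style of RevNets, not the paper's exact scheme; `f` and `g` are illustrative stand-ins for message-passing sub-blocks) shows why the inputs need not be stored:

```python
import torch

def rev_forward(x1, x2, f, g):
    """Additive coupling: the outputs fully determine the inputs."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_inverse(y1, y2, f, g):
    """Exactly reconstruct the inputs from the outputs on the backward pass."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

f = torch.nn.Linear(4, 4)   # stand-ins for message-passing sub-blocks
g = torch.nn.Linear(4, 4)
x1, x2 = torch.randn(3, 4), torch.randn(3, 4)
y1, y2 = rev_forward(x1, x2, f, g)
r1, r2 = rev_inverse(y1, y2, f, g)
print(torch.allclose(x1, r1, atol=1e-6), torch.allclose(x2, r2, atol=1e-6))
```

Because `rev_inverse` recovers `x1` and `x2` exactly from `y1` and `y2`, a training loop can discard intermediate activations and rebuild them during the backward pass, trading extra computation for memory, which is the property the abstract highlights for large, dense graph datasets.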
collection_details |
GBV_USEFLAG_U GBV_ELV SYSFLAG_U SSG-OLC-PHA |
title_short |
Augmented Graph Neural Network with hierarchical global-based residual connections |
url |
https://doi.org/10.1016/j.neunet.2022.03.008 |
remote_bool |
true |
author2 |
Chougrad, Hiba; Zouaki, Hamid
author2Str |
Chougrad, Hiba; Zouaki, Hamid
ppnlink |
ELV016218965 |
mediatype_str_mv |
z |
isOA_txt |
false |
hochschulschrift_bool |
false |
author2_role |
oth oth |
doi_str |
10.1016/j.neunet.2022.03.008 |