A hybrid framework for multivariate long-sequence time series forecasting
Abstract: Time series forecasting provides insights into the far future by utilizing available historical observations. Recent studies have demonstrated the superiority of transformer-based models for multivariate long-sequence time series forecasting (MLTSF). However, data complexity hinders the forecasting accuracy of current deep neural network models. In this article, a hybrid framework, Waveformer, is proposed, which decomposes a fluctuating and complex data sequence into multiple stable and more predictable subsequences (components) throughout the entire forecasting process. Waveformer interactively learns temporal dependencies on each pair of decomposed components, which enhances its ability to capture those dependencies. Moreover, Waveformer treats the implicit and dynamic dependencies among variables as a set of dynamic directed graphs. On this basis, an attention-adaptive graph convolution network (AAGCN) is designed, which combines self-attention and adaptive directed graph convolution to capture multivariate dynamic dependencies in a flexible manner. Experimental results on six public datasets show that Waveformer considerably outperforms a wide range of state-of-the-art benchmarks, with up to 54.3% relative improvement.
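This record reproduces only the abstract, not the paper's implementation. As a rough, non-authoritative illustration of the two ideas the abstract names (splitting a series into smoother, more predictable components, and combining self-attention with an adaptive graph adjacency over variables), the following minimal PyTorch sketch may help. Every name in it (decompose, AttentionAdaptiveGraphConv, the moving-average kernel, the 50/50 adjacency mix) is an assumption made for illustration, not the authors' actual Waveformer or AAGCN design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def decompose(x: torch.Tensor, kernel_size: int = 25):
    """Split (batch, length, n_vars) into a smooth trend and a residual via a
    moving average -- one common way to obtain the 'stable, more predictable'
    components the abstract describes."""
    pad_left = (kernel_size - 1) // 2
    pad_right = kernel_size - 1 - pad_left
    xt = x.transpose(1, 2)                                   # (batch, n_vars, length)
    xt = F.pad(xt, (pad_left, pad_right), mode="replicate")  # keep length after pooling
    trend = F.avg_pool1d(xt, kernel_size, stride=1).transpose(1, 2)
    return trend, x - trend                                  # (trend, residual)


class AttentionAdaptiveGraphConv(nn.Module):
    """Hypothetical AAGCN-style layer: mixes an input-independent adjacency
    learned from node embeddings with a dynamic, directed one derived from
    self-attention over the current per-variable features."""

    def __init__(self, n_vars: int, d_model: int):
        super().__init__()
        self.node_emb = nn.Parameter(torch.randn(n_vars, d_model))  # adaptive graph
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_vars, d_model), one feature vector per variable.
        adj_static = torch.softmax(self.node_emb @ self.node_emb.T, dim=-1)
        scores = self.query(h) @ self.key(h).transpose(1, 2) / h.size(-1) ** 0.5
        adj_dynamic = torch.softmax(scores, dim=-1)  # (batch, n_vars, n_vars), directed
        adj = 0.5 * (adj_static + adj_dynamic)       # arbitrary 50/50 mixing choice
        return self.out(adj @ h)                     # one propagation step over variables


# Usage: 8 sequences, 96 time steps, 7 variables.
x = torch.randn(8, 96, 7)
trend, residual = decompose(x)                       # two components to forecast separately
layer = AttentionAdaptiveGraphConv(n_vars=7, d_model=16)
h = layer(torch.randn(8, 7, 16))                     # -> (8, 7, 16)
```

In a full model of this kind, each decomposed component would typically be forecast by its own temporal module and the graph layer applied per time step; the paper's actual wiring may differ.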
Detailed description
Author: Wang, Xiaohu [author]
Other authors: Wang, Yong (ORCID 0000-0002-0422-5691); Peng, Jianjian; Zhang, Zhicheng; Tang, Xueliang
Format: Article
Language: English
Published: 2022
Keywords: Time series forecasting; Time sequence decomposition; Graph attention network; Interactive learning
Note: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Contained in: Applied intelligence - Springer US, 1991, 53(2022), 11, 13 Oct., pages 13549-13568
Contained in: volume:53 ; year:2022 ; number:11 ; day:13 ; month:10 ; pages:13549-13568
DOI / URN: 10.1007/s10489-022-04110-1 (https://doi.org/10.1007/s10489-022-04110-1)
Catalog ID: OLC2143603398
LEADER 01000naa a22002652 4500
001    OLC2143603398
003    DE-627
005    20240118090836.0
007    tu
008    240118s2022 xx ||||| 00| ||eng c
024 7  |a 10.1007/s10489-022-04110-1 |2 doi
035    |a (DE-627)OLC2143603398
035    |a (DE-He213)s10489-022-04110-1-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |q VZ
100 1  |a Wang, Xiaohu |e verfasserin |4 aut
245 10 |a A hybrid framework for multivariate long-sequence time series forecasting
264  1 |c 2022
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
520    |a Abstract Time series forecasting provides insights into the far future by utilizing the available history observations. Recent studies have demonstrated the superiority of transformer-based models in dealing with multivariate long-sequence time series forecasting (MLTSF). However, the data complexity hinders the forecasting accuracy of current deep neural network models. In this article, a hybrid framework - Waveformer - is proposed, which decomposes fluctuated and complex data sequence into multiple stable and more predictable subsequences (components) through the entire forecasting process. Waveformer interactively learns temporal dependencies on each pair of decomposed components, which enhances its ability of learning their temporal dependencies. Moreover, Waveformer treats the implicit and dynamic dependencies among variables as a set of dynamic direct graphs. Based on which, an attention adaptive graph convolution net (AAGCN) is designed, which combines self-attention and adaptive direct graph convolution to capture multivariate dynamic dependencies in a flexible manner. The experimental results on six public datasets show that Waveformer considerably outperforms a varied range of state-of-the-art benchmarks, with at the most 54.3% relative improvement.
650  4 |a Time series forecasting
650  4 |a Time sequence decomposition
650  4 |a Graph attention network
650  4 |a Interactive learning
700 1  |a Wang, Yong |0 (orcid)0000-0002-0422-5691 |4 aut
700 1  |a Peng, Jianjian |4 aut
700 1  |a Zhang, Zhicheng |4 aut
700 1  |a Tang, Xueliang |4 aut
773 08 |i Enthalten in |t Applied intelligence |d Springer US, 1991 |g 53(2022), 11 vom: 13. Okt., Seite 13549-13568 |w (DE-627)130990515 |w (DE-600)1080229-0 |w (DE-576)029154286 |x 0924-669X |7 nnns
773 18 |g volume:53 |g year:2022 |g number:11 |g day:13 |g month:10 |g pages:13549-13568
856 41 |u https://doi.org/10.1007/s10489-022-04110-1 |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
951    |a AR
952    |d 53 |j 2022 |e 11 |b 13 |c 10 |h 13549-13568