Spatial–temporal attention fusion for traffic speed prediction
Abstract Accurate vehicle speed prediction is of great significance to the urban traffic intelligent control system. However, in terms of traffic speed prediction, the modules that integrate temporal and spatial features in the existing traffic speed prediction methods are effective in short-term pr...
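The abstract describes a two-stage pipeline: attention-weighted temporal features are extracted per sensor, spatial attention is applied across sensors, and the results are fused. This is not the authors' ASTCN implementation (which uses a temporal attention convolutional network); the following is a minimal NumPy sketch of the attention-then-fusion idea only, and the function names and the dot-product attention scoring are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_attention(series):
    # Attention-weighted summary of one sensor's speed series.
    # series: (T, d) time steps x features -> (d,)
    scores = series @ series.mean(axis=0)      # (T,) dot-product scoring (assumed)
    weights = softmax(scores)                  # (T,) attention over time steps
    return weights @ series                    # (d,) weighted temporal feature

def spatial_temporal_fusion(speeds):
    # speeds: (N, T, d) = sensors x time steps x features -> fused (d,)
    # 1) per-sensor temporal features (the ATCN's role, simplified here)
    temporal = np.stack([temporal_attention(s) for s in speeds])  # (N, d)
    # 2) spatial attention across sensors
    weights = softmax(temporal @ temporal.mean(axis=0))           # (N,)
    # 3) spatial-temporal fusion as an attention-weighted combination
    return weights @ temporal                                     # (d,)
```

With 3 sensors, 5 time steps, and 4 features, `spatial_temporal_fusion(np.ones((3, 5, 4)))` yields a `(4,)` vector; since the attention weights sum to 1 at each stage, the fused output is a convex combination of the per-sensor features.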
Detailed description
Author: Zhang, Anqin [author]
Format: Article
Language: English
Published: 2021
Subject headings: (see keywords below)
Note: © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021
Parent work: Contained in: Soft computing - Springer Berlin Heidelberg, 1997, 26(2021), issue 2, 19 Nov., pages 695-707
Parent work: volume:26 ; year:2021 ; number:2 ; day:19 ; month:11 ; pages:695-707
Links:
DOI / URN: 10.1007/s00500-021-06521-7
Catalog ID: OLC2077782188
LEADER   01000caa a22002652 4500
001      OLC2077782188
003      DE-627
005      20230505192038.0
007      tu
008      221220s2021 xx ||||| 00| ||eng c
024 7    |a 10.1007/s00500-021-06521-7 |2 doi
035      |a (DE-627)OLC2077782188
035      |a (DE-He213)s00500-021-06521-7-p
040      |a DE-627 |b ger |c DE-627 |e rakwb
041      |a eng
082 0 4  |a 004 |q VZ
082 0 4  |a 004 |q VZ
084      |a 11 |2 ssgn
100 1    |a Zhang, Anqin |e verfasserin |0 (orcid)0000-0003-1572-9585 |4 aut
245 1 0  |a Spatial–temporal attention fusion for traffic speed prediction
264   1  |c 2021
336      |a Text |b txt |2 rdacontent
337      |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338      |a Band |b nc |2 rdacarrier
500      |a © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021
520      |a Abstract Accurate vehicle speed prediction is of great significance to the urban traffic intelligent control system. However, in terms of traffic speed prediction, the modules that integrate temporal and spatial features in the existing traffic speed prediction methods are effective in short-term prediction, but the medium-term or long-term prediction errors are relatively large. In order to reduce the errors of existing methods in short-term prediction and predict the medium-term and long-term traffic speed, this paper proposes a traffic speed prediction method that combines attention and Spatial–temporal features, referred to as ASTCN. Specifically, unlike previous methods, ASTCN can use the temporal attention convolutional network (ATCN) to separately extract temporal features from the traffic speed features collected by each sensor, and use the spatial attention mechanism to extract spatial features and then perform spatial–temporal feature fusion. Experiments on three real-world datasets show that the proposed ASTCN model outperforms the state-of-the-art baselines.
650   4  |a Traffic speed prediction
650   4  |a Temporal attention convolutional network
650   4  |a Spatial attention mechanism
650   4  |a Spatial–temporal features
700 1    |a Liu, Qizheng |4 aut
700 1    |a Zhang, Ting |4 aut
773 0 8  |i Enthalten in |t Soft computing |d Springer Berlin Heidelberg, 1997 |g 26(2021), 2 vom: 19. Nov., Seite 695-707 |w (DE-627)231970536 |w (DE-600)1387526-7 |w (DE-576)060238259 |x 1432-7643 |7 nnns
773 1 8  |g volume:26 |g year:2021 |g number:2 |g day:19 |g month:11 |g pages:695-707
856 4 1  |u https://doi.org/10.1007/s00500-021-06521-7 |z lizenzpflichtig |3 Volltext
912      |a GBV_USEFLAG_A
912      |a SYSFLAG_A
912      |a GBV_OLC
912      |a SSG-OLC-MAT
912      |a GBV_ILN_267
912      |a GBV_ILN_2018
912      |a GBV_ILN_4277
951      |a AR
952      |d 26 |j 2021 |e 2 |b 19 |c 11 |h 695-707
author_variant |
a z az q l ql t z tz |
---|---|
matchkey_str |
article:14327643:2021----::ptatmoaatninuinotaf |
hierarchy_sort_str |
2021 |
publishDate |
2021 |
allfields |
10.1007/s00500-021-06521-7 doi (DE-627)OLC2077782188 (DE-He213)s00500-021-06521-7-p DE-627 ger DE-627 rakwb eng 004 VZ 004 VZ 11 ssgn Zhang, Anqin verfasserin (orcid)0000-0003-1572-9585 aut Spatial–temporal attention fusion for traffic speed prediction 2021 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021 Abstract Accurate vehicle speed prediction is of great significance to the urban traffic intelligent control system. However, in terms of traffic speed prediction, the modules that integrate temporal and spatial features in the existing traffic speed prediction methods are effective in short-term prediction, but the medium-term or long-term prediction errors are relatively large. In order to reduce the errors of existing methods in short-term prediction and predict the medium-term and long-term traffic speed, this paper proposes a traffic speed prediction method that combines attention and Spatial–temporal features, referred to as ASTCN. Specifically, unlike previous methods, ASTCN can use the temporal attention convolutional network (ATCN) to separately extract temporal features from the traffic speed features collected by each sensor, and use the spatial attention mechanism to extract spatial features and then perform spatial–temporal feature fusion. Experiments on three real-world datasets show that the proposed ASTCN model outperforms the state-of-the-art baselines. Traffic speed prediction Temporal attention convolutional network Spatial attention mechanism Spatial–temporal features Liu, Qizheng aut Zhang, Ting aut Enthalten in Soft computing Springer Berlin Heidelberg, 1997 26(2021), 2 vom: 19. Nov., Seite 695-707 (DE-627)231970536 (DE-600)1387526-7 (DE-576)060238259 1432-7643 nnns volume:26 year:2021 number:2 day:19 month:11 pages:695-707 https://doi.org/10.1007/s00500-021-06521-7 lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_267 GBV_ILN_2018 GBV_ILN_4277 AR 26 2021 2 19 11 695-707
language |
English |
source |
Enthalten in Soft computing 26(2021), 2 vom: 19. Nov., Seite 695-707 volume:26 year:2021 number:2 day:19 month:11 pages:695-707 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Traffic speed prediction Temporal attention convolutional network Spatial attention mechanism Spatial–temporal features |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Soft computing |
authorswithroles_txt_mv |
Zhang, Anqin @@aut@@ Liu, Qizheng @@aut@@ Zhang, Ting @@aut@@ |
publishDateDaySort_date |
2021-11-19T00:00:00Z |
hierarchy_top_id |
231970536 |
dewey-sort |
14 |
id |
OLC2077782188 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2077782188</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230505192038.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">221220s2021 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s00500-021-06521-7</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2077782188</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s00500-021-06521-7-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">11</subfield><subfield code="2">ssgn</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Zhang, Anqin</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0003-1572-9585</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Spatial–temporal attention fusion for traffic speed prediction</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Accurate vehicle speed prediction is of great significance to the urban traffic intelligent control system. However, in terms of traffic speed prediction, the modules that integrate temporal and spatial features in the existing traffic speed prediction methods are effective in short-term prediction, but the medium-term or long-term prediction errors are relatively large. In order to reduce the errors of existing methods in short-term prediction and predict the medium-term and long-term traffic speed, this paper proposes a traffic speed prediction method that combines attention and Spatial–temporal features, referred to as ASTCN. Specifically, unlike previous methods, ASTCN can use the temporal attention convolutional network (ATCN) to separately extract temporal features from the traffic speed features collected by each sensor, and use the spatial attention mechanism to extract spatial features and then perform spatial–temporal feature fusion. Experiments on three real-world datasets show that the proposed ASTCN model outperforms the state-of-the-art baselines.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Traffic speed prediction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Temporal attention convolutional network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Spatial attention mechanism</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Spatial–temporal features</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Qizheng</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Ting</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Soft computing</subfield><subfield code="d">Springer Berlin Heidelberg, 1997</subfield><subfield code="g">26(2021), 2 vom: 19. Nov., Seite 695-707</subfield><subfield code="w">(DE-627)231970536</subfield><subfield code="w">(DE-600)1387526-7</subfield><subfield code="w">(DE-576)060238259</subfield><subfield code="x">1432-7643</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:26</subfield><subfield code="g">year:2021</subfield><subfield code="g">number:2</subfield><subfield code="g">day:19</subfield><subfield code="g">month:11</subfield><subfield code="g">pages:695-707</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s00500-021-06521-7</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_267</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2018</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4277</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">26</subfield><subfield code="j">2021</subfield><subfield code="e">2</subfield><subfield code="b">19</subfield><subfield code="c">11</subfield><subfield code="h">695-707</subfield></datafield></record></collection>
author |
Zhang, Anqin |
spellingShingle |
Zhang, Anqin ddc 004 ssgn 11 misc Traffic speed prediction misc Temporal attention convolutional network misc Spatial attention mechanism misc Spatial–temporal features Spatial–temporal attention fusion for traffic speed prediction |
authorStr |
Zhang, Anqin |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)231970536 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
1432-7643 |
topic_title |
004 VZ 11 ssgn Spatial–temporal attention fusion for traffic speed prediction Traffic speed prediction Temporal attention convolutional network Spatial attention mechanism Spatial–temporal features |
topic |
ddc 004 ssgn 11 misc Traffic speed prediction misc Temporal attention convolutional network misc Spatial attention mechanism misc Spatial–temporal features |
topic_unstemmed |
ddc 004 ssgn 11 misc Traffic speed prediction misc Temporal attention convolutional network misc Spatial attention mechanism misc Spatial–temporal features |
topic_browse |
ddc 004 ssgn 11 misc Traffic speed prediction misc Temporal attention convolutional network misc Spatial attention mechanism misc Spatial–temporal features |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Soft computing |
hierarchy_parent_id |
231970536 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Soft computing |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)231970536 (DE-600)1387526-7 (DE-576)060238259 |
title |
Spatial–temporal attention fusion for traffic speed prediction |
ctrlnum |
(DE-627)OLC2077782188 (DE-He213)s00500-021-06521-7-p |
title_full |
Spatial–temporal attention fusion for traffic speed prediction |
author_sort |
Zhang, Anqin |
journal |
Soft computing |
journalStr |
Soft computing |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2021 |
contenttype_str_mv |
txt |
container_start_page |
695 |
author_browse |
Zhang, Anqin Liu, Qizheng Zhang, Ting |
container_volume |
26 |
class |
004 VZ 11 ssgn |
format_se |
Aufsätze |
author-letter |
Zhang, Anqin |
doi_str_mv |
10.1007/s00500-021-06521-7 |
normlink |
(ORCID)0000-0003-1572-9585 |
normlink_prefix_str_mv |
(orcid)0000-0003-1572-9585 |
dewey-full |
004 |
title_sort |
spatial–temporal attention fusion for traffic speed prediction |
title_auth |
Spatial–temporal attention fusion for traffic speed prediction |
abstract |
Abstract Accurate vehicle speed prediction is of great significance to the urban traffic intelligent control system. However, in terms of traffic speed prediction, the modules that integrate temporal and spatial features in the existing traffic speed prediction methods are effective in short-term prediction, but the medium-term or long-term prediction errors are relatively large. In order to reduce the errors of existing methods in short-term prediction and predict the medium-term and long-term traffic speed, this paper proposes a traffic speed prediction method that combines attention and Spatial–temporal features, referred to as ASTCN. Specifically, unlike previous methods, ASTCN can use the temporal attention convolutional network (ATCN) to separately extract temporal features from the traffic speed features collected by each sensor, and use the spatial attention mechanism to extract spatial features and then perform spatial–temporal feature fusion. Experiments on three real-world datasets show that the proposed ASTCN model outperforms the state-of-the-art baselines. © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT GBV_ILN_267 GBV_ILN_2018 GBV_ILN_4277 |
container_issue |
2 |
title_short |
Spatial–temporal attention fusion for traffic speed prediction |
url |
https://doi.org/10.1007/s00500-021-06521-7 |
remote_bool |
false |
author2 |
Liu, Qizheng Zhang, Ting |
author2Str |
Liu, Qizheng Zhang, Ting |
ppnlink |
231970536 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s00500-021-06521-7 |
up_date |
2024-07-03T17:16:40.530Z |
_version_ |
1803579032108793856 |
fullrecord_marcxml |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2077782188</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230505192038.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">221220s2021 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s00500-021-06521-7</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2077782188</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s00500-021-06521-7-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="084" ind1=" " ind2=" "><subfield code="a">11</subfield><subfield code="2">ssgn</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Zhang, Anqin</subfield><subfield code="e">verfasserin</subfield><subfield code="0">(orcid)0000-0003-1572-9585</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Spatial–temporal attention fusion for traffic speed prediction</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2021</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield 
tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Accurate vehicle speed prediction is of great significance to the urban traffic intelligent control system. However, in terms of traffic speed prediction, the modules that integrate temporal and spatial features in the existing traffic speed prediction methods are effective in short-term prediction, but the medium-term or long-term prediction errors are relatively large. In order to reduce the errors of existing methods in short-term prediction and predict the medium-term and long-term traffic speed, this paper proposes a traffic speed prediction method that combines attention and Spatial–temporal features, referred to as ASTCN. Specifically, unlike previous methods, ASTCN can use the temporal attention convolutional network (ATCN) to separately extract temporal features from the traffic speed features collected by each sensor, and use the spatial attention mechanism to extract spatial features and then perform spatial–temporal feature fusion. 
Experiments on three real-world datasets show that the proposed ASTCN model outperforms the state-of-the-art baselines.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Traffic speed prediction</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Temporal attention convolutional network</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Spatial attention mechanism</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Spatial–temporal features</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Liu, Qizheng</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Zhang, Ting</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Soft computing</subfield><subfield code="d">Springer Berlin Heidelberg, 1997</subfield><subfield code="g">26(2021), 2 vom: 19. 
Nov., Seite 695-707</subfield><subfield code="w">(DE-627)231970536</subfield><subfield code="w">(DE-600)1387526-7</subfield><subfield code="w">(DE-576)060238259</subfield><subfield code="x">1432-7643</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:26</subfield><subfield code="g">year:2021</subfield><subfield code="g">number:2</subfield><subfield code="g">day:19</subfield><subfield code="g">month:11</subfield><subfield code="g">pages:695-707</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s00500-021-06521-7</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_267</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2018</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4277</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">26</subfield><subfield code="j">2021</subfield><subfield code="e">2</subfield><subfield code="b">19</subfield><subfield code="c">11</subfield><subfield code="h">695-707</subfield></datafield></record></collection>
|
score |
7.401473 |