A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks
Abstract: Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most existing solutions focus on where to store the data (i.e., the selection of the storage node) but do not consider how to store it (i.e., traffic management such as routing and transmission-rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles data storage and traffic management in Information-Centric Data Center Networks (ICDCN) using a Reinforcement Learning (RL) method, since RL has emerged as a promising approach to dynamic network problems. We present a global optimization of joint traffic management and data storage and solve it with distributed multi-agent Q-learning. In ICDCN, data is routed by name, which improves routing scalability by decoupling data from its physical location. In contrast to IP's stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptive routing and hop-by-hop traffic control via the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme effectively reduces transmission time and increases throughput while balancing load among servers.
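The abstract describes each node maintaining stateful forwarding information and choosing next hops via distributed Q-learning. As a rough illustration of that idea only (not the paper's implementation: the toy topology, the negative-link-delay reward, and the hyperparameters below are invented for this sketch), a tabular Q-learning next-hop selector might look like:

```python
# Illustrative sketch of Q-learning-based next-hop selection on a toy topology.
# NOT the authors' scheme; topology, reward, and hyperparameters are assumptions.
import random

# Toy network: node -> {neighbor: link delay}
LINKS = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "D": 2.0},
    "C": {"A": 4.0, "D": 1.0},
    "D": {},  # destination (e.g., the chosen storage server)
}
DEST = "D"
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q[node][neighbor]: estimated value of forwarding via that neighbor
Q = {n: {nb: 0.0 for nb in nbrs} for n, nbrs in LINKS.items()}

def choose_next_hop(node):
    """Epsilon-greedy choice, as each node-agent would make locally."""
    if random.random() < EPS:
        return random.choice(list(Q[node]))
    return max(Q[node], key=Q[node].get)

def episode(src="A"):
    node = src
    while node != DEST:
        nxt = choose_next_hop(node)
        reward = -LINKS[node][nxt]                 # penalize link delay
        future = max(Q[nxt].values(), default=0.0)
        Q[node][nxt] += ALPHA * (reward + GAMMA * future - Q[node][nxt])
        node = nxt

random.seed(0)
for _ in range(500):
    episode()

# After training, node A should prefer the lower-delay route A -> B -> D.
best = max(Q["A"], key=Q["A"].get)
print(best)
```

Each node updates only its own Q-table from locally observed link costs, which is what makes the control hop-by-hop rather than centralized.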
Detailed description

Author: Yang, Weihong [author]
Format: Article
Language: English
Published: 2020
Subjects: Information-centric data center networks; Data storage; Traffic management; Distributed Q-learning
Note: © Springer Science+Business Media, LLC, part of Springer Nature 2020
Contained in: Mobile networks and applications - Springer US, 1996, 27(2020), issue 1, 30 July, pages 266-275
Citation details: volume:27 ; year:2020 ; number:1 ; day:30 ; month:07 ; pages:266-275
DOI: 10.1007/s11036-020-01629-w
Catalog ID: OLC2078335835
LEADER 01000caa a22002652 4500
001    OLC2078335835
003    DE-627
005    20230506002415.0
007    tu
008    221220s2020 xx ||||| 00| ||eng c
024 7  |a 10.1007/s11036-020-01629-w |2 doi
035    |a (DE-627)OLC2078335835
035    |a (DE-He213)s11036-020-01629-w-p
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
082 04 |a 004 |q VZ
100 1  |a Yang, Weihong |e verfasserin |4 aut
245 10 |a A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks
264  1 |c 2020
336    |a Text |b txt |2 rdacontent
337    |a ohne Hilfsmittel zu benutzen |b n |2 rdamedia
338    |a Band |b nc |2 rdacarrier
500    |a © Springer Science+Business Media, LLC, part of Springer Nature 2020
520    |a Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers.
650  4 |a Information-centric data center networks
650  4 |a Data storage
650  4 |a Traffic management
650  4 |a Distributed Q-learning
700 1  |a Qin, Yang |4 aut
700 1  |a Yang, ZhaoZheng |4 aut
773 08 |i Enthalten in |t Mobile networks and applications |d Springer US, 1996 |g 27(2020), 1 vom: 30. Juli, Seite 266-275 |w (DE-627)215279522 |w (DE-600)1342049-5 |w (DE-576)063244756 |x 1383-469X |7 nnns
773 18 |g volume:27 |g year:2020 |g number:1 |g day:30 |g month:07 |g pages:266-275
856 41 |u https://doi.org/10.1007/s11036-020-01629-w |z lizenzpflichtig |3 Volltext
912    |a GBV_USEFLAG_A
912    |a SYSFLAG_A
912    |a GBV_OLC
912    |a SSG-OLC-MAT
951    |a AR
952    |d 27 |j 2020 |e 1 |b 30 |c 07 |h 266-275
author_variant |
w y wy y q yq z y zy |
---|---|
matchkey_str |
article:1383469X:2020----::rifreeterigaedtsoaentafcaaeetnnomtoc |
hierarchy_sort_str |
2020 |
publishDate |
2020 |
allfields |
10.1007/s11036-020-01629-w doi (DE-627)OLC2078335835 (DE-He213)s11036-020-01629-w-p DE-627 ger DE-627 rakwb eng 004 VZ Yang, Weihong verfasserin aut A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks 2020 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2020 Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. 
Information-centric data center networks Data storage Traffic management Distributed Q-learning Qin, Yang aut Yang, ZhaoZheng aut Enthalten in Mobile networks and applications Springer US, 1996 27(2020), 1 vom: 30. Juli, Seite 266-275 (DE-627)215279522 (DE-600)1342049-5 (DE-576)063244756 1383-469X nnns volume:27 year:2020 number:1 day:30 month:07 pages:266-275 https://doi.org/10.1007/s11036-020-01629-w lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2020 1 30 07 266-275 |
spelling |
10.1007/s11036-020-01629-w doi (DE-627)OLC2078335835 (DE-He213)s11036-020-01629-w-p DE-627 ger DE-627 rakwb eng 004 VZ Yang, Weihong verfasserin aut A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks 2020 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2020 Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. 
Information-centric data center networks Data storage Traffic management Distributed Q-learning Qin, Yang aut Yang, ZhaoZheng aut Enthalten in Mobile networks and applications Springer US, 1996 27(2020), 1 vom: 30. Juli, Seite 266-275 (DE-627)215279522 (DE-600)1342049-5 (DE-576)063244756 1383-469X nnns volume:27 year:2020 number:1 day:30 month:07 pages:266-275 https://doi.org/10.1007/s11036-020-01629-w lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2020 1 30 07 266-275 |
allfields_unstemmed |
10.1007/s11036-020-01629-w doi (DE-627)OLC2078335835 (DE-He213)s11036-020-01629-w-p DE-627 ger DE-627 rakwb eng 004 VZ Yang, Weihong verfasserin aut A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks 2020 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2020 Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. 
Information-centric data center networks Data storage Traffic management Distributed Q-learning Qin, Yang aut Yang, ZhaoZheng aut Enthalten in Mobile networks and applications Springer US, 1996 27(2020), 1 vom: 30. Juli, Seite 266-275 (DE-627)215279522 (DE-600)1342049-5 (DE-576)063244756 1383-469X nnns volume:27 year:2020 number:1 day:30 month:07 pages:266-275 https://doi.org/10.1007/s11036-020-01629-w lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2020 1 30 07 266-275 |
allfieldsGer |
10.1007/s11036-020-01629-w doi (DE-627)OLC2078335835 (DE-He213)s11036-020-01629-w-p DE-627 ger DE-627 rakwb eng 004 VZ Yang, Weihong verfasserin aut A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks 2020 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2020 Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. 
Information-centric data center networks Data storage Traffic management Distributed Q-learning Qin, Yang aut Yang, ZhaoZheng aut Enthalten in Mobile networks and applications Springer US, 1996 27(2020), 1 vom: 30. Juli, Seite 266-275 (DE-627)215279522 (DE-600)1342049-5 (DE-576)063244756 1383-469X nnns volume:27 year:2020 number:1 day:30 month:07 pages:266-275 https://doi.org/10.1007/s11036-020-01629-w lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2020 1 30 07 266-275 |
allfieldsSound |
10.1007/s11036-020-01629-w doi (DE-627)OLC2078335835 (DE-He213)s11036-020-01629-w-p DE-627 ger DE-627 rakwb eng 004 VZ Yang, Weihong verfasserin aut A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks 2020 Text txt rdacontent ohne Hilfsmittel zu benutzen n rdamedia Band nc rdacarrier © Springer Science+Business Media, LLC, part of Springer Nature 2020 Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. 
Information-centric data center networks Data storage Traffic management Distributed Q-learning Qin, Yang aut Yang, ZhaoZheng aut Enthalten in Mobile networks and applications Springer US, 1996 27(2020), 1 vom: 30. Juli, Seite 266-275 (DE-627)215279522 (DE-600)1342049-5 (DE-576)063244756 1383-469X nnns volume:27 year:2020 number:1 day:30 month:07 pages:266-275 https://doi.org/10.1007/s11036-020-01629-w lizenzpflichtig Volltext GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT AR 27 2020 1 30 07 266-275 |
language |
English |
source |
Enthalten in Mobile networks and applications 27(2020), 1 vom: 30. Juli, Seite 266-275 volume:27 year:2020 number:1 day:30 month:07 pages:266-275 |
sourceStr |
Enthalten in Mobile networks and applications 27(2020), 1 vom: 30. Juli, Seite 266-275 volume:27 year:2020 number:1 day:30 month:07 pages:266-275 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Information-centric data center networks Data storage Traffic management Distributed Q-learning |
dewey-raw |
004 |
isfreeaccess_bool |
false |
container_title |
Mobile networks and applications |
authorswithroles_txt_mv |
Yang, Weihong @@aut@@ Qin, Yang @@aut@@ Yang, ZhaoZheng @@aut@@ |
publishDateDaySort_date |
2020-07-30T00:00:00Z |
hierarchy_top_id |
215279522 |
dewey-sort |
14 |
id |
OLC2078335835 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">OLC2078335835</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230506002415.0</controlfield><controlfield tag="007">tu</controlfield><controlfield tag="008">221220s2020 xx ||||| 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1007/s11036-020-01629-w</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)OLC2078335835</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-He213)s11036-020-01629-w-p</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="082" ind1="0" ind2="4"><subfield code="a">004</subfield><subfield code="q">VZ</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Yang, Weihong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2020</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">ohne Hilfsmittel zu benutzen</subfield><subfield code="b">n</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield 
code="a">Band</subfield><subfield code="b">nc</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="500" ind1=" " ind2=" "><subfield code="a">© Springer Science+Business Media, LLC, part of Springer Nature 2020</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. 
We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Information-centric data center networks</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Data storage</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Traffic management</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Distributed Q-learning</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Qin, Yang</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Yang, ZhaoZheng</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">Enthalten in</subfield><subfield code="t">Mobile networks and applications</subfield><subfield code="d">Springer US, 1996</subfield><subfield code="g">27(2020), 1 vom: 30. 
Juli, Seite 266-275</subfield><subfield code="w">(DE-627)215279522</subfield><subfield code="w">(DE-600)1342049-5</subfield><subfield code="w">(DE-576)063244756</subfield><subfield code="x">1383-469X</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:27</subfield><subfield code="g">year:2020</subfield><subfield code="g">number:1</subfield><subfield code="g">day:30</subfield><subfield code="g">month:07</subfield><subfield code="g">pages:266-275</subfield></datafield><datafield tag="856" ind1="4" ind2="1"><subfield code="u">https://doi.org/10.1007/s11036-020-01629-w</subfield><subfield code="z">lizenzpflichtig</subfield><subfield code="3">Volltext</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_OLC</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SSG-OLC-MAT</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">27</subfield><subfield code="j">2020</subfield><subfield code="e">1</subfield><subfield code="b">30</subfield><subfield code="c">07</subfield><subfield code="h">266-275</subfield></datafield></record></collection>
|
author |
Yang, Weihong |
spellingShingle |
Yang, Weihong ddc 004 misc Information-centric data center networks misc Data storage misc Traffic management misc Distributed Q-learning A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks |
authorStr |
Yang, Weihong |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)215279522 |
format |
Article |
dewey-ones |
004 - Data processing & computer science |
delete_txt_mv |
keep |
author_role |
aut aut aut |
collection |
OLC |
remote_str |
false |
illustrated |
Not Illustrated |
issn |
1383-469X |
topic_title |
004 VZ A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks Information-centric data center networks Data storage Traffic management Distributed Q-learning |
topic |
ddc 004 misc Information-centric data center networks misc Data storage misc Traffic management misc Distributed Q-learning |
topic_unstemmed |
ddc 004 misc Information-centric data center networks misc Data storage misc Traffic management misc Distributed Q-learning |
topic_browse |
ddc 004 misc Information-centric data center networks misc Data storage misc Traffic management misc Distributed Q-learning |
format_facet |
Aufsätze Gedruckte Aufsätze |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
nc |
hierarchy_parent_title |
Mobile networks and applications |
hierarchy_parent_id |
215279522 |
dewey-tens |
000 - Computer science, knowledge & systems |
hierarchy_top_title |
Mobile networks and applications |
isfreeaccess_txt |
false |
familylinks_str_mv |
(DE-627)215279522 (DE-600)1342049-5 (DE-576)063244756 |
title |
A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks |
ctrlnum |
(DE-627)OLC2078335835 (DE-He213)s11036-020-01629-w-p |
title_full |
A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks |
author_sort |
Yang, Weihong |
journal |
Mobile networks and applications |
journalStr |
Mobile networks and applications |
lang_code |
eng |
isOA_bool |
false |
dewey-hundreds |
000 - Computer science, information & general works |
recordtype |
marc |
publishDateSort |
2020 |
contenttype_str_mv |
txt |
container_start_page |
266 |
author_browse |
Yang, Weihong Qin, Yang Yang, ZhaoZheng |
container_volume |
27 |
class |
004 VZ |
format_se |
Aufsätze |
author-letter |
Yang, Weihong |
doi_str_mv |
10.1007/s11036-020-01629-w |
dewey-full |
004 |
title_sort |
a reinforcement learning based data storage and traffic management in information-centric data center networks |
title_auth |
A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks |
abstract |
Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. © Springer Science+Business Media, LLC, part of Springer Nature 2020 |
abstractGer |
Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. © Springer Science+Business Media, LLC, part of Springer Nature 2020 |
abstract_unstemmed |
Abstract Data Center Networks (DCN), a core infrastructure of cloud computing, place heavy demands on efficient storage and management of massive data. The data storage scheme, which decides how to assign data to nodes for storage, has a significant impact on the performance of the data center. However, most of the existing solutions focus on where to store the data (i.e., the selection of storage node) but have not considered how to store them (i.e., the traffic management such as routing and transmission rate adjustment). By leveraging the Information-Centric Networks (ICN) architecture, this paper tackles the data storage and traffic management issue in Information-Centric Data Center Networks (ICDCN) based on Reinforcement Learning (RL) method, since RL has been developed as a promising solution to address dynamic network issues. We present a global optimization of joint traffic management and data storage and then solve it by the distributed multi-agent Q-learning. In ICDCN, the data is routed based on the data’s name, which achieves better routing scalability by decoupling the data and its physical location. Compared with IP’s stateless forwarding plane, the stateful forwarding information maintained at every node supports adaptively routing and hop-by-hop traffic control by using the Q-learning method. We evaluate our proposal on an NS-3-based simulator, and the results show that the proposed scheme can effectively reduce transmission time and increase throughput while achieving load-balanced among servers. © Springer Science+Business Media, LLC, part of Springer Nature 2020 |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_OLC SSG-OLC-MAT |
container_issue |
1 |
title_short |
A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks |
url |
https://doi.org/10.1007/s11036-020-01629-w |
remote_bool |
false |
author2 |
Qin, Yang Yang, ZhaoZheng |
author2Str |
Qin, Yang Yang, ZhaoZheng |
ppnlink |
215279522 |
mediatype_str_mv |
n |
isOA_txt |
false |
hochschulschrift_bool |
false |
doi_str |
10.1007/s11036-020-01629-w |
fullrecord_marcxml
MARC21 record OLC2078335835 (DE-627): title "A Reinforcement Learning Based Data Storage and Traffic Management in Information-Centric Data Center Networks"; authors Yang, Weihong; Qin, Yang; Yang, ZhaoZheng; subjects: Information-centric data center networks, Data storage, Traffic management, Distributed Q-learning; in: Mobile networks and applications, 27(2020), 1 (30 July), pp. 266-275, ISSN 1383-469X; DOI 10.1007/s11036-020-01629-w; © Springer Science+Business Media, LLC, part of Springer Nature 2020.