AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks
To tackle a challenging energy efficiency problem caused by the growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme in cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power minimization model, where requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize the power consumption by making collaborative caching and task offloading decisions in each slot on the basis of content request information in previous slots and current network state. Numerical results show that our proposed content task offloading model achieves better power efficiency than the existing popular counterparts in cloud-edge-end collaboration networks, and fast converges to the stable state.
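The abstract describes a per-slot decision process: an agent observes recent content requests and the current network state, then jointly chooses caching and offloading actions so as to minimize power. As a purely illustrative aid (not the paper's algorithm), the sketch below uses tabular Q-learning as a stand-in for the DRL method; the action set, power costs, popularity-based state, and toy request workload are all assumptions invented for this example.

```python
# Illustrative sketch only: a tabular Q-learning stand-in for a DRL agent that makes
# joint caching + offloading decisions per slot. All names, the power model, and the
# workload below are assumptions for illustration, not taken from the paper.
import random
from collections import defaultdict

ACTIONS = [("cache", "end"), ("cache", "edge"), ("no_cache", "edge"), ("no_cache", "cloud")]
POWER = {"end": 1.0, "edge": 2.5, "cloud": 6.0}      # assumed per-task serving power cost
CACHE_POWER = 0.3                                    # assumed caching overhead per slot

def step_power(action, requests):
    """Assumed power model: serving cost per request plus optional caching cost;
    with caching, repeated requests for the same content are aggregated and served once."""
    cache, tier = action
    served = len(set(requests)) if cache == "cache" else len(requests)
    return served * POWER[tier] + (CACHE_POWER if cache == "cache" else 0.0)

def popularity_state(requests, n_levels=3):
    """Coarse state: quantized popularity of the most-requested content in the last slot."""
    if not requests:
        return 0
    top = max(requests.count(c) for c in set(requests))
    return min(top, n_levels)

q = defaultdict(float)                               # Q[(state, action_index)]
alpha, gamma, eps = 0.1, 0.9, 0.2

state = 0
for slot in range(5000):
    # Content requests arriving in this slot (Zipf-like toy workload over 3 contents).
    requests = [random.choice([0, 0, 0, 1, 2]) for _ in range(random.randint(1, 6))]
    # Epsilon-greedy choice of the joint caching + offloading action.
    if random.random() < eps:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
    reward = -step_power(ACTIONS[a], requests)       # minimizing power == maximizing -power
    next_state = popularity_state(requests)
    best_next = max(q[(next_state, i)] for i in range(len(ACTIONS)))
    q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
    state = next_state

best = max(range(len(ACTIONS)), key=lambda i: q[(state, i)])
print("learned action for current state:", ACTIONS[best])
```

The same structure would carry over to the paper's setting by replacing the Q-table with a neural network and the toy power model with the formulated power-minimization objective, but those details are not reproduced here.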
Detailed description
Author(s): Chao Fang; Xiangheng Meng; Zhaoming Hu; Fangmin Xu; Deze Zeng; Mianxiong Dong; Wei Ni
Format: Electronic article
Language: English
Published: 2022
Subjects: Cloud-edge-end cooperation networks; content popularity; content task offloading; deep reinforcement learning
Published in: IEEE Open Journal of the Computer Society, IEEE, 2021, 3(2022), pages 162-171
Citation details: volume:3 ; year:2022 ; pages:162-171
Links (open access): https://doi.org/10.1109/OJCS.2022.3206446 ; https://doaj.org/article/70b511e2adc44d2cb9ac46ebfa133060 ; https://ieeexplore.ieee.org/document/9891792/
DOI / URN: 10.1109/OJCS.2022.3206446
Catalog ID: DOAJ007771193
LEADER 01000caa a22002652 4500
001    DOAJ007771193
003    DE-627
005    20230307024201.0
007    cr uuu---uuuuu
008    230225s2022 xx |||||o 00| ||eng c
024 7_ |a 10.1109/OJCS.2022.3206446 |2 doi
035 __ |a (DE-627)DOAJ007771193
035 __ |a (DE-599)DOAJ70b511e2adc44d2cb9ac46ebfa133060
040 __ |a DE-627 |b ger |c DE-627 |e rakwb
041 __ |a eng
050 _0 |a QA75.5-76.95
050 _0 |a T58.5-58.64
100 0_ |a Chao Fang |e verfasserin |4 aut
245 10 |a AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks
264 _1 |c 2022
336 __ |a Text |b txt |2 rdacontent
337 __ |a Computermedien |b c |2 rdamedia
338 __ |a Online-Ressource |b cr |2 rdacarrier
520 __ |a To tackle a challenging energy efficiency problem caused by the growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme in cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power minimization model, where requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize the power consumption by making collaborative caching and task offloading decisions in each slot on the basis of content request information in previous slots and current network state. Numerical results show that our proposed content task offloading model achieves better power efficiency than the existing popular counterparts in cloud-edge-end collaboration networks, and fast converges to the stable state.
650 _4 |a Cloud-edge-end cooperation networks
650 _4 |a content popularity
650 _4 |a content task offloading
650 _4 |a deep reinforcement learning
653 _0 |a Electronic computers. Computer science
653 _0 |a Information technology
700 0_ |a Xiangheng Meng |e verfasserin |4 aut
700 0_ |a Zhaoming Hu |e verfasserin |4 aut
700 0_ |a Fangmin Xu |e verfasserin |4 aut
700 0_ |a Deze Zeng |e verfasserin |4 aut
700 0_ |a Mianxiong Dong |e verfasserin |4 aut
700 0_ |a Wei Ni |e verfasserin |4 aut
773 08 |i In |t IEEE Open Journal of the Computer Society |d IEEE, 2021 |g 3(2022), Seite 162-171 |w (DE-627)1699314063 |w (DE-600)3025012-2 |x 26441268 |7 nnns
773 18 |g volume:3 |g year:2022 |g pages:162-171
856 40 |u https://doi.org/10.1109/OJCS.2022.3206446 |z kostenfrei
856 40 |u https://doaj.org/article/70b511e2adc44d2cb9ac46ebfa133060 |z kostenfrei
856 40 |u https://ieeexplore.ieee.org/document/9891792/ |z kostenfrei
856 42 |u https://doaj.org/toc/2644-1268 |y Journal toc |z kostenfrei
912 __ |a GBV_USEFLAG_A
912 __ |a SYSFLAG_A
912 __ |a GBV_DOAJ
912 __ |a GBV_ILN_11
912 __ |a GBV_ILN_20
912 __ |a GBV_ILN_22
912 __ |a GBV_ILN_23
912 __ |a GBV_ILN_24
912 __ |a GBV_ILN_31
912 __ |a GBV_ILN_39
912 __ |a GBV_ILN_40
912 __ |a GBV_ILN_60
912 __ |a GBV_ILN_62
912 __ |a GBV_ILN_63
912 __ |a GBV_ILN_65
912 __ |a GBV_ILN_69
912 __ |a GBV_ILN_70
912 __ |a GBV_ILN_73
912 __ |a GBV_ILN_95
912 __ |a GBV_ILN_105
912 __ |a GBV_ILN_110
912 __ |a GBV_ILN_151
912 __ |a GBV_ILN_161
912 __ |a GBV_ILN_170
912 __ |a GBV_ILN_213
912 __ |a GBV_ILN_230
912 __ |a GBV_ILN_285
912 __ |a GBV_ILN_293
912 __ |a GBV_ILN_370
912 __ |a GBV_ILN_602
912 __ |a GBV_ILN_2014
912 __ |a GBV_ILN_4012
912 __ |a GBV_ILN_4037
912 __ |a GBV_ILN_4112
912 __ |a GBV_ILN_4125
912 __ |a GBV_ILN_4126
912 __ |a GBV_ILN_4249
912 __ |a GBV_ILN_4305
912 __ |a GBV_ILN_4306
912 __ |a GBV_ILN_4307
912 __ |a GBV_ILN_4313
912 __ |a GBV_ILN_4322
912 __ |a GBV_ILN_4323
912 __ |a GBV_ILN_4324
912 __ |a GBV_ILN_4325
912 __ |a GBV_ILN_4326
912 __ |a GBV_ILN_4335
912 __ |a GBV_ILN_4338
912 __ |a GBV_ILN_4367
912 __ |a GBV_ILN_4700
951 __ |a AR
952 __ |d 3 |j 2022 |h 162-171
author_variant |
c f cf x m xm z h zh f x fx d z dz m d md w n wn |
matchkey_str |
article:26441268:2022----::irvnnryfiincnetakflaignluege |
hierarchy_sort_str |
2022 |
callnumber-subject-code |
QA |
publishDate |
2022 |
allfields |
10.1109/OJCS.2022.3206446 doi (DE-627)DOAJ007771193 (DE-599)DOAJ70b511e2adc44d2cb9ac46ebfa133060 DE-627 ger DE-627 rakwb eng QA75.5-76.95 T58.5-58.64 Chao Fang verfasserin aut AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks 2022 Text txt rdacontent Computermedien c rdamedia Online-Ressource cr rdacarrier To tackle a challenging energy efficiency problem caused by the growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme in cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power minimization model, where requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize the power consumption by making collaborative caching and task offloading decisions in each slot on the basis of content request information in previous slots and current network state. Numerical results show that our proposed content task offloading model achieves better power efficiency than the existing popular counterparts in cloud-edge-end collaboration networks, and fast converges to the stable state. Cloud-edge-end cooperation networks content popularity content task offloading deep reinforcement learning Electronic computers. Computer science Information technology Xiangheng Meng verfasserin aut Zhaoming Hu verfasserin aut Fangmin Xu verfasserin aut Deze Zeng verfasserin aut Mianxiong Dong verfasserin aut Wei Ni verfasserin aut In IEEE Open Journal of the Computer Society IEEE, 2021 3(2022), Seite 162-171 (DE-627)1699314063 (DE-600)3025012-2 26441268 nnns volume:3 year:2022 pages:162-171 https://doi.org/10.1109/OJCS.2022.3206446 kostenfrei https://doaj.org/article/70b511e2adc44d2cb9ac46ebfa133060 kostenfrei https://ieeexplore.ieee.org/document/9891792/ kostenfrei https://doaj.org/toc/2644-1268 Journal toc kostenfrei GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 AR 3 2022 162-171 |
language |
English |
source |
In IEEE Open Journal of the Computer Society 3(2022), Seite 162-171 volume:3 year:2022 pages:162-171 |
sourceStr |
In IEEE Open Journal of the Computer Society 3(2022), Seite 162-171 volume:3 year:2022 pages:162-171 |
format_phy_str_mv |
Article |
institution |
findex.gbv.de |
topic_facet |
Cloud-edge-end cooperation networks content popularity content task offloading deep reinforcement learning Electronic computers. Computer science Information technology |
isfreeaccess_bool |
true |
container_title |
IEEE Open Journal of the Computer Society |
authorswithroles_txt_mv |
Chao Fang @@aut@@ Xiangheng Meng @@aut@@ Zhaoming Hu @@aut@@ Fangmin Xu @@aut@@ Deze Zeng @@aut@@ Mianxiong Dong @@aut@@ Wei Ni @@aut@@ |
publishDateDaySort_date |
2022-01-01T00:00:00Z |
hierarchy_top_id |
1699314063 |
id |
DOAJ007771193 |
language_de |
englisch |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>01000caa a22002652 4500</leader><controlfield tag="001">DOAJ007771193</controlfield><controlfield tag="003">DE-627</controlfield><controlfield tag="005">20230307024201.0</controlfield><controlfield tag="007">cr uuu---uuuuu</controlfield><controlfield tag="008">230225s2022 xx |||||o 00| ||eng c</controlfield><datafield tag="024" ind1="7" ind2=" "><subfield code="a">10.1109/OJCS.2022.3206446</subfield><subfield code="2">doi</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-627)DOAJ007771193</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(DE-599)DOAJ70b511e2adc44d2cb9ac46ebfa133060</subfield></datafield><datafield tag="040" ind1=" " ind2=" "><subfield code="a">DE-627</subfield><subfield code="b">ger</subfield><subfield code="c">DE-627</subfield><subfield code="e">rakwb</subfield></datafield><datafield tag="041" ind1=" " ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">QA75.5-76.95</subfield></datafield><datafield tag="050" ind1=" " ind2="0"><subfield code="a">T58.5-58.64</subfield></datafield><datafield tag="100" ind1="0" ind2=" "><subfield code="a">Chao Fang</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks</subfield></datafield><datafield tag="264" ind1=" " ind2="1"><subfield code="c">2022</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">Text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">Computermedien</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">Online-Ressource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">To tackle a challenging energy efficiency problem caused by the growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme in cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power minimization model, where requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize the power consumption by making collaborative caching and task offloading decisions in each slot on the basis of content request information in previous slots and current network state. 
Numerical results show that our proposed content task offloading model achieves better power efficiency than the existing popular counterparts in cloud-edge-end collaboration networks, and fast converges to the stable state.</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">Cloud-edge-end cooperation networks</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">content popularity</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">content task offloading</subfield></datafield><datafield tag="650" ind1=" " ind2="4"><subfield code="a">deep reinforcement learning</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Electronic computers. Computer science</subfield></datafield><datafield tag="653" ind1=" " ind2="0"><subfield code="a">Information technology</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Xiangheng Meng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Zhaoming Hu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Fangmin Xu</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Deze Zeng</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Mianxiong Dong</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="700" ind1="0" ind2=" "><subfield code="a">Wei Ni</subfield><subfield code="e">verfasserin</subfield><subfield code="4">aut</subfield></datafield><datafield tag="773" ind1="0" ind2="8"><subfield code="i">In</subfield><subfield code="t">IEEE Open Journal of the Computer Society</subfield><subfield code="d">IEEE, 2021</subfield><subfield code="g">3(2022), Seite 162-171</subfield><subfield code="w">(DE-627)1699314063</subfield><subfield code="w">(DE-600)3025012-2</subfield><subfield code="x">26441268</subfield><subfield code="7">nnns</subfield></datafield><datafield tag="773" ind1="1" ind2="8"><subfield code="g">volume:3</subfield><subfield code="g">year:2022</subfield><subfield code="g">pages:162-171</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doi.org/10.1109/OJCS.2022.3206446</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://doaj.org/article/70b511e2adc44d2cb9ac46ebfa133060</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="0"><subfield code="u">https://ieeexplore.ieee.org/document/9891792/</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="856" ind1="4" ind2="2"><subfield code="u">https://doaj.org/toc/2644-1268</subfield><subfield code="y">Journal toc</subfield><subfield code="z">kostenfrei</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_USEFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">SYSFLAG_A</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_DOAJ</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_11</subfield></datafield><datafield 
tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_20</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_22</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_23</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_24</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_31</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_39</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_40</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_60</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_62</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_63</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_65</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_69</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_70</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_73</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_95</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_105</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_110</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_151</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_161</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_170</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_213</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_230</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_285</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_293</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_370</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_602</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_2014</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4012</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4037</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4112</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4125</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4126</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4249</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4305</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4306</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4307</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4313</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4322</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield 
code="a">GBV_ILN_4323</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4324</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4325</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4326</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4335</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4338</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4367</subfield></datafield><datafield tag="912" ind1=" " ind2=" "><subfield code="a">GBV_ILN_4700</subfield></datafield><datafield tag="951" ind1=" " ind2=" "><subfield code="a">AR</subfield></datafield><datafield tag="952" ind1=" " ind2=" "><subfield code="d">3</subfield><subfield code="j">2022</subfield><subfield code="h">162-171</subfield></datafield></record></collection>
callnumber-first |
Q - Science |
author |
Chao Fang |
spellingShingle |
Chao Fang misc QA75.5-76.95 misc T58.5-58.64 misc Cloud-edge-end cooperation networks misc content popularity misc content task offloading misc deep reinforcement learning misc Electronic computers. Computer science misc Information technology AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks |
authorStr |
Chao Fang |
ppnlink_with_tag_str_mv |
@@773@@(DE-627)1699314063 |
format |
electronic Article |
delete_txt_mv |
keep |
author_role |
aut aut aut aut aut aut aut |
collection |
DOAJ |
remote_str |
true |
callnumber-label |
QA75 |
illustrated |
Not Illustrated |
issn |
26441268 |
topic_title |
QA75.5-76.95 T58.5-58.64 AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks Cloud-edge-end cooperation networks content popularity content task offloading deep reinforcement learning |
topic |
misc QA75.5-76.95 misc T58.5-58.64 misc Cloud-edge-end cooperation networks misc content popularity misc content task offloading misc deep reinforcement learning misc Electronic computers. Computer science misc Information technology |
topic_unstemmed |
misc QA75.5-76.95 misc T58.5-58.64 misc Cloud-edge-end cooperation networks misc content popularity misc content task offloading misc deep reinforcement learning misc Electronic computers. Computer science misc Information technology |
topic_browse |
misc QA75.5-76.95 misc T58.5-58.64 misc Cloud-edge-end cooperation networks misc content popularity misc content task offloading misc deep reinforcement learning misc Electronic computers. Computer science misc Information technology |
format_facet |
Elektronische Aufsätze Aufsätze Elektronische Ressource |
format_main_str_mv |
Text Zeitschrift/Artikel |
carriertype_str_mv |
cr |
hierarchy_parent_title |
IEEE Open Journal of the Computer Society |
hierarchy_parent_id |
1699314063 |
hierarchy_top_title |
IEEE Open Journal of the Computer Society |
isfreeaccess_txt |
true |
familylinks_str_mv |
(DE-627)1699314063 (DE-600)3025012-2 |
title |
AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks |
ctrlnum |
(DE-627)DOAJ007771193 (DE-599)DOAJ70b511e2adc44d2cb9ac46ebfa133060 |
title_full |
AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks |
author_sort |
Chao Fang |
journal |
IEEE Open Journal of the Computer Society |
journalStr |
IEEE Open Journal of the Computer Society |
callnumber-first-code |
Q |
lang_code |
eng |
isOA_bool |
true |
recordtype |
marc |
publishDateSort |
2022 |
contenttype_str_mv |
txt |
container_start_page |
162 |
author_browse |
Chao Fang Xiangheng Meng Zhaoming Hu Fangmin Xu Deze Zeng Mianxiong Dong Wei Ni |
container_volume |
3 |
class |
QA75.5-76.95 T58.5-58.64 |
format_se |
Elektronische Aufsätze |
author-letter |
Chao Fang |
doi_str_mv |
10.1109/OJCS.2022.3206446 |
author2-role |
verfasserin |
title_sort |
ai-driven energy-efficient content task offloading in cloud-edge-end cooperation networks |
callnumber |
QA75.5-76.95 |
title_auth |
AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks |
abstract |
To tackle a challenging energy efficiency problem caused by the growing mobile Internet traffic, this paper proposes a deep reinforcement learning (DRL)-based green content task offloading scheme in cloud-edge-end cooperation networks. Specifically, we formulate the problem as a power minimization model, where requests arriving at a node for the same content can be aggregated in its queue and in-network caching is widely deployed in heterogeneous environments. A novel DRL algorithm is designed to minimize the power consumption by making collaborative caching and task offloading decisions in each slot on the basis of content request information in previous slots and current network state. Numerical results show that our proposed content task offloading model achieves better power efficiency than the existing popular counterparts in cloud-edge-end collaboration networks, and fast converges to the stable state. |
collection_details |
GBV_USEFLAG_A SYSFLAG_A GBV_DOAJ GBV_ILN_11 GBV_ILN_20 GBV_ILN_22 GBV_ILN_23 GBV_ILN_24 GBV_ILN_31 GBV_ILN_39 GBV_ILN_40 GBV_ILN_60 GBV_ILN_62 GBV_ILN_63 GBV_ILN_65 GBV_ILN_69 GBV_ILN_70 GBV_ILN_73 GBV_ILN_95 GBV_ILN_105 GBV_ILN_110 GBV_ILN_151 GBV_ILN_161 GBV_ILN_170 GBV_ILN_213 GBV_ILN_230 GBV_ILN_285 GBV_ILN_293 GBV_ILN_370 GBV_ILN_602 GBV_ILN_2014 GBV_ILN_4012 GBV_ILN_4037 GBV_ILN_4112 GBV_ILN_4125 GBV_ILN_4126 GBV_ILN_4249 GBV_ILN_4305 GBV_ILN_4306 GBV_ILN_4307 GBV_ILN_4313 GBV_ILN_4322 GBV_ILN_4323 GBV_ILN_4324 GBV_ILN_4325 GBV_ILN_4326 GBV_ILN_4335 GBV_ILN_4338 GBV_ILN_4367 GBV_ILN_4700 |
title_short |
AI-Driven Energy-Efficient Content Task Offloading in Cloud-Edge-End Cooperation Networks |
url |
https://doi.org/10.1109/OJCS.2022.3206446 https://doaj.org/article/70b511e2adc44d2cb9ac46ebfa133060 https://ieeexplore.ieee.org/document/9891792/ https://doaj.org/toc/2644-1268 |
remote_bool |
true |
author2 |
Xiangheng Meng Zhaoming Hu Fangmin Xu Deze Zeng Mianxiong Dong Wei Ni |
author2Str |
Xiangheng Meng Zhaoming Hu Fangmin Xu Deze Zeng Mianxiong Dong Wei Ni |
ppnlink |
1699314063 |
callnumber-subject |
QA - Mathematics |
mediatype_str_mv |
c |
isOA_txt |
true |
hochschulschrift_bool |
false |
doi_str |
10.1109/OJCS.2022.3206446 |
callnumber-a |
QA75.5-76.95 |
up_date |
2024-07-03T14:01:01.167Z |
_version_ |
1803566722497642496 |