A Deep Hierarchical Reinforcement Learning Algorithm in Partially Observable Markov Decision Processes
In recent years, reinforcement learning (RL) has achieved remarkable success due to the growing adoption of deep learning techniques and the rapid growth of computing power. Nevertheless, it is well known that flat reinforcement learning algorithms often struggle to learn, and are not data-efficient, ...
Detailed description
Author(s): Tuyen P. Le [author]; Ngo Anh Vien [author]; TaeChoong Chung [author]
Format: E-Article
Language: English
Published: 2018
Keywords: Hierarchical deep reinforcement learning
Contained in: In: IEEE Access - IEEE, 2014, 6(2018), pages 49089-49102
Contained in: volume:6 ; year:2018 ; pages:49089-49102
Links:
DOI / URN: 10.1109/ACCESS.2018.2854283
Catalog ID: DOAJ049810030
LEADER | 01000caa a22002652 4500 | ||
001 | DOAJ049810030 | ||
003 | DE-627 | ||
005 | 20230308145302.0 | ||
007 | cr uuu---uuuuu | ||
008 | 230227s2018 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.1109/ACCESS.2018.2854283 |2 doi | |
035 | |a (DE-627)DOAJ049810030 | ||
035 | |a (DE-599)DOAJ3be1a47d41c84d2996029d49b53536eb | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
050 | 0 | |a TK1-9971 | |
100 | 0 | |a Tuyen P. Le |e verfasserin |4 aut | |
245 | 1 | 2 | |a A Deep Hierarchical Reinforcement Learning Algorithm in Partially Observable Markov Decision Processes |
264 | 1 | |c 2018 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
520 | |a In recent years, reinforcement learning (RL) has achieved remarkable success due to the growing adoption of deep learning techniques and the rapid growth of computing power. Nevertheless, it is well known that flat reinforcement learning algorithms often struggle to learn, and are not data-efficient, on tasks with hierarchical structures, e.g., those consisting of multiple subtasks. Hierarchical reinforcement learning is a principled approach that can tackle such challenging tasks. On the other hand, many real-world tasks usually have only partial observability, in which state measurements are often imperfect. The problems of RL in such settings can be formulated as a partially observable Markov decision process (POMDP). In this paper, we study hierarchical RL in a POMDP in which the tasks have only partial observability and possess hierarchical properties. We propose a hierarchical deep reinforcement learning approach for learning in hierarchical POMDPs. The proposed deep hierarchical RL algorithm is applicable to both MDP and POMDP learning. We evaluate the proposed algorithm on various challenging hierarchical POMDPs. | ||
650 | 4 | |a Hierarchical deep reinforcement learning | |
650 | 4 | |a partially observable MDP (POMDP) | |
650 | 4 | |a semi-MDP | |
650 | 4 | |a partially observable semi-MDP (POSMDP) | |
653 | 0 | |a Electrical engineering. Electronics. Nuclear engineering | |
700 | 0 | |a Ngo Anh Vien |e verfasserin |4 aut | |
700 | 0 | |a TaeChoong Chung |e verfasserin |4 aut | |
773 | 0 | 8 | |i In |t IEEE Access |d IEEE, 2014 |g 6(2018), Seite 49089-49102 |w (DE-627)728440385 |w (DE-600)2687964-5 |x 21693536 |7 nnns |
773 | 1 | 8 | |g volume:6 |g year:2018 |g pages:49089-49102 |
856 | 4 | 0 | |u https://doi.org/10.1109/ACCESS.2018.2854283 |z kostenfrei |
856 | 4 | 0 | |u https://doaj.org/article/3be1a47d41c84d2996029d49b53536eb |z kostenfrei |
856 | 4 | 0 | |u https://ieeexplore.ieee.org/document/8421749/ |z kostenfrei |
856 | 4 | 2 | |u https://doaj.org/toc/2169-3536 |y Journal toc |z kostenfrei |
912 | |a GBV_USEFLAG_A | ||
912 | |a SYSFLAG_A | ||
912 | |a GBV_DOAJ | ||
912 | |a GBV_ILN_11 | ||
912 | |a GBV_ILN_20 | ||
912 | |a GBV_ILN_22 | ||
912 | |a GBV_ILN_23 | ||
912 | |a GBV_ILN_24 | ||
912 | |a GBV_ILN_31 | ||
912 | |a GBV_ILN_39 | ||
912 | |a GBV_ILN_40 | ||
912 | |a GBV_ILN_60 | ||
912 | |a GBV_ILN_62 | ||
912 | |a GBV_ILN_63 | ||
912 | |a GBV_ILN_65 | ||
912 | |a GBV_ILN_69 | ||
912 | |a GBV_ILN_70 | ||
912 | |a GBV_ILN_73 | ||
912 | |a GBV_ILN_95 | ||
912 | |a GBV_ILN_105 | ||
912 | |a GBV_ILN_110 | ||
912 | |a GBV_ILN_151 | ||
912 | |a GBV_ILN_161 | ||
912 | |a GBV_ILN_170 | ||
912 | |a GBV_ILN_213 | ||
912 | |a GBV_ILN_230 | ||
912 | |a GBV_ILN_285 | ||
912 | |a GBV_ILN_293 | ||
912 | |a GBV_ILN_370 | ||
912 | |a GBV_ILN_602 | ||
912 | |a GBV_ILN_2014 | ||
912 | |a GBV_ILN_4012 | ||
912 | |a GBV_ILN_4037 | ||
912 | |a GBV_ILN_4112 | ||
912 | |a GBV_ILN_4125 | ||
912 | |a GBV_ILN_4126 | ||
912 | |a GBV_ILN_4249 | ||
912 | |a GBV_ILN_4305 | ||
912 | |a GBV_ILN_4306 | ||
912 | |a GBV_ILN_4307 | ||
912 | |a GBV_ILN_4313 | ||
912 | |a GBV_ILN_4322 | ||
912 | |a GBV_ILN_4323 | ||
912 | |a GBV_ILN_4324 | ||
912 | |a GBV_ILN_4325 | ||
912 | |a GBV_ILN_4335 | ||
912 | |a GBV_ILN_4338 | ||
912 | |a GBV_ILN_4367 | ||
912 | |a GBV_ILN_4700 | ||
951 | |a AR | ||
952 | |d 6 |j 2018 |h 49089-49102 |